# Getting Started with iLogtail: Deployment with a Local Configuration (for the Kafka Flusher)

Alibaba has officially open-sourced iLogtail, its observability data collector. As the infrastructure for observability data collection inside Alibaba, iLogtail carries the collection of logs, metrics, traces, events, and other observability data for Alibaba Group and Ant Group. iLogtail normally works as the collection agent for Alibaba Cloud SLS, and its collection configurations are usually managed through the SLS console or API. Can iLogtail be used without depending on SLS? This article describes in detail how to deploy iLogtail with a purely local configuration, without the SLS console, and ship JSON-formatted log files to a non-SLS destination such as Kafka.

## Scenario

Collect /root/bin/input_data/json.log (single-line logs in JSON format) and write the collected logs to a locally deployed Kafka cluster.

## Prerequisites

Kafka is installed locally and a topic named `logtail-flusher-kafka` has been created. For deployment details, refer to the linked documentation.

## Installing iLogtail

Download the latest iLogtail release and extract it.

```bash
# Extract the tarball
$ tar zxvf logtail-linux64.tar.gz

# Inspect the directory structure
$ ll logtail-linux64
drwxr-xr-x   3 500 500  4096 bin
drwxr-xr-x 184 500 500 12288 conf
-rw-r--r--   1 500 500   597 README
drwxr-xr-x   2 500 500  4096 resources

# Enter the bin directory
$ cd logtail-linux64/bin
$ ll
-rwxr-xr-x 1 500 500 10052072 ilogtail_1.0.28   # the ilogtail executable
-rwxr-xr-x 1 500 500     4191 ilogtaild
-rwxr-xr-x 1 500 500     5976 libPluginAdapter.so
-rw-r--r-- 1 500 500 89560656 libPluginBase.so
-rwxr-xr-x 1 500 500  2333024 LogtailInsight
```

## Collection configuration format

The configuration format for collecting a log file into a local Kafka cluster:

```json
{
    "metrics": {
        "{config_name1}": {
            "enable": true,
            "category": "file",
            "log_type": "json_log",
            "log_path": "/root/bin/input_data",
            "file_pattern": "json.log",
            "plugin": {
                "processors": [
                    {
                        "detail": {
                            "SplitSep": "",
                            "SplitKey": "content"
                        },
                        "type": "processor_split_log_string"
                    },
                    {
                        "detail": {
                            "ExpandConnector": "",
                            "ExpandDepth": 1,
                            "SourceKey": "content",
                            "KeepSource": false
                        },
                        "type": "processor_json"
                    }
                ],
                "flushers": [
                    {
                        "type": "flusher_kafka",
                        "detail": {
                            "Brokers": ["localhost:9092"],
                            "Topic": "logtail-flusher-kafka"
                        }
                    }
                ]
            },
            "version": 1
        },
        "{config_name2}": {
            ...
        }
    }
}
```

Detailed format description: the outermost key of the file is `metrics`, and the inner key of each collection configuration is the configuration name, which must be unique within the file. The recommended naming convention is `##1.0##` followed by the collection configuration name. The value of each entry holds the concrete collection parameters; the key parameters and their meanings are as follows:

| Parameter | Type | Description |
| --- | --- | --- |
| enable | bool | Whether the configuration takes effect; if false, the configuration is ignored. |
| category | string | For the file collection scenario the value is "file". |
| log_type | string | Log type. For the JSON collection scenario the value is json_log. |
| log_path | string | Collection path (the directory containing the log files). |
| file_pattern | string | Pattern of the files to collect. |
| plugin | object | The plugin pipeline configuration, itself a JSON object; see the description below. |
| plugin.processors | object array | Processor configuration; see the linked documentation for details. processor_json expands the raw log as JSON. |
| plugin.flushers | object array | Flusher configuration. flusher_stdout writes to standard output and is generally used for debugging; flusher_kafka writes to Kafka. |

## Complete configuration example

Enter the bin directory and create the sys_conf_dir folder and the ilogtail_config.json file.

1. Create sys_conf_dir:

```bash
$ mkdir sys_conf_dir
```

2. Create ilogtail_config.json and fill in the configuration. The value of logtail_sys_conf_dir is `$pwd/sys_conf_dir/`; config_server_address is a fixed value and should be left unchanged.

```bash
$ pwd
/root/bin/logtail-linux64/bin
$ cat ilogtail_config.json
{
    "logtail_sys_conf_dir": "/root/bin/logtail-linux64/bin/sys_conf_dir/",
    "config_server_address": "http://logtail.cn-zhangjiakou.log.aliyuncs.com"
}
```

3. The current directory structure:

```bash
$ ll
-rwxr-xr-x 1  500  500 ilogtail_1.0.28
-rw-r--r-- 1 root root ilogtail_config.json
-rwxr-xr-x 1  500  500 ilogtaild
-rwxr-xr-x 1  500  500 libPluginAdapter.so
-rw-r--r-- 1  500  500 libPluginBase.so
-rwxr-xr-x 1  500  500 LogtailInsight
drwxr-xr-x 2 root root sys_conf_dir
```

Create the collection configuration file user_local_config.json under sys_conf_dir. Note: for the json_log scenario, only the collection-path parameters log_path and file_pattern in user_local_config.json need to be changed; the other parameters can stay as they are.

```bash
$ cat sys_conf_dir/user_local_config.json
{
    "metrics": {
        "##1.0##kafka_output_test": {
            "category": "file",
            "log_type": "json_log",
            "log_path": "/root/bin/input_data",
            "file_pattern": "json.log",
            "create_time": 1631018645,
            "defaultEndpoint": "",
            "delay_alarm_bytes": 0,
            "delay_skip_bytes": 0,
            "discard_none_utf8": false,
            "discard_unmatch": false,
            "docker_exclude_env": {},
            "docker_exclude_label": {},
            "docker_file": false,
            "docker_include_env": {},
            "docker_include_label": {},
            "enable": true,
            "enable_tag": false,
            "file_encoding": "utf8",
            "filter_keys": [],
            "filter_regs": [],
            "group_topic": "",
            "plugin": {
                "processors": [
                    {
                        "detail": {
                            "SplitSep": "",
                            "SplitKey": "content"
                        },
                        "type": "processor_split_log_string"
                    },
                    {
                        "detail": {
                            "ExpandConnector": "",
                            "ExpandDepth": 1,
                            "SourceKey": "content",
                            "KeepSource": false
                        },
                        "type": "processor_json"
                    }
                ],
                "flushers": [
                    {
                        "type": "flusher_kafka",
                        "detail": {
                            "Brokers": ["localhost:9092"],
                            "Topic": "logtail-flusher-kafka"
                        }
                    }
                ]
            },
            "local_storage": true,
            "log_tz": "",
            "max_depth": 10,
            "max_send_rate": -1,
            "merge_type": "topic",
            "preserve": true,
            "preserve_depth": 1,
            "priority": 0,
            "raw_log": false,
            "aliuid": "",
            "region": "",
            "project_name": "",
            "send_rate_expire": 0,
            "sensitive_keys": [],
            "shard_hash_key": [],
            "tail_existed": false,
            "time_key": "",
            "timeformat": "",
            "topic_format": "none",
            "tz_adjust": false,
            "version": 1,
            "advanced": {
                "force_multiconfig": false,
                "tail_size_kb": 1024
            }
        }
    }
}
```
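Before starting the agent, it can be useful to sanity-check the hand-written configuration. The script below is only a minimal sketch, not part of the iLogtail distribution: it uses just the Python standard library, assumes the file layout created above, and verifies that user_local_config.json parses as JSON, that the top-level metrics object exists, and that each configuration carries the key fields from the parameter table plus at least one flusher.

```python
#!/usr/bin/env python3
"""Sanity-check a local iLogtail collection config before starting the agent.

Not part of iLogtail -- a small helper sketch that checks the JSON parses and
that the key fields from the parameter table above are present.
"""
import json
import sys

# Default path mirrors the directory layout created in the steps above.
CONFIG_PATH = "sys_conf_dir/user_local_config.json"


def main(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        conf = json.load(f)  # fails loudly if the JSON is malformed

    metrics = conf.get("metrics")
    if not isinstance(metrics, dict) or not metrics:
        print("error: top-level 'metrics' object is missing or empty")
        return 1

    ok = True
    for name, item in metrics.items():
        # Required scalar fields from the parameter table.
        for key in ("enable", "category", "log_type", "log_path", "file_pattern"):
            if key not in item:
                print(f"{name}: missing required field '{key}'")
                ok = False
        # At least one flusher must be configured, otherwise nothing is shipped.
        flushers = item.get("plugin", {}).get("flushers", [])
        if not flushers:
            print(f"{name}: plugin.flushers is empty")
            ok = False
        else:
            print(f"{name}: flushers = {[f.get('type') for f in flushers]}")
    return 0 if ok else 1


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else CONFIG_PATH))
```

Run it from the bin directory; a non-zero exit code means a required field is missing or the JSON does not parse.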
## Starting iLogtail

Run in terminal mode:

```bash
$ ./ilogtail_1.0.28 --ilogtail_daemon_flag=false
```

Alternatively, run in daemon mode:

```bash
$ ./ilogtail_1.0.28
$ ps -ef | grep logtail
root 48453     1  ./ilogtail_1.0.28
root 48454 48453  ./ilogtail_1.0.28
```

## Simulating the collection scenario

Append JSON-formatted test data to /root/bin/input_data/json.log (the commands below are run from that directory):

```bash
$ echo '{"seq":"1","action":"kkkk","extend1":"","extend2":"","type":"1"}' >> json.log
$ echo '{"seq":"2","action":"kkkk","extend1":"","extend2":"","type":"1"}' >> json.log
```

Consume the topic logtail-flusher-kafka:

```bash
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic logtail-flusher-kafka
{"Time":1640862641,"Contents":[{"Key":"__tag__:__path__","Value":"/root/bin/input_data/json.log"},{"Key":"seq","Value":"1"},{"Key":"action","Value":"kkkk"},{"Key":"extend1","Value":""},{"Key":"extend2","Value":""},{"Key":"type","Value":"1"}]}
{"Time":1640862646,"Contents":[{"Key":"__tag__:__path__","Value":"/root/bin/input_data/json.log"},{"Key":"seq","Value":"2"},{"Key":"action","Value":"kkkk"},{"Key":"extend1","Value":""},{"Key":"extend2","Value":""},{"Key":"type","Value":"1"}]}
```

## Local debugging

To verify quickly and conveniently whether a configuration is correct, the collected logs can be printed to standard output. Replace the flusher_kafka entry under plugin.flushers in the local collection configuration with flusher_stdout, then run in terminal mode with `./ilogtail_1.0.28 --ilogtail_daemon_flag=false`; the collected logs are printed to standard output, which makes quick local debugging easy.

```json
{
    "type": "flusher_stdout",
    "detail": {
        "OnlyStdout": true
    }
}
```
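The verification step can also be done programmatically instead of with kafka-console-consumer. The sketch below is only an illustration: it assumes the third-party kafka-python client is installed (pip install kafka-python) and reuses the broker address and topic name from the configuration above, flattening the Key/Value pairs in each record's Contents list into an ordinary dictionary.

```python
#!/usr/bin/env python3
"""Consume the iLogtail output topic and flatten each record's Contents list.

A sketch only: assumes the kafka-python package is installed and the
broker/topic names used throughout this article.
"""
import json

from kafka import KafkaConsumer  # third-party dependency, not part of iLogtail

consumer = KafkaConsumer(
    "logtail-flusher-kafka",             # topic created in the prerequisites
    bootstrap_servers="localhost:9092",  # broker from the flusher_kafka config
    auto_offset_reset="earliest",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for msg in consumer:
    record = msg.value
    # flusher_kafka ships each log as
    # {"Time": ..., "Contents": [{"Key": ..., "Value": ...}, ...]}
    fields = {kv["Key"]: kv["Value"] for kv in record.get("Contents", [])}
    print(record.get("Time"), fields)
```

Appending more test lines as in the simulation step should make it print one dictionary per log line, with __tag__:__path__ pointing at the collected file.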
