fluentd not parsing JSON log file entries

Problem description

I have seen many similar questions on Stack Overflow, including this one, but none of them solve my particular problem.

The application is deployed in a Kubernetes (v1.15) cluster. I am using a Docker image based on v1.9/armhf from the fluent/fluentd-docker-image GitHub repository, modified to include the Elasticsearch plugin. Elasticsearch and Kibana are both version 7.6.0.

The logs are written to standard output and look like this:

{"Application":"customer","HTTPMethod":"GET","HostName":"","RemoteAddr":"10.244.4.154:51776","URLPath":"/customers","level":"info","msg":"HTTP request received","time":"2020-03-10T20:17:32Z"}
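For what it's worth, each line really is a single, self-contained JSON object (a quick Python check, purely for illustration):

```python
import json

# One of the application's STDOUT lines, exactly as shown above.
line = ('{"Application":"customer","HTTPMethod":"GET","HostName":"",'
        '"RemoteAddr":"10.244.4.154:51776","URLPath":"/customers",'
        '"level":"info","msg":"HTTP request received",'
        '"time":"2020-03-10T20:17:32Z"}')

record = json.loads(line)   # parses cleanly: one JSON object per line
print(record["msg"])        # HTTP request received
```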

In Kibana I see entries like this:

{
  "_index": "logstash-2020.03.10",
  "_type": "_doc",
  "_id": "p-UZxnABBcooPsDQMBy_",
  "_version": 1,
  "_score": null,
  "_source": {
    "log": "{\"Application\":\"customer\",\"HTTPMethod\":\"GET\",\"HostName\":\"\",\"RemoteAddr\":\"10.244.4.154:46160\",\"URLPath\":\"/customers\",\"level\":\"info\",\"msg\":\"HTTP request received\",\"time\":\"2020-03-10T20:18:18Z\"}\n",
    "stream": "stdout",
    "docker": {
      "container_id": "cd1634b0ce410f3c89fe63f508fe6208396be87adf1f27fa9d47a01d81ff7904"
    },
    "kubernetes": {

I was expecting to see the JSON pulled out of the log: value, sort of like this (abbreviated):

{
  "_index": "logstash-2020.03.10",
  ...
  "_source": {
    "log": "...",   
    "Application":"customer",
    "HTTPMethod":"GET",
    "HostName":"",
    "RemoteAddr":"10.244.4.154:46160",
    "URLPath":"/customers",
    "level":"info",
    "msg":"HTTP request received",
    "time":"2020-03-10T20:18:18Z",
    "stream": "stdout",
    "docker": {
      "container_id": "cd1634b0ce410f3c89fe63f508fe6208396be87adf1f27fa9d47a01d81ff7904"
    },
    "kubernetes": {

My fluentd configuration is:

<match fluent.**>
  @type null
</match>

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  read_from_head true
</source>

<match kubernetes.var.log.containers.**fluentd**.log>
  @type null
</match>
<match kubernetes.var.log.containers.**kube-system**.log>
  @type null
</match>
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

<match **>
   @type elasticsearch
   @id out_es
   @log_level info
   include_tag_key true
   host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
   port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
   path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
   <format>
      @type json
   </format>
</match>

I'm sure I'm missing something. Can anyone point me in the right direction?

Thanks, Rich

Tags: json, elasticsearch, kibana, fluentd

Solution


This configuration works for me:

<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
    time_key time
    time_format %iso8601
  </parse>
</source>

<filter kubernetes.**>
  @type parser
  key_name "$.log"
  hash_value_field "log"
  reserve_data true
  <parse>
    @type json
  </parse> 
</filter>

<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

Make sure to edit the paths so they match your use case.

This happens because Docker records each container's STDOUT in /var/log/containers/*.log as a string under the "log" key, so to store those JSON log lines there, they must first be serialized to strings. What you need to do is add an extra step that parses this string under the "log" key:

<filter kubernetes.**>
  @type parser
  key_name "$.log"
  hash_value_field "log"
  reserve_data true
  <parse>
    @type json
  </parse> 
</filter>
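In other words, the filter takes the serialized string under "log", parses it as JSON, and (thanks to reserve_data true) keeps the record's other top-level fields. A minimal Python sketch of that transformation, purely illustrative (not fluentd internals):

```python
import json

# A record as fluentd's tail plugin emits it: the application's JSON
# is still a serialized string under the "log" key (exactly what the
# Kibana output above shows).
record = {
    "log": '{"Application":"customer","HTTPMethod":"GET","level":"info",'
           '"msg":"HTTP request received","time":"2020-03-10T20:18:18Z"}\n',
    "stream": "stdout",
}

# What the parser filter does, conceptually:
#   key_name "$.log"        -> take the string stored under "log"
#   <parse> @type json      -> parse it as JSON
#   hash_value_field "log"  -> store the parsed hash back under "log"
#   reserve_data true       -> keep the other original fields ("stream")
record["log"] = json.loads(record["log"])

print(record["log"]["Application"])  # customer
print(record["stream"])              # stdout
```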
