fluentd log format

Problem description

I'm having trouble figuring out how to parse the logs from my k8s cluster with fluentd.

I'm getting logs that look like 2019-09-19 16:05:44.257 [info] some log message

I want to parse out the time, the log level, and the log message.

I tested my regex here and it seems to parse out the parts I need. However, when I add it to the ConfigMap that holds my fluentd configuration, my logs still come through as one long message that looks like the line above, and the level is not parsed out. I'm brand new to fluentd, so I don't know how to get this working.
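For reference, here is that pattern on its own as a fluentd <parse> section (a minimal sketch; the time_format shown is an assumption adjusted to the sample timestamp, unlike the one used in the ConfigMap below):

<parse>
  @type regexp
  # on "2019-09-19 16:05:44.257 [info] some log message" the captures should be:
  #   time  => 2019-09-19 16:05:44.257
  #   level => info
  #   log   => some log message
  expression /^(?<time>.+) (\[(?<level>.*)\]) (?<log>.*)$/
  time_format %Y-%m-%d %H:%M:%S.%N
</parse>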

The ConfigMap looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-config-map
  namespace: logging
  labels:
    k8s-app: fluentd-logzio
data:
  fluent.conf: |-
    <match fluent.**>
      # this tells fluentd to not output its log on stdout
      @type null
    </match>

    # here we read the logs from Docker's containers and parse them
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format /^(?<time>.+) (\[(?<level>.*)\]) (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type logzio_buffered
      @id out_logzio
      endpoint_url ###
      output_include_time true
      output_include_tags true
      <buffer>
        # Set the buffer type to file to improve the reliability and reduce the memory consumption
        @type file
        path /var/log/fluentd-buffers/stackdriver.buffer
        # Set queue_full action to block because we want to pause gracefully
        # in case of the off-the-limits load instead of throwing an exception
        overflow_action block
        # Set the chunk limit conservatively to avoid exceeding the GCL limit
        # of 10MiB per write request.
        chunk_limit_size 2M
        # Cap the combined memory usage of this buffer and the one below to
        # 2MiB/chunk * (6 + 2) chunks = 16 MiB
        queue_limit_length 6
        # Never wait more than 5 seconds before flushing logs in the non-error case.
        flush_interval 5s
        # Never wait longer than 30 seconds between retries.
        retry_max_interval 30
        # Disable the limit on the number of retries (retry forever).
        retry_forever true
        # Use multiple threads for processing.
        flush_thread_count 2
      </buffer>
    </match>

Tags: docker, logging, kubernetes, fluentd

Solution
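A likely cause, assuming the nodes use Docker's default json-file logging driver: each line the tail source reads from /var/log/containers/*.log is a JSON envelope, e.g.

{"log":"2019-09-19 16:05:44.257 [info] some log message\n","stream":"stdout","time":"2019-09-19T16:05:44.257Z"}

so the json pattern in multi_format is the one that matches, and the application line stays buried inside the log field; the level regex never sees it. One fix is to keep the JSON parsing in the source and apply the regex afterwards with fluentd's built-in parser filter on the log key. A minimal sketch (the @id and the tightened expression are assumptions; the time_format is adjusted to match the sample timestamp):

# After detect_exceptions strips the "raw" prefix the records are tagged
# kubernetes.**, so this filter can sit next to the kubernetes_metadata filter.
<filter kubernetes.**>
  @id filter_parse_app_log   # hypothetical id
  @type parser               # built-in filter_parser plugin
  key_name log               # parse the inner Docker "log" field
  reserve_data true          # keep the remaining fields (stream, time, ...)
  emit_invalid_record_to_error false   # pass through lines that don't match
  <parse>
    @type regexp
    # assumed to match "2019-09-19 16:05:44.257 [info] some log message"
    expression /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(?<level>[^\]]+)\] (?<log>.*)$/
    time_format %Y-%m-%d %H:%M:%S.%N
  </parse>
</filter>

Note also that the first pattern's time_format in the source, %Y-%m-%dT%H:%M:%S.%N%:z, would not match the sample timestamp even if that regex did fire (the sample has a space instead of T and no timezone). Running fluentd --dry-run -c fluent.conf is a quick way to sanity-check the file before updating the ConfigMap.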

