ELK: parse a JSON field into separate fields

Problem description

I have JSON like this:

{"date":"2018-12-14 00:00:44,292","service":"aaa","severity":"DEBUG","trace":"abb161a98c23fc04","span":"cd782a330dd3271b","parent":"abb161a98c23fc04","pid":"12691","thread":"http-nio-9080-exec-12","message":"{\"type\":\"Request\",\"lang\":\"pl\",\"method\":\"POST\",\"sessionId\":5200,\"ipAddress\":\"127.0.0.1\",\"username\":\"kap@wp.pl\",\"contentType\":\"null\",\"url\":\"/aaa/getTime\",\"queryString\":\"null\",\"payload\":\",}"}

The problem is that in the line above we have:

"message":"{\"type\":\"Request\",\"lang\":\"pl\",\"method\":\"POST\",\"sessionId\":5200,\"ipAddress\":\"127.0.0.1\",\"username\":\"kap@wp.pl\",\"contentType\":\"null\",\"url\":\"/aaa/getTime\",\"queryString\":\"null\",\"payload\":\",}

The application writes its log files this way, and filebeat and logstash do not parse the inner part the way I want. In Kibana I see only a single field named message, but I would like to have separate fields such as type, lang, and method.

I think the problem is caused by the escaped quote characters (\") inside the message value: message is a JSON string that itself contains JSON, so it needs a second parsing pass. How can I change the filebeat/logstash behavior to achieve this?

The application is quite big, so adding net.logstash.logback.encoder.LogstashEncoder everywhere across the project's Java files is not practical for me.

I have many logback-json.xml files. They contain:

<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
        <pattern>
            <pattern>
                {
                "date": "%date",
                "severity": "%level",
                "service": "${springAppName}",
                "trace": "%X{X-B3-TraceId:-}",
                "span": "%X{X-B3-SpanId:-}",
                "parent": "%X{X-B3-ParentSpanId:-}",
                "exportable": "%X{X-Span-Export:-}",
                "pid": "${PID:-}",
                "thread": "%thread",
                "class": "%logger{26}",
                "message": "%message",
                "ex": "%ex"
                }
            </pattern>
        </pattern>
    </providers>
</encoder>
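
For comparison, switching one of these files to LogstashEncoder would look roughly like this (a minimal sketch; the appender name and log file name here are hypothetical, not taken from the project):

<appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/app.json</file>
    <!-- LogstashEncoder writes each event as one JSON object per line -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>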

I tried adding "jsonMessage": "#asJson{%message}" as mentioned here: https://stackoverflow.com/a/45095983/4983983

But when the message looks like the one shown above, it fails to parse and I get "jsonMessage":null. This makes sense: that sample message is not even valid JSON, because the payload value is truncated (\"payload\":\",}). In simpler cases I do get a parsed object, e.g. "jsonMessage":{"type":"Response","payload":"2018-12-17T09:23:23.414"} instead of null.
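Note that logstash-logback-encoder also documents a #tryJson{...} pattern operation, which includes the value as JSON when it parses and falls back to a plain string otherwise, instead of producing null. A minimal sketch, assuming an encoder version that supports it:

<pattern>
    {
    "jsonMessage": "#tryJson{%message}"
    }
</pattern>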

My filebeat config:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/tomcat-gw/logs/*.json
    - /opt/tomcat-bo/logs/*.json
    - /opt/tomcat-gw/logs/localhost_access_log*.txt
    - /opt/tomcat-bo/logs/localhost_access_log*.txt

  # Decode each line as JSON before forwarding. With keys_under_root the
  # decoded keys (date, service, message, ...) are placed at the top level
  # of the event. message_key names the JSON key that line filtering and
  # multiline settings are applied to.
  # (Note: the sample log line above has no "event" key; "message" may be
  # what is intended here.)
  json:
    message_key: event
    keys_under_root: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Treat any line that does not start with "{" or "Traceback" as a
  # continuation of the previous event (e.g. wrapped Java stack traces).
  multiline:
    pattern: '^({|Traceback)'
    negate:  true
    match:   after


  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601    
  host: "hiddenIp:5602"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["hiddenIp:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
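  # A possible alternative to parsing the inner JSON in Logstash: Filebeat's
  # decode_json_fields processor can decode the escaped JSON string held in
  # the "message" field directly. A sketch, left commented out; the target
  # field name "messageJson" is a placeholder of my choosing:
  #- decode_json_fields:
  #    fields: ["message"]
  #    target: "messageJson"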

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

Tags: elasticsearch, logstash, kibana, filebeat

Solution


I wrote the following configuration; if I start Logstash with this file, I can see the correctly parsed JSON in Kibana.

input {
  file {
    path => "C:/Temp/logFile.log"
    start_position => "beginning"
  }
}

filter {
  # Parse the JSON text held in the "message" field and store the decoded
  # object under the "parsedJson" key, so each inner key becomes its own
  # field (parsedJson.type, parsedJson.lang, ...).
  json {
    source => "message"
    target => "parsedJson"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "demo"
    document_type => "demo"
  }
  stdout { }
}

(Kibana screenshot showing the parsed parsedJson.* fields omitted.)
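
In the real pipeline, where Filebeat already decodes the outer JSON (keys_under_root) before shipping to Logstash, the message field arriving in Logstash holds only the inner escaped JSON, so the same filter applies unchanged. A slightly extended sketch that also tags events whose message is malformed, such as the truncated payload above (tag_on_failure is a standard option of the json filter; the tag text here is my choice):

filter {
  json {
    source => "message"
    target => "parsedJson"
    # keep but tag events whose message is not valid JSON
    tag_on_failure => ["_message_json_parse_failure"]
  }
}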

