Logstash cannot create the main pipeline, how do I fix it?

Problem description

I've been using Logstash for a month, and since about a week ago I can't seem to start it. Logstash runs as a Docker image, version 6.2.4, on an AWS machine running Linux. It worked fine before, so I don't know what happened. The only thing my boss did was upgrade from version 6.2.3 to 6.2.4, but the error didn't start at that moment, it started a few days later, so I'm guessing that's not the problem.

I have a logstash.conf file containing my configuration. The internal log points to a specific line of the "config file", but the funny thing is that the line doesn't exist, because the file has fewer lines than that. I read somewhere that Logstash merges all the conf files, but I can't seem to find the merged file to check that line. I'm frustrated.
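For context, Logstash does not write the merged file to disk: it concatenates every file in the configured pipeline directory in memory, in lexicographic order, and the line number in the error refers to that concatenation. A minimal sketch of reproducing the merge yourself (all paths and file names below are illustrative, not from the original setup):

```shell
# Build a throwaway pipeline directory with two config fragments plus a
# stray non-.conf file; Logstash would pick up all three of them.
mkdir -p /tmp/ls-merge-demo
printf 'input { stdin {} }\n'   > /tmp/ls-merge-demo/01-input.conf
printf 'stray leftover line\n'  > /tmp/ls-merge-demo/50-leftover.tmp
printf 'output { stdout {} }\n' > /tmp/ls-merge-demo/99-output.conf

# Concatenate in the same lexicographic order Logstash uses, then inspect
# the line number a parser error would report (line 2 in this toy example):
cat /tmp/ls-merge-demo/* > /tmp/merged.conf
sed -n '2p' /tmp/merged.conf
```

Running the same `cat` over the real pipeline directory and looking at line 205 of the result should show what the parser actually choked on.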

Here is my logstash.conf file:

input {
    beats {
        port => "5044"
        client_inactivity_timeout => "120"
    }
}
filter {
    ruby {
            code => "event.set('day',event.get('source').split('/')[5].split('-').last)"
    }

    ruby {
            code => "event.set('app',event.get('source').split('/')[5].split('-').first)"
    }

    ruby {
            code => "event.set('categoria',event.get('source').split('/').last.split('.').first.split('-')[1])"
    }

    ruby {
            code => "event.set('nodo',event.get('source').split('/')[4])"
    }

    ruby {
            code => "event.set('conFecha',true)"
    }

    if [day] == "A"{
                    ruby {
                            code => "event.set('day',Time.now.getlocal('-03:00').strftime('%Y%m%d'))"
                    }
                    ruby {
                            code => "event.set('conFecha',false)"
                    }
    }

    grok {
            patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
            match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level} , performance - (.)* .*\[%{NUMBER:milliseconds:float}\].*\[com.vtr.servicesvtr.ws.client.factory.([a-z])*.*%{WORD:crm}.%{WORD:grupo}.%{WORD:method}.*.(ejecutar)\]" }
            add_field => {
                    "tipo" => "SOA"
                    "fechahora" => "%{day} %{time}"
                    "performance" => "performance"
            }
    }

    grok {
            patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
            match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level} , performance - (.)* .*\[%{NUMBER:milliseconds:float}\].*\[/%{SV_PACKAGE:microservicio}/%{DATA:url}\]" }
            add_field => {
                    "tipo" => "MS"
                    "fechahora" => "%{day} %{time}"
                    "performance" => "performance"
            }
       }

    if [performance] != "performance"{

            grok {
                    patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
                    match => { "message" => "%{SV_TIME:time}, %{SV_TIMESTAMP:numero}, cliente\[%{DATA:client}\], %{WORD:level}.*\, %{GREEDYDATA:resultado}" }
                    add_field => {
                            "tipo" => "AUDIT"
                            "fechahora" => "%{day} %{time}"
                            "auditoria" => "auditoria"
                    }
            }
    }

    if [performance] != "performance" and [auditoria] != "auditoria"{
            grok {
                    patterns_dir => ["/usr/share/logstash/pipeline/pattern/patterns"]
                    match => { "message" => "%{SV_DATE_TIME:fecha}.* seguridad  \- %{DATA:usuario}\|%{DATA:rut}\|%{DATA:resultado} \-> %{GREEDYDATA:metodo}" }
                    add_field => {
                            "tipo" => "SEGURIDAD"
                            "fechahora" => "%{fecha}"
                            "su" => "su"
                    }
            }
            mutate {
                lowercase => [ "usuario" ]
            }

            date {
                    match => ["fechahora", "yyyy-MM-dd HH:mm:ss,SSS"]
            }
    }

    if [su] != "su"{
            date {
                    match => ["fechahora", "yyyyMMdd HH:mm:ss.SSS"]
            }
    }

    date {
            match => ["timestamp" , "yyyyMMdd'T'HH:mm:ss.SSS"]
            target => "@timestamp"
    }
 }
output {
       if [performance] == "performance" and [url] != "manage/health"{
           if [conFecha] {
               elasticsearch {
                      hosts => ["elasticsearch:9200"]
                      index => "%{[app_id]}-performances-%{+YYYY.MM.dd}"
              }

           }else {
               elasticsearch {
                      hosts => ["elasticsearch:9200"]
                       index => "temp-%{[app_id]}-performances-%{+YYYY.MM.dd}"
            }
        }
    }else if [auditoria] == "auditoria" {
        if [conFecha] {
            elasticsearch {
                    hosts => ["elasticsearch:9200"]
                    index => "%{[app_id]}-auditoria-%{+YYYY.MM.dd}"
            }
        }else {
            elasticsearch {
                    hosts => ["elasticsearch:9200"]
                    index => "temp-%{[app_id]}-auditoria-%{+YYYY.MM.dd}"
            }
        }

    }

    if [performance] == "performance" and [milliseconds] >= 1000 and [url] != "manage/health"{
        if [conFecha] {
            elasticsearch {
                    hosts => ["elasticsearch:9200"]
                    index => "%{[app_id]}-performance-scache-%{+YYYY.MM.dd}"
            }
        }else {
                elasticsearch {
                    hosts => ["elasticsearch:9200"]
                    index => "temp-%{[app_id]}-performance-scache-%{+YYYY.MM.dd}"
            }
        }
    }
   }

Here is the log as well:

Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-04-27T15:59:06,770][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-04-27T15:59:06,879][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-04-27T15:59:13,551][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-04-27T15:59:17,073][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-27T15:59:27,711][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 205, column 1 (byte 6943) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

I really hope you can help me. I'm desperate. Bye.

Tags: logstash

Solution


The problem was solved by deleting all the files in the config directory, because Logstash was merging every one of them; in this case the culprit was a uup file. That is also why the reported line number (205) pointed past the end of logstash.conf: it referred to the merged configuration, not to any single file. Thanks to the Docker image, Logstash recreates the files it needs later anyway.
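To keep this from recurring, it may help to check the pipeline directory for stray files before starting Logstash, since every file in it gets merged, not just the `.conf` ones. A small sketch of such a check (the directory and file names here are hypothetical):

```shell
# Flag anything in the pipeline directory that is not a .conf file;
# such leftovers would be merged into the pipeline and can break parsing.
mkdir -p /tmp/pipeline-demo
touch /tmp/pipeline-demo/logstash.conf /tmp/pipeline-demo/leftover.uup  # hypothetical files
find /tmp/pipeline-demo -type f ! -name '*.conf'
```

Alternatively, pointing `path.config` at a glob such as `/usr/share/logstash/pipeline/*.conf` restricts what Logstash reads in the first place.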
