traefik ingress does not forward TCP messages

Problem Description

I am trying to aggregate logs from Raspberry Pis (IoT devices) into Logstash/Elasticsearch running in EKS.

Filebeat is already running in EKS to aggregate container logs.

Here are my manifest files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  logstash.conf: |-
    input {
      tcp {
        port => 5000
        type => syslog
      }
    }

    filter {
        grok {
            match => {"message" => "%{SYSLOGLINE}"}
        }
    }

    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }

---

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.2.1
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        ports:
        - name: logstash
          containerPort: 5000
          protocol: TCP
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 800Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          readOnly: true
          subPath: logstash.conf
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: logstash-config

---

kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  selector:
    app: logstash
  clusterIP: None
  ports:
    - name: tcp-port
      protocol: TCP
      port: 5000
      targetPort: 5000
---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-external
  namespace: kube-logging
  labels:
    app: logstash
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: tcp
spec:
  rules:
  - host: logstash.dev.domain.com
    http:
      paths:
      - backend:
          serviceName: logstash
          servicePort: 5000

I am able to send a test message:

echo -n "test message" | nc logstash.dev.domain.com 5000

However, tcpdump port 5000 inside the logstash container does not show anything.

If I run echo -n "test message" | nc logstash.dev.domain.com 5000 from inside the logstash container, then I do see the message with tcpdump port 5000 on the logstash container.

From any container in EKS I can send a test message with echo -n "test message 4" | nc -q 0 logstash 5000, and it is received by logstash and pushed to Elasticsearch.

But it does not work from outside the cluster, so it looks like the traefik ingress controller is the problem here.
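
As far as I understand, a standard Kubernetes Ingress object only describes HTTP routing, so raw TCP traffic would not be forwarded through it. With Traefik v2, TCP routing is instead declared through the IngressRouteTCP custom resource. A minimal sketch, assuming a Traefik v2 installation with its CRDs and an entrypoint named "tcp" listening on port 5000 (this is not part of my current setup):

# Hypothetical IngressRouteTCP for a Traefik v2 setup (not what I am running)
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: logstash-tcp
  namespace: kube-logging
spec:
  entryPoints:
    - tcp                  # the entrypoint that listens on :5000
  routes:
    - match: HostSNI(`*`)  # catch-all matcher required for non-TLS TCP
      services:
        - name: logstash
          port: 5000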

I have the traefik ingress controller on EKS:

traefik.toml: |
  defaultEntryPoints = ["http","https"]
  logLevel = "INFO"
  [entryPoints]
    [entryPoints.http]
      address = ":80"
      compress = true
      [entryPoints.http.redirect]
      entryPoint = "https"
      [entryPoints.http.whiteList]
      sourceRange = ["0.0.0.0/0""]
    [entryPoints.https]
      address = ":443"
      compress = true
      [entryPoints.https.tls]
      [entryPoints.https.whiteList]
      sourceRange = ["0.0.0.0/0"]
    [entryPoints.tcp]
      address = ":5000"
      compress = true
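
For comparison, my understanding is that Traefik v2 can express the same entrypoints in a YAML static configuration. A rough sketch, assuming Traefik v2 with the Kubernetes CRD provider enabled (again, not my current configuration):

# traefik.yml -- hypothetical Traefik v2 static configuration
entryPoints:
  web:
    address: ":80"     # HTTP
  websecure:
    address: ":443"    # HTTPS
  tcp:
    address: ":5000"   # raw TCP entrypoint referenced by IngressRouteTCP
providers:
  kubernetesCRD: {}    # enables IngressRoute / IngressRouteTCP resources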

And the service:

kind: Service
apiVersion: v1
metadata:
  name: ingress-external
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: traefik-ingress-lb
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
    - name: tcp-5000
      protocol: TCP
      port: 5000
      targetPort: 5000
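
Note that targetPort: 5000 on this Service only reaches Traefik if the Traefik pods themselves listen on that port. A sketch of the relevant ports section of a Traefik Deployment (the names below are assumptions, not copied from my actual Deployment):

# Hypothetical excerpt from the traefik-ingress-lb Deployment's pod spec;
# each targetPort referenced by the ingress-external Service must map to a
# port the Traefik process actually listens on (i.e. one of its entryPoints).
      containers:
      - name: traefik-ingress-lb
        image: traefik:1.7
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: tcp-5000
          containerPort: 5000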

What is wrong here?

Tags: kubernetes, tcp, traefik, kubernetes-ingress, traefik-ingress

Solution


If you have not used logstash before, you may need to create a logstash index pattern manually. The data does not show up under filebeat because elasticsearch is not receiving it from filebeat but from logstash itself. I could be completely wrong about this. But if you go to:

Settings > Index Patterns > Create index pattern, then start typing logstash; it asks for a name, and you pick logstash from the suggestions underneath, as shown here:

[screenshot: Kibana Create index pattern page]

Once it is created, you should see a dropdown on the Discover page showing logstash. Under the logstash dropdown you should see all the data you are pushing.

You may already have the logstash index set up, in which case this may not be the problem at all.

