K8s service does not automatically bind the corresponding endpoints

Problem description

I am currently stuck on a deployment problem that I cannot solve: the service's endpoints are not bound automatically, even though the selector labels match.

A few words about the setup:

In my K8s cluster, some Ingress routes (in other namespaces) with automatic endpoint assignment already work, so I conclude that my basic setup is not the problem. Furthermore, I can manually enter the appropriate pod IP address into the endpoints, which makes everything work.
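For reference, manually populating the endpoints as described above amounts to creating an Endpoints object with the same name as the service. A minimal sketch (the IP is the one shown in the EndpointSlice output below; port names and numbers are taken from the service chart):

```yaml
# Hypothetical manual Endpoints object for the sonarqube-sv service.
# Kubernetes associates it with the service by matching name and namespace.
apiVersion: v1
kind: Endpoints
metadata:
  name: sonarqube-sv
  namespace: development
subsets:
  - addresses:
      - ip: 192.168.202.213   # pod IP, entered by hand
    ports:
      - name: http
        port: 9000
        protocol: TCP
      - name: metrics
        port: 9187
        protocol: TCP
```

Note that a manually managed Endpoints object is overwritten by the endpoints controller as soon as the service's selector starts matching pods again.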

Now my deployment setup:

Deployment Helm chart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Values.basic.name }}-de"
  namespace: "{{ .Values.basic.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: "{{ .Values.basic.name }}"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: "{{ .Values.basic.name }}"
    spec:
      containers:
        - name: "{{ .Values.basic.database.name }}"
          image: "{{ .Values.docker.database.image }}:{{ .Values.docker.database.tag }}"
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            runAsUser: 500
          env:
            - name: LOGGING_REDIS_HOST
              value: "144.91.86.56"
            - name: LOGGING_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: LOGGING_REDIS_PASSWORD
            - name: POSTGRES_INITDB_NAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_NAME }}"
            - name: POSTGRES_INITDB_ROOT_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_ROOT_USERNAME }}"
            - name: POSTGRES_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_ROOT_PASSWORD
            - name: POSTGRES_INITDB_MONITORING_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_MONITORING_USERNAME }}"
            - name: POSTGRES_INITDB_MONITORING_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_MONITORING_PASSWORD
            - name: POSTGRES_INITDB_USER_USERNAME
              value: "{{ .Values.config.database.POSTGRES_INITDB_USER_USERNAME }}"
            - name: POSTGRES_INITDB_USER_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.database.name }}-sc"
                  key: POSTGRES_INITDB_USER_PASSWORD
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /storage
              name: "{{ .Values.basic.database.name }}-storage"
              readOnly: false
        - name: "{{ .Values.basic.app.name }}"
          image: "{{ .Values.docker.app.image }}:{{ .Values.docker.app.tag }}"
          resources:
            requests:
              memory: "1024Mi"
              cpu: "250m"
            limits:
              memory: "2048Mi"
              cpu: "500m"
          securityContext:
            runAsUser: 500
          env:
            - name: LOGGING_REDIS_HOST
              value: "144.91.86.56"
            - name: LOGGING_REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.app.name }}-sc"
                  key: LOGGING_REDIS_PASSWORD
            - name: JDBC_USER
              value: "{{ .Values.config.database.POSTGRES_INITDB_USER_USERNAME }}"
            - name: JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "{{ .Values.basic.app.name }}-sc"
                  key: JDBC_PASSWORD
            - name: JDBC_URL
              value: "{{ .Values.config.app.JDBC_URL }}"
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /storage
              name: "{{ .Values.basic.app.name }}-storage"
              readOnly: false
      imagePullSecrets:
        - name: "docker-registry-{{ .Values.basic.namespace }}-sc"
      volumes:
        - name: "{{ .Values.basic.database.name }}-storage"
          persistentVolumeClaim:
            claimName: "gluster-{{ .Values.basic.database.name }}-{{ .Values.basic.namespace }}-pvc"
        - name: "{{ .Values.basic.app.name }}-storage"
          persistentVolumeClaim:
            claimName: "gluster-{{ .Values.basic.app.name }}-{{ .Values.basic.namespace }}-pvc"
      securityContext:
        fsGroup: 500

Service Helm chart:

apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.basic.name }}-sv"
  namespace: "{{ .Values.basic.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
spec:
  type: ClusterIP
  ports:
    - port: 9187
      targetPort: http
      protocol: TCP
      name: metrics
    - port: 9000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"

Now the output from my K8s cluster:

kubectl get pods -l app.kubernetes.io/name=sonarqube -n development
NAME                           READY   STATUS    RESTARTS   AGE
sonarqube-de-b47bd9f75-tsbxc   2/2     Running   0          2d11h

kubectl get endpoints sonarqube-sv -n development
NAME           ENDPOINTS   AGE
sonarqube-sv   <none>      3d10h

kubectl get pods -l app.kubernetes.io/name=sonarqube -n development --show-labels
NAME                           READY   STATUS    RESTARTS   AGE     LABELS
sonarqube-de-b47bd9f75-tsbxc   2/2     Running   0          3d11h   app.kubernetes.io/name=sonarqube,pod-template-hash=b47bd9f75

kubectl get endpointslices  -n development --show-labels
NAME                       ADDRESSTYPE   PORTS     ENDPOINTS        AGE     LABELS
sonarqube-sv-fgsg2         IPv4          <unset>   192.168.202.213  3d11h   endpointslice.kubernetes.io/managed-by=endpointslice-controller.k8s.io,kubernetes.io/service-name=sonarqube-sv

kubectl get deployment sonarqube-de -n development --show-labels
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
sonarqube-de   1/1     1            1           4d10h   app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=sonarqube

Can anyone help me solve this problem?

Tags: kubernetes, kubernetes-helm, kubernetes-ingress, nginx-ingress

Solution

I solved the problem by removing `targetPort: http` from the service Helm YAML. The containers in the deployment do not declare any ports, so the endpoints controller could not resolve the named port `http` and left the endpoints unbound. When `targetPort` is omitted, it defaults to the value of `port`, which the pods serve directly.

apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.basic.name }}-sv"
  namespace: "{{ .Values.basic.namespace }}"
  labels:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
spec:
  type: ClusterIP
  ports:
    - port: 9187
      protocol: TCP
      name: metrics
    - port: 9000
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: "{{ .Values.basic.name }}"
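An alternative fix would be to keep `targetPort: http` and instead declare matching named container ports in the deployment, so the endpoints controller can resolve the name. A sketch of the relevant fragment (assuming the app container listens on 9000 and the metrics endpoint on 9187; adjust to the actual container ports):

```yaml
# Sketch: named container ports so that a service's `targetPort: http`
# or `targetPort: metrics` can be resolved by the endpoints controller.
containers:
  - name: "{{ .Values.basic.app.name }}"
    ports:
      - name: http
        containerPort: 9000
        protocol: TCP
      - name: metrics
        containerPort: 9187
        protocol: TCP
```

Named target ports are mainly useful when different pods behind the same service expose the port under different numbers; with fixed port numbers, omitting `targetPort` as in the solution above is the simpler option.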
