Why isn't the Istio "Authentication Policy" example page working as expected?

Problem description

The article in question: https://istio.io/docs/tasks/security/authn-policy/. Specifically, when I follow the instructions in its Setup section, I cannot reach any of the httpbin services that live in the foo and bar namespaces, while the one in legacy works fine. I suspect something is wrong with the sidecar proxy that gets injected.
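For reference, the Setup section of that task deploys httpbin and sleep into three namespaces, with sidecars injected in foo and bar but not in legacy. A rough sketch of those steps (paths are relative to the Istio release directory):

kubectl create ns foo
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
kubectl create ns bar
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
kubectl create ns legacy
kubectl apply -f samples/httpbin/httpbin.yaml -n legacy
kubectl apply -f samples/sleep/sleep.yaml -n legacy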

Here is the pod YAML for httpbin after injection with istioctl kube-inject --includeIPRanges "10.32.0.0/16". I used --includeIPRanges so that the pod can still talk to external IPs (for my own debugging, e.g. to install packages such as dnsutils).
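For reference, the injection that produced the spec below was done roughly like this (the sample file path and the target namespace foo are illustrative):

istioctl kube-inject --includeIPRanges "10.32.0.0/16" -f samples/httpbin/httpbin.yaml \
  | kubectl apply -n foo -f -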

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/status: '{"version":"4120ea817406fd7ed43b7ecf3f2e22abe453c44d3919389dcaff79b210c4cd86","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
  creationTimestamp: 2018-08-15T11:40:59Z
  generateName: httpbin-8b9cf99f5-
  labels:
    app: httpbin
    pod-template-hash: "465795591"
    version: v1
  name: httpbin-8b9cf99f5-9c47z
  namespace: foo
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: httpbin-8b9cf99f5
    uid: 1450d75d-a080-11e8-aece-42010a940168
  resourceVersion: "65722138"
  selfLink: /api/v1/namespaces/foo/pods/httpbin-8b9cf99f5-9c47z
  uid: 1454b68d-a080-11e8-aece-42010a940168
spec:
  containers:
  - image: docker.io/citizenstig/httpbin
    imagePullPolicy: IfNotPresent
    name: httpbin
    ports:
    - containerPort: 8000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-pkpvf
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - httpbin
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15007
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-statsd-prom-bridge.istio-system.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    image: docker.io/istio/proxyv2:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    resources:
      requests:
        cpu: 10m
    securityContext:
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - 10.32.0.0/16
    - -x
    - ""
    - -b
    - 8000,
    - -d
    - ""
    image: docker.io/istio/proxy_init:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  nodeName: gke-tvlk-data-dev-default-medium-pool-46397778-q2sb
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-pkpvf
    secret:
      defaultMode: 420
      secretName: default-token-pkpvf
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:41:01Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:44:28Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:40:59Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://758e130a4c31a15c1b8bc1e1f72bd7739d5fa1103132861eea9ae1a6ae1f080e
    image: citizenstig/httpbin:latest
    imageID: docker-pullable://citizenstig/httpbin@sha256:b81c818ccb8668575eb3771de2f72f8a5530b515365842ad374db76ad8bcf875
    lastState: {}
    name: httpbin
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-15T11:41:01Z
  - containerID: docker://9c78eac46a99457f628493975f5b0c5bbffa1dac96dab5521d2efe4143219575
    image: istio/proxyv2:1.0.0
    imageID: docker-pullable://istio/proxyv2@sha256:77915a0b8c88cce11f04caf88c9ee30300d5ba1fe13146ad5ece9abf8826204c
    lastState:
      terminated:
        containerID: docker://52299a80a0fa8949578397357861a9066ab0148ac8771058b83e4c59e422a029
        exitCode: 255
        finishedAt: 2018-08-15T11:44:27Z
        reason: Error
        startedAt: 2018-08-15T11:41:02Z
    name: istio-proxy
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: 2018-08-15T11:44:28Z
  hostIP: 10.32.96.27
  initContainerStatuses:
  - containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
    image: istio/proxy_init:1.0.0
    imageID: docker-pullable://istio/proxy_init@sha256:345c40053b53b7cc70d12fb94379e5aa0befd979a99db80833cde671bd1f9fad
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
        exitCode: 0
        finishedAt: 2018-08-15T11:41:00Z
        reason: Completed
        startedAt: 2018-08-15T11:41:00Z
  phase: Running
  podIP: 10.32.19.61
  qosClass: Burstable
  startTime: 2018-08-15T11:40:59Z

Here is the example command for the failing case, sleep.legacy -> httpbin.foo (curl prints 000 because no HTTP response was received, and exit code 7 means it could not connect at all):

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"

000
command terminated with exit code 7

Here is the example command for the successful case, sleep.legacy -> httpbin.legacy:

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.legacy:8000/ip -s -o /dev/null -w "%{http_code}\n"

200

I have followed the instructions to verify that no mTLS policies or destination rules are defined:

> kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
> kubectl get meshpolicies.authentication.istio.io
No resources found.
> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local
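
For context, the kind of resource those checks rule out looks roughly like the policy below; such a policy (illustrative only, it does not exist in this cluster) would require mTLS for httpbin in foo and would legitimately break plain-text calls from sleep.legacy:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: httpbin-mtls        # illustrative name, not present in the cluster
  namespace: foo
spec:
  targets:
  - name: httpbin
  peers:
  - mtls: {}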

Tags: kubernetes, istio

Solution


Never mind, I think I found the cause: my configuration was messed up. If you look at the statsd address in the proxy arguments, it is defined with an unresolvable hostname, istio-statsd-prom-bridge.istio-system.istio-system:9125 (the .istio-system suffix appears twice). I noticed this after seeing that the istio-proxy container had restarted/crashed several times.
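A minimal sketch of how the misconfiguration shows up and what the corrected value should look like (where exactly you fix it depends on how the injection template / mesh config was customized):

# The sidecar crash is visible as a non-zero restart count on the istio-proxy container:
kubectl get pod httpbin-8b9cf99f5-9c47z -n foo

# Logs of the previous (crashed) proxy instance should show why it exited,
# in this case the unresolvable statsd hostname:
kubectl logs httpbin-8b9cf99f5-9c47z -n foo -c istio-proxy --previous

# The proxy argument should reference the bridge service in istio-system only once:
#   --statsdUdpAddress istio-statsd-prom-bridge.istio-system:9125
# rather than the duplicated
#   --statsdUdpAddress istio-statsd-prom-bridge.istio-system.istio-system:9125

After correcting the address, the pod has to be re-injected and redeployed for the new proxy arguments to take effect.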

