kubectl proxy not working on Ubuntu 18.04 LTS

Problem description

I installed Kubernetes on ubuntu 18.04 using this article. Everything was working fine, then I tried to install the Kubernetes dashboard using these instructions.

Now when I run kubectl proxy, the dashboard does not come up; when I try to access it at the default kubernetes-dashboard URL, the browser shows the following error message:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "no endpoints available for service \"https:kubernetes-dashboard:\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
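For reference, a minimal sketch of the steps above: kubectl proxy listens only on 127.0.0.1:8001 by default, which matches the URL I am using.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

# in a second terminal on the same machine
$ curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/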

The following command gives this output, where the kubernetes-dashboard pod shows a status of CrashLoopBackOff:

$> kubectl get pods --all-namespaces

NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
default                amazing-app-rs-59jt9                         1/1     Running            5          23d
default                amazing-app-rs-k6fg5                         1/1     Running            5          23d
default                amazing-app-rs-qd767                         1/1     Running            5          23d
default                amazingapp-one-deployment-57dddd6fb7-xdxlp   1/1     Running            5          23d
default                nginx-86c57db685-vwfzf                       1/1     Running            4          22d
kube-system            coredns-6955765f44-nqphx                     0/1     Running            14         25d
kube-system            coredns-6955765f44-psdv4                     0/1     Running            14         25d
kube-system            etcd-master-node                             1/1     Running            8          25d
kube-system            kube-apiserver-master-node                   1/1     Running            42         25d
kube-system            kube-controller-manager-master-node          1/1     Running            11         25d
kube-system            kube-flannel-ds-amd64-95lvl                  1/1     Running            8          25d
kube-system            kube-proxy-qcpqm                             1/1     Running            8          25d
kube-system            kube-scheduler-master-node                   1/1     Running            11         25d
kubernetes-dashboard   dashboard-metrics-scraper-7b64584c5c-kvz5d   1/1     Running            0          41m
kubernetes-dashboard   kubernetes-dashboard-566f567dc7-w2sbk        0/1     CrashLoopBackOff   12         41m
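For a pod stuck in CrashLoopBackOff, the usual next step is to describe it and pull the logs of the crashed container (pod name taken from the listing above; --previous only returns something once the container has crashed at least once):

$ kubectl describe pod kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard
$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard --previous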

$> kubectl get services --all-namespaces

NAMESPACE              NAME                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   ----------   <none>        443/TCP                  25d
default                nginx                       NodePort    ----------   <none>        80:32188/TCP             22d
kube-system            kube-dns                    ClusterIP   ----------   <none>        53/UDP,53/TCP,9153/TCP   25d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   ----------   <none>        8000/TCP                 24d
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   ----------   <none>        443/TCP                  24d

$ kubectl get events -n kubernetes-dashboard

LAST SEEN   TYPE      REASON    OBJECT                                      MESSAGE
24m         Normal    Pulling   pod/kubernetes-dashboard-566f567dc7-w2sbk   Pulling image "kubernetesui/dashboard:v2.0.0-rc2"
4m46s       Warning   BackOff   pod/kubernetes-dashboard-566f567dc7-w2sbk   Back-off restarting failed container

$ kubectl describe service kubernetes-dashboard -n kubernetes-dashboard

Name:              kubernetes-dashboard
Namespace:         kubernetes-dashboard
Labels:            k8s-app=kubernetes-dashboard
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard"...
Selector:          k8s-app=kubernetes-dashboard
Type:              ClusterIP
IP:                10.96.241.62
Port:              <unset>  443/TCP
TargetPort:        8443/TCP
Endpoints:         
Session Affinity:  None
Events:            <none>

$ kubectl logs kubernetes-dashboard-566f567dc7-w2sbk -n kubernetes-dashboard

2020/01/29 16:00:34 Starting overwatch
2020/01/29 16:00:34 Using namespace: kubernetes-dashboard
2020/01/29 16:00:34 Using in-cluster config to connect to apiserver
2020/01/29 16:00:34 Using secret token for csrf signing
2020/01/29 16:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003dac80)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:494 +0xc7
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000534200)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:462 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:543
main.main()
        /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
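The panic shows the dashboard container timing out while dialing the API server's ClusterIP (10.96.0.1:443), which points at pod networking rather than at the dashboard itself. A quick way to confirm this from inside the cluster, sketched with a throwaway pod (the curlimages/curl image and pod name are just an illustration; any HTTP response, even 401/403, proves connectivity, while a timeout reproduces the dashboard's failure):

$ kubectl run net-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -k -m 5 https://10.96.0.1:443/version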

Any suggestions on how to fix this? Thanks in advance.

Tags: kubernetes, ubuntu-18.04, kubectl, kubernetes-dashboard

Solution


I noticed that the guide you used to install your Kubernetes cluster is missing an important part.

According to the Kubernetes documentation:

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
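If the cluster was initialized without that flag, flannel never gets a pod CIDR to work with. A quick way to check what the existing cluster was given, and what a flannel-compatible init would roughly look like (re-initializing is disruptive, so treat the second command as a sketch only):

# check what pod CIDR (if any) the node was allocated; empty output means none
$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

# what a flannel-compatible init would roughly look like
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16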

Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 so that bridged IPv4 traffic is passed to iptables' chains. This is a requirement for some CNI plugins to work; see the documentation for more information.
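To check the current value and make it persist across reboots, something like this works on Ubuntu (the file name under /etc/sysctl.d is arbitrary):

# current value; the br_netfilter module must be loaded for this path to exist
$ cat /proc/sys/net/bridge/bridge-nf-call-iptables

# set it now and persist it
$ sudo sysctl net.bridge.bridge-nf-call-iptables=1
$ echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system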

Make sure that your firewall rules allow UDP traffic on ports 8285 and 8472 for all hosts participating in the overlay network. See the flannel documentation for details.
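On Ubuntu with ufw as the host firewall, that would look roughly like this (skip it if no host firewall is running; cloud security groups need the equivalent rules):

$ sudo ufw allow 8285/udp   # flannel udp backend
$ sudo ufw allow 8472/udp   # flannel vxlan backend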

Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0, but its usage is undocumented.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

For more information about flannel, see the CoreOS flannel repository on GitHub.

To fix this issue:

I suggest running the following command:

sysctl net.bridge.bridge-nf-call-iptables=1

Then reinstall flannel:

kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
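After re-applying the manifest, the flannel DaemonSet pods should come back up; deleting the dashboard pod then forces an immediate restart instead of waiting out the back-off (the label selectors below come from the flannel manifest and the dashboard Service selector shown earlier):

$ kubectl get pods -n kube-system -l app=flannel
$ kubectl delete pod -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
$ kubectl get pods -n kubernetes-dashboard -w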

Update: the /proc/sys/net/bridge/bridge-nf-call-iptables value turns out to be 1 by default on ubuntu-18-04-lts, so the problem here is that you need to access the dashboard locally.

If you are connecting to your master node over ssh, you can use the -X flag with ssh to turn on X11 forwarding (ForwardX11). Luckily, ubuntu-18-04-lts has it enabled by default.

ssh -X server
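If the browser window does not appear later, it is worth confirming that X11 forwarding is actually active on both ends; the values shown below are only examples of what a working setup looks like:

# on the node: the server side must allow it
$ grep -i x11forwarding /etc/ssh/sshd_config
X11Forwarding yes

# inside the ssh -X session: DISPLAY must be set
$ echo $DISPLAY
localhost:10.0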

Then install a local web browser such as chromium:

sudo apt-get install chromium-browser
chromium-browser

Finally, access the dashboard locally from the node:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Hope it helps.

