Check whether VT-x/AMD-V is enabled in the BIOS.
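On Linux you can check from the shell whether the CPU exposes virtualization extensions (a quick sketch; the flag name depends on the CPU vendor):

```shell
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means the BIOS has
# virtualization disabled or the CPU does not support it.
flags=$(grep -E -c 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
flags=${flags:-0}
echo "virtualization flags found: $flags"
```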
The same problem also occurs when installing Minikube inside a virtual machine. In that case, start it with the --vm-driver=none flag:
neo@ubuntu:~$ sudo minikube start --vm-driver=none
Solution
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
Then run minikube start again.
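The echo above does not survive a reboot. To make the setting persistent you can load the br_netfilter module and drop the value into sysctl (a sketch; the file name /etc/sysctl.d/k8s.conf is my own choice, adjust per distribution):

```shell
# Ensure the bridge netfilter module is loaded, then persist the sysctl
sudo modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system
```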
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: 3.1: Pulling from pause Get https://k8s.gcr.io/v2/pause/manifests/sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610: net/http: TLS handshake timeout
Switch to an image mirror and retry:
[root@localhost ~]# minikube start --vm-driver=none --registry-mirror=https://registry.docker-cn.com
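Alternatively, the mirror can be configured in the Docker daemon itself so every pull goes through it (a sketch, assuming /etc/docker/daemon.json does not exist yet; merge the key into the existing file if it does):

```shell
# Point the Docker daemon at the mirror, then restart it
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
sudo systemctl restart docker
```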
minikube start reports the error below. It usually appears when you run minikube start again after a minikube stop and minikube delete.
error execution phase kubeconfig/admin: a kubeconfig file "/etc/kubernetes/admin.conf" exists already but has got the wrong CA cert
error execution phase kubeconfig/kubelet: a kubeconfig file "/etc/kubernetes/kubelet.conf" exists already but has got the wrong CA cert
error execution phase kubeconfig/controller-manager: a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has got the wrong CA cert
error execution phase kubeconfig/scheduler: a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has got the wrong CA cert
Solution: move the stale kubeconfig files out of the way so kubeadm can regenerate them.
[root@localhost ~]# mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.backup
[root@localhost ~]# mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.backup
[root@localhost ~]# mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.backup
[root@localhost ~]# mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.backup
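The four mv commands can be condensed into one loop (equivalent to the commands above):

```shell
# Back up each stale kubeconfig so kubeadm regenerates it on next start
for f in admin kubelet controller-manager scheduler; do
    sudo mv "/etc/kubernetes/$f.conf" "/etc/kubernetes/$f.conf.backup"
done
```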
minikube start now completes without errors:
[root@localhost ~]# minikube start --vm-driver=none
Starting local Kubernetes v1.13.2 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Starting cluster components...
Verifying kubelet health ...
Verifying apiserver health ...
Kubectl is now configured to use the cluster.
===================
WARNING: IT IS RECOMMENDED NOT TO RUN THE NONE DRIVER ON PERSONAL WORKSTATIONS
	The 'none' driver will run an insecure kubernetes apiserver as root that may leave the host vulnerable to CSRF attacks

When using the none driver, the kubectl config and credentials generated will be root owned and will appear in the root home directory.
You will need to move the files to the appropriate location and then set the correct permissions.  An example of this is below:

	sudo mv /root/.kube $HOME/.kube # this will write over any previous configuration
	sudo chown -R $USER $HOME/.kube
	sudo chgrp -R $USER $HOME/.kube

	sudo mv /root/.minikube $HOME/.minikube # this will write over any previous configuration
	sudo chown -R $USER $HOME/.minikube
	sudo chgrp -R $USER $HOME/.minikube

This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
Loading cached images from config file.

Everything looks great. Please enjoy minikube!
Cause: the private registry is served over plain HTTP, but the cluster tries to pull from it over HTTPS.
Failed to pull image "192.168.3.85:5000/netkiller/config:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://192.168.3.85:5000/v2/: http: server gave HTTP response to HTTPS client
minikube does not pick up the insecure-registry setting from the Docker daemon's configuration file.
Solution
minikube start --insecure-registry=127.0.0.1:5000
Or allow an entire subnet:
minikube start --insecure-registry "10.0.0.0/24"
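You can confirm the registry really speaks plain HTTP before starting minikube (a sketch; the address 192.168.3.85:5000 is taken from the error message above):

```shell
# Should return a JSON repository list such as {"repositories":[...]},
# not a TLS handshake error
curl http://192.168.3.85:5000/v2/_catalog
```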
iMac:kubernetes neo$ kubectl create -f redis/redis.yml
configmap/redis-config created
deployment.apps/redis created
The Service "redis" is invalid: spec.ports[0].nodePort: Invalid value: 6379: provided port is not in the valid range. The range of valid ports is 30000-32767
Edit the kube-apiserver.yaml file:
$ minikube ssh
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following flag to the kube-apiserver startup arguments:
--service-node-port-range=1024-65535
$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.64.5:8443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.64.5
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
    - --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
    - --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    - --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
    - --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=8443
    - --service-account-key-file=/var/lib/minikube/certs/sa.pub
    - --service-cluster-ip-range=10.10.0.0/24
    - --service-node-port-range=1024-65535
    - --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
    - --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.64.5
        path: /livez
        port: 8443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.64.5
        path: /readyz
        port: 8443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.64.5
        path: /livez
        port: 8443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /var/lib/minikube/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /var/lib/minikube/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
sudo systemctl restart kubelet
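After the kubelet restarts, the static pod is re-created and you can verify that the new flag took effect (a sketch; the pod name suffix depends on the node name, kube-apiserver-minikube is the usual one):

```shell
# The command list of the running apiserver should now contain the new range
kubectl -n kube-system get pod kube-apiserver-minikube -o yaml \
    | grep service-node-port-range
```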
iMac:~ neo$ minikube addons enable registry
🔎  Verifying registry addon...

❌  Exiting due to MK_ENABLE: run callbacks: running callbacks: [verifying registry addon pods : timed out waiting for the condition: timed out waiting for the condition]

😿  If the above advice does not help, please let us know:
👉  https://github.com/kubernetes/minikube/issues/new/choose
minikube dashboard --alsologtostderr -v=1
[docker@localhost ~]$ kubectl get pods --all-namespaces | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-6f7955cd98-xjzkq   0/1     ImagePullBackOff   0          11d
kubernetes-dashboard   kubernetes-dashboard-7bf64fd654-ckr7v        0/1     ImagePullBackOff   0          11d
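kubectl describe on one of the failing pods shows the pull events and the exact image reference that cannot be fetched (pod name taken from the listing above):

```shell
# The Events section lists ErrImagePull / ImagePullBackOff with the image URL
kubectl describe pod kubernetes-dashboard-7bf64fd654-ckr7v \
    -n kubernetes-dashboard | grep -A 10 Events
```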
[docker@localhost ~]$ kubectl logs --namespace=kubernetes-dashboard kubernetes-dashboard-7bf64fd654-ckr7v
Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-7bf64fd654-ckr7v" is waiting to start: trying and failing to pull image

The fix is to restart minikube with an image mirror that is reachable from the host:
minikube start --image-mirror-country=cn --insecure-registry="registry.netkiller.cn" --cache-images=true
Neo-iMac:~ neo$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-qpckk     0/1     Completed   0          18h
ingress-nginx-admission-patch--1-5x94l      0/1     Completed   0          18h
ingress-nginx-controller-78d858bdc7-nrszs   1/1     Running     1          18h

Neo-iMac:~ neo$ kubectl create deployment web --image=nginx:latest
deployment.apps/web created

Neo-iMac:~ neo$ kubectl expose deployment web --type=NodePort --port=80
service/web exposed

Neo-iMac:~ neo$ kubectl get service web
NAME   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
web    NodePort   10.109.55.204   <none>        8080:30857/TCP   19s

Neo-iMac:~ neo$ minikube service web --url
🏃  Starting tunnel for service web.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL                    |
|-----------|------|-------------|------------------------|
| default   | web  |             | http://127.0.0.1:62956 |
|-----------|------|-------------|------------------------|
http://127.0.0.1:62956
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: www.netkiller.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
http://www.netkiller.cn cannot be reached. The solution is to run minikube tunnel:
Neo-iMac:~ neo$ minikube tunnel
❗  The service/ingress example-ingress requires privileged ports to be exposed: [80 443]
🔑  sudo permission will be asked for it.
🏃  Starting tunnel for service example-ingress.
Password:
If you watch the startup output closely, minikube already gave the hint: After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
Neo-iMac:nginx neo$ minikube start --image-mirror-country=cn --insecure-registry="registry.netkiller.cn" --cache-images=true
😄  minikube v1.24.0 on Darwin 12.0.1
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
🔎  Verifying Kubernetes components...
    💡  After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.3.1
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v5
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.0.4
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.7
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
    ▪ Using image registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
🔎  Verifying ingress addon...
🌟  Enabled addons: dashboard, storage-provisioner, default-storageclass, ingress
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
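With minikube tunnel running, the ingress can be exercised without touching DNS by sending the Host header explicitly (a sketch using the host and backend service from ingress.yaml above):

```shell
# Hits the ingress controller at 127.0.0.1 and routes by Host header;
# should return the nginx welcome page from the "web" service
curl -H "Host: www.netkiller.cn" http://127.0.0.1/
```

Adding `127.0.0.1 www.netkiller.cn` to /etc/hosts achieves the same thing for browsers.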