AutoK3s, k3d and native k3s are three different packagings. AutoK3s is the easiest to install, k3d comes second, and if you enjoy tinkering you can install native k3s.
https://github.com/cnrancher/autok3s
Load the iptables kernel modules, otherwise the Traefik service load balancer (svclb) and Service will not come up.
modprobe ip_tables
cat > /etc/modules-load.d/k3s.conf <<-EOF
ip_tables
ip_conntrack
br_netfilter
EOF
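A quick check, not part of the original notes, to confirm the modules are actually loaded (for example after a reboot):

lsmod | egrep 'ip_tables|br_netfilter'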
Set the hostname
hostnamectl set-hostname master
Install AutoK3s
docker run -itd --name=autok3s --restart=unless-stopped --net=host -v /var/run/docker.sock:/var/run/docker.sock cnrancher/autok3s:v0.5.2
Install the AutoK3s command line
curl -sS https://rancher-mirror.oss-cn-beijing.aliyuncs.com/autok3s/install.sh | INSTALL_AUTOK3S_MIRROR=cn sh
First run
[root@master ~]# autok3s
? This is the very first time using autok3s, would you like to share metrics with us? You can always your mind with telemetry command Yes

[autok3s ASCII-art logo]

Usage:
  autok3s [flags]
  autok3s [command]

Available Commands:
  completion  Generate completion script
  create      Create a K3s cluster
  delete      Delete a K3s cluster
  describe    Show details of a specific resource
  explorer    Enable kube-explorer for K3s cluster
  help        Help about any command
  join        Join one or more K3s node(s) to an existing cluster
  kubectl     Kubectl controls the Kubernetes cluster manager
  list        Display all K3s clusters
  serve       Run as daemon and serve HTTP/HTTPS request
  ssh         Connect to a K3s node through SSH
  telemetry   Telemetry status for autok3s
  upgrade     Upgrade a K3s cluster to specified version
  version     Display autok3s version

Flags:
  -d, --debug                          Enable log debug level
  -h, --help                           help for autok3s
      --log-flush-frequency duration   Maximum number of seconds between log flushes (default 5s)

Global Environments:
  AUTOK3S_CONFIG   Path to the cfg file to use for CLI requests (default ~/.autok3s)
  AUTOK3S_RETRY    The number of retries waiting for the desired state (default 20)

Use "autok3s [command] --help" for more information about a command.
If you want to uninstall it later, the installer has already created an uninstall script:

Creating uninstall script /usr/local/bin/autok3s-uninstall.sh

To check the Pods in all namespaces:

kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods --all-namespaces
Create a k3d cluster
autok3s create --provider k3d --master 1 --name test --worker 1 --api-port 0.0.0.0:6443 --image rancher/k3s:v1.21.7-k3s1
Specify a private image registry
autok3s create --provider k3d --master 1 --name test --worker 1 --api-port 0.0.0.0:6443 --image rancher/k3s:v1.21.7-k3s1 --registry https://registry.netkiller.cn
https://rancher.com/docs/k3s/latest/en/installation/private-registry/
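The document linked above configures private registries through /etc/rancher/k3s/registries.yaml on each k3s node. A minimal sketch for the registry used above; the credentials are placeholders and only needed if the registry requires authentication:

mirrors:
  "registry.netkiller.cn":
    endpoint:
      - "https://registry.netkiller.cn"
configs:
  "registry.netkiller.cn":
    auth:
      username: myuser       # placeholder
      password: mypassword   # placeholder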
Expose the ingress ports 80/443 on the host
autok3s create --provider k3d --master 1 --name test --token 0ab46344f7f62488f771f1332feeabf6 --worker 1 --k3s-install-script https://get.k3s.io --api-port 172.18.200.5:6443 --image rancher/k3s:v1.21.7-k3s1 --ports '80:80@loadbalancer' --ports '443:443@loadbalancer'
Verify that the cluster is working properly
kubectl create deployment nginx --image=nginx:alpine
kubectl create service clusterip nginx --tcp=80:80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
By default the ingress address sits on the Docker br- bridge network
[root@master ~]# ip addr | grep br-
4: br-2ad0dd2291af: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-2ad0dd2291af
# Run kubectl commands inside here
# e.g. kubectl get all
> kubectl get ingress
NAME    CLASS    HOSTS   ADDRESS                 PORTS   AGE
nginx   <none>   *       172.19.0.2,172.19.0.3   80      4m18s
Since ports 80/443 have already been exposed to the host, the Kubernetes cluster can be accessed directly through the host IP.
[root@master ~]# curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
The server's OS is installed on a 256 GB SSD, and the default local storage path is /var/lib/rancher/k3s/storage. We need to expand the local storage capacity, and there are two options:
One is to mount the 1 TB disk at /var/lib/rancher/k3s/storage. The other: since the 1 TB disk is already in use and mounted at /opt, we use --volumes '/opt/kubernetes:/var/lib/rancher/k3s/storage' to map /var/lib/rancher/k3s/storage onto the /opt/kubernetes directory.
autok3s create --provider k3d --master 1 --name dev --token 7fc4b9a088a3c02ed9f3285359f1d322 --worker 1 --k3s-install-script https://get.k3s.io --api-port 0.0.0.0:26080 --image rancher/k3s:v1.21.7-k3s1 --volumes '/opt/kubernetes:/var/lib/rancher/k3s/storage'
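To confirm that provisioned volumes really land under /opt/kubernetes, a small test claim against the default local-path StorageClass can be created. This is only a sketch (the claim name test-pvc is arbitrary), and local-path provisions the volume only once a Pod actually consumes the claim:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF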
Configure the node path mapping by modifying local-path-config
config.json: |-
  {
    "nodePathMap":[
      {
        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
        "paths":["/opt/local-path-provisioner"]
      },
      {
        "node":"yasker-lp-dev1",
        "paths":["/opt/local-path-provisioner", "/data1"]
      },
      {
        "node":"yasker-lp-dev3",
        "paths":[]
      }
    ]
  }
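Assuming the stock k3s local-path-provisioner is in use, the snippet above lives in the local-path-config ConfigMap in kube-system; it can be edited in place and the provisioner restarted to pick up the change (a sketch):

kubectl -n kube-system edit configmap local-path-config
kubectl -n kube-system rollout restart deployment local-path-provisioner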
hostnamectl set-hostname node1
Check the master token
[docker@master ~]$ docker ps | egrep "k3d.*server" | grep -v lb
12b9c210b858   rancher/k3s:v1.21.7-k3s1   "/bin/k3d-entrypoint…"   2 days ago   Up 2 days   k3d-test-server-0
[docker@master ~]$ docker exec -it k3d-test-server-0 cat /var/lib/rancher/k3s/server/node-token
K1083de74aba3f4fe80d744ab2a506d037165f4c475d0ca3636d48a371aac6ef0ac::server:0ab46344f7f62488f771f1332feeabf6
Install the agent on the node server
SERVER=172.18.200.5
TOKEN=K1083de74aba3f4fe80d744ab2a506d037165f4c475d0ca3636d48a371aac6ef0ac::server:0ab46344f7f62488f771f1332feeabf6
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://${SERVER}:6443 K3S_TOKEN=${TOKEN} sh -
systemctl enable k3s-agent
Join the cluster
K3S_TOKEN="K104fddbe58cad213694b0346db17ae060fc0974e7cfdbb9063aa1309363de16996::server:0ab46344f7f62488f771f1332feeabf6"
K3S_URL="https://172.18.200.5:6443"
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -s - --docker
Go back to the master and check the nodes
[root@master ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    control-plane,master   28m    v1.24.4+k3s1
node1                   Ready    <none>                 117s   v1.24.4+k3s1
If K3s was already installed on the node, join the master manually
k3s agent --server https://10.12.1.40:6443 --token "K1083de74aba3f4fe80d744ab2a506d037165f4c475d0ca3636d48a371aac6ef0ac::server:0ab46344f7f62488f771f1332feeabf6"
You can also edit the environment variable configuration file instead
[root@node1 ~]# cat /etc/systemd/system/k3s-agent.service.env
K3S_TOKEN="K1083de74aba3f4fe80d744ab2a506d037165f4c475d0ca3636d48a371aac6ef0ac::server:0ab46344f7f62488f771f1332feeabf6"
K3S_URL="https://172.18.200.5:6443"
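After editing this file, reload systemd and restart the agent so the new token and URL take effect (standard systemd steps, not shown in the original output):

systemctl daemon-reload
systemctl restart k3s-agent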
> kubectl describe nodes agent-1 Name: agent-1 Roles: <none> Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=k3s beta.kubernetes.io/os=linux egress.k3s.io/cluster=true kubernetes.io/arch=amd64 kubernetes.io/hostname=agent-1 kubernetes.io/os=linux node.kubernetes.io/instance-type=k3s Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"0e:14:1e:7c:fc:e9"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 172.18.200.51 k3s.io/hostname: agent-1 k3s.io/internal-ip: 172.18.200.51 k3s.io/node-args: ["agent"] k3s.io/node-config-hash: HJIVMRMG74UTQMXBAZD4NLDPY3FZHN7PYGB7RA7CUGXEDUTUTBTQ==== k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2","K3S_TOKEN":"********","K3S_U... node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 06 Sep 2022 17:33:21 +0000 Taints: <none> Unschedulable: false Lease: HolderIdentity: agent-1 AcquireTime: <unset> RenewTime: Wed, 07 Sep 2022 18:40:08 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 07 Sep 2022 18:35:57 +0000 Wed, 07 Sep 2022 03:48:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 07 Sep 2022 18:35:57 +0000 Wed, 07 Sep 2022 03:48:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 07 Sep 2022 18:35:57 +0000 Wed, 07 Sep 2022 03:48:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 07 Sep 2022 18:35:57 +0000 Wed, 07 Sep 2022 03:48:43 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 172.18.200.51 Hostname: agent-1 Capacity: cpu: 16 ephemeral-storage: 181197372Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 65237592Ki pods: 110 Allocatable: cpu: 16 ephemeral-storage: 176268803344 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 65237592Ki pods: 110 System Info: Machine ID: bfc31b708a794f8bad984bd60770ed0f System UUID: 1514a1f0-c451-11eb-8522-ac3ccdeb3900 Boot ID: 5c0c8375-220a-4abd-8a6d-7debafc6a331 Kernel Version: 5.14.0-70.22.1.el9_0.x86_64 OS Image: AlmaLinux 9.0 (Emerald Puma) Operating System: linux Architecture: amd64 Container Runtime Version: containerd://1.6.6-k3s1 Kubelet Version: v1.24.4+k3s1 Kube-Proxy Version: v1.24.4+k3s1 PodCIDR: 10.42.2.0/24 PodCIDRs: 10.42.2.0/24 ProviderID: k3s://agent-1 Non-terminated Pods: (11 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system svclb-traefik-hhvvv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h default nacos-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14h default nacos-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14h default elasticsearch-data-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m default nginx-565785f75c-gmblp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 35m default nginx-565785f75c-lhhcl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30m default nginx-565785f75c-rpc4k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m default nginx-565785f75c-fr2s7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m default nginx-565785f75c-5rjj9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m default nginx-565785f75c-2bc9p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28m default quickstart-es-default-0 100m (0%) 100m (0%) 2Gi (3%) 2Gi (3%) 10h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 100m (0%) 100m (0%) memory 2Gi (3%) 2Gi (3%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: <none>
Set the hostname
hostnamectl set-hostname master
Install using Docker
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -s - --docker
Set the hostname
hostnamectl set-hostname agent-1
Go to the master and check the token
[root@master ~]# cat /var/lib/rancher/k3s/server/node-token
K10b614928142836a5262a802c0d3056f0047f057c895373651b723697a261b128b::server:1d436565a84f8e4bdd434b17752a2071
Run the following command on the agent node to join the master's cluster (Docker mode)
K3S_TOKEN="K10b614928142836a5262a802c0d3056f0047f057c895373651b723697a261b128b::server:1d436565a84f8e4bdd434b17752a2071"
K3S_URL="https://172.18.200.5:6443"
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -s - --docker
Go to the master and check the nodes
[root@master ~]# kubectl get node -o wide
NAME      STATUS     ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                CONTAINER-RUNTIME
agent-1   Ready      <none>                 2d    v1.24.4+k3s1   172.18.200.51   <none>        AlmaLinux 9.0 (Emerald Puma)   5.14.0-70.22.1.el9_0.x86_64   docker://20.10.17
master    Ready      control-plane,master   2d    v1.24.4+k3s1   172.18.200.5    <none>        AlmaLinux 9.0 (Emerald Puma)   5.14.0-70.22.1.el9_0.x86_64   docker://20.10.17
agent-2   NotReady   <none>                 6s    v1.24.4+k3s1   172.18.200.52   <none>        AlmaLinux 9.0 (Emerald Puma)   5.14.0-70.13.1.el9_0.x86_64   docker://20.10.18
https://github.com/cnrancher/kube-explorer
docker rm -f kube-explorer
docker run -itd --name=kube-explorer --restart=unless-stopped --net=host -v /etc/rancher/k3s/k3s.yaml:/etc/rancher/k3s/k3s.yaml:ro -e KUBECONFIG=/etc/rancher/k3s/k3s.yaml cnrancher/kube-explorer:latest
https://127.0.0.1:9443/dashboard/
There are several ways to install K3s: the official k3s-install.sh script, plus the third-party tools k3d and k3sup.
Set the hostname
hostnamectl set-hostname master
Running inside a virtual machine
curl -sfL https://get.k3s.io | sh -
Mirror for mainland China
curl -sfL http://rancher-mirror.cnrancher.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
systemctl enable k3s
Check the node startup status
[root@master ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   28m   v1.24.4+k3s1
Check the Pod status on the node
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods --all-namespaces
Set the hostname
hostnamectl set-hostname node1
Check the master token
[root@master ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE   VERSION
localhost.localdomain   Ready    control-plane,master   28m   v1.24.4+k3s1
[root@master ~]# cat /var/lib/rancher/k3s/server/node-token
K1000ba39a142b3712d2ffb1459a63f6a7f58b082aeb53406dab15d8cee0f3c2ff0::server:5713047feb086388c19663f69cccc966
Install the agent on the node server
SERVER=172.18.200.5
TOKEN=K1000ba39a142b3712d2ffb1459a63f6a7f58b082aeb53406dab15d8cee0f3c2ff0::server:5713047feb086388c19663f69cccc966
curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://${SERVER}:6443 K3S_TOKEN=${TOKEN} sh -
systemctl enable k3s-agent
Go back to the master and check the nodes
[root@master ~]# kubectl get node
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    control-plane,master   28m    v1.24.4+k3s1
node1                   Ready    <none>                 117s   v1.24.4+k3s1
[root@master ~]# kubectl get nodes -o wide
NAME      STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                CONTAINER-RUNTIME
master    Ready    control-plane,master   22h   v1.24.4+k3s1   172.18.200.5    <none>        AlmaLinux 9.0 (Emerald Puma)   5.14.0-70.22.1.el9_0.x86_64   docker://20.10.17
agent-1   Ready    <none>                 22h   v1.24.4+k3s1   172.18.200.51   <none>        AlmaLinux 9.0 (Emerald Puma)   5.14.0-70.22.1.el9_0.x86_64   docker://20.10.17
k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
Install k3d on macOS
Neo-iMac:~ neo$ brew install k3d
Install k3d on Linux
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

[root@netkiller ~]# wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.
Create and start a cluster
Neo-iMac:~ neo$ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0006] Pulling image 'docker.io/rancher/k3d-tools:5.2.2'
INFO[0006] Pulling image 'docker.io/rancher/k3s:v1.21.7-k3s1'
INFO[0016] Starting Node 'k3d-mycluster-tools'
INFO[0036] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0041] Pulling image 'docker.io/rancher/k3d-proxy:5.2.2'
INFO[0057] Using the k3d-tools node to gather environment information
INFO[0058] Starting cluster 'mycluster'
INFO[0058] Starting servers...
INFO[0059] Starting Node 'k3d-mycluster-server-0'
INFO[0078] All agents already running.
INFO[0078] Starting helpers...
INFO[0079] Starting Node 'k3d-mycluster-serverlb'
INFO[0087] Injecting '192.168.65.2 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0087] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0088] Cluster 'mycluster' created successfully!
INFO[0088] You can now use it like this:
kubectl cluster-info
Map port 80
k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 --agents 2 --port '80:80@loadbalancer'
Neo-iMac:~ neo$ k3d cluster create mycluster --api-port 127.0.0.1:6445 --servers 3 --agents 2 --port '80:80@loadbalancer'
INFO[0000] portmapping '80:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-mycluster-server-0'
INFO[0000] Starting new tools node...
INFO[0001] Starting Node 'k3d-mycluster-tools'
INFO[0002] Creating node 'k3d-mycluster-server-1'
INFO[0003] Creating node 'k3d-mycluster-server-2'
INFO[0004] Creating node 'k3d-mycluster-agent-0'
INFO[0005] Creating node 'k3d-mycluster-agent-1'
INFO[0005] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0005] Using the k3d-tools node to gather environment information
INFO[0007] Starting cluster 'mycluster'
INFO[0007] Starting the initializing server...
INFO[0007] Starting Node 'k3d-mycluster-server-0'
INFO[0012] Starting servers...
INFO[0013] Starting Node 'k3d-mycluster-server-1'
INFO[0045] Starting Node 'k3d-mycluster-server-2'
INFO[0069] Starting agents...
INFO[0070] Starting Node 'k3d-mycluster-agent-1'
INFO[0070] Starting Node 'k3d-mycluster-agent-0'
INFO[0081] Starting helpers...
INFO[0081] Starting Node 'k3d-mycluster-serverlb'
INFO[0089] Injecting '192.168.65.2 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0089] Injecting records for host.k3d.internal and for 6 network members into CoreDNS configmap...
INFO[0090] Cluster 'mycluster' created successfully!
INFO[0091] You can now use it like this:
kubectl cluster-info
Besides the command line, a cluster can also be created from a YAML configuration file
apiVersion: k3d.io/v1alpha2
kind: Simple
name: mycluster
servers: 1
agents: 2
kubeAPI:
  hostPort: "6443" # same as `--api-port '6443'`
ports:
  - port: 8080:80 # same as `--port '8080:80@loadbalancer'`
    nodeFilters:
      - loadbalancer
  - port: 8443:443 # same as `--port '8443:443@loadbalancer'`
    nodeFilters:
      - loadbalancer
$ k3d cluster create --config /path/to/mycluster.yaml
Neo-iMac:~ neo$ k3d cluster list
NAME        SERVERS   AGENTS   LOADBALANCER
mycluster   3/3       2/2      true
View cluster information
Neo-iMac:~ neo$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:60268
CoreDNS is running at https://0.0.0.0:60268/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:60268/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Neo-iMac:~ neo$
View the nodes
Neo-iMac:~ neo$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE     VERSION
k3d-mycluster-server-0   Ready    control-plane,master   2m10s   v1.21.7+k3s1
Delete the cluster
Neo-iMac:~ neo$ k3d cluster delete mycluster
INFO[0000] Deleting cluster 'mycluster'
INFO[0002] Deleting cluster network 'k3d-mycluster'
INFO[0003] Deleting image volume 'k3d-mycluster-images'
INFO[0003] Removing cluster details from default kubeconfig...
INFO[0003] Removing standalone kubeconfig file (if there is one)...
INFO[0003] Successfully deleted cluster mycluster!
kubectl create deployment nginx --image=nginx:alpine
kubectl create service clusterip nginx --tcp=80:80
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
Demonstration
Neo-iMac:~ neo$ kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created
Neo-iMac:~ neo$ kubectl create service clusterip nginx --tcp=80:80
service/nginx created
Neo-iMac:~ neo$ cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: nginx
>   annotations:
>     ingress.kubernetes.io/ssl-redirect: "false"
> spec:
>   rules:
>   - http:
>       paths:
>       - path: /
>         pathType: Prefix
>         backend:
>           service:
>             name: nginx
>             port:
>               number: 80
> EOF
ingress.networking.k8s.io/nginx created
Access http://localhost with a browser or with the curl command
Neo-iMac:~ neo$ curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Netkiller-iMac:~ neo$ k3d kubeconfig write mycluster /Users/neo/.k3d/kubeconfig-mycluster.yaml Netkiller-iMac:~ neo$ cat /Users/neo/.k3d/kubeconfig-mycluster.yaml apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTkRFME16WTVNelV3SGhjTk1qSXdNVEEyTURJME1qRTFXaGNOTXpJd01UQTBNREkwTWpFMQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTkRFME16WTVNelV3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUQVZKN01XdVY3dzA5dGZybUswbDAybkxOcjFiaGpXM1hIZEgrQUtCdWEKREFBZ3UrNHF4dVdyNHBkbGpraVNrL3ZZMEJjVWJMZ1RkemJnSEY4UnA1OVpvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUZ2UXVRTVBjeStrbTFla2pqaUtUCmRoZ1c4TjB3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnVGMvZDBHWjN5aWRuZ2dXamZGWnowc0R6V3diVXkzV0IKVmZYamZ1Tis3UjRDSUJ4ZmttSUs1Z1NTL0RNUjltc0VxYUsxZVNGTEl2bHZuNXhaeE53RDJoUlgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://127.0.0.1:6445 name: k3d-mycluster contexts: - context: cluster: k3d-mycluster user: admin@k3d-mycluster name: k3d-mycluster current-context: k3d-mycluster kind: Config preferences: {} users: - name: admin@k3d-mycluster user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJVnR3SGsxWDlUam93Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalF4TkRNMk9UTTFNQjRYRFRJeU1ERXdOakF5TkRJeE5Wb1hEVEl6TURFdwpOakF5TkRJMU0xb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJCcFNScmNGMW9VQUFCRW4Kb2hZM1haWmpoMUhkNks0eEtXVUpsc3A2blR0UzNFbDJJQjZrUmZIcGNwaDdjQ3NaUnFvV2RsT1MxdlFtNGM3VgplNVZ6aEY2alNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCVFhrTVpDYnJXVTNKQmxIb0t2Z0F4MDF6TUJUVEFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlFQTFIQ0M1OUlaS3FieVQ2MExSS2pvcWNWMFJiK3BWZ1FLdU1aR3YxZXFvOGdDSUZFMjB6OTg1ZStnR3dGYQppK3FkenFYQTVKU2FrV05naVE0TUZLcExpVDI3Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTkRFME16WTVNelV3SGhjTk1qSXdNVEEyTURJME1qRTFXaGNOTXpJd01UQTBNREkwTWpFMQpXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTkRFME16WTVNelV3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTd0c2dk9tay8vL01jNlUwU3BLZm9ERFM1NDNkQnZSdzVZUnNlZmpmWm0KT01BQUNRbkViYS9QY0FGc2ZIUlBWWU9HczRnWTQ3TVlDbzF3L2swV3had3lvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTE1REdRbTYxbE55UVpSNkNyNEFNCmROY3pBVTB3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUtQcjE3T0lDNk94a1hBYnpxUGl2R0QwZkptVjFmTnIKVFNzc2IvMktWMjh4QWlFQTFEUVlHU2F0V3R6Y2tFdk1JNnYzeTcyQ2hwdDZWMHZUdWNEWWJsOWxRVFU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUxjTWt1aW9mTHo1Z1lUZGVrWmlsOEhTZVMzSXVONHVHUGU2VXFxRWJkN0dvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFR2xKR3R3WFdoUUFBRVNlaUZqZGRsbU9IVWQzb3JqRXBaUW1XeW5xZE8xTGNTWFlnSHFSRgo4ZWx5bUh0d0t4bEdxaFoyVTVMVzlDYmh6dFY3bFhPRVhnPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
Import a local image
Netkiller-iMac:~ neo$ docker image ls | grep netkiller
netkiller   openjdk8   52e22fa28d43   3 weeks ago   552MB
Import the local netkiller:openjdk8 image into mycluster
Netkiller-iMac:~ neo$ k3d image import netkiller:openjdk8 -c mycluster
INFO[0000] Importing image(s) into cluster 'mycluster'
INFO[0000] Loading 1 image(s) from runtime into nodes...
INFO[0051] Importing images '[netkiller:openjdk8]' into node 'k3d-mycluster-server-0'...
INFO[0050] Importing images '[netkiller:openjdk8]' into node 'k3d-mycluster-server-2'...
INFO[0050] Importing images '[netkiller:openjdk8]' into node 'k3d-mycluster-agent-1'...
INFO[0050] Importing images '[netkiller:openjdk8]' into node 'k3d-mycluster-server-1'...
INFO[0050] Importing images '[netkiller:openjdk8]' into node 'k3d-mycluster-agent-0'...
INFO[0355] Successfully imported image(s)
INFO[0355] Successfully imported 1 image(s) into 1 cluster(s)
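To run the imported image, reference it from a workload and keep imagePullPolicy: IfNotPresent so the nodes use the copy that was just imported instead of trying to pull it from a registry. A minimal sketch; the Deployment name and command are illustrative only:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openjdk-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openjdk-demo
  template:
    metadata:
      labels:
        app: openjdk-demo
    spec:
      containers:
      - name: openjdk
        image: netkiller:openjdk8
        imagePullPolicy: IfNotPresent
        command: ["sleep", "3600"]
EOF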
[root@netkiller k3d]# k3d cluster start mycluster
k3d cluster create netkiller --api-port 6443 --servers 1 --agents 1 --port '80:80@loadbalancer' --port '443:443@loadbalancer'
[root@netkiller ~]# cat .kube/config | grep server
    server: https://0.0.0.0:6445
[root@netkiller ~]# ss -lnt | grep 6445
LISTEN 0      1024      0.0.0.0:6445      0.0.0.0:*
[root@netkiller ~]# firewall-cmd --add-service=http --permanent
success
[root@netkiller ~]# firewall-cmd --add-service=https --permanent
success
[root@netkiller ~]# firewall-cmd --zone=public --add-service=kube-api --permanent
success
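These rules were added with --permanent, so reload firewalld for them to take effect in the running configuration (a reminder, assuming firewalld stays enabled):

firewall-cmd --reload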
k3d cluster create netkiller --api-port 172.16.0.1:6443 --servers 1 --agents 1 --port '80:80@loadbalancer' --port '443:443@loadbalancer' --k3s-arg "--no-deploy=traefik@server:*"

export http_proxy="socks://127.0.0.1:1080"
export https_proxy="socks://127.0.0.1:1080"
export KUBECONFIG="$(k3d kubeconfig write netkiller)"
[root@netkiller ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.18.200.10:6445
  name: k3d-netkiller
contexts:
- context:
    cluster: k3d-netkiller
    user: admin@k3d-netkiller
  name: k3d-netkiller
current-context: k3d-netkiller
kind: Config
preferences: {}
users:
- name: admin@k3d-netkiller
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
neo@Netkiller-iMac ~> vim ~/.k3d/registries.yaml
mirrors:
  "registry.netkiller.cn":
    endpoint:
      - http://registry.netkiller.cn
neo@Netkiller-iMac ~> k3d cluster create mycluster --api-port 6443 --servers 1 --agents 1 --port '80:80@loadbalancer' --port '443:443@loadbalancer' --registry-config ~/.k3d/registries.yaml
neo@Netkiller-iMac ~> kubectl edit -n kube-system deployment traefik deployment.apps/traefik edited
spec:
  containers:
  - args:
    - --global.checknewversion
    - --global.sendanonymoususage
    - --entrypoints.traefik.address=:9000/tcp
    - --entrypoints.web.address=:8000/tcp
    - --entrypoints.websecure.address=:8443/tcp
    - --entrypoints.redis.address=:6379/tcp
    - --entrypoints.mysql.address=:3306/tcp
    - --entrypoints.mongo.address=:27017/tcp
    - --api.dashboard=true
    - --ping=true
    - --providers.kubernetescrd
    - --providers.kubernetesingress
    - --providers.kubernetesingress.ingressendpoint.publishedservice=kube-system/traefik
    - --entrypoints.websecure.http.tls=true
    image: rancher/library-traefik:2.4.8
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /ping
        port: 9000
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 2
    name: traefik
    ports:
    - containerPort: 9000
      name: traefik
      protocol: TCP
    - containerPort: 8000
      name: web
      protocol: TCP
    - containerPort: 8443
      name: websecure
      protocol: TCP
    - containerPort: 6379
      name: redis
      protocol: TCP
    - containerPort: 3306
      name: mysql
      protocol: TCP
    - containerPort: 27017
      name: mongo
      protocol: TCP
Add the following entry under args:
- --entrypoints.redis.address=:6379/tcp
Add the following entry under ports:
- containerPort: 6379
  name: redis
  protocol: TCP
[root@netkiller k3d]# k3d cluster edit mycluster --port-add '6379:6379@loadbalancer'
[root@netkiller k3d]# cat redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
spec:
  entryPoints:
  - redis
  routes:
  - match: HostSNI(`*`)
    services:
    - name: redis
      port: 6379
[root@netkiller k3d]# kubectl apply -f redis.yaml
deployment.apps/redis created
service/redis created
ingressroutetcp.traefik.containo.us/redis created
[root@netkiller k3d]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
redis-5c9986b94b-gsctv   1/1     Running   0          6m49s
[root@netkiller k3d]# kubectl exec redis-5c9986b94b-gsctv -it -- redis-cli
127.0.0.1:6379> set nickname netkiller
OK
127.0.0.1:6379> get nickname
"netkiller"
127.0.0.1:6379>
127.0.0.1:6379> exit
[root@netkiller k3d]# dnf install redis
[root@netkiller k3d]# redis-cli -h 127.0.0.1
127.0.0.1:6379> get nickname
We want to use the nginx ingress controller instead, so Traefik needs to be uninstalled.
kubectl -n kube-system delete helmcharts.helm.cattle.io traefik
helm uninstall traefik-crd --namespace kube-system
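Alternatively, when the cluster is created with k3d, Traefik can be skipped at creation time instead of being removed afterwards; a sketch passing the k3s --disable flag through --k3s-arg:

k3d cluster create mycluster --api-port 6443 --servers 1 --agents 1 --port '80:80@loadbalancer' --port '443:443@loadbalancer' --k3s-arg '--disable=traefik@server:*'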
ingress-nginx: https://kubernetes.github.io/ingress-nginx/deploy/
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.7.0/deploy/static/provider/cloud/deploy.yaml
Change the image registry addresses, otherwise the images cannot be pulled.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
vim deploy.yaml
:%s:registry.k8s.io/ingress-nginx/:registry.cn-hangzhou.aliyuncs.com/google_containers/:g
:%s:registry.cn-hangzhou.aliyuncs.com/google_containers/controller:registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:g
kubectl apply -f deploy.yaml
svclb-ingress-nginx-controller fails to start
neo@MacBook-Pro-Neo-3 ~ [1]> kubectl logs -n kube-system svclb-ingress-nginx-controller-8b62cc7d-qbqtv
Defaulted container "lb-tcp-80" out of: lb-tcp-80, lb-tcp-443
+ trap exit TERM INT
+ echo 10.43.36.160
+ grep -Eq :
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 10.43.36.160/32 -p TCP --dport 80 -j DNAT --to 10.43.36.160:80
iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
Solution
root@netkiller ~# modprobe ip_tables
root@netkiller ~# lsmod | grep iptable
iptable_nat            16384  2
ip_tables              28672  1 iptable_nat
nf_nat                 53248  4 xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE
root@netkiller ~# kubectl get pods --all-namespaces
NAMESPACE       NAME                                            READY   STATUS      RESTARTS         AGE
ingress-nginx   ingress-nginx-admission-create-nqv2f            0/1     Completed   0                6m9s
ingress-nginx   ingress-nginx-admission-patch-m9hcf             0/1     Completed   1                6m9s
kube-system     metrics-server-7cd5fcb6b7-8wrqx                 1/1     Running     3 (6m30s ago)    82m
ingress-nginx   ingress-nginx-controller-75d55647d-nstch        1/1     Running     0                6m9s
kube-system     coredns-d76bd69b-rgvwj                          1/1     Running     3 (6m21s ago)    82m
kube-system     local-path-provisioner-6c79684f77-psmgs         1/1     Running     3 (6m21s ago)    82m
kube-system     svclb-ingress-nginx-controller-8b62cc7d-5lb8d   2/2     Running     12 (3m17s ago)   6m9s
kube-system     svclb-ingress-nginx-controller-8b62cc7d-qbqtv   2/2     Running     12 (3m20s ago)   6m9s
Deploy an nginx web server to test the ingress
Neo-iMac:~ neo$ kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created
Neo-iMac:~ neo$ kubectl create service clusterip nginx --tcp=80:80
service/nginx created
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
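Once the Ingress is applied, verify that the nginx ingress controller picks it up and serves traffic through the ports published on the host (a quick check, assuming the 80/443 port mappings from earlier):

kubectl get ingress nginx
curl http://localhost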
[root@master ~]# ll /var/lib/rancher/k3s/server/tls total 116 -rw-r--r-- 1 root root 1173 2022-09-08 13:48 client-admin.crt -rw------- 1 root root 227 2022-09-08 13:48 client-admin.key -rw-r--r-- 1 root root 1178 2022-09-08 13:48 client-auth-proxy.crt -rw------- 1 root root 227 2022-09-08 13:48 client-auth-proxy.key -rw-r--r-- 1 root root 570 2022-09-08 13:48 client-ca.crt -rw------- 1 root root 227 2022-09-08 13:48 client-ca.key -rw-r--r-- 1 root root 1165 2022-09-08 13:48 client-controller.crt -rw------- 1 root root 227 2022-09-08 13:48 client-controller.key -rw-r--r-- 1 root root 1161 2022-09-08 13:48 client-k3s-cloud-controller.crt -rw------- 1 root root 227 2022-09-08 13:48 client-k3s-cloud-controller.key -rw-r--r-- 1 root root 1153 2022-09-08 13:48 client-k3s-controller.crt -rw------- 1 root root 227 2022-09-08 13:48 client-k3s-controller.key -rw-r--r-- 1 root root 1181 2022-09-08 13:48 client-kube-apiserver.crt -rw------- 1 root root 227 2022-09-08 13:48 client-kube-apiserver.key -rw-r--r-- 1 root root 1149 2022-09-08 13:48 client-kube-proxy.crt -rw------- 1 root root 227 2022-09-08 13:48 client-kube-proxy.key -rw------- 1 root root 227 2022-09-08 13:48 client-kubelet.key -rw-r--r-- 1 root root 1153 2022-09-08 13:48 client-scheduler.crt -rw------- 1 root root 227 2022-09-08 13:48 client-scheduler.key -rw-r--r-- 1 root root 3789 2022-09-08 13:48 dynamic-cert.json drwxr-xr-x 2 root root 4096 2022-09-08 13:48 etcd -rw-r--r-- 1 root root 591 2022-09-08 13:48 request-header-ca.crt -rw------- 1 root root 227 2022-09-08 13:48 request-header-ca.key -rw-r--r-- 1 root root 570 2022-09-08 13:48 server-ca.crt -rw------- 1 root root 227 2022-09-08 13:48 server-ca.key -rw------- 1 root root 1675 2022-09-08 13:48 service.key -rw-r--r-- 1 root root 1368 2022-09-08 13:48 serving-kube-apiserver.crt -rw------- 1 root root 227 2022-09-08 13:48 serving-kube-apiserver.key -rw------- 1 root root 227 2022-09-08 13:48 serving-kubelet.key drwx------ 2 root root 84 2022-09-08 13:48 temporary-certs
[root@master ~]#kubectl create serviceaccount secrets serviceaccount/gitlab created [root@master ~]# kubectl create token secrets eyJhbGciOiJSUzI1NiIsImtpZCI6IktCOHRvYlZOLXFPRmEyb1JWdlQxSzBvN0tvZF9HNFBGRnlraDR5UU1jakkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNjY2MTcyOTc4LCJpYXQiOjE2NjYxNjkzNzgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImdpdGxhYiIsInVpZCI6IjAzNTdkOWIwLWY2YWEtNGFlMy05MDc0LWM2YzM5Y2Q1YTdiNiJ9fSwibmJmIjoxNjY2MTY5Mzc4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpnaXRsYWIifQ.oDWjQmVH7BOqHUp4AgjxNncfJ0Nz9oY_jS9DU5E-geKmX5GnchC96-t0ZsdtgPiWXFbieb0aUH1wZXCrkFAuGeM-XNDEvfhbK4UL9GiDl98KaYMjTSwXipp4bIZeSctL-Zpc0nSKwaWdWNwxmmlC30HwMwjQPdwBgCDM8SEr9aepUuJD9rHdclKWv8NcXlLq4t5c9sV3qEQRKbGOTnSeY3RokoAY-tYD7FT3jzFktbkTk4SHZAKYUeILlc2eaE0cOm9N4yhl8IYZvEcrBGZV_-Nl0XzGu5XpDrVVXlk2k2RdYQHj3Iw5l4sSFfnRVg1Q-1B45y7FJDEbXa-tCXeRKA [root@master ~]# token=eyJhbGciOiJSUzI1NiIsImtpZCI6IktCOHRvYlZOLXFPRmEyb1JWdlQxSzBvN0tvZF9HNFBGRnlraDR5UU1jakkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNjY2MTcyOTc4LCJpYXQiOjE2NjYxNjkzNzgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImdpdGxhYiIsInVpZCI6IjAzNTdkOWIwLWY2YWEtNGFlMy05MDc0LWM2YzM5Y2Q1YTdiNiJ9fSwibmJmIjoxNjY2MTY5Mzc4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpnaXRsYWIifQ.oDWjQmVH7BOqHUp4AgjxNncfJ0Nz9oY_jS9DU5E-geKmX5GnchC96-t0ZsdtgPiWXFbieb0aUH1wZXCrkFAuGeM-XNDEvfhbK4UL9GiDl98KaYMjTSwXipp4bIZeSctL-Zpc0nSKwaWdWNwxmmlC30HwMwjQPdwBgCDM8SEr9aepUuJD9rHdclKWv8NcXlLq4t5c9sV3qEQRKbGOTnSeY3RokoAY-tYD7FT3jzFktbkTk4SHZAKYUeILlc2eaE0cOm9N4yhl8IYZvEcrBGZV_-Nl0XzGu5XpDrVVXlk2k2RdYQHj3Iw5l4sSFfnRVg1Q-1B45y7FJDEbXa-tCXeRKA [root@master ~]# curl -k https://127.0.0.1:6443/api --header "Authorization: bearer $token" { "kind": "APIVersions", "versions": [ "v1" ], "serverAddressByClientCIDRs": [ { "clientCIDR": "0.0.0.0/0", "serverAddress": "172.18.200.5:6443" } ] }
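A freshly created service account has no RBAC permissions beyond discovery, so calls to resource endpoints such as the Pod list will return 403 Forbidden until a role is bound to it. A sketch only: the binding name sa-view is arbitrary, and the service account name follows the gitlab account shown in the token output above:

kubectl create clusterrolebinding sa-view --clusterrole=view --serviceaccount=default:gitlab
curl -k https://127.0.0.1:6443/api/v1/namespaces/default/pods --header "Authorization: bearer $token"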
Cluster creation always stalls at this point, because ghcr.io is blocked in mainland China and cannot be reached.
INFO[0004] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.4'
Find a VPS outside mainland China, install k3d and create a cluster on it, then save the k3d-proxy image to a file.
[docker@netkiller ~]$ docker images
REPOSITORY                 TAG     IMAGE ID       CREATED       SIZE
ghcr.io/k3d-io/k3d-proxy   5.4.4   5a963719cb39   2 weeks ago   42.4MB
ghcr.io/k3d-io/k3d-tools   5.4.4   741f01cb5093   2 weeks ago   18.7MB
[docker@netkiller ~]$ docker save 5a963719cb39 -o k3d-proxy.tar
Copy the file to the server in mainland China and load the image
docker load --input k3d-proxy.tar
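Because the image was saved by image ID, the loaded copy may appear without a repository/tag; re-tag it to the name k3d expects and confirm it is present before re-running k3d cluster create (a sketch, assuming the same image ID as above):

docker tag 5a963719cb39 ghcr.io/k3d-io/k3d-proxy:5.4.4
docker images | grep k3d-proxy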
[root@master ~]# kubectl get svc --namespace=kube-system
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP                  PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10     <none>                       53/UDP,53/TCP,9153/TCP       4d2h
metrics-server   ClusterIP      10.43.88.112   <none>                       443/TCP                      4d2h
traefik          LoadBalancer   10.43.125.52   172.18.200.5,172.18.200.51   80:32623/TCP,443:31516/TCP   4d2h
Ports 80 and 443 are not listening locally
[root@master ~]# ss -tnlp | egrep "80|443"
LISTEN 0 1024 *:6443 *:* users:(("k3s-server",pid=173779,fd=17))
[root@master ~]# lsof -i :80
[root@master ~]# lsof -i :443
Yet a telnet test shows they respond
[root@master ~]# telnet 172.18.200.5 80
Trying 172.18.200.5...
Connected to 172.18.200.5.
Escape character is '^]'.
Ports 80/443 are mapped by iptables NAT
[root@master ~]# iptables -nL -t nat | grep traefik # Warning: iptables-legacy tables present, use iptables-legacy to see them KUBE-MARK-MASQ all -- 0.0.0.0/0 0.0.0.0/0 /* masquerade traffic for kube-system/traefik:websecure external destinations */ KUBE-MARK-MASQ all -- 0.0.0.0/0 0.0.0.0/0 /* masquerade traffic for kube-system/traefik:web external destinations */ KUBE-EXT-CVG3OEGEH7H5P3HQ tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:websecure */ tcp dpt:31516 KUBE-EXT-UQMCRMJZLI3FTLDP tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:web */ tcp dpt:32623 KUBE-MARK-MASQ all -- 10.42.2.3 0.0.0.0/0 /* kube-system/traefik:web */ DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:web */ tcp to:10.42.2.3:8000 KUBE-MARK-MASQ all -- 10.42.2.3 0.0.0.0/0 /* kube-system/traefik:websecure */ DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:websecure */ tcp to:10.42.2.3:8443 KUBE-SVC-CVG3OEGEH7H5P3HQ tcp -- 0.0.0.0/0 10.43.125.52 /* kube-system/traefik:websecure cluster IP */ tcp dpt:443 KUBE-EXT-CVG3OEGEH7H5P3HQ tcp -- 0.0.0.0/0 172.18.200.5 /* kube-system/traefik:websecure loadbalancer IP */ tcp dpt:443 KUBE-EXT-CVG3OEGEH7H5P3HQ tcp -- 0.0.0.0/0 172.18.200.51 /* kube-system/traefik:websecure loadbalancer IP */ tcp dpt:443 KUBE-SVC-UQMCRMJZLI3FTLDP tcp -- 0.0.0.0/0 10.43.125.52 /* kube-system/traefik:web cluster IP */ tcp dpt:80 KUBE-EXT-UQMCRMJZLI3FTLDP tcp -- 0.0.0.0/0 172.18.200.5 /* kube-system/traefik:web loadbalancer IP */ tcp dpt:80 KUBE-EXT-UQMCRMJZLI3FTLDP tcp -- 0.0.0.0/0 172.18.200.51 /* kube-system/traefik:web loadbalancer IP */ tcp dpt:80 KUBE-MARK-MASQ tcp -- !10.42.0.0/16 10.43.125.52 /* kube-system/traefik:websecure cluster IP */ tcp dpt:443 KUBE-SEP-NTYW4CRSJDKN6UYK all -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:websecure -> 10.42.2.3:8443 */ KUBE-MARK-MASQ tcp -- !10.42.0.0/16 10.43.125.52 /* kube-system/traefik:web cluster IP */ tcp dpt:80 KUBE-SEP-M4A3OJBNTWBZ5ISS all -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/traefik:web -> 10.42.2.3:8000 */
The NAT-mapped ports show up in an nmap scan
[root@master ~]# nmap localhost
Starting Nmap 7.91 ( https://nmap.org ) at 2022-09-01 10:04 CST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0000050s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 996 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
80/tcp    filtered http
443/tcp   filtered https
10010/tcp open     rxapi

Nmap done: 1 IP address (1 host up) scanned in 1.26 seconds
[root@master ~]# iptables-save | grep "CNI-DN" | grep "to-destination"
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
-A CNI-DN-485265bef43fea7142e9d -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.42.0.10:80
-A CNI-DN-485265bef43fea7142e9d -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.42.0.10:443
[root@netkiller ~]# systemctl disable firewalld Removed /etc/systemd/system/multi-user.target.wants/firewalld.service. Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service. [root@master ~]# ifconfig br-6ac52d42db64: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.20.0.1 netmask 255.255.0.0 broadcast 172.20.255.255 inet6 fe80::42:94ff:fefd:1fc3 prefixlen 64 scopeid 0x20<link> ether 02:42:94:fd:1f:c3 txqueuelen 0 (Ethernet) RX packets 782783 bytes 200925233 (191.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 625170 bytes 194933933 (185.9 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet 10.42.0.1 netmask 255.255.255.0 broadcast 10.42.0.255 inet6 fe80::6448:6dff:fe75:5e8d prefixlen 64 scopeid 0x20<link> ether 66:48:6d:75:5e:8d txqueuelen 1000 (Ethernet) RX packets 2049669 bytes 371281787 (354.0 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2235678 bytes 334579428 (319.0 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.16.0.1 netmask 255.255.255.0 broadcast 172.16.0.255 inet6 fe80::42:4cff:fe70:883 prefixlen 64 scopeid 0x20<link> ether 02:42:4c:70:08:83 txqueuelen 0 (Ethernet) RX packets 14 bytes 616 (616.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 8 bytes 788 (788.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.18.200.5 netmask 255.255.255.0 broadcast 172.18.200.255 inet6 fe80::2ef0:5dff:fec7:387 prefixlen 64 scopeid 0x20<link> ether 2c:f0:5d:c7:03:87 txqueuelen 1000 (Ethernet) RX packets 782783 bytes 200925233 (191.6 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 625171 bytes 194934547 (185.9 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450 inet 10.42.0.0 netmask 255.255.255.255 broadcast 0.0.0.0 inet6 fe80::c051:5cff:fe09:4e18 prefixlen 64 scopeid 0x20<link> ether c2:51:5c:09:4e:18 txqueuelen 0 (Ethernet) RX packets 180007 bytes 21310049 (20.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 222507 bytes 39026179 (37.2 MiB) TX errors 0 dropped 5 overruns 0 carrier 0 collisions 0 [root@master ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.18.200.254 0.0.0.0 UG 100 0 0 enp3s0 10.42.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0 10.42.1.0 10.42.1.0 255.255.255.0 UG 0 0 0 flannel.1 172.16.0.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0 172.18.200.0 0.0.0.0 255.255.255.0 U 100 0 0 enp3s0 172.20.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-6ac52d42db64 [root@master ~]# cat /proc/sys/net/ipv4/ip_forward 1 [root@master ~]# sysctl net.ipv4.ip_forward net.ipv4.ip_forward = 1 [root@master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nacos-1 1/1 Running 5 (12h ago) 35h 10.42.0.50 master <none> <none> elasticsearch-data-1 1/1 Running 5 (12h ago) 35h 10.42.0.44 master <none> <none> nacos-2 1/1 Running 7 (6m39s ago) 35h 10.42.1.49 agent-1 <none> <none> nacos-0 1/1 Running 7 (6m32s ago) 35h 10.42.1.50 agent-1 <none> <none> elasticsearch-master-0 1/1 Running 6 (6m32s ago) 35h 10.42.1.47 agent-1 <none> <none> busybox 0/1 Error 0 11h 10.42.1.46 agent-1 <none> <none> elasticsearch-data-2 1/1 Running 6 (6m32s ago) 35h 10.42.1.48 agent-1 <none> <none> elasticsearch-data-0 1/1 Running 6 (6m32s ago) 35h 10.42.1.51 agent-1 <none> <none> 
[root@master ~]# ping 10.42.0.50 PING 10.42.0.50 (10.42.0.50) 56(84) bytes of data. 64 bytes from 10.42.0.50: icmp_seq=1 ttl=64 time=0.039 ms 64 bytes from 10.42.0.50: icmp_seq=2 ttl=64 time=0.031 ms 64 bytes from 10.42.0.50: icmp_seq=3 ttl=64 time=0.042 ms 64 bytes from 10.42.0.50: icmp_seq=4 ttl=64 time=0.038 ms ^C --- 10.42.0.50 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3054ms rtt min/avg/max/mdev = 0.031/0.037/0.042/0.004 ms [root@master ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nacos-1 1/1 Running 5 (12h ago) 35h 10.42.0.50 master <none> <none> elasticsearch-data-1 1/1 Running 5 (12h ago) 35h 10.42.0.44 master <none> <none> nacos-2 1/1 Running 7 (29m ago) 35h 10.42.1.49 agent-1 <none> <none> nacos-0 1/1 Running 7 (29m ago) 35h 10.42.1.50 agent-1 <none> <none> elasticsearch-master-0 1/1 Running 6 (29m ago) 35h 10.42.1.47 agent-1 <none> <none> busybox 0/1 Error 0 11h 10.42.1.46 agent-1 <none> <none> elasticsearch-data-2 1/1 Running 6 (29m ago) 35h 10.42.1.48 agent-1 <none> <none> elasticsearch-data-0 1/1 Running 6 (29m ago) 35h 10.42.1.51 agent-1 <none> <none> [root@master ~]# ping 10.42.1.51 -c 5 PING 10.42.1.51 (10.42.1.51) 56(84) bytes of data. 64 bytes from 10.42.1.51: icmp_seq=1 ttl=63 time=0.402 ms 64 bytes from 10.42.1.51: icmp_seq=2 ttl=63 time=0.171 ms 64 bytes from 10.42.1.51: icmp_seq=3 ttl=63 time=0.170 ms 64 bytes from 10.42.1.51: icmp_seq=4 ttl=63 time=0.410 ms 64 bytes from 10.42.1.51: icmp_seq=5 ttl=63 time=0.414 ms --- 10.42.1.51 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4105ms rtt min/avg/max/mdev = 0.170/0.313/0.414/0.116 ms [root@agent-1 ~]# ping 10.42.0.50 -c 5 PING 10.42.0.50 (10.42.0.50) 56(84) bytes of data. 64 bytes from 10.42.0.50: icmp_seq=1 ttl=63 time=0.154 ms 64 bytes from 10.42.0.50: icmp_seq=2 ttl=63 time=0.206 ms 64 bytes from 10.42.0.50: icmp_seq=3 ttl=63 time=0.213 ms 64 bytes from 10.42.0.50: icmp_seq=4 ttl=63 time=0.218 ms 64 bytes from 10.42.0.50: icmp_seq=5 ttl=63 time=0.220 ms --- 10.42.0.50 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4125ms rtt min/avg/max/mdev = 0.154/0.202/0.220/0.024 ms [root@master ~]# kubectl exec -it nacos-1 -- ping nacos-0.nacos.default.svc.cluster.local -c 5 PING nacos-0.nacos.default.svc.cluster.local (10.42.1.50) 56(84) bytes of data. 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=1 ttl=62 time=0.440 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=2 ttl=62 time=0.429 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=3 ttl=62 time=0.431 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=4 ttl=62 time=0.343 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=5 ttl=62 time=0.229 ms --- nacos-0.nacos.default.svc.cluster.local ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4127ms rtt min/avg/max/mdev = 0.229/0.374/0.440/0.082 ms [root@master ~]# kubectl exec -it nacos-2 -- ping nacos-0.nacos.default.svc.cluster.local -c 5 PING nacos-0.nacos.default.svc.cluster.local (10.42.1.50) 56(84) bytes of data. 
64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=1 ttl=64 time=0.053 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=2 ttl=64 time=0.039 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=3 ttl=64 time=0.038 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=4 ttl=64 time=0.077 ms 64 bytes from nacos-0.nacos.default.svc.cluster.local (10.42.1.50): icmp_seq=5 ttl=64 time=0.039 ms --- nacos-0.nacos.default.svc.cluster.local ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4113ms rtt min/avg/max/mdev = 0.038/0.049/0.077/0.015 ms [root@master ~]# kubectl delete pod busybox [root@master ~]# kubectl run -i --tty busybox --image=busybox --restart=Never If you don't see a command prompt, try pressing enter. / # ping nacos-0.nacos.default.svc.cluster.local -c 3 PING nacos-0.nacos.default.svc.cluster.local (10.42.1.50): 56 data bytes 64 bytes from 10.42.1.50: seq=0 ttl=64 time=0.052 ms 64 bytes from 10.42.1.50: seq=1 ttl=64 time=0.049 ms 64 bytes from 10.42.1.50: seq=2 ttl=64 time=0.047 ms --- nacos-0.nacos.default.svc.cluster.local ping statistics --- 3 packets transmitted, 3 packets received, 0% packet loss round-trip min/avg/max = 0.047/0.049/0.052 ms / #
[root@netkiller ~]# ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) 0
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 254690
max locked memory           (kbytes, -l) 64
max memory size             (kbytes, -m) unlimited
open files                          (-n) 6553500
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 254690
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited
[root@netkiller ~]# sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 508881
[root@netkiller ~]# sysctl -w fs.inotify.max_user_watches=5088800
fs.inotify.max_user_watches = 5088800
[root@netkiller ~]# sysctl -w fs.inotify.max_user_instances=4096
fs.inotify.max_user_instances = 4096
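To make the inotify limits survive a reboot, persist them under /etc/sysctl.d (the file name 99-k3s.conf is arbitrary):

cat > /etc/sysctl.d/99-k3s.conf <<EOF
fs.inotify.max_user_watches = 5088800
fs.inotify.max_user_instances = 4096
EOF
sysctl --system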