
106.3. Kubernetes Cluster Management

kubectl - controls the Kubernetes cluster manager.

kubectl is the command-line management tool for Kubernetes.

	
kubectl controls the Kubernetes cluster manager. 

Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).	
	
	

106.3.1. Configuration

106.3.1.1. KUBECONFIG

The KUBECONFIG environment variable
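A minimal sketch, assuming a second cluster config at ~/.kube/dev.yaml (hypothetical path): KUBECONFIG can list several files joined by the path separator, and kubectl merges them; --flatten prints the merged result.

export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/dev.yaml
kubectl config view --flatten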

106.3.1.2. use-context

			
[root@netkiller ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6445
  name: k3d-mycluster
contexts:
- context:
    cluster: k3d-mycluster
    user: admin@k3d-mycluster
  name: k3d-mycluster
current-context: k3d-mycluster
kind: Config
preferences: {}
users:
- name: admin@k3d-mycluster
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED			
			
			
			
$ kubectl config use-context <context-name>
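For example, switching to the k3d context from the kubectl config view output above; the confirmation line is kubectl's standard response.

$ kubectl config use-context k3d-mycluster
Switched to context "k3d-mycluster".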
			
			

106.3.2. Migrating from docker Commands to kubectl

The docker run command

		
$ docker run -d --restart=always -e DOMAIN=cluster --name nginx -p 80:80 nginx		
		
		

The equivalent kubectl commands

		
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http	
		
		

The docker exec command

		
$ docker run -t -i ubuntu:14.10 /bin/bash
		
		

The equivalent kubectl command

		
$ kubectl exec -ti nginx-app-5jyvm -- /bin/sh	
		
		

The docker ps command

		
$ docker ps
		
		

The equivalent kubectl command

		
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mongodba-6d5d6ddf64-jw4fv   1/1     Running   0          16h

		
		

106.3.2.1. Running a Shell

Enter the container:

		
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mongodba-6d5d6ddf64-jw4fv   1/1     Running   0          16h

$ kubectl exec -it mongodba-6d5d6ddf64-jw4fv -- bash
		
		
		
kubectl run busybox --image=busybox:latest		

iMac:kubernetes neo$ kubectl exec -it busybox -- nslookup www.netkiller.cn
Server:		10.10.0.10
Address:	10.10.0.10:53

Non-authoritative answer:
www.netkiller.cn	canonical name = netkiller.github.io
Name:	netkiller.github.io
Address: 185.199.110.153
Name:	netkiller.github.io
Address: 185.199.108.153
Name:	netkiller.github.io
Address: 185.199.111.153
Name:	netkiller.github.io
Address: 185.199.109.153

*** Can't find www.netkiller.cn: No answer
		
		

106.3.2.2. Viewing Information

api-versions
		
iMac:springboot neo$ kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
		
		
			
Nodes
		
[root@localhost ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   23m   v1.13.2		
		
			
		
				
		
iMac:~ neo$ kubectl get node 
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   42h   v1.19.0

iMac:~ neo$ kubectl get node -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    master   42h   v1.19.0   192.168.64.2   <none>        Buildroot 2019.02.11   4.19.114         docker://19.3.12		
		
				
Query the cluster component status
		
[root@localhost ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   		
		
			
config
		
[root@localhost ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    server: https://172.16.0.121:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/client.crt
    client-key: /root/.minikube/client.key		
		
			
		
iMac:~ neo$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
- cluster:
    certificate-authority: /Users/neo/.minikube/ca.crt
    server: https://192.168.64.2:8443
  name: minikube
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: minikube
  user:
    client-certificate: /Users/neo/.minikube/profiles/minikube/client.crt
    client-key: /Users/neo/.minikube/profiles/minikube/client.key		
		
			
use-context

If you previously ran Kubernetes some other way, e.g. minikube or microk8s, you can switch contexts with the following command.

			
$ kubectl config use-context docker-for-desktop		
			
				
cluster-info
		
[root@localhost ~]# kubectl cluster-info
Kubernetes master is running at https://172.16.0.121:8443
KubeDNS is running at https://172.16.0.121:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.		
		
			

106.3.2.3. Viewing Pod Logs

		
kubectl logs <pod-name>
kubectl logs --previous <pod-name>
kubectl logs -l app=your-app-name | grep "xxx"
kubectl logs --selector role=cool-app | grep "xxx"
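A few other everyday variants (standard kubectl flags): follow the stream, limit output to the last N lines, or select a container in a multi-container pod.

kubectl logs -f <pod-name>
kubectl logs --tail=100 <pod-name>
kubectl logs <pod-name> -c <container-name>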
		
		

106.3.2.4. Copying Files

		
kubectl cp netkiller/job-executor-77fc6b4db-5dzxz:logs/info.2022-07-29.log Downloads/info.2022-07-29.log -c job-executor		
		
		
		
kubectl cp Downloads/myfile netkiller/job-executor-77fc6b4db-5dzxz:/tmp/myfile -c job-executor		
		
		

106.3.2.5. edit

		
kubectl edit --namespace=kube-system rc kubernetes-dashboard		
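kubectl edit opens the resource in the editor named by the KUBE_EDITOR (or EDITOR) environment variable, so it can be overridden per invocation:

KUBE_EDITOR="vim" kubectl edit deployment nginx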
		
		

106.3.2.6. Port Forwarding

Service port mapping
			
$ kubectl port-forward svc/demo 8080:8080		
			
			
Binding an address

Forward local port 0.0.0.0:27017 to the service port:

			
	neo@Netkiller-iMac ~> kubectl port-forward --address 0.0.0.0 service/mongo 27017
	Forwarding from 0.0.0.0:27017 -> 27017		
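Pods and deployments can be targeted the same way (the resource names below are illustrative):

kubectl port-forward pod/mongo-0 27017:27017
kubectl port-forward deployment/demo 8080:8080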
			
			

106.3.2.7. Operating System Resource Configuration

sysctls
			
kubelet --experimental-allowed-unsafe-sysctls 'kernel.msg*,kernel.shmmax,kernel.sem,net.ipv4.route.min_pmtu'
			
			

106.3.2.8. endpoints

		
Neo-iMac:kubernetes neo$ rancher kubectl get endpoints nginx
NAME    ENDPOINTS                                   AGE
nginx   10.42.0.19:80,10.42.0.20:80,10.42.0.21:80   3m56s		
		
		

106.3.2.9. explain

ingress
			
iMac:kubernetes neo$ kubectl explain ingress
KIND:     Ingress
VERSION:  extensions/v1beta1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc. DEPRECATED - This group version of
     Ingress is deprecated by networking.k8s.io/v1beta1 Ingress. See the release
     notes for more information.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status	<Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status			
			
			

View the ingress.spec configuration manifest

			
iMac:kubernetes neo$ kubectl explain ingress.spec
KIND:     Ingress
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     IngressSpec describes the Ingress the user wishes to exist.

FIELDS:
   backend	<Object>
     A default backend capable of servicing requests that don't match any rule.
     At least one of 'backend' or 'rules' must be specified. This field is
     optional to allow the loadbalancer controller or defaulting logic to
     specify a global default.

   ingressClassName	<string>
     IngressClassName is the name of the IngressClass cluster resource. The
     associated IngressClass defines which controller will implement the
     resource. This replaces the deprecated `kubernetes.io/ingress.class`
     annotation. For backwards compatibility, when that annotation is set, it
     must be given precedence over this field. The controller may emit a warning
     if the field and annotation have different values. Implementations of this
     API should ignore Ingresses without a class specified. An IngressClass
     resource may be marked as default, which can be used to set a default value
     for this field. For more information, refer to the IngressClass
     documentation.

   rules	<[]Object>
     A list of host rules used to configure the Ingress. If unspecified, or no
     rule matches, all traffic is sent to the default backend.

   tls	<[]Object>
     TLS configuration. Currently the Ingress only supports a single TLS port,
     443. If multiple members of this list specify different hosts, they will be
     multiplexed on the same port according to the hostname specified through
     the SNI TLS extension, if the ingress controller fulfilling the ingress
     supports SNI.			
			
			

106.3.2.10. describe

storageclasses.storage.k8s.io
			
[root@master ~]# kubectl describe storageclasses.storage.k8s.io
Name:                  longhorn-storage
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           driver.longhorn.io
Parameters:            diskSelector=hdd,numberOfReplicas=2,staleReplicaTimeout=2880
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>


Name:            longhorn
IsDefaultClass:  No
Annotations:     longhorn.io/last-applied-configmap=kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Delete"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "disabled"
,storageclass.beta.kubernetes.io/is-default-class=false,storageclass.kubernetes.io/is-default-class=false
Provisioner:           driver.longhorn.io
Parameters:            dataLocality=disabled,fromBackup=,fsType=ext4,numberOfReplicas=3,staleReplicaTimeout=30
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>


Name:                  local-path
IsDefaultClass:        Yes
Annotations:           objectset.rio.cattle.io/applied=H4sIAAAAAAAA/4yRT+vUMBCGv4rMua1bu1tKwIOu7EUEQdDzNJlux6aZkkwry7LfXbIqrIffn2PyZN7hfXIFXPg7xcQSwEBSiXimaupSxfJ2q6GAiYMDA9/+oKPHlKCAmRQdKoK5AoYgisoSUj5K/5OsJtIqslQWVT3lNM4xUDzJ5VegWJ63CQxMTXogW128+czBvf/gnIQXIwLOBAa8WPTl30qvGkoL2jw5rT2V6ZKUZij+SbG5eZVRDKR0F8SpdDTg6rW8YzCgcSW4FeCxJ/+sjxHTCAbqrhmag20Pw9DbZtfu210z7JuhPnQ719m2w3cOe7fPof81W1DHfLlE2Th/IEUwEDHYkWJe8PCsgJgL8PxVPNsLGPhEnjRr2cSvM33k4Dicv4jLC34g60niiWPSo4S0zhTh9jsAAP//ytgh5S0CAAA,objectset.rio.cattle.io/id=,objectset.rio.cattle.io/owner-gvk=k3s.cattle.io/v1, Kind=Addon,objectset.rio.cattle.io/owner-name=local-storage,objectset.rio.cattle.io/owner-namespace=kube-system,storageclass.beta.kubernetes.io/is-default-class=true,storageclass.kubernetes.io/is-default-class=true
Provisioner:           rancher.io/local-path
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>
			
			
pvc
			
[root@master ~]# kubectl describe pvc
Name:          elasticsearch-elasticsearch-data-0
Namespace:     default
StorageClass:  local-path
Status:        Bound
Volume:        pvc-a2ebce5a-9ae1-46e9-ae9f-8840027bf5d8
Labels:        app=elasticsearch
               role=data
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: agent-1
               volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       elasticsearch-data-0
Events:        <none>


Name:          elasticsearch-elasticsearch-data-1
Namespace:     default
StorageClass:  local-path
Status:        Bound
Volume:        pvc-f0d9d5df-9704-44a7-93ff-8a4f431af226
Labels:        app=elasticsearch
               role=data
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: master
               volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       elasticsearch-data-1
Events:        <none>


Name:          elasticsearch-elasticsearch-data-2
Namespace:     default
StorageClass:  local-path
Status:        Bound
Volume:        pvc-722cce94-b2c5-457a-8e01-9a2a52b12128
Labels:        app=elasticsearch
               role=data
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: agent-1
               volume.kubernetes.io/storage-provisioner: rancher.io/local-path
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       elasticsearch-data-2
Events:        <none>


Name:          longhorn-volv-pvc
Namespace:     default
StorageClass:  longhorn
Status:        Bound
Volume:        pvc-5dc3ae33-9f86-4650-82ba-a7b681963adc
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: driver.longhorn.io
               volume.kubernetes.io/storage-provisioner: driver.longhorn.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       volume-test
Events:        <none>


Name:          redis
Namespace:     default
StorageClass:  local-path
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       redis-0
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  WaitForFirstConsumer  29s (x481 over 120m)  persistentvolume-controller  waiting for first consumer to be created before binding
[root@master ~]# 
			
			

106.3.3. Namespaces

106.3.3.1. Listing Namespaces

			
root@netkiller ~# kubectl get ns
NAME              STATUS   AGE
default           Active   197d
kube-system       Active   197d
kube-public       Active   197d
kube-node-lease   Active   197d
longhorn-system   Active   195d
test              Active   163d
gitlab            Active   156d
dev               Active   155d
training          Active   133d
project           Active   24h

root@netkiller ~# kubectl get namespace
NAME              STATUS   AGE
default           Active   197d
kube-system       Active   197d
kube-public       Active   197d
kube-node-lease   Active   197d
longhorn-system   Active   195d
test              Active   163d
gitlab            Active   156d
dev               Active   155d
training          Active   133d
project           Active   24h			
			
			

106.3.3.2. Creating a Namespace

			
$ kubectl create namespace new-namespace		
			
			

106.3.3.3. Creating a Namespace from YAML

Create jenkins-namespace.yaml:

			
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins-project
			
			
			
$ kubectl create -f jenkins-namespace.yaml
namespace "jenkins-project" created
			
			

106.3.3.4. Deleting a Namespace

			
root@netkiller ~# kubectl delete namespace new-namespace
namespace "new-namespace" deleted			
			
			

106.3.4. Labels

Labels are used to identify objects and to manage the relationships between them, such as the associations among Pods, Services, Deployments, and Nodes.

		
kubectl label nodes <node-name> <label-key>=<label-value>		
		
		

Add a label, for example disk-type=ssd:

			
[root@master ~]# kubectl label nodes agent-1 disk-type=ssd
node/agent-1 labeled			
			
		

View labels:

			
[root@master ~]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
master   Ready    master   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
agent-1   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=agent-1,kubernetes.io/os=linux
agent-2   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=agent-2,kubernetes.io/os=linux			
			
		

Remove a label:

			
[root@master ~]# kubectl label nodes agent-1 disk-type-
node/agent-1 unlabeled		
			
		

106.3.5. Service Management

106.3.5.1. Listing Services

			
[root@localhost ~]# kubectl get service
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    10.109.33.86   <none>        8080:30436/TCP   134m
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP          147m		
			
			

Sorting:

			
iMac:kubernetes neo$ kubectl get services --sort-by=.metadata.name
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          121m
my-service   ClusterIP   10.106.157.143   <none>        80/TCP,443/TCP   9m43s			
			
			

106.3.5.2. Creating a Service

Create the service.yaml file:

			
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
			
			

			
iMac:kubernetes neo$ kubectl create -f service.yaml 
service/my-service created			
			
			

View the service:

			
iMac:kubernetes neo$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          113m
my-service   ClusterIP   10.106.157.143   <none>        80/TCP,443/TCP   64s			
			
			

View the IPs of the pods behind the service. No pods are attached here, so it shows none.

			
iMac:kubernetes neo$ kubectl get endpoints my-service
NAME         ENDPOINTS   AGE
my-service   <none>      2m20s			
			
			

106.3.5.3. Viewing Service Details

			
iMac:kubernetes neo$ kubectl describe service/registry
Name:                     registry
Namespace:                default
Labels:                   app=registry
Annotations:              <none>
Selector:                 app=registry
Type:                     NodePort
IP:                       10.10.0.188
Port:                     registry  5000/TCP
TargetPort:               5000/TCP
NodePort:                 registry  32050/TCP
Endpoints:                172.17.0.6:5000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>			
			
			
View services:
				
	> kubectl get service 
	NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
	kubernetes           ClusterIP      10.43.0.1       <none>        443/TCP                      4d13h
	nacos                ClusterIP      10.43.175.40    <none>        8848/TCP,9848/TCP,9555/TCP   4d13h
	redis                NodePort       10.43.129.224   <none>        6379:31436/TCP               42h
	kube-explorer        ClusterIP      10.43.208.84    <none>        80/TCP                       36h
	elasticsearch        ClusterIP      10.43.241.136   <none>        9200/TCP,9300/TCP            13h
	elasticsearch-data   ClusterIP      10.43.39.228    <none>        9300/TCP                     13h
	kibana               ClusterIP      10.43.193.15    <none>        80/TCP                       13h
	mysql                ExternalName   <none>          master        3306/TCP                     6m24s
	mongo                ExternalName   <none>          master        27017/TCP                    6m24s			
	
	> kubectl get service -o wide
	NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
	kubernetes           ClusterIP      10.43.0.1       <none>        443/TCP                      4d13h   <none>
	nacos                ClusterIP      10.43.175.40    <none>        8848/TCP,9848/TCP,9555/TCP   4d13h   app=nacos
	redis                NodePort       10.43.129.224   <none>        6379:31436/TCP               42h     app=redis
	kube-explorer        ClusterIP      10.43.208.84    <none>        80/TCP                       36h     app=kube-explorer
	elasticsearch        ClusterIP      10.43.241.136   <none>        9200/TCP,9300/TCP            13h     app=elasticsearch,role=master
	elasticsearch-data   ClusterIP      10.43.39.228    <none>        9300/TCP                     13h     app=elasticsearch,role=data
	kibana               ClusterIP      10.43.193.15    <none>        80/TCP                       13h     app=kibana
	mysql                ExternalName   <none>          master        3306/TCP                     6m45s   <none>
	mongo                ExternalName   <none>          master        27017/TCP                    6m45s   <none>
				
				

106.3.5.4. Updating a Service

			
kubectl replace -f service.yaml --force
			
			

106.3.5.5. Deleting a Service

			
kubectl delete service hello-minikube			
			
			

106.3.5.6. clusterip

Syntax:

			
$ kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run]			
			
			

Example:

			
kubectl create service clusterip my-service --tcp=5678:8080			
			
			

Headless mode:

			
kubectl create service clusterip my-service --clusterip="None"			
			
			
selector
				
apiVersion: v1
kind: Service
metadata:
  name: spring-cloud-config-server
  namespace: default
  labels:
    app: springboot
spec:
  ports:
  - name: web
    port: 8888
    targetPort: web
  clusterIP: 10.10.0.1
  selector:
    app: spring-cloud-config-server
				
				

106.3.5.7. Setting an External IP

Expose the address 80.11.12.10:80:

			
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
			
			

106.3.5.8. externalname

Syntax:

			
$ kubectl create service externalname NAME --external-name external.name [--dry-run]		
			
			

Example:

			
kubectl create service externalname my-externalname --external-name bar.com	
			
			
Binding an external domain name:
				
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
				
				

Use case: mysql and mongo are installed on the master node's host machine. Pods can connect to them via the host IP, or by using the hostname master.

A better approach, in my view, is to add a layer of Service mapping and access them through uniform in-cluster domain names: mysql.default.svc.cluster.local and mongo.default.svc.cluster.local.

				
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  ports:
    - name: mysql
      protocol: TCP
      port: 3306
      targetPort: 3306
  type: ExternalName
  externalName: master
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: default
spec:
  ports:
    - name: mongo
      protocol: TCP
      port: 27017
      targetPort: 27017
  type: ExternalName
  externalName: master
				
				
Example mongo
				
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: default
spec:
  externalName: master
  ports:
  - name: mongo
    port: 27017
    protocol: TCP
    targetPort: 27017
  sessionAffinity: None
  type: ExternalName				
				
				
Example MySQL
				
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  externalName: dev.mysql.netkiller.cn
  sessionAffinity: None
  type: ExternalName				
				
				

106.3.5.9. Load Balancing

Syntax:

			
$ kubectl create service loadbalancer NAME [--tcp=port:targetPort] [--dry-run]		
			
			

Example:

			
kubectl create service loadbalancer my-lb --tcp=5678:8080
			
			
LoadBalancer YAML

HTTP services are generally exposed externally through an Ingress; TCP socket services can be exposed with a LoadBalancer.

				
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
				
				

		   
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 8765
      targetPort: 9376
  type: LoadBalancer
		  
				
Example Redis
				
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
  resourceVersion: "42471353"
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.43.242.167
  clusterIPs:
  - 10.43.242.167
  externalIPs:
  - 172.18.200.55
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: redis
    nodePort: 31143
    port: 6380
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 172.18.200.5
    - ip: 172.18.200.50
    - ip: 172.18.200.51
				
				
				

106.3.5.10. nodeport

Syntax:

			
$ kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run]
			
			

Example:

			
kubectl create service nodeport my-nodeport --tcp=5678:8080
			
			
NodePort YAML
				
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
				
				

106.3.5.11. Example

		
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  type: NodePort
  selector:
    app: registry
  ports:
  - name: registry
    port: 5000
    nodePort: 30050
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP	
		
			

106.3.6. serviceaccount

Syntax:

			
$ kubectl create serviceaccount NAME [--dry-run]
			
		

Example:

			
kubectl create serviceaccount my-service-account
			
		
		
apiVersion: v1
kind: ServiceAccount
metadata:
 labels:
   app: elasticsearch
 name: elasticsearch
 namespace: elastic		
		
		

106.3.7. Pod Management

Pod status overview

Pod phases:

  • Pending: the Pod has been created but not yet scheduled, or one or more of its images are still being pulled from a remote registry. A Pod in this phase may be writing data to etcd, being scheduled, pulling images, or starting containers.
  • Running: the Pod has been bound to a node and all of its containers have been created. At least one container is running, or is in the process of starting or restarting.
  • Succeeded: all containers in the Pod exited normally and will not be restarted; this typically occurs when running Jobs.
  • Failed: all containers in the Pod have terminated, and at least one terminated in failure, i.e. exited with a non-zero status or was killed by the system.
  • Unknown: the API server cannot obtain the Pod's state, usually because it cannot communicate with the kubelet on the Pod's node.
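
To read the phase directly, jsonpath works (substitute your own pod name):

kubectl get pod <pod-name> -o jsonpath='{.status.phase}'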

Detailed descriptions of Pod error states

		
Status						Description
CrashLoopBackOff		the container exited and kubelet is restarting it
InvalidImageName		the image name cannot be resolved
ImageInspectError		the image cannot be inspected
ErrImageNeverPull		policy forbids pulling the image
ImagePullBackOff		retrying the image pull
RegistryUnavailable		cannot connect to the image registry
ErrImagePull			generic image pull error
CreateContainerConfigError	cannot create the container configuration used by kubelet
CreateContainerError	failed to create the container
m.internalLifecycle.PreStartContainer	error executing the PreStart hook
RunContainerError		failed to start the container
PostStartHookError		error executing the PostStart hook
ContainersNotInitialized	containers have not finished initializing
ContainersNotReady		containers are not ready
ContainerCreating		the container is being created
PodInitializing			the pod is initializing
DockerDaemonNotReady	docker has not fully started
NetworkPluginNotReady	the network plugin has not fully started
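
These states show up in the STATUS column of kubectl get pods and in the Events section of kubectl describe pod. To watch them as they happen (standard kubectl flags):

kubectl get pods --watch
kubectl get events --field-selector involvedObject.name=<pod-name>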
		
			

106.3.7.1. Viewing Pod Status

			
kubectl get pod <pod-name> -o wide		
kubectl get pods --all-namespaces			
			
				

View pods in the default namespace:

			 
[root@localhost ~]# kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
hello-minikube-5c856cbf98-6vfvp   1/1     Running   0          6m59s
			
				

View pods in all namespaces:

						
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
default       hello-minikube-5c856cbf98-6vfvp        1/1     Running   1          4d18h
kube-system   coredns-86c58d9df4-2rfqf               1/1     Running   51         4d18h
kube-system   coredns-86c58d9df4-wkb7l               1/1     Running   49         4d18h
kube-system   etcd-minikube                          1/1     Running   12         4d18h
kube-system   kube-addon-manager-minikube            1/1     Running   11         4d18h
kube-system   kube-apiserver-minikube                1/1     Running   74         4d18h
kube-system   kube-controller-manager-minikube       1/1     Running   31         4d18h
kube-system   kube-proxy-brrdd                       1/1     Running   1          4d18h
kube-system   kube-scheduler-minikube                1/1     Running   31         4d18h
kube-system   kubernetes-dashboard-ccc79bfc9-dxcq2   1/1     Running   7          4d17h
kube-system   storage-provisioner                    1/1     Running   2          4d18h		
		
				
			
iMac:~ neo$ kubectl get pods --output=wide
NAME                        READY   STATUS             RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
registry-65854b565b-bkhvq   0/1     ImagePullBackOff   0          18m   172.17.0.4   minikube   <none>           <none>
			
			
				

View pod labels:

			
kubectl get pods --show-labels			
			
				

View pods with a given label:

			
kubectl get pods -l run=nginx			
			
				

Specify a namespace:

		
[root@localhost ~]# kubectl get pod --namespace=kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-2rfqf               1/1     Running   0          40m
coredns-86c58d9df4-wkb7l               1/1     Running   0          40m
etcd-minikube                          1/1     Running   0          40m
kube-addon-manager-minikube            1/1     Running   0          41m
kube-apiserver-minikube                1/1     Running   2          40m
kube-controller-manager-minikube       1/1     Running   6          40m
kube-proxy-brrdd                       1/1     Running   0          40m
kube-scheduler-minikube                1/1     Running   5          41m
kubernetes-dashboard-ccc79bfc9-dxcq2   1/1     Running   5          16m
storage-provisioner                    1/1     Running   0          39m		
		
				
Formatted output:
			
neo@Netkiller-iMac ~> kubectl get pods -l app=nacos -o jsonpath='{.items[0].metadata.name}'
nacos-0⏎   			
			
					
View the containers inside a pod:

			
root@logging ~# kubectl --kubeconfig=/home/prod/.kube/config -n netkiller get pod neo-6787cfcb9-8s8pp -o jsonpath="{.spec.containers[*].name}"
filebeat neo  
			
					

106.3.7.2. Running Pods

			
iMac:kubernetes neo$ kubectl run registry --image=registry:latest			
			
				

			
kubectl run busybox --image=busybox --command -- ping www.netkiller.cn			
			
				

			
kubectl run nginx --replicas=3 --labels="app=example" --image=nginx:latest --port=80			
			
				

			
kubectl run busybox --rm=true --image=busybox --restart=Never -it			
			
				

Run a Pod from a YAML file:

		
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']		
		
				

Create the pod:

		
iMac:kubernetes neo$ kubectl create -f pod.yaml 
pod/counter created

iMac:kubernetes neo$ kubectl logs counter
0: Sun Oct  4 12:32:44 UTC 2020
1: Sun Oct  4 12:32:45 UTC 2020
2: Sun Oct  4 12:32:46 UTC 2020
3: Sun Oct  4 12:32:47 UTC 2020
4: Sun Oct  4 12:32:48 UTC 2020
5: Sun Oct  4 12:32:49 UTC 2020
6: Sun Oct  4 12:32:50 UTC 2020
7: Sun Oct  4 12:32:51 UTC 2020
8: Sun Oct  4 12:32:52 UTC 2020
9: Sun Oct  4 12:32:53 UTC 2020
		
				

106.3.7.3. Deleting Pods

			
kubectl delete -n default pod registry	
kubectl delete -n default pod counter			
			
				

106.3.7.4. Viewing Pod Events

		
kubectl describe pod <pod-name> 		
		
				
		
iMac:~ neo$ kubectl describe pod springboot
Name:         springboot
Namespace:    default
Priority:     0
Node:         minikube/192.168.64.2
Start Time:   Mon, 21 Sep 2020 16:17:03 +0800
Labels:       run=springboot
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  springboot:
    Container ID:   
    Image:          127.0.0.1:5000/netkiller/config:latest
    Image ID:       
    Port:           8888/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fhfn8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-fhfn8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fhfn8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  80s   default-scheduler  Successfully assigned default/springboot to minikube
  Normal  Pulling    79s   kubelet            Pulling image "127.0.0.1:5000/netkiller/config:latest"		
		
				

106.3.7.5. Taints and Tolerations

Their purpose is to steer pod scheduling across the cluster. Taints and tolerations work together to keep pods off certain nodes; this is the opposite of node affinity.

Set labels on nodes, and set a nodeSelector on the pod to schedule it onto nodes whose labels match.

If a toleration is applied to a pod, the pod can be scheduled onto nodes carrying the matching taint.

Setting a taint

Set a taint: kubectl taint node [node] key=value:[effect]

The effect parameter:

  1. NoSchedule: pods will not be scheduled onto the node.
  2. PreferNoSchedule: the scheduler tries to avoid the node.
  3. NoExecute: pods are not allowed on the node; existing pods are evicted.

Set a taint on node shenzhen with key key, value value, and effect NoSchedule:

				
kubectl taint nodes shenzhen key=value:NoSchedule
				
					

This means that no pod will be scheduled onto this node unless it explicitly declares a toleration for the taint.

				
apiVersion: v1
kind: Pod
metadata:
  name: pod-taints
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
    - name: pod-taints
      image: busybox:latest				
				
					
Toleration scheduling

Match when the key merely exists:

				
spec:
  tolerations:
  - key: "key"
    operator: "Exists"
    effect: "NoSchedule"				
				
					

The key must exist and its value must equal value:

				
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"				
				
					

Setting multiple tolerations on a pod:

				
spec:				
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  - key: "key2"
    operator: "Equal"
    value: "value2"
    effect: "NoExecute"				
				
					

If a taint with effect NoExecute is added to a node, any pod on it without a matching toleration is evicted immediately; setting tolerationSeconds gives the pod a grace period.

				
spec:		
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
    tolerationSeconds: 3600
				
					
Use cases

For example, some nodes have SSDs attached for redis, mongodb, and mysql, while others have GPUs installed; taints can reserve these nodes:

				
kubectl taint nodes shenzhen special=true:NoSchedule
kubectl taint nodes guangdong special=true:PreferNoSchedule				
				
					

106.3.7.6. Image Pull Policy

imagePullPolicy: Always — always pull the image

imagePullPolicy: IfNotPresent — the default; use the local image if present, otherwise pull (for images tagged :latest the default is Always)

imagePullPolicy: Never — only use the local image; never pull
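
A minimal sketch of where the field sits in a pod spec (pod and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.20.0
    imagePullPolicy: IfNotPresent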

106.3.7.7. Host Aliases

			
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"			
			
					

106.3.7.8. Environment Variables

			
apiVersion: v1
kind: Pod
metadata:
  name: envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv NODE_NAME POD_NAME POD_NAMESPACE;
          printenv POD_IP POD_SERVICE_ACCOUNT;
          sleep 10;
        done;
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never			
			
					

			
apiVersion: v1
kind: Pod
metadata:
  name: envars-resourcefieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox:1.24
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv CPU_REQUEST CPU_LIMIT;
          printenv MEM_REQUEST MEM_LIMIT;
          sleep 10;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      env:
        - name: CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.cpu
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.cpu
        - name: MEM_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: requests.memory
        - name: MEM_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: test-container
              resource: limits.memory
  restartPolicy: Never			
			
					

106.3.7.9. Health Checks

readinessProbe (readiness probe)

The readiness probe checks whether the container can serve traffic normally.

				
        readinessProbe: 
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 10         # first probe starts 10s after container start
          periodSeconds: 5                # probe every 5s thereafter
				
				
livenessProbe (liveness probe)

Checks whether the application inside the container is healthy; the result, combined with the restartPolicy, determines whether the Pod is restarted.

Command-based probe:

				 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-health
spec:
  containers:
  - name: nginx-liveness
    image: nginx:latest
    command:
    - /bin/sh
    - -c
    - /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    livenessProbe:
      exec:
        command: [ "/bin/sh", "-c", "test -e /run/nginx.pid" ]
  restartPolicy: Always				
				
				

TCP probe:

				 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-health
spec:
  containers:
  - name: nginx-liveness
    image: nginx:latest
    command:
    - /bin/sh
    - -c
    - /usr/sbin/nginx; sleep 60; rm -rf /run/nginx.pid
    livenessProbe:
      tcpSocket:
        port: 80
  restartPolicy: Always				
				
				

106.3.7.10. securityContext

sysctls

				
kubelet --allowed-unsafe-sysctls \
  'kernel.msg*,net.core.somaxconn' ...				
				
						
				
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "0"
    - name: net.core.somaxconn
      value: "1024"
    - name: kernel.msgmax
      value: "65536"				
				
						
runAsUser

allowPrivilegeEscalation controls whether a process can gain more privileges than its parent process; runAsUser: 1000 runs the container as UID 1000.

				
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo
    image: busybox:latest
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false				
				
						
				
   spec:
     securityContext:
        runAsUser: 1000
        fsGroup: 2000
        runAsNonRoot: true				
				
						
security.alpha.kubernetes.io/sysctls


			
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
  annotations:
    security.alpha.kubernetes.io/sysctls: kernel.shm_rmid_forced=1
spec:			
			
						

unsafe-sysctls

			
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
  annotations:
    security.alpha.kubernetes.io/unsafe-sysctls: net.core.somaxconn=65535                 # use an unsafe sysctl to raise the connection backlog limit
spec:
  securityContext:
    privileged: true                                                                      # enable privileged mode
			
						

106.3.7.11. Selecting a Node with nodeName

First, look up the node names:

			
[root@master ~]# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
agent-1   Ready    <none>                 2d13h   v1.24.4+k3s1
master    Ready    control-plane,master   2d13h   v1.24.4+k3s1
agent-2   Ready    <none>                 13h     v1.24.4+k3s1			
			
					

Use nodeName: master to pin the workload to that node:

			
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  serviceName: redis
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: data
              mountPath: /data
            - name: config
              mountPath: /usr/local/etc/redis.conf
              subPath: redis.conf
          livenessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 60
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 6379
            initialDelaySeconds: 5
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: redis
        - name: config
          configMap:
            name: redis
      nodeName: master
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: longhorn
        resources:
          requests:
            storage: 2Gi
			
					

106.3.7.12. Selecting Nodes with nodeSelector

First, label the node, for example disk-type=ssd:

			
[root@master ~]# kubectl label nodes agent-1 disk-type=ssd
node/agent-1 labeled			
			
					

View the labels:

			
[root@master ~]# kubectl get node --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
master   Ready    master   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
agent-1   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk-type=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=agent-1,kubernetes.io/os=linux
agent-2   Ready    <none>   42d   v1.17.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=agent-2,kubernetes.io/os=linux			
			
					

			
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 5
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
      # schedule onto nodes carrying this label
      nodeSelector:
        disk-type: ssd			
			
					

Remove the label:

			
[root@master ~]# kubectl label nodes agent-1 disk-type-
node/agent-1 unlabeled		
			
					

106.3.7.13. Selecting Nodes with nodeAffinity

			
nodeAffinity supports two policies:
preferredDuringSchedulingIgnoredDuringExecution — soft requirement (a preference)
requiredDuringSchedulingIgnoredDuringExecution — hard requirement

operator expressions:
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Exists: the label exists
DoesNotExist: the label does not exist
Gt: the label's value is greater than the given value (values are parsed as integers)
Lt: the label's value is less than the given value (values are parsed as integers)
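
A sketch combining a hard requirement with a soft preference, reusing the disk-type=ssd label from the nodeSelector section (pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk-type
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - master
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]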
			
					


106.3.7.14. strategy

Rolling update strategy:

Maximum number of pods over the desired count (maxSurge): 1

Maximum number of unavailable pods (maxUnavailable): 0

			
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
			
					
			
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
			
					

106.3.8. Deployment Management

		
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl get pods --namespace=kube-system		
		
		

106.3.8.1. expose

		
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort --name=nginx-service	
kubectl describe service nginx-service	
		
			
		
Expose the service and put a load balancer in front of it, since the pods may be spread across different nodes.
--port: the port to expose
--type=NodePort: access the service via node address + port
--target-port: the container's port
--name: the name of the created service
		
			
		
kubectl expose deployment nginx --port=80 --target-port=8080 --type=NodePort
kubectl expose deployment nginx --port=80 --target-port=8080 --type=LoadBalancer	
		
			

106.3.8.2. Deploying Containers

		
kubectl create deployment registry --image=registry:latest
kubectl get deploy		
		
			

106.3.8.3. Deleting a Deployment

			
kubectl delete deployment hello-minikube			
			
			

106.3.8.4. Scaling

		
kubectl scale -n default deployment nginx --replicas=1	
kubectl scale deployment springbootdemo --replicas=4	
kubectl scale deployment nginx --replicas=10	
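
Related: kubectl autoscale creates a HorizontalPodAutoscaler for a deployment (standard flags):

kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80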
		
			

106.3.8.5. rollout

View the rollout history:

		
kubectl rollout history deployment/nginx		
		
			

Specify a revision:

		
kubectl rollout history deployment/nginx --revision=3		
		
			

View the rollout status:

		
kubectl rollout status deployment/nginx		
		
			

Roll back to the previous revision:

		
kubectl rollout undo deployment/nginx-deployment		
		
			

Roll back to a specific revision:

		
kubectl rollout undo deployment/nginx-deployment --to-revision=3		
		
			

106.3.8.6. Restarting Containers

			
root@netkiller ~/neo (master)# kubectl rollout restart deployment netkiller -n project			
			
			

106.3.8.7. Updating Images

Update the container image of resource objects.

Usable resource objects include (case-insensitive): pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs).

			
kubectl set image deployment/nginx nginx=nginx:1.20.0
kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.10.1
			
			

With flags:

			
kubectl set image deployments,rc nginx=nginx:1.9.1 --all		
			
			

Using a wildcard:

			
kubectl set image daemonset abc *=nginx:1.9.1		
			
			

106.3.9. Secret Management

106.3.9.1. Getting a Token

			
[gitlab-runner@agent-5 ~]$ kubectl get secrets -n gitlab -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='gitlab-runner')].data.token}" | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IktCOHRvYlZOLXFPRmEyb1JWdlQxSzBvN0tvZF9HNFBGRnlraDR5UU1jakkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJnaXRsYWIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoiZ2l0bGFiLXJ1bm5lci10b2tlbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJnaXRsYWItcnVubmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiN2NiOGY2MzctNzliNC00NzliLWFmMDMtNGE4ZDZkOWIzZjM5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmdpdGxhYjpnaXRsYWItcnVubmVyIn0.pU4-8D4szeL8iud1SvesdN7nV7L3GLaNsa2UbsxkGQ4SDGN85zKTXJl6MtqDsuJB9HBUlOTMnyEa0gCbgHOJlR3fd2HcegitrRLeybvUuotniiLpCPO7vAO-oS5Fej7oUFBXqZJYIx-xMbFoyt3rnGs273c_yE8avI8EGdEPNhOWRgF_GZBYstvwiEjO2IUDWbutzCTtGloPvJ5Ur0s7drLJkCQvT2nod5tSSnY5R0lpNyD2FodkFR28KU1EgFoHUnH_ERtUAS5qObIETWSwm5SmCnd2Ogjh70DDxmIHSU-saFU0zSqPpZ1oX9hgO9YMkcJXPHOEnqIVEagZ5CSf2w			
			
			

106.3.9.2. Creating a Secret

			
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "passw0rd" | base64)
  username: $(echo -n "neo" | base64)
EOF			
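
Equivalently, kubectl create secret generic base64-encodes literal values for you:

kubectl create secret generic mysecret --from-literal=username=neo --from-literal=password=passw0rd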
			
			

106.3.9.3. Private Registry Authentication

		
kubectl create secret docker-registry docker-hub \
--docker-server=https://index.docker.io/v1/ \
--docker-username=netkiller \
--docker-password=password \
--docker-email=netkiller@msn.com
		
			

		
iMac:spring neo$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-fhfn8   kubernetes.io/service-account-token   3      2d23h
docker-hub            kubernetes.io/dockerconfigjson        1      15s		
		
			
		
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: springboot 
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: springboot
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers: 
      - name: springboot
        image: netkiller/config:latest
        imagePullPolicy: IfNotPresent 
        ports:
        - containerPort: 8888
      imagePullSecrets:
        - name: docker-hub		
		
			

		
kubectl delete -n default secret docker-hub	
		
			

106.3.9.4. Configuring TLS/SSL

			
# Certificate generation
mkdir cert && cd cert

# Generate a self-signed CA certificate

openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Edit the openssl configuration
cp /etc/pki/tls/openssl.cnf .
vim openssl.cnf

[req]
req_extensions = v3_req # uncomment this line
# add the following configuration
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = ns.netkiller.cn

# Generate the server certificate
openssl genrsa -out ingress-key.pem 2048
openssl req -new -key ingress-key.pem -out ingress.csr -subj "/CN=www.netkiller.cn" -config openssl.cnf
openssl x509 -req -in ingress.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out ingress.pem -days 365 -extensions v3_req -extfile openssl.cnf
			
			
			
kubectl create secret tls ingress-secret --namespace=kube-system --key cert/ingress-key.pem --cert cert/ingress.pem 			
			
			

106.3.10. ConfigMap

A ConfigMap stores configuration data as key-value pairs, and can also hold configuration files.

106.3.10.1. Creating Key-Value Config Items

Create a ConfigMap from key-value strings:

		
neo@MacBook-Pro-Neo ~ % kubectl create configmap config --from-literal=nickname=netkiller
configmap/config created		
		
			

			
neo@MacBook-Pro-Neo ~ % kubectl get configmap config -o go-template='{{.data}}'
map[nickname:netkiller]			
			
			

Create multiple key-value pairs:

			
neo@MacBook-Pro-Neo ~ % kubectl create configmap user --from-literal=username=neo --from-literal=nickname=netkiller --from-literal=age=35
configmap/user created

neo@MacBook-Pro-Neo ~ % kubectl get configmap user -o go-template='{{.data}}'                                                        
map[age:35 nickname:netkiller username:neo]%  			
			
			

			
neo@MacBook-Pro-Neo ~ % kubectl create configmap db-config --from-literal=db.host=172.16.0.10 --from-literal=db.port='3306' 
configmap/db-config created
neo@MacBook-Pro-Neo ~ % kubectl describe configmap db-config                                                  
Name:         db-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
db.port:
----
3306
db.host:
----
172.16.0.10
Events:  <none>			
			
			

106.3.10.2. Creating a ConfigMap from a File

			
neo@MacBook-Pro-Neo ~ % kubectl create configmap passwd --from-file=/etc/passwd
configmap/passwd created

neo@MacBook-Pro-Neo ~ % kubectl describe configmap passwd                      
Name:         passwd
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
passwd:
----
##
# User Database
# 
# Note that this file is consulted directly only when the system is running
# in single-user mode.  At other times this information is provided by
# Open Directory.
#
# See the opendirectoryd(8) man page for additional information about
# Open Directory.
##
nobody:*:-2:-2:Unprivileged User:/var/empty:/usr/bin/false
root:*:0:0:System Administrator:/var/root:/bin/sh
daemon:*:1:1:System Services:/var/root:/usr/bin/false
_uucp:*:4:4:Unix to Unix Copy Protocol:/var/spool/uucp:/usr/sbin/uucico
_taskgated:*:13:13:Task Gate Daemon:/var/empty:/usr/bin/false
_networkd:*:24:24:Network Services:/var/networkd:/usr/bin/false
_installassistant:*:25:25:Install Assistant:/var/empty:/usr/bin/false
_lp:*:26:26:Printing Services:/var/spool/cups:/usr/bin/false
_postfix:*:27:27:Postfix Mail Server:/var/spool/postfix:/usr/bin/false
_scsd:*:31:31:Service Configuration Service:/var/empty:/usr/bin/false
_ces:*:32:32:Certificate Enrollment Service:/var/empty:/usr/bin/false
_appstore:*:33:33:Mac App Store Service:/var/db/appstore:/usr/bin/false
_mcxalr:*:54:54:MCX AppLaunch:/var/empty:/usr/bin/false
_appleevents:*:55:55:AppleEvents Daemon:/var/empty:/usr/bin/false
_geod:*:56:56:Geo Services Daemon:/var/db/geod:/usr/bin/false
_devdocs:*:59:59:Developer Documentation:/var/empty:/usr/bin/false
_sandbox:*:60:60:Seatbelt:/var/empty:/usr/bin/false
_mdnsresponder:*:65:65:mDNSResponder:/var/empty:/usr/bin/false
_ard:*:67:67:Apple Remote Desktop:/var/empty:/usr/bin/false
_www:*:70:70:World Wide Web Server:/Library/WebServer:/usr/bin/false
_eppc:*:71:71:Apple Events User:/var/empty:/usr/bin/false
_cvs:*:72:72:CVS Server:/var/empty:/usr/bin/false
_svn:*:73:73:SVN Server:/var/empty:/usr/bin/false
_mysql:*:74:74:MySQL Server:/var/empty:/usr/bin/false
_sshd:*:75:75:sshd Privilege separation:/var/empty:/usr/bin/false
_qtss:*:76:76:QuickTime Streaming Server:/var/empty:/usr/bin/false
_cyrus:*:77:6:Cyrus Administrator:/var/imap:/usr/bin/false
_mailman:*:78:78:Mailman List Server:/var/empty:/usr/bin/false
_appserver:*:79:79:Application Server:/var/empty:/usr/bin/false
_clamav:*:82:82:ClamAV Daemon:/var/virusmails:/usr/bin/false
_amavisd:*:83:83:AMaViS Daemon:/var/virusmails:/usr/bin/false
_jabber:*:84:84:Jabber XMPP Server:/var/empty:/usr/bin/false
_appowner:*:87:87:Application Owner:/var/empty:/usr/bin/false
_windowserver:*:88:88:WindowServer:/var/empty:/usr/bin/false
_spotlight:*:89:89:Spotlight:/var/empty:/usr/bin/false
_tokend:*:91:91:Token Daemon:/var/empty:/usr/bin/false
_securityagent:*:92:92:SecurityAgent:/var/db/securityagent:/usr/bin/false
_calendar:*:93:93:Calendar:/var/empty:/usr/bin/false
_teamsserver:*:94:94:TeamsServer:/var/teamsserver:/usr/bin/false
_update_sharing:*:95:-2:Update Sharing:/var/empty:/usr/bin/false
_installer:*:96:-2:Installer:/var/empty:/usr/bin/false
_atsserver:*:97:97:ATS Server:/var/empty:/usr/bin/false
_ftp:*:98:-2:FTP Daemon:/var/empty:/usr/bin/false
_unknown:*:99:99:Unknown User:/var/empty:/usr/bin/false
_softwareupdate:*:200:200:Software Update Service:/var/db/softwareupdate:/usr/bin/false
_coreaudiod:*:202:202:Core Audio Daemon:/var/empty:/usr/bin/false
_screensaver:*:203:203:Screensaver:/var/empty:/usr/bin/false
_locationd:*:205:205:Location Daemon:/var/db/locationd:/usr/bin/false
_trustevaluationagent:*:208:208:Trust Evaluation Agent:/var/empty:/usr/bin/false
_timezone:*:210:210:AutoTimeZoneDaemon:/var/empty:/usr/bin/false
_lda:*:211:211:Local Delivery Agent:/var/empty:/usr/bin/false
_cvmsroot:*:212:212:CVMS Root:/var/empty:/usr/bin/false
_usbmuxd:*:213:213:iPhone OS Device Helper:/var/db/lockdown:/usr/bin/false
_dovecot:*:214:6:Dovecot Administrator:/var/empty:/usr/bin/false
_dpaudio:*:215:215:DP Audio:/var/empty:/usr/bin/false
_postgres:*:216:216:PostgreSQL Server:/var/empty:/usr/bin/false
_krbtgt:*:217:-2:Kerberos Ticket Granting Ticket:/var/empty:/usr/bin/false
_kadmin_admin:*:218:-2:Kerberos Admin Service:/var/empty:/usr/bin/false
_kadmin_changepw:*:219:-2:Kerberos Change Password Service:/var/empty:/usr/bin/false
_devicemgr:*:220:220:Device Management Server:/var/empty:/usr/bin/false
_webauthserver:*:221:221:Web Auth Server:/var/empty:/usr/bin/false
_netbios:*:222:222:NetBIOS:/var/empty:/usr/bin/false
_warmd:*:224:224:Warm Daemon:/var/empty:/usr/bin/false
_dovenull:*:227:227:Dovecot Authentication:/var/empty:/usr/bin/false
_netstatistics:*:228:228:Network Statistics Daemon:/var/empty:/usr/bin/false
_avbdeviced:*:229:-2:Ethernet AVB Device Daemon:/var/empty:/usr/bin/false
_krb_krbtgt:*:230:-2:Open Directory Kerberos Ticket Granting Ticket:/var/empty:/usr/bin/false
_krb_kadmin:*:231:-2:Open Directory Kerberos Admin Service:/var/empty:/usr/bin/false
_krb_changepw:*:232:-2:Open Directory Kerberos Change Password Service:/var/empty:/usr/bin/false
_krb_kerberos:*:233:-2:Open Directory Kerberos:/var/empty:/usr/bin/false
_krb_anonymous:*:234:-2:Open Directory Kerberos Anonymous:/var/empty:/usr/bin/false
_assetcache:*:235:235:Asset Cache Service:/var/empty:/usr/bin/false
_coremediaiod:*:236:236:Core Media IO Daemon:/var/empty:/usr/bin/false
_launchservicesd:*:239:239:_launchservicesd:/var/empty:/usr/bin/false
_iconservices:*:240:240:IconServices:/var/empty:/usr/bin/false
_distnote:*:241:241:DistNote:/var/empty:/usr/bin/false
_nsurlsessiond:*:242:242:NSURLSession Daemon:/var/db/nsurlsessiond:/usr/bin/false
_displaypolicyd:*:244:244:Display Policy Daemon:/var/empty:/usr/bin/false
_astris:*:245:245:Astris Services:/var/db/astris:/usr/bin/false
_krbfast:*:246:-2:Kerberos FAST Account:/var/empty:/usr/bin/false
_gamecontrollerd:*:247:247:Game Controller Daemon:/var/empty:/usr/bin/false
_mbsetupuser:*:248:248:Setup User:/var/setup:/bin/bash
_ondemand:*:249:249:On Demand Resource Daemon:/var/db/ondemand:/usr/bin/false
_xserverdocs:*:251:251:macOS Server Documents Service:/var/empty:/usr/bin/false
_wwwproxy:*:252:252:WWW Proxy:/var/empty:/usr/bin/false
_mobileasset:*:253:253:MobileAsset User:/var/ma:/usr/bin/false
_findmydevice:*:254:254:Find My Device Daemon:/var/db/findmydevice:/usr/bin/false
_datadetectors:*:257:257:DataDetectors:/var/db/datadetectors:/usr/bin/false
_captiveagent:*:258:258:captiveagent:/var/empty:/usr/bin/false
_ctkd:*:259:259:ctkd Account:/var/empty:/usr/bin/false
_applepay:*:260:260:applepay Account:/var/db/applepay:/usr/bin/false
_hidd:*:261:261:HID Service User:/var/db/hidd:/usr/bin/false
_cmiodalassistants:*:262:262:CoreMedia IO Assistants User:/var/db/cmiodalassistants:/usr/bin/false
_analyticsd:*:263:263:Analytics Daemon:/var/db/analyticsd:/usr/bin/false
_fpsd:*:265:265:FPS Daemon:/var/db/fpsd:/usr/bin/false
_timed:*:266:266:Time Sync Daemon:/var/db/timed:/usr/bin/false
_nearbyd:*:268:268:Proximity and Ranging Daemon:/var/db/nearbyd:/usr/bin/false
_reportmemoryexception:*:269:269:ReportMemoryException:/var/db/reportmemoryexception:/usr/bin/false
_driverkit:*:270:270:DriverKit:/var/empty:/usr/bin/false
_diskimagesiod:*:271:271:DiskImages IO Daemon:/var/db/diskimagesiod:/usr/bin/false
_logd:*:272:272:Log Daemon:/var/db/diagnostics:/usr/bin/false
_appinstalld:*:273:273:App Install Daemon:/var/db/appinstalld:/usr/bin/false
_installcoordinationd:*:274:274:Install Coordination Daemon:/var/db/installcoordinationd:/usr/bin/false
_demod:*:275:275:Demo Daemon:/var/empty:/usr/bin/false
_rmd:*:277:277:Remote Management Daemon:/var/db/rmd:/usr/bin/false
_fud:*:278:278:Firmware Update Daemon:/var/db/fud:/usr/bin/false
_knowledgegraphd:*:279:279:Knowledge Graph Daemon:/var/db/knowledgegraphd:/usr/bin/false
_coreml:*:280:280:CoreML Services:/var/empty:/usr/bin/false
_oahd:*:441:441:OAH Daemon:/var/empty:/usr/bin/false

Events:  <none>			
			
			

Handle multiple files

			
neo@MacBook-Pro-Neo ~ % kubectl create configmap apache-httpd --from-file=/etc/apache2/httpd.conf --from-file=/etc/apache2/extra/httpd-vhosts.conf
configmap/apache-httpd created			
			
			

Handle every file within a directory

			
neo@MacBook-Pro-Neo ~ % kubectl create configmap apache-httpd-users --from-file=/etc/apache2/users             
configmap/apache-httpd-users created			
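
By default the file name becomes the key. The key can be overridden with the --from-file=key=path form; a sketch reusing the httpd.conf above (the ConfigMap name apache-main and key main.conf are arbitrary):

kubectl create configmap apache-main --from-file=main.conf=/etc/apache2/httpd.conf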
			
			

106.3.10.3. Creating a ConfigMap from an Environment-Variable File

			

			
neo@MacBook-Pro-Neo ~ % cat <<EOF > /tmp/test.env
username=neo
nickname=netkiller
age=38
sex=Y
EOF
neo@MacBook-Pro-Neo ~ % cat /tmp/test.env 
username=neo
nickname=netkiller
age=38
sex=Y
neo@MacBook-Pro-Neo ~ % kubectl create configmap env-config --from-env-file=/tmp/test.env          
configmap/env-config created			
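
Unlike --from-file, which stores the whole file under one key, --from-env-file turns every line into its own key. The resulting data should look roughly like this:

neo@MacBook-Pro-Neo ~ % kubectl get configmap env-config -o go-template='{{.data}}'
map[age:38 nickname:netkiller sex:Y username:neo]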
			
			

106.3.10.4. Viewing ConfigMaps

			
neo@MacBook-Pro-Neo ~ % kubectl get configmap                                       
NAME             DATA   AGE
config           1      52s			
			
			

			
neo@MacBook-Pro-Neo ~ % kubectl describe configmap config
Name:         config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nickname:
----
netkiller
Events:  <none>			
			
			

			
neo@MacBook-Pro-Neo ~ % kubectl get configmap config -o yaml 
apiVersion: v1
data:
  nickname: netkiller
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-02T05:05:59Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:nickname: {}
    manager: kubectl-create
    operation: Update
    time: "2020-10-02T05:05:59Z"
  name: config
  namespace: default
  resourceVersion: "18065"
  selfLink: /api/v1/namespaces/default/configmaps/config
  uid: 35381fa6-681b-417a-afc1-f45fdff5406d			
			
			

			
neo@MacBook-Pro-Neo ~ % kubectl get configmap user -o json                   
{
    "apiVersion": "v1",
    "data": {
        "age": "35",
        "nickname": "netkiller",
        "username": "neo"
    },
    "kind": "ConfigMap",
    "metadata": {
        "creationTimestamp": "2020-10-02T05:13:09Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:data": {
                        ".": {},
                        "f:age": {},
                        "f:nickname": {},
                        "f:username": {}
                    }
                },
                "manager": "kubectl-create",
                "operation": "Update",
                "time": "2020-10-02T05:13:09Z"
            }
        ],
        "name": "user",
        "namespace": "default",
        "resourceVersion": "18381",
        "selfLink": "/api/v1/namespaces/default/configmaps/user",
        "uid": "51e3aa61-21cf-4ed1-871c-ac7119aec7a1"
    }
}			
			
			

106.3.10.5. Deleting a ConfigMap

			
neo@MacBook-Pro-Neo ~ % kubectl delete -n default configmap config
configmap "config" deleted			
			
			

106.3.10.6. Defining ConfigMaps in YAML

Key-value configuration
			
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  namespace: default
data:
  db.host: 172.16.0.10
  db.port: '3306'
  db.user: neo
  db.pass: chen
			
				

Create the configuration

			
neo@MacBook-Pro-Neo ~/tmp/kubernetes % kubectl create -f key-value.yaml
configmap/db-config created
			
				

Mount the configuration entries as files

			
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "cat /usr/local/etc/config/db.host" ]
      volumeMounts:
      - name: config-volume
        mountPath: /usr/local/etc/config
  volumes:
    - name: config-volume
      configMap:
        name: db-config
  restartPolicy: Never			
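
Assuming the db-config ConfigMap above exists, the container prints the value of db.host and exits, so the result shows up in the Pod log:

kubectl logs test-pod
# expected output: 172.16.0.10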
			
				

Define multiple groups of configuration entries

			
apiVersion: v1
kind: ConfigMap
metadata:
  name: spring-cloud-config
  namespace: default
data:
  config: |
    spring.security.user.name=config
    spring.security.user.password=passw0rd
  eureka: |
    spring.security.user.name=eureka
    spring.security.user.password=passw0rd
  gateway: |
    spring.security.user.name=gateway
    spring.security.user.password=passw0rd
			
				
Secret

Generate the private key

			
openssl genrsa -out ingress.key 2048
			
				

Generate the self-signed public certificate

			
openssl req -new -x509 -days 3650 -key ingress.key -out ingress.crt						
			
				

Base64-encode the certificate and key

			
neo@MacBook-Pro-Neo ~/workspace/devops/demo % base64 ingress.crt 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURhRENDQWxBQ0NRRFdsVG0x……
neo@MacBook-Pro-Neo ~/workspace/devops/demo % base64 ingress.key
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVB……
			
				
			
apiVersion: v1
kind: Secret
metadata:
  name: tls
  namespace: development
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURhRENDQWxBQ0NRRFdsVG0x……
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVB……
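
The manual base64 step can be skipped entirely: kubectl create secret tls encodes the files itself. A sketch equivalent to the manifest above:

kubectl create secret tls tls --namespace=development --key ingress.key --cert ingress.crt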
			
				
Environment variables

envFrom defines every entry of a ConfigMap as a container environment variable

			
apiVersion: v1
kind: Pod
metadata:
  name: neo-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
      - configMapRef:
          name: special-config
  restartPolicy: Never			
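
Each key of special-config (assumed to exist) becomes an environment variable of the same name inside the container. Since the container only runs env and exits, the variables can be checked in the log:

kubectl logs neo-test-pod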
			
				

Use valueFrom to reference individual entries

			
neo@MacBook-Pro-Neo ~/tmp/kubernetes % cat key-value.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  namespace: default
data:
  db.host: 172.16.0.10
  db.port: '3306'		
  db.user: neo
  db.pass: chen
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: DBHOST
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: db.host
        - name: DBPORT
          valueFrom:
            configMapKeyRef:
              name: db-config
              key: db.port
  restartPolicy: Never				
			
neo@MacBook-Pro-Neo ~/tmp/kubernetes % kubectl create -f key-value.yaml
configmap/db-config created
pod/test-pod created		
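
The container runs env once and exits, so the two injected variables appear in the Pod log; a quick check:

kubectl logs test-pod | grep DB
# expected:
# DBHOST=172.16.0.10
# DBPORT=3306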
			
				
Configuration files

Define the configuration

		
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  labels:
    app: redis
data:
  redis.conf: |-
    pidfile /var/lib/redis/redis.pid
    dir /var/lib/redis
    port 6379
    bind 0.0.0.0
    appendonly yes
    protected-mode no
    requirepass 123456
		
				

Reference the configuration

		
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.8
        command: 
          - "sh"
          - "-c"
          - "redis-server /usr/local/etc/redis/redis.conf"
        ports:
        - containerPort: 6379
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
        livenessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 300
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath:  /usr/local/etc/redis/redis.conf
          subPath: redis.conf
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: redis
      - name: config
        configMap:
          name: redis-config
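
The subPath mount injects only the redis.conf key as a single file, so the rest of /usr/local/etc/redis is not shadowed by the volume. Assuming the Deployment is running, the configuration can be verified from inside the Pod; a sketch (the password matches requirepass above):

kubectl exec deploy/redis -- redis-cli -a 123456 ping
# expected: PONG

The items field goes one step further and selects individual keys, controlling the path each one is mounted at, as in the following example (special-config and its special.how key are assumed to exist):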
		
				
			
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh","-c","find /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
        - key: special.how
          path: path/to/special-key
  restartPolicy: Never			
			
				

106.3.11. Job/CronJob

106.3.11.1. CronJob

			
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"

kubectl delete cronjob hello
			
			

106.3.11.2. Job

Run a one-off task

.spec.completions specifies the number of Pods that must finish successfully for the Job to complete; defaults to 1

.spec.parallelism specifies the number of Pods run in parallel; defaults to 1

.spec.activeDeadlineSeconds specifies the maximum time for retrying failed Pods; once exceeded, the Job stops retrying

			
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
spec:
  completions: 1
  parallelism: 1
  template:
    metadata:
      name: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never			
			
				

			
$ kubectl create -f job.yaml
job "busybox" created
$ pods=$(kubectl get pods --selector=job-name=busybox --output=jsonpath={.items..metadata.name})
$ kubectl logs $pods		
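
# optionally block until the Job finishes instead of polling
$ kubectl wait --for=condition=complete job/busybox --timeout=60s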
			
				
Scheduled tasks

.spec.schedule specifies when the task runs, in standard Cron format

.spec.startingDeadlineSeconds specifies the deadline for starting the task if it misses its scheduled time

.spec.concurrencyPolicy specifies the concurrency policy; one of Allow, Forbid, or Replace

			
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure			
			
				

106.3.12. clusterrolebinding

		
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user [USER ACCOUNT]		
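
# verify the binding by impersonating the user (requires impersonation rights); neo is a placeholder
kubectl auth can-i '*' '*' --as=neo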
		
		

106.3.13. Volume

		
A PersistentVolume supports three access modes (accessModes):

ReadWriteOnce (RWO): the most basic mode; read-write, but the volume can be mounted by only a single node.
ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes.
ReadWriteMany (RWX): the volume can be mounted read-write by many nodes.

Not every storage backend supports all three modes; RWX support in particular is still rare, NFS being the most common choice. When a PVC binds to a PV, it usually matches on two criteria: the storage size and the access mode.

A PersistentVolume also has three reclaim policies (persistentVolumeReclaimPolicy, i.e. what happens to the PV when the PVC releases the volume):

Retain: keep the volume without cleaning it (manual cleanup required)
Recycle: delete the data, i.e. rm -rf /thevolume/* (supported only by NFS and HostPath)
Delete: delete the underlying storage resource, e.g. an AWS EBS volume (supported only by AWS EBS, GCE PD, Azure Disk, and Cinder)
		
		

106.3.13.1. local

			
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node			
			
			
Example
				
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-volume
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: netkiller-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-volume
  local:
    path: /tmp/neo
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netkiller-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-volume
---
kind: Pod
apiVersion: v1
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - name: busybox
      image: busybox:latest
      # image: registry.netkiller.cn:5000/netkiller/welcome:latest
      imagePullPolicy: IfNotPresent
      command:
        - sleep
        - "3600"
      volumeMounts:
      - mountPath: "/srv"
        name: mypd
  restartPolicy: Always
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: netkiller-pvc			
				
				

Deploy the Pod

				
iMac:kubernetes neo$ kubectl create -f example/volume/local.yaml 
storageclass.storage.k8s.io/local-volume created
persistentvolume/netkiller-local-pv created
persistentvolumeclaim/netkiller-pvc created
pod/busybox created				
				
				

Check the Pod status

				
iMac:kubernetes neo$ kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          2m28s				
				
				

Enter the Pod to inspect the local volume mount, and create a test file.

				
iMac:kubernetes neo$ kubectl exec -it busybox sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # mount | grep /srv
tmpfs on /srv type tmpfs (rw)

/ # echo helloworld > /srv/netkiller
/ # cat /srv/netkiller 
helloworld
				
				

On the host, check the mounted directory

				
$ cat /tmp/neo/netkiller 
helloworld
				
				

106.3.14. Ingress

Normally a Service only exposes a port, and that port is reachable from outside; but there is only one port 80, and many Services all want to use it. That calls for virtual hosting.

Multiple Services share a single port 80 and are told apart by domain name. This is the reason Ingress exists.

106.3.14.1. Managing Ingress

			
# inspect the existing configuration
kubectl describe ingress test

# edit the configuration
kubectl edit ingress test

# reload the configuration
kubectl replace -f ingress.yaml
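
# list all Ingress resources across namespaces
kubectl get ingress --all-namespaces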
			
			

106.3.14.2. Mounting SSL Certificates

Self-signed certificate

				
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=bar.foo.com/O=bar.foo.com"				
				
				

If you purchased an SSL certificate, you usually receive two files, *.key and *.pem; the pem file here is the cert file.

				
[root@agent-5 tmp]# kubectl create secret tls netkiller --key netkiller.cn.key --cert netkiller.cn.pem
secret/netkiller created
[root@agent-5 tmp]# kubectl get secret netkiller
NAME      TYPE                DATA   AGE
netkiller   kubernetes.io/tls   2      26s				
				
				

Add a tls entry to the YAML

				
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: netkiller-test
  namespace: project
spec:
  rules:
    - host: project.netkiller.cn
      http:
        paths:
          - backend:
              service:
                name: netkiller-test
                port:
                  number: 80
            path: /netkiller-test-service
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - project.netkiller.cn
      secretName: netkiller
				
				

106.3.14.3. Ports

			
+----------+  Ingress   +---------+    Pod    +----------+
| internet | ---------> | Service | --------> | Pod Node |
+----------+            +---------+           +----------+
			
			
			
			
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot
spec:
  defaultBackend:
    service:
      name: springboot
      port: 
        number: 80			
			
			

106.3.14.4. URI Rules

			
                   Ingress   / ---> /api --> api-service:8080
www.netkiller.cn ---------> |  ---> /usr --> usr-service:8080
                             \ ---> /img --> img-service:8080
			
			
			
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: uri-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.netkiller.cn
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 8080
      - path: /usr
        backend:
          serviceName: usr-service
          servicePort: 8080		
      - path: /img
        backend:
          serviceName: img-service
          servicePort: 8080		
			
			

106.3.14.5. Virtual Hosts (vhost)

			
www.netkiller.cn --|     Ingress     |-> www.netkiller.cn www:80
                   | --------------> |
img.netkiller.cn --|                 |-> img.netkiller.cn img:80			
			
			
			
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: vhost-ingress
spec:
  rules:
  - host: www.netkiller.cn
    http:
      paths:
      - backend:
          serviceName: www
          servicePort: 80
  - host: img.netkiller.cn
    http:
      paths:
      - backend:
          serviceName: img
          servicePort: 80			
			
			

106.3.14.6. rewrite

			
http://www.netkiller.cn/1100 => /article/1100			
			
			
			
apiVersion: networking.k8s.io/v1beta1
kind: Ingress			
metadata:
  name: rewrite-ingress
  annotations: 
    nginx.ingress.kubernetes.io/rewrite-target: /article/$1
spec:
  rules:
  - host: www.netkiller.cn
    http:
      paths:
        # multiple paths are allowed (regular expressions supported)
        - path: /(.*)
          backend:
            serviceName: article
            servicePort: 80	
			
			

106.3.14.7. Annotation Settings

Redirect HTTP to HTTPS
				
# this annotation only takes effect once HTTPS has been configured
nginx.ingress.kubernetes.io/ssl-redirect: "true"

# force the redirect to https whether or not an https certificate is configured
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
				
				
server-snippet

server-snippet lets you write Nginx configuration directly

				
nginx.ingress.kubernetes.io/server-snippet: |
    rewrite /api/($|.*) /api/v2/$1 break;
    rewrite /img/($|.*) /img/thumbnail/$1 break;
				
				

106.3.14.8. Canary Release (Gray Release)

The three annotations, in order of matching precedence:

			
canary-by-header > canary-by-cookie > canary-weight			
			
			
Prepare the services
				
# release version
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: hello-service
---
# canary version
apiVersion: v1
kind: Service
metadata:
  name: canary-hello-service
  labels:
    app: canary-hello-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: canary-hello-service
				
				
Option 1: weight-based traffic splitting. With canary-weight set to "30", roughly 30% of requests are routed to the canary service.
				
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  rules:
  - host: canary.netkiller.cn
    http:
      paths:
      - backend:
          serviceName: canary-hello-service
          servicePort: 80
				
				

				
$ for i in $(seq 1 10); do curl http://canary.netkiller.cn; echo '\n'; done				
				
				
Enable the canary release via an HTTP header
				
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-by-header: "canary"				
				
				

				
$ for i in $(seq 1 5); do curl -H 'canary:always' http://canary.netkiller.cn; echo '\n'; done				
				
				

				
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-by-header: "canary"
  nginx.ingress.kubernetes.io/canary-by-header-value: "true"				
				
				

				
$ for i in $(seq 1 5); do curl -H 'canary:true' http://canary.netkiller.cn; echo '\n'; done						
				
				
Enable the canary release via a cookie
				
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-by-cookie: "canary"				
				
				

				
$ for i in $(seq 1 5); do curl -b 'canary=always' http://canary.netkiller.cn; echo '\n'; done				
				
				

106.3.14.9. Fixing 504 Gateway Timeout

Add the following configuration entries

			
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '300'			
			
			
			
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '300'
  name: netkiller-test
  namespace: project
spec:
  rules:
    - host: project.netkiller.cn
      http:
        paths:
          - backend:
              service:
                name: netkiller-test
                port:
                  number: 80
            path: /netkiller-test-service
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - project.netkiller.cn
      secretName: netkiller