Although kubeadm is the simplest and officially recommended way to install Kubernetes, the certificates in a kubeadm-built cluster are valid for only one year by default; before they expire, they must be renewed or the cluster upgraded.
Kubernetes currently ships three minor releases per year, so upgrading the cluster is the recommended way to refresh certificates.
When installing with kubeadm, one Master node initializes the cluster and the remaining nodes then join it. Initialization can be driven either directly by kubeadm command-line flags or by a configuration file; because the command-line form may require many flags, this example uses a configuration file.
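Certificate lifetimes can be checked at any time: on a kubeadm node, `kubeadm certs check-expiration` lists every managed certificate, and `kubeadm certs renew all` renews them without an upgrade. The sketch below shows the same check on a single certificate file with openssl; it generates a throwaway self-signed certificate so it runs anywhere, but on a real node you would point it at /etc/kubernetes/pki/apiserver.crt:

```shell
# Throwaway self-signed cert, standing in for /etc/kubernetes/pki/apiserver.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=kube-apiserver" -days 365 2>/dev/null
# Print the expiry date; kubeadm-issued certificates also default to one year
openssl x509 -noout -enddate -in demo.crt   # prints: notAfter=<date>
```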
1. Configure passwordless SSH from Master01 to the other nodes
# ssh-keygen
# for i in k8s-master01 k8s-node01 k8s-node02 k8s-master02 k8s-master03;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
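When this step runs inside a script, ssh-keygen can be made non-interactive; a minimal sketch (the key path here is a demo path, on a real node you would use $HOME/.ssh/id_rsa):

```shell
# Generate an RSA key pair without prompts if one does not already exist
KEY=./demo_id_rsa          # on a real node: $HOME/.ssh/id_rsa
[ -f "$KEY" ] || ssh-keygen -t rsa -b 2048 -N '' -f "$KEY" -q
# The public key ($KEY.pub) is what ssh-copy-id distributes to each node
```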
2. Create the kubeadm-config.yaml file on the Master node
This is the kubeadm configuration file; the host network, the podSubnet, and the serviceSubnet must not overlap.
kubernetesVersion must match the installed kubeadm version, which can be queried with kubeadm version; here it is 1.27.0.
Depending on whether high availability is configured, the apiServer.certSANs and controlPlaneEndpoint fields change accordingly:
With high availability, both point to the VIP configured in the HA components, here 192.168.10.129 with port 16443, i.e. the IP address and port HAProxy listens on.
Without high availability, both point to the single Master's address, here 192.168.10.121 with port 6443.
Change criSocket to match your container runtime.
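Which socket to put in criSocket can be checked on the node itself; a sketch that probes the upstream default socket paths for containerd and CRI-O (neither will exist on a machine without a runtime installed):

```shell
# Detect the CRI socket for the criSocket field; prints a fallback if none exists
if [ -S /var/run/containerd/containerd.sock ]; then
  echo "criSocket: unix:///var/run/containerd/containerd.sock"   # containerd
elif [ -S /var/run/crio/crio.sock ]; then
  echo "criSocket: unix:///var/run/crio/crio.sock"               # CRI-O
else
  echo "no CRI socket found"
fi
```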
Create the kubeadm-config.yaml file on the Master node as shown below; a default file can be generated first and then edited:
kubeadm config print init-defaults > kubeadm-config.yaml
vim kubeadm-config.yaml
Single-Master configuration:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.121 # Master IP (Modified)
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock # (Modified)
  imagePullPolicy: IfNotPresent
  name: k8s-master01 # Master hostname (Modified)
  taints:
  - effect: NoSchedule # (Added)
    key: node-role.kubernetes.io/control-plane # (Added; v1.24+ uses the control-plane taint)
---
apiServer:
  certSANs:
  - 192.168.10.121 # Master IP (Added)
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.10.121:6443 # Master IP:6443 (Added)
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # (Modified)
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.16.0.0/12 # Pod subnet (Added)
scheduler: {}
High-availability configuration:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.121 # Master IP (Modified)
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock # (Modified)
  imagePullPolicy: IfNotPresent
  name: k8s-master01 # Master hostname (Modified)
  taints:
  - effect: NoSchedule # (Added)
    key: node-role.kubernetes.io/control-plane # (Added; v1.24+ uses the control-plane taint)
---
apiServer:
  certSANs:
  - 192.168.10.129 # VIP (Added)
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.10.129:16443 # VIP:16443 (Added)
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # (Modified)
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.16.0.0/12 # Pod subnet (Added)
scheduler: {}
Because your kubeadm version may differ from this example, first migrate the configuration file to the current schema:
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Copy the new.yaml file to the other Master nodes:
for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done
Then pre-pull the images on all Master nodes to shorten initialization (the other nodes need no configuration changes, not even the IP address):
kubeadm config images pull --config /root/new.yaml
3. Initialize the Master01 node
Initialization generates the certificates and configuration files under /etc/kubernetes, after which the other Master nodes can join Master01:
kubeadm init --config /root/new.yaml --upload-certs
A successful initialization prints a token that other nodes use when joining, so be sure to record it.
Output on success:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
# Join a Master (control-plane) node
kubeadm join 192.168.10.121:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5c64e636b37a6b1e5ad53b51f31e9ab20cf69325ee46b2542d3904dd5c080415 \
--control-plane --certificate-key af56cf3573ade22c2ff94e4184208dc21f2a2a240fddd0e41b6168410958c4c6
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
# Join a worker node
kubeadm join 192.168.10.121:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5c64e636b37a6b1e5ad53b51f31e9ab20cf69325ee46b2542d3904dd5c080415
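If the join command is lost, a fresh one can be printed with `kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA (it is a SHA-256 digest of the CA's DER-encoded public key). The sketch below generates a throwaway CA so it runs anywhere; on a control-plane node you would read /etc/kubernetes/pki/ca.crt instead:

```shell
# Throwaway CA, standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key -out demo-ca.crt \
  -subj "/CN=kubernetes" -days 3650 2>/dev/null
# SHA-256 over the DER-encoded public key, as kubeadm computes it
HASH=$(openssl x509 -pubkey -noout -in demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$HASH"
```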
If initialization fails, check each configuration item, clean up, and then initialize again. The cleanup commands are:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
4. Configure environment variables
After a successful initialization, set the KUBECONFIG environment variable on Master01 so that kubectl can access the Kubernetes cluster:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc
source /root/.bashrc
If you configured high availability and want to manage the cluster with kubectl on the other Master nodes as well, simply run the same environment-variable commands there.
Worker nodes do not have /etc/kubernetes/admin.conf; to use kubectl on a worker node, you must distribute admin.conf to it, as described in the official documentation on controlling a cluster from machines other than the control-plane node.
Check the node status:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane 6m4s v1.27.6
With this installation method, every system component runs as a container in the kube-system namespace. Check the Pod status (the node stays NotReady and coredns stays Pending until a CNI network plugin is deployed):
# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-65dcc469f7-bj87n 0/1 Pending 0 6m13s
coredns-65dcc469f7-hvw2m 0/1 Pending 0 6m13s
etcd-master01 1/1 Running 0 6m26s
kube-apiserver-master01 1/1 Running 0 6m26s
kube-controller-manager-master01 1/1 Running 0 6m26s
kube-proxy-hq6hp 1/1 Running 0 6m13s
kube-scheduler-master01 1/1 Running 0 6m28s