Installing K8s 1.23.17 on openEuler

The following are detailed steps for deploying a K8s cluster (1 master, 2 workers) on openEuler 23.09 across three servers, with the node on which each step runs called out explicitly. Docker is assumed to be installed in advance; for installing Docker on openEuler 23.09 see https://blog.csdn.net/wdy_2099/article/details/154344916

Environment

Role      IP address     Hostname (suggested)
Master    192.2.12.76    master-node
Worker1   192.2.12.77    worker-node-1
Worker2   192.2.101.3    worker-node-2

I. Common configuration (run on all nodes)

1. Disable SELinux and the firewall, and turn off swap


# Disable SELinux (now + permanently)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable the firewall (now + permanently)
systemctl stop firewalld
systemctl disable firewalld

# Disable swap (now + permanently)
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
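
A quick optional check that all three changes took effect:

getenforce                      # should print Permissive (or Disabled)
systemctl is-active firewalld   # should print inactive
free -h | grep -i swap          # swap totals should all be 0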

2. Configure kernel parameters (required by K8s networking)


# Load the required kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

modprobe overlay
modprobe br_netfilter
modprobe ip_vs
modprobe nf_conntrack

# Configure network parameters (IP forwarding, bridge filtering)
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sysctl --system  # apply the settings
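
To verify that the modules are loaded and the parameters are active:

lsmod | grep -E 'overlay|br_netfilter'                          # both modules should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1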

3. Configure hostnames and /etc/hosts (to avoid DNS resolution problems)


# On the Master node
hostnamectl set-hostname master-node
# On Worker1
hostnamectl set-hostname worker-node-1
# On Worker2
hostnamectl set-hostname worker-node-2

# Add hosts entries on all nodes (replace with your actual IPs)
cat <<EOF | tee -a /etc/hosts
192.2.12.76    master-node
192.2.12.77    worker-node-1
192.2.101.3    worker-node-2
EOF
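
After updating /etc/hosts on every node, confirm the names resolve from each machine:

ping -c 1 master-node
ping -c 1 worker-node-1
ping -c 1 worker-node-2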

4. Time synchronization (to avoid certificate validity errors)


dnf install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources  # verify sync status (a line starting with ^* means synchronized)

Execution details:


[root@172 home]# dnf install -y chrony
OS                            11 kB/s | 2.5 kB     00:00
everything                    12 kB/s | 2.6 kB     00:00
EPOL                          12 kB/s | 2.6 kB     00:00
debuginfo                     12 kB/s | 2.6 kB     00:00
source                        11 kB/s | 2.5 kB     00:00
update                        12 kB/s | 2.5 kB     00:00
update-source                410  B/s | 2.5 kB     00:06
Package chrony-4.3-2.oe2309.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@172 home]# systemctl start chronyd
[root@172 home]# systemctl enable chronyd
[root@172 home]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? tick.ntp.infomaniak.ch        1   6   365    53   +200ms[ +200ms] +/-  150ms
^- tock.ntp.infomaniak.ch        1   6   367    53   +211ms[ +211ms] +/-  111ms
^- time.cloudflare.com           3   6   377    53   +229ms[ +229ms] +/-  112ms
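
Note that in the transcript above no source is marked ^* yet (only ^? and ^-), meaning the clock has not fully synchronized. You can force an immediate correction and check the offset:

chronyc makestep   # step the clock immediately instead of slewing slowly
chronyc tracking   # show the current offset from the selected source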

5. Add a China-mirror K8s package repository


# Add the Aliyun Kubernetes repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Rebuild the package cache
dnf clean all
dnf makecache

Execution details:


[root@worker-node-1 yum.repos.d]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
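
Optionally, confirm that the pinned 1.23.17 build is actually available from the new repo before installing:

dnf list --showduplicates kubeadm --disableexcludes=kubernetes | grep 1.23.17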

6. Install the K8s components


dnf install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes

# Enable kubelet (don't start it yet; it starts automatically after cluster init)
systemctl enable kubelet

Execution details:


[root@worker-node-1 yum.repos.d]# dnf install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
Kubernetes                     2.3 kB/s | 454  B     00:00
Kubernetes                      21 kB/s | 2.6 kB     00:00
Importing GPG key 0x13EDEF05:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: A362 B822 F6DE DC65 2817 EA46 B53D C80D 13ED EF05
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xDC6315A3:
 Userid     : "Artifact Registry Repository Signer <artifact-registry-repository-signer@google.com>"
 Fingerprint: 35BA A0B3 3E9E B396 F59C A838 C0BA 5CE6 DC63 15A3
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Kubernetes                      14 kB/s | 975  B     00:00
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Kubernetes                     619 kB/s | 182 kB     00:00
OS                              12 kB/s | 2.5 kB     00:00
everything                      12 kB/s | 2.6 kB     00:00
EPOL                            12 kB/s | 2.6 kB     00:00
debuginfo                       12 kB/s | 2.6 kB     00:00
source                          11 kB/s | 2.5 kB     00:00
update                          12 kB/s | 2.5 kB     00:00
update-source                   12 kB/s | 2.5 kB     00:00
Dependencies resolved.
===============================================================
 Package                Arch   Version        Repository  Size
===============================================================
Installing:
 kubeadm                x86_64 1.23.17-0      kubernetes 9.4 M
 kubectl                x86_64 1.23.17-0      kubernetes 9.8 M
 kubelet                x86_64 1.23.17-0      kubernetes  21 M
Installing dependencies:
 conntrack-tools        x86_64 1.4.7-1.oe2309 everything 177 k
 containernetworking-plugins
                        x86_64 1.2.0-1.oe2309 OS          20 M
 cri-tools              x86_64 1.26.0-0       kubernetes 8.6 M
 libnetfilter_cthelper  x86_64 1.0.1-1.oe2309 everything  20 k
 libnetfilter_cttimeout x86_64 1.0.1-1.oe2309 everything  21 k
 libnetfilter_queue     x86_64 1.0.5-2.oe2309 OS          25 k
 socat                  x86_64 1.7.4.4-1.oe2309
                                              everything 165 k

Transaction Summary
===============================================================
Install  10 Packages

Total download size: 69 M
Installed size: 298 M
Downloading Packages:
(1/10): cb2ed23fb25cc5b2f73ffc  17 MB/s | 9.8 MB     00:00
(2/10): 52c389a4598f61bdf251c5  14 MB/s | 9.4 MB     00:00
(3/10): 3f5ba2b53701ac9102ea7c  10 MB/s | 8.6 MB     00:00
(4/10): 552c4d4494c1de798baf4b  16 MB/s |  21 MB     00:01
(5/10): libnetfilter_queue-1.0  21 kB/s |  25 kB     00:01
(6/10): conntrack-tools-1.4.7- 568 kB/s | 177 kB     00:00
(7/10): libnetfilter_cttimeout 102 kB/s |  21 kB     00:00
(8/10): socat-1.7.4.4-1.oe2309 2.0 MB/s | 165 kB     00:00
(9/10): libnetfilter_cthelper-  17 kB/s |  20 kB     00:01
(10/10): containernetworking-p 664 kB/s |  20 MB     00:30
---------------------------------------------------------------
Total                          2.2 MB/s |  69 MB     00:32
Kubernetes                      35 kB/s | 2.6 kB     00:00
Importing GPG key 0x13EDEF05:
 Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
 Fingerprint: A362 B822 F6DE DC65 2817 EA46 B53D C80D 13ED EF05
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Key imported successfully
Importing GPG key 0xDC6315A3:
 Userid     : "Artifact Registry Repository Signer <artifact-registry-repository-signer@google.com>"
 Fingerprint: 35BA A0B3 3E9E B396 F59C A838 C0BA 5CE6 DC63 15A3
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Key imported successfully
Kubernetes                      14 kB/s | 975  B     00:00
Importing GPG key 0x3E1BA8D5:
 Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 Fingerprint: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                       1/1
  Installing       : containernetworking-plugins-1.2.0    1/10
  Installing       : socat-1.7.4.4-1.oe2309.x86_64        2/10
  Installing       : libnetfilter_cttimeout-1.0.1-1.oe    3/10
  Running scriptlet: libnetfilter_cttimeout-1.0.1-1.oe    3/10
  Installing       : libnetfilter_cthelper-1.0.1-1.oe2    4/10
  Running scriptlet: libnetfilter_cthelper-1.0.1-1.oe2    4/10
  Running scriptlet: libnetfilter_queue-1.0.5-2.oe2309    5/10
  Installing       : libnetfilter_queue-1.0.5-2.oe2309    5/10
  Running scriptlet: libnetfilter_queue-1.0.5-2.oe2309    5/10
  Installing       : conntrack-tools-1.4.7-1.oe2309.x8    6/10
  Running scriptlet: conntrack-tools-1.4.7-1.oe2309.x8    6/10
  Installing       : kubelet-1.23.17-0.x86_64             7/10
  Installing       : kubectl-1.23.17-0.x86_64             8/10
  Installing       : cri-tools-1.26.0-0.x86_64            9/10
  Installing       : kubeadm-1.23.17-0.x86_64            10/10
  Running scriptlet: kubeadm-1.23.17-0.x86_64            10/10
  Verifying        : cri-tools-1.26.0-0.x86_64            1/10
  Verifying        : kubeadm-1.23.17-0.x86_64             2/10
  Verifying        : kubectl-1.23.17-0.x86_64             3/10
  Verifying        : kubelet-1.23.17-0.x86_64             4/10
  Verifying        : containernetworking-plugins-1.2.0    5/10
  Verifying        : libnetfilter_queue-1.0.5-2.oe2309    6/10
  Verifying        : conntrack-tools-1.4.7-1.oe2309.x8    7/10
  Verifying        : libnetfilter_cthelper-1.0.1-1.oe2    8/10
  Verifying        : libnetfilter_cttimeout-1.0.1-1.oe    9/10
  Verifying        : socat-1.7.4.4-1.oe2309.x86_64       10/10

Installed:
  conntrack-tools-1.4.7-1.oe2309.x86_64
  containernetworking-plugins-1.2.0-1.oe2309.x86_64
  cri-tools-1.26.0-0.x86_64
  kubeadm-1.23.17-0.x86_64
  kubectl-1.23.17-0.x86_64
  kubelet-1.23.17-0.x86_64
  libnetfilter_cthelper-1.0.1-1.oe2309.x86_64
  libnetfilter_cttimeout-1.0.1-1.oe2309.x86_64
  libnetfilter_queue-1.0.5-2.oe2309.x86_64
  socat-1.7.4.4-1.oe2309.x86_64

Complete!
[root@worker-node-1 yum.repos.d]#
[root@worker-node-1 yum.repos.d]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@worker-node-1 yum.repos.d]# kubelet --version
Kubernetes v1.23.17
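
One pitfall worth checking before initializing: kubeadm 1.22+ defaults the kubelet to the systemd cgroup driver, while Docker defaults to cgroupfs, and a mismatch makes the kubelet crash-loop after kubeadm init. A minimal alignment sketch, assuming Docker is the runtime and /etc/docker/daemon.json has no existing settings you would need to merge by hand:

docker info | grep -i 'cgroup driver'   # check what Docker currently uses
cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker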

II. Master node configuration (run only on 192.2.12.76)

1. Pull the images (via Docker, from the Aliyun registry mirror)


[root@master-node yum.repos.d]# kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master-node yum.repos.d]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.17   62bc5d8258d6   2 years ago   130MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.17   bc6794cb54ac   2 years ago   51.9MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.17   f21c8d21558c   2 years ago   111MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.17   1dab4fc7b6e0   2 years ago   120MB
registry.aliyuncs.com/google_containers/etcd                      3.5.6-0    fce326961ae2   2 years ago   299MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6     a4ca41631cc7   4 years ago   46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6        6270bb605e12   4 years ago   683kB
[root@master-node yum.repos.d]#

2. Initialize the cluster (Pod CIDR matched to Flannel)

Command to run:


kubeadm init \
  --kubernetes-version v1.23.17 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
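
Optionally, if the Master has more than one network interface, kubeadm may advertise the wrong IP; the --apiserver-advertise-address flag pins it explicitly (same command as above, with one extra flag):

kubeadm init \
  --kubernetes-version v1.23.17 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.2.12.76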

Execution details:


[root@master-node yum.repos.d]# kubeadm init \
  --kubernetes-version v1.23.17 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.17
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.3. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master-node] and IPs [10.96.0.1 192.2.12.76]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-node localhost] and IPs [192.2.12.76 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-node localhost] and IPs [192.2.12.76 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.502844 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 5xw8le.pibty2i2oyzicnaw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.2.12.76:6443 --token 5xw8le.pibty2i2oyzicnaw \
	--discovery-token-ca-cert-hash sha256:17cb8ff1c588cc7c20c0f7aed471c52b0940fcc68bceb8bbd07cbee7639d227d
[root@master-node yum.repos.d]#

3. Configure kubectl access (Master node)


mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Execution details:


[root@master-node yum.repos.d]# mkdir -p $HOME/.kube
[root@master-node yum.repos.d]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-node yum.repos.d]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-node yum.repos.d]#
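
With the kubeconfig in place, a quick sanity check that kubectl can reach the API server:

kubectl cluster-info   # should report the control plane at https://192.2.12.76:6443
kubectl get nodes      # master-node shows NotReady until the network plugin is installed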

4. Install the network plugin (Flannel)

Apply the Flannel manifest (v0.14.0 is compatible with K8s 1.23).

4.1 Download kube-flannel.yml and apply it

kubectl apply -f kube-flannel.yml


[root@master-node softs]# wget https://raw.githubusercontent.com/coreos/flannel/v0.14.0/Documentation/kube-flannel.yml
--2025-11-04 08:34:45--  https://raw.githubusercontent.com/coreos/flannel/v0.14.0/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4813 (4.7K) [text/plain]
Saving to: 'kube-flannel.yml'

kube-flannel.yml                 100%[=======================================================>]   4.70K  --.-KB/s    in 0s

2025-11-04 08:34:45 (66.7 MB/s) - 'kube-flannel.yml' saved [4813/4813]

[root@master-node softs]# ls
kube-flannel.yml
[root@master-node softs]#
[root@master-node softs]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master-node softs]#

4.2 Verify Flannel status

Note: wait 1-2 minutes and make sure all pods reach Running.


[root@master-node softs]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-ms8rj               0/1     Pending   0          8m34s
coredns-6d8c4cb4d-t7lhl               0/1     Pending   0          8m34s
etcd-master-node                      1/1     Running   0          8m45s
kube-apiserver-master-node            1/1     Running   0          8m45s
kube-controller-manager-master-node   1/1     Running   0          8m45s
kube-flannel-ds-sjnwt                 1/1     Running   0          41s
kube-proxy-89hlw                      1/1     Running   0          8m34s
kube-scheduler-master-node            1/1     Running   0          8m44s
[root@master-node softs]#

Note: it is normal for coredns to still be Pending here; it becomes Ready once the worker nodes join.
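
If a kube-flannel-ds pod ends up in CrashLoopBackOff instead, its logs usually point at the cause (for example a Pod CIDR mismatch). In the v0.14.0 manifest the DaemonSet runs in kube-system and its pods carry the label app=flannel:

kubectl logs -n kube-system -l app=flannel --tail=20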

5. Verify Master node status


# Wait 3-5 minutes, then check node status (the Master should now be Ready)
kubectl get nodes

III. Worker node configuration

Steps to run on the Worker nodes (both 192.2.12.77 and 192.2.101.3).

1. Generate the join command for the workers

Run on the Master:


kubeadm token create --print-join-command
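
This prints a ready-to-run join line of the same form as the one in the init output, but with a freshly created token (the CA cert hash stays constant for the cluster):

kubeadm join 192.2.12.76:6443 --token <new-token> \
	--discovery-token-ca-cert-hash sha256:17cb8ff1c588cc7c20c0f7aed471c52b0940fcc68bceb8bbd07cbee7639d227d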

2. Join the Worker nodes to the cluster


[root@worker-node-1 yum.repos.d]# kubeadm join 192.2.12.76:6443 --token 5xw8le.pibty2i2oyzicnaw \
        --discovery-token-ca-cert-hash sha256:17cb8ff1c588cc7c20c0f7aed471c52b0940fcc68bceb8bbd07cbee7639d227d
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 26.1.3. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@worker-node-1 yum.repos.d]#

3. Verify join status (run on the Master)


# Wait 2-3 minutes, then check that all nodes are Ready
kubectl get nodes

IV. Final verification (run on the Master)


# View cluster nodes
kubectl get nodes

# View detailed node information
kubectl describe nodes

# View Pods in all namespaces
kubectl get pods --all-namespaces

Execution details:


[root@master-node ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
master-node     Ready    control-plane,master   26h   v1.23.17
worker-node-1   Ready    <none>                 26h   v1.23.17
worker-node-2   Ready    <none>                 26h   v1.23.17
[root@master-node ~]#
[root@master-node ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS      AGE
kube-system   coredns-6d8c4cb4d-ms8rj               1/1     Running   0             26h
kube-system   coredns-6d8c4cb4d-t7lhl               1/1     Running   0             26h
kube-system   etcd-master-node                      1/1     Running   0             26h
kube-system   kube-apiserver-master-node            1/1     Running   0             26h
kube-system   kube-controller-manager-master-node   1/1     Running   0             26h
kube-system   kube-flannel-ds-6l59r                 1/1     Running   1 (26h ago)   26h
kube-system   kube-flannel-ds-cnv47                 1/1     Running   1 (26h ago)   26h
kube-system   kube-flannel-ds-pdxc7                 1/1     Running   0             26h
kube-system   kube-proxy-4hvvl                      1/1     Running   1 (26h ago)   26h
kube-system   kube-proxy-89hlw                      1/1     Running   0             26h
kube-system   kube-proxy-fl7qk                      1/1     Running   1 (26h ago)   26h
kube-system   kube-scheduler-master-node            1/1     Running   0             26h
[root@master-node ~]#

If all nodes are Ready and all Pods in the kube-system namespace are Running, the cluster has been deployed successfully.
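
As an optional end-to-end smoke test, a throwaway deployment confirms that scheduling and pod networking work on the workers; a minimal sketch using the public nginx image (assumes the nodes can pull it):

kubectl create deployment nginx --image=nginx --replicas=2
kubectl get pods -o wide    # the pods should land on the worker nodes and reach Running
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx       # note the NodePort, then: curl http://<worker-ip>:<node-port>
kubectl delete deployment nginx && kubectl delete svc nginx   # clean up afterwards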

END
