Deploying a K8s 1.32.8 Cluster on CentOS 7 That Can Run for 100 Years (cri-dockerd)

  • Published: 2025-11-15 20:51

1. Environment Configuration

Kubernetes cluster plan

Hostname             IP address             Description
k8s-master01 ~ 03    192.168.128.10         master node
k8s-master-lb        192.168.128.10         keepalived virtual IP (consumes no extra hardware)
k8s-node01 ~ 02      192.168.128.11 ~ 12    worker nodes * 2

Item                 Value
OS version           CentOS 7.9
Docker version       24.0.0
Pod subnet           172.16.0.0/12
Service subnet       10.96.0.0/16

Service addresses:

① kubernetes service: 10.96.0.1 (the first address in the subnet)

② CoreDNS: 10.96.0.10 (the tenth address)
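As a quick illustration (plain shell, nothing cluster-specific), the two well-known addresses can be derived from the service CIDR:

```shell
# Derive the well-known service IPs from the service CIDR.
# Assumes a CIDR whose last octet block starts at .0, as here.
CIDR=10.96.0.0/16
base=${CIDR%.*}            # strip the last octet and mask -> 10.96.0
echo "${base}.1    # kubernetes API service (first address)"
echo "${base}.10   # CoreDNS (tenth address)"
```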

2. System Configuration

Configure hosts resolution

On all nodes, write /etc/hosts as follows:

[root@k8s-master01 ~]# cat >/etc/hosts<<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.128.10 k8s-master01
192.168.128.10 k8s-master-lb # for a non-HA cluster, this is Master01's IP
192.168.128.11 k8s-node01
192.168.128.12 k8s-node02
EOF

Configure yum repositories

Set up the CentOS 7 yum repositories:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Configure the Kubernetes repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF

sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Install required tools

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

On all nodes, disable the firewall, SELinux, dnsmasq, and swap:

systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable the swap partition:

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Install ntpdate

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

Synchronize the time

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

# Add this entry to crontab so the clock stays in sync:
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
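The crontab entry above has to be installed, not just written down. One way to add it non-interactively and idempotently (a sketch, using the path ntpdate was installed to above):

```shell
# Install the ntpdate cron job without opening an editor; the grep -vF
# filter keeps the entry from being duplicated on repeated runs.
CRON_LINE='*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com'
( crontab -l 2>/dev/null | grep -vF "$CRON_LINE"; echo "$CRON_LINE" ) | crontab -
crontab -l | grep ntpdate   # confirm it is installed
```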

Configure limits:

ulimit -SHn 65535    # takes effect for the current session only

To make it permanent, append to /etc/security/limits.conf:

cat >>/etc/security/limits.conf<<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

Set up passwordless SSH between nodes

ssh-keygen -t rsa
for i in k8s-master01 k8s-node01 k8s-node02;do ssh-copy-id -i ~/.ssh/id_rsa.pub $i;done

Upgrade the system kernel

wget http://mirrors.coreix.net/elrepo-archive-archive/kernel/el7/x86_64/RPMS/kernel-lt-5.4.216-1.el7.elrepo.x86_64.rpm
wget http://mirrors.coreix.net/elrepo-archive-archive/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.216-1.el7.elrepo.x86_64.rpm

Install the kernel packages:

rpm -ivh kernel-lt-5.4.216-1.el7.elrepo.x86_64.rpm kernel-lt-devel-5.4.216-1.el7.elrepo.x86_64.rpm

# List the boot menu entries in order
[root@localhost ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (5.4.216-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-6b682a6d112141ea9e611b665919d59e) 7 (Core)

Set menu entry 0 (the new kernel) as the GRUB2 default:

grub2-set-default 0

Reboot the server:

reboot

Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y

On all nodes, load the ipvs modules. On kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; below 4.19, use nf_conntrack_ipv4:

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

Load the required kernel modules on boot

cat >/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

Then run: systemctl enable --now systemd-modules-load.service
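After enabling systemd-modules-load.service, it is worth confirming the modules actually loaded (a quick check, not from the original write-up):

```shell
# Count how many loaded modules belong to the ipvs/conntrack families.
lsmod | awk '{print $1}' | grep -cE '^(ip_vs|nf_conntrack)'
# Any module still missing can be loaded immediately with: modprobe <name>
```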

Configure Kubernetes-related kernel parameters:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system
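To confirm the settings took effect, each key in the file can be compared against the live kernel value (a sketch; `sysctl -n` prints the current value, and keys that require a not-yet-loaded module will report a mismatch):

```shell
# Compare every key=value pair in k8s.conf with the running kernel value.
while IFS='=' read -r key want; do
  key=$(echo "$key" | tr -d '[:space:]')
  want=$(echo "$want" | tr -d '[:space:]')
  [ -z "$key" ] && continue
  have=$(sysctl -n "$key" 2>/dev/null | tr -d '[:space:]')
  [ "$have" = "$want" ] || echo "MISMATCH: $key = '$have' (expected '$want')"
done < /etc/sysctl.d/k8s.conf
```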

# Adjust the system time zone
# Set it to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai

# Keep the hardware clock in UTC
timedatectl set-local-rtc 0

# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

3. Kubernetes Deployment

Install the Kubernetes components

yum list kubeadm.x86_64 --showduplicates | sort -r

Install the 1.32.8 kubeadm, kubelet, and kubectl packages on all nodes:

yum install kubeadm-1.32.8* kubelet-1.32.8* kubectl-1.32.8* -y

The default pause image comes from gcr.io, which may be unreachable from mainland China, so point the kubelet at the Aliyun mirror instead:

cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10"
EOF

Enable kubelet at boot (it cannot start successfully yet; that is expected and can be ignored):

systemctl daemon-reload
systemctl enable --now kubelet

Install cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
yum install containerd.io conntrack -y
rpm -ivh cri-dockerd-0.3.8-3.el7.x86_64.rpm

Edit the cri-docker.service unit

vim /usr/lib/systemd/system/cri-docker.service

Add the pause image address to the ExecStart line:

ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10 --container-runtime-endpoint fd://

# Reload systemd
sudo systemctl daemon-reload

# Create the docker group: cri-dockerd depends on it, but a binary Docker install does not create it
sudo groupadd docker

# Enable cri-dockerd at boot
sudo systemctl enable cri-docker.socket cri-docker

# Start cri-dockerd
sudo systemctl start cri-docker.socket cri-docker

# Check cri-dockerd status
sudo systemctl status cri-docker.socket

Rebuild kubeadm on the master node with the certificate validity changed to 100 years

Install Go

Recent Kubernetes releases require a recent Go toolchain to build; here I installed 1.25.0.

mkdir -p go
wget https://dl.google.com/go/go1.25.0.linux-amd64.tar.gz
tar -zxvf go1.25.0.linux-amd64.tar.gz
mv go /usr/local/

Add Go to PATH (append the export line to /etc/profile so it persists):

export PATH=$PATH:/usr/local/go/bin
source /etc/profile

Check the Go version:

go version
go version go1.25.0 linux/amd64

Make sure go is on PATH, or the build below will fail.

Download the source (matching your Kubernetes version)

I downloaded it with a browser as kubernetes-master.zip; unzipping it yields the kubernetes-master directory (pick the tag matching your cluster version, v1.32.8 here).

Change the certificate validity

mv kubernetes-master.zip go/
cd go && unzip kubernetes-master.zip
cd kubernetes-master

vim cmd/kubeadm/app/constants/constants.go

# Search for CertificateValidity and set both durations to 100 years:
CertificateValidity = time.Hour * 24 * 365 * 100
CACertificateValidityPeriod = time.Hour * 24 * 365 * 100

vim ./staging/src/k8s.io/client-go/util/cert/cert.go

# Search for KeyUsageDigitalSignature and change:
NotAfter: now.Add(duration365d * 100).UTC()

# Verify the edits:
cat ./staging/src/k8s.io/client-go/util/cert/cert.go | grep NotAfter
cat ./cmd/kubeadm/app/constants/constants.go | grep CertificateValidity
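A quick arithmetic check: 100 years at 365 days per year is 876000 hours, which is exactly the caCertificateValidityPeriod / certificateValidityPeriod value (876000h0m0s) that appears in kubeadm-config.yaml:

```shell
# time.Hour * 24 * 365 * 100, expressed in hours:
echo "$(( 24 * 365 * 100 ))h"   # 876000h, matching 876000h0m0s in kubeadm-config.yaml
```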

Rebuild kubeadm

yum install -y gcc*

In the source directory, run: make WHAT=cmd/kubeadm GOFLAGS=-v

Note: if the build stops with "./hack/run-in-gopath.sh: line 34: _output/bin/prerelease-lifecycle-gen: Permission denied", add execute permission and build again:

[root@k8s-master01 kubernetes-1.32.8]# yum install rsync jq -y
[root@k8s-master01 kubernetes-1.32.8]# chmod +x _output/bin/prerelease-lifecycle-gen
[root@k8s-master01 kubernetes-1.32.8]# chmod +x _output/bin/deepcopy-gen

The new kubeadm binary is written to _output/bin/.

Back up and replace the yum-installed kubeadm.

Back up first:

mv /usr/bin/kubeadm /usr/bin/kubeadm_$(date +%F)

Then replace:

cd /root/go/kubernetes-1.32.8
cp _output/bin/kubeadm /usr/bin/kubeadm

Initialize the Kubernetes cluster

Generate a kubeadm-config.yaml:

kubeadm config print init-defaults > kubeadm-config.yaml

cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.128.10
  bindPort: 6443
nodeRegistration:
  #criSocket: unix:///var/run/containerd/containerd.sock
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: node
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 876000h0m0s
certificateValidityPeriod: 876000h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.32.8
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.16.0.0/12
proxy: {}
scheduler: {}

Pull the images the cluster needs

# Image list required by v1.32.8
images=(
  kube-apiserver:v1.32.8
  kube-controller-manager:v1.32.8
  kube-scheduler:v1.32.8
  kube-proxy:v1.32.8
  coredns:v1.11.3
  pause:3.10
  etcd:3.5.16-0
)

# Pull from the Aliyun mirror, then retag to registry.k8s.io
for img in "${images[@]}"; do
  docker pull "registry.cn-hangzhou.aliyuncs.com/google_containers/$img"
  docker image tag "registry.cn-hangzhou.aliyuncs.com/google_containers/$img" "registry.k8s.io/$img"
done
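After the loop finishes, a quick check (assuming the same `images` array is still in scope) confirms every retagged image is present locally:

```shell
# Report any image that failed to pull or retag.
for img in "${images[@]}"; do
  docker image inspect "registry.k8s.io/$img" >/dev/null 2>&1 \
    || echo "missing: registry.k8s.io/$img"
done
```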

Initialize the cluster

kubeadm init --config=kubeadm-config.yaml --upload-certs

[init] Using Kubernetes version: v1.32.8

[preflight] Running pre-flight checks

[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action beforehand using 'kubeadm config images pull'

W1106 10:18:25.277306 60526 checks.go:846] detected that the sandbox image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.10" of the container runtime is inconsistent with that used by kubeadm. It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.128.10]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.128.10 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.128.10 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "super-admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"

[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s

[kubelet-check] The kubelet is healthy after 1.001607783s

[api-check] Waiting for a healthy API server. This can take up to 4m0s

[api-check] The API server is healthy after 9.501720583s

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace

[upload-certs] Using certificate key:

174f1e62a5ab8feea4756fcc4dc72747b40b77ab3511a9def8eebcd267db5b53

[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]

[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

[bootstrap-token] Using token: abcdef.0123456789abcdef

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.128.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:edb0081a5c3c8156177fa4f2ee7e3c500cbe1eeb3c03d4b02724487992b73362

On the master node:

[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

On each worker node:

kubeadm join 192.168.128.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:edb0081a5c3c8156177fa4f2ee7e3c500cbe1eeb3c03d4b02724487992b73362
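If the join command is lost, the CA hash can be recomputed from the master's CA certificate; this is the standard procedure from the kubeadm documentation:

```shell
# Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Prefix the output with `sha256:` when passing it to kubeadm join. On the master you can also run `kubeadm token create --print-join-command` to regenerate a full join command, and `kubeadm certs check-expiration` to confirm the 100-year certificate validity actually took effect.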

Once all nodes have joined, check the cluster state:

kubectl get nodes
NAME           STATUS     ROLES           AGE   VERSION
k8s-master01   NotReady   control-plane   29m   v1.32.8
k8s-node01     NotReady   <none>          54s   v1.32.8
k8s-node02     NotReady   <none>          25s   v1.32.8

The nodes stay NotReady until a network plugin is installed.

Install the CNI network plugin (Calico)

mkdir -p calico && cd calico
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/calico.yaml
kubectl apply -f calico.yaml

Test the cluster

Create an nginx Deployment:

mkdir -p nginx-test && cd nginx-test

cat >nginx-deployment.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2              # two replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # official nginx image
        ports:
        - containerPort: 80
EOF

cat >nginx-service.yaml<<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # select Pods labeled app=nginx
  ports:
  - protocol: TCP
    port: 80          # Service port
    targetPort: 80    # Pod port, matching containerPort
  type: NodePort
EOF

kubectl create -f .

Create a CentOS 7 Deployment:

mkdir -p centos-test && cd centos-test

cat >centos7-deploy.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos7
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: centos7
  template:
    metadata:
      labels:
        app: centos7
    spec:
      containers:
      - name: centos
        image: centos:7
        command: ["/bin/bash"]
        args: ["-c", "while true; do sleep 30; done"]
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
EOF

kubectl create -f .

[Screenshot: the test Deployments running]

The centos pod accessing the nginx pod:

[Screenshot]

A pod accessing the ClusterIP-type Service:

[Screenshot]
