Building a Self-Hosted Highly Available Kubernetes Cluster
I. Base Environment for All Nodes
192.168.0.x : the machine (node) network segment
10.96.0.0/16: the Service network segment
196.16.0.0/16: the Pod network segment
1. Environment preparation and kernel upgrade
Upgrade the kernel on all machines first.
# My OS version
cat /etc/redhat-release
# CentOS Linux release 7.9.2009 (Core)

# Set the hostname; it must not be localhost
hostnamectl set-hostname k8s-xxx

# Cluster plan
# k8s-master1  k8s-master2  k8s-master3  k8s-master-lb
# k8s-node01  k8s-node02 ... k8s-nodeN

# Add host entries on every machine
vi /etc/hosts
192.168.0.10 k8s-master1
192.168.0.11 k8s-master2
192.168.0.12 k8s-master3
192.168.0.13 k8s-node1
192.168.0.14 k8s-node2
192.168.0.15 k8s-node3
192.168.0.250 k8s-master-lb # Not needed for a non-HA setup; this VIP is managed with keepalived
# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Raise resource limits
ulimit -SHn 65535
vi /etc/security/limits.conf
# Append the following at the end
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

# Configure passwordless SSH to make later steps easier; run on master1
ssh-keygen -t rsa
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

# Install some tools used later
yum install wget git jq psmisc net-tools yum-utils device-mapper-persistent-data lvm2 -y
# All nodes
# Install ipvs tooling (ipvsadm, ipset, conntrack, etc.)
yum install ipvsadm ipset sysstat conntrack libseccomp -y
# Load the ipvs modules on all nodes. On kernel 4.19+ use nf_conntrack; below 4.19 use nf_conntrack_ipv4
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
# Make the modules persistent: add the following to the ipvs module config
vi /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# Enable and start the module-loading service
systemctl enable --now systemd-modules-load.service
# --now = enable + start
# Check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
## All nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16768
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16768
EOF
sysctl --system
# After configuring the kernel parameters on all nodes, reboot and confirm the modules are still loaded after restart
reboot
lsmod | grep -e ip_vs -e nf_conntrack
2. Install Docker
# Install Docker
yum remove docker*
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io-1.4.4
# Adjust the Docker configuration. Newer kubelet versions recommend the systemd cgroup driver, so switch Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://82m9ayutr63.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl enable --now docker
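A quick, optional sanity check (not part of the original steps) to confirm Docker actually picked up the systemd cgroup driver:
# Should print "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i "cgroup driver"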
# Alternatively, download the rpm packages and install offline:
# http://mirrors.aliyun.com/docker-ce/linux/centos/7.9/x86_64/stable/Packages/
yum localinstall xxxx
II. PKI
Reference: Public Key Infrastructure (Baidu Baike)
Kubernetes requires PKI for the following operations:
Client certificates for the kubelet, used to authenticate to the API server
Server certificate for the API server endpoint
Client certificates for cluster administrators, used to authenticate to the API server
Client certificates for the API server, used to talk to the kubelets
Client certificates for the API server, used to talk to etcd
Client certificate/kubeconfig for the controller manager, used to talk to the API server
Client certificate/kubeconfig for the scheduler, used to talk to the API server
Client and server certificates for the front-proxy
Note: the front-proxy certificates are only needed if you run kube-proxy to support an extension API server.
etcd also implements mutual TLS to authenticate clients and other peer members.
Reference: PKI certificates and requirements | Kubernetes
III. Certificate Tooling
# Create a directory to hold all certificates (this mirrors how kubeadm organizes them)
# Run on all three master nodes
mkdir -p /etc/kubernetes/pki
1. Download the certificate tools
# Download the cfssl components
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64
# Make them executable
chmod +x cfssl*
# Batch-rename to strip the version suffix
for name in `ls cfssl*`; do mv $name ${name%_1.5.0_linux_amd64};done
# Move them into the PATH
mv cfssl* /usr/bin
2. Root CA configuration
ca-config.json
mkdir -p /etc/kubernetes/pki
cd /etc/kubernetes/pki
vi ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "server": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth"]
      },
      "client": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "client auth"]
      },
      "peer": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      },
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      },
      "etcd": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
3. CA signing request
CSR stands for Certificate Signing Request, the file used to request a certificate.
ca-csr.json
vi /etc/kubernetes/pki/ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
CN (Common Name):
The Common Name is required; for a website certificate it is usually the domain name. Kubernetes uses the CN of a client certificate as the requesting user's name.
O (Organization):
The organization name is required. For OV/EV certificates it must exactly match the name registered with the authorities (e.g. on the business license); abbreviations and trademarks are not allowed, and an English name needs a DUNS number or a lawyer's letter. Kubernetes uses O as the user's group.
OU (Organization Unit):
The department; there are few restrictions here, something like "IT DEPT" is fine.
L (Locality):
The city where the applicant is located.
ST (State/Province):
The province or state where the applicant is located.
C (Country Name):
The two-letter upper-case country code; CN for China.
4. Generate the CA certificate
Generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Produces ca.csr, ca.pem (the CA certificate/public key) and ca-key.pem (the CA private key; keep it safe)
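As an optional check (not in the original write-up), the freshly generated CA can be inspected to confirm its subject and the ten-year validity configured above:
# Subject should show CN=kubernetes, O=Kubernetes; the dates should span roughly ten years
openssl x509 -in ca.pem -noout -subject -issuer -dates
# cfssl can print the same information as JSON
cfssl-certinfo -cert ca.pem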
5. How the Kubernetes cluster uses these certificates
See the official documentation: PKI certificates and requirements | Kubernetes
IV. Highly Available etcd Setup
1. etcd documentation
etcd examples: Demo | etcd. Learn basic etcd usage from the examples.
etcd install: Install | etcd. Follow the etcd sizing guidance for Kubernetes clusters when planning the cluster.
etcd deployment: Operations guide | etcd. The operations guide covers etcd configuration and cluster deployment.
2. Download etcd
# Download the etcd package and send it to all master nodes for the HA etcd deployment
wget https://github.com/etcd-io/etcd/releases/download/v3.4.16/etcd-v3.4.16-linux-amd64.tar.gz
## Copy to the other nodes
for i in k8s-master1 k8s-master2 k8s-master3;do scp etcd-* root@$i:/root/;done
## Extract etcd and etcdctl into /usr/local/bin (on every master)
tar -zxvf etcd-v3.4.16-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.16-linux-amd64/etcd{,ctl}
## Verify
etcdctl # any output means the binary is in place
3. etcd certificates
Sizing reference: Hardware recommendations | etcd
Generate the etcd certificates
etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "etcd"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
# Generate the etcd root CA certificate (create the target directory first so cfssljson can write into it)
mkdir -p /etc/kubernetes/pki/etcd
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/ca -
etcd-itdachang-csr.json
{
  "CN": "etcd-itdachang",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "k8s-master1",
    "k8s-master2",
    "k8s-master3",
    "192.168.0.10",
    "192.168.0.11",
    "192.168.0.12"
  ],
  "names": [
    {
      "C": "CN",
      "L": "beijing",
      "O": "etcd",
      "ST": "beijing",
      "OU": "System"
    }
  ]
}
// Note: put your own hostnames and IPs in "hosts"
// You can also add them at signing time with -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,...
// "hosts" is the list of trusted hosts (SANs), e.g.:
// "hosts": [
//   "k8s-master1",
//   "www.example.net"
// ],
# Sign the itdachang etcd certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/etcd/ca.pem \
  -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem \
  -config=/etc/kubernetes/pki/ca-config.json \
  -profile=etcd \
  etcd-itdachang-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd
# Copy the generated etcd certificates to the other machines
for i in k8s-master2 k8s-master3;do scp -r /etc/kubernetes/pki/etcd root@$i:/etc/kubernetes/pki;done
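Before wiring this certificate into etcd, it can be worth confirming that its SANs cover every peer hostname and IP from etcd-itdachang-csr.json (an optional check, not in the original steps):
# The output should list k8s-master1/2/3 and 192.168.0.10/11/12
openssl x509 -in /etc/kubernetes/pki/etcd/etcd.pem -noout -text | grep -A1 "Subject Alternative Name"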
4. Highly available etcd installation
etcd configuration file reference: Configuration flags | etcd
etcd clustering example: Clustering Guide | etcd
To keep the startup configuration consistent, we put etcd's settings in a configuration file and run etcd as a systemd service.
# Sample etcd YAML configuration (shipped with the etcd distribution), for reference.
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: 'default'
# Path to the data directory.
data-dir:
# Path to the dedicated wal directory.
wal-dir:
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://localhost:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://localhost:2379
# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5
# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5
# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://localhost:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://localhost:2379
# Discovery URL used to bootstrap the cluster.
discovery:
# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'
# HTTP proxy to use for traffic to discovery service.
discovery-proxy:
# DNS domain used to bootstrap initial cluster.
discovery-srv:
# Initial cluster configuration for bootstrapping.
initial-cluster:
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false
# Accept etcd V2 client requests
enable-v2: true
# Enable runtime profiling data via HTTP server
enable-pprof: true
# Valid values include 'on', 'readonly', 'off'
proxy: 'off'
# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000
# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000
# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000
# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000
# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0
client-transport-security:
  # Path to the client server TLS cert file.
  cert-file:
  # Path to the client server TLS key file.
  key-file:
  # Enable client cert authentication.
  client-cert-auth: false
  # Path to the client server TLS trusted CA cert file.
  trusted-ca-file:
  # Client TLS using generated certificates
  auto-tls: false
peer-transport-security:
  # Path to the peer server TLS cert file.
  cert-file:
  # Path to the peer server TLS key file.
  key-file:
  # Enable peer client cert authentication.
  client-cert-auth: false
  # Path to the peer server TLS trusted CA cert file.
  trusted-ca-file:
  # Peer TLS using generated certificates.
  auto-tls: false
# Enable debug-level logging for etcd.
debug: false
logger: zap
# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]
# Force to create a new one member cluster.
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
Create the /etc/etcd directory on all three etcd machines to hold the etcd configuration.
# Run on all three masters
mkdir -p /etc/etcd
vi /etc/etcd/etcd.yaml
# Our etcd.yaml
name: 'etcd-master3'    # each machine uses its own name; names must not repeat
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.12:2380'    # this machine's IP + port 2380, used for cluster (peer) traffic
listen-client-urls: 'https://192.168.0.12:2379,http://127.0.0.1:2379'    # change to your own IP
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.12:2380'    # your own IP
advertise-client-urls: 'https://192.168.0.12:2379'    # your own IP
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd-master1=https://192.168.0.10:2380,etcd-master2=https://192.168.0.11:2380,etcd-master3=https://192.168.0.12:2380'    # unlike the fields above, this is the full member list
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
On all three machines, run etcd as a systemd service that starts at boot.
vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=https://etcd.io/docs/v3.4/op-guide/clustering/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

# Reload systemd and enable etcd at boot
systemctl daemon-reload
systemctl enable --now etcd
# If startup fails, inspect the logs with journalctl -u <service>
journalctl -u etcd
Test etcd access
# Check the etcd cluster status
etcdctl --endpoints="192.168.0.10:2379,192.168.0.11:2379,192.168.0.12:2379" --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table

# For later tests
export ETCDCTL_API=3
HOST_1=192.168.0.10
HOST_2=192.168.0.11
HOST_3=192.168.0.12
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
## Export environment variables to make testing easier; see https://github.com/etcd-io/etcd/tree/main/etcdctl
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.pem
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/etcd.pem
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/etcd-key.pem
export ETCDCTL_ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
# etcdctl now picks up the certificate locations from the environment
etcdctl member list --write-out=table
# Without the environment variables you would have to pass everything explicitly:
etcdctl --endpoints=$ENDPOINTS --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem member list --write-out=table
## More etcdctl commands: https://etcd.io/docs/v3.4/demo/#access-etcd
V. Kubernetes Components and Certificates
1. Kubernetes offline package
Find the CHANGELOG for the desired version at https://github.com/kubernetes/kubernetes
# Download the Kubernetes server package
wget https://dl.k8s.io/v1.21.1/kubernetes-server-linux-amd64.tar.gz
2. Prepare the master nodes
# Copy the kubernetes package to all nodes
for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do scp kubernetes-server-* root@$i:/root/;done
# On all master nodes, extract kubelet, kubectl, etc. into /usr/local/bin
tar -xvf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# Masters need every component; worker nodes only need kubelet and kube-proxy in /usr/local/bin
3. Generate the apiserver certificate
3.1 apiserver-csr.json
// 10.96.0.1 is the first IP of the Service CIDR; customize if needed (e.g. 66.66.0.1)
// 192.168.0.250 is the reserved load balancer address (build your own LB or buy one from a cloud vendor)
{
  "CN": "kube-apiserver",
  "hosts": [
    "10.96.0.1",
    "127.0.0.1",
    "192.168.0.250",
    "192.168.0.10",
    "192.168.0.11",
    "192.168.0.12",
    "192.168.0.13",
    "192.168.0.14",
    "192.168.0.15",
    "192.168.0.16",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "Kubernetes",
      "OU": "Kubernetes"
    }
  ]
}
3.2 Generate the apiserver certificate
# 10.96.0.1 is the first IP of the k8s Service CIDR; if you change the Service CIDR, change that address in apiserver-csr.json as well
# If this is not a high-availability cluster, use master01's IP instead of the load balancer address

# Generate the root CA first (skip this if you already created it in section III)
vi ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Sign the apiserver certificate
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=/etc/kubernetes/pki/ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
4. Generate the front-proxy certificates
Official documentation: Configure the Aggregation Layer | Kubernetes
Note: it is not recommended to sign the front-proxy certificates with a brand-new CA; doing so can break the permissions of components proxied through it, such as metrics-server.
If you do use a new CA, add --requestheader-allowed-names=front-proxy-client to the apiserver configuration.
4.1 front-proxy-ca-csr.json
The front-proxy root CA
vi front-proxy-ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
# Generate the front-proxy root CA
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
4.2 front-proxy-client certificate
# Prepare the client CSR
vi front-proxy-client-csr.json
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
# Generate the front-proxy-client certificate
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
# Ignore the hostname warning; this certificate is not for a website
5. Generate and configure the controller-manager certificate
5.1 controller-manager-csr.json
vi controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes"
    }
  ]
}
5.2 Generate the certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  controller-manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
5.3 Generate the kubeconfig
# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port
# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.250:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.conf
# set-context: define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf
# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.conf
# use-context: make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.conf
# (the controller-manager is also what later auto-approves kubelet certificates)
6. Generate and configure the scheduler certificate
6.1 scheduler-csr.json
vi scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes"
    }
  ]
}
6.2 Sign the certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=/etc/kubernetes/pki/ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
6.3 Generate the kubeconfig
# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.250:6443 \
  --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.conf
kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.conf
# (these kubeconfigs are how the control-plane components authenticate securely to the apiserver)
7. Generate and configure the admin certificate
7.1 admin-csr.json
vi admin-csr.json
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes"
    }
  ]
}
7.2 Generate the certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=/etc/kubernetes/pki/ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
7.3 Generate the kubeconfig
# Note: for a non-HA cluster, replace 192.168.0.250:6443 with master01's address; 6443 is the default apiserver port
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.250:6443 \
  --kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.conf
kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.conf
kubectl config use-context kubernetes-admin@kubernetes \
  --kubeconfig=/etc/kubernetes/admin.conf
The kubelet uses the TLS bootstrap mechanism to have its certificates issued automatically, so we do not configure kubelet certificates by hand. Otherwise, with 10,000 machines and 10,000 kubelets, you would still be issuing certificates next year...
8. Generate the ServiceAccount key pair
Under the hood, every ServiceAccount that Kubernetes creates is assigned a Secret, and that Secret contains a token signed with the ServiceAccount (sa) key pair we generate next. So we create the sa key pair in advance.
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
9. Send the certificates to the other nodes
# Run on master1
for NODE in k8s-master2 k8s-master3
do
  for FILE in admin.conf controller-manager.conf scheduler.conf
  do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done
VI. High Availability Configuration
If you are not building a high-availability cluster, you do not need haproxy and keepalived.
There are several options for the high-availability entry point (a minimal self-hosted sketch follows the list below):
nginx
haproxy
keepalived
Load balancer products from a cloud vendor
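The guide itself only walks through the QingCloud load balancer below, so here is a minimal self-hosted sketch using haproxy + keepalived. It assumes one or two dedicated LB machines (the k8s-master-lb entry from the hosts plan) holding the VIP 192.168.0.250, and an interface name of eth0; both are assumptions. If you instead run haproxy on the masters themselves, move its frontend to a different port so it does not collide with the apiserver's own 6443.

# /etc/haproxy/haproxy.cfg (sketch)
global
    maxconn 2000

defaults
    mode tcp
    option tcplog
    timeout connect 5s
    timeout client  1h
    timeout server  1h

frontend k8s-apiserver
    bind *:6443
    default_backend apiserver

backend apiserver
    balance roundrobin
    option tcp-check
    server k8s-master1 192.168.0.10:6443 check
    server k8s-master2 192.168.0.11:6443 check
    server k8s-master3 192.168.0.12:6443 check

# /etc/keepalived/keepalived.conf (sketch; on the standby LB use state BACKUP and a lower priority)
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ha
    }
    virtual_ipaddress {
        192.168.0.250
    }
}

Either way, the kubeconfigs in this guide keep pointing at https://192.168.0.250:6443; only the machine answering that VIP changes.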
Notes for installing on a cloud
On a cloud you can use the provider's load balancer directly, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.
On a public cloud, use the provider's own load balancer (Alibaba Cloud SLB, Tencent Cloud ELB, etc.) in place of haproxy and keepalived, because most public clouds do not support keepalived.
On Alibaba Cloud the kubectl client cannot sit on a master node, because SLB has a loopback problem: a backend server behind the SLB cannot reach the SLB address itself. Tencent Cloud has fixed this, so it is the recommended choice there.
Using QingCloud
Create a load balancer and assign it the IP address we reserved earlier.
Open the load balancer and create a listener.
Choose TCP, port 6443.
Add the backend server addresses and port.
VII. Starting the Control-Plane Components
1. Run on all masters
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
# The kube-* binaries already live in /usr/local/bin on all three masters
for NODE in k8s-master2 k8s-master3
do
  scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
done
This copies all of the certificates generated on master1 to master2 and master3.
2. Configure the apiserver service
2.1 Configuration
Create kube-apiserver.service on all master nodes.
Note: for a non-HA cluster, replace 192.168.0.250 with master01's address.
This document uses 10.96.0.0/16 as the k8s Service CIDR; it must not overlap with the host network or the Pod CIDR. Also note that Docker's default bridge network is 172.17.0.1/16; do not use that range.
# Run on every master node
# --advertise-address: change to this master node's IP
# --service-cluster-ip-range=10.96.0.0/16: change to your planned Service CIDR
# --etcd-servers: change to your etcd server addresses
vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.0.10 \
      --service-cluster-ip-range=10.96.0.0/16 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.0.10:2379,https://192.168.0.11:2379,https://192.168.0.12:2379 \
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem \
      --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \
      --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator,front-proxy-client \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
2.2 Start the apiserver service
systemctl daemon-reload && systemctl enable --now kube-apiserver
# Check the status
systemctl status kube-apiserver
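As an optional quick check (assuming the default RBAC rules that expose /healthz to unauthenticated callers), the apiserver should now answer on 6443:
# Expect "ok"; even a 401/403 here at least confirms the process is listening
curl -k https://127.0.0.1:6443/healthz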
3. Configure the controller-manager service
3.1 Configuration
Configure kube-controller-manager.service on all master nodes.
This document uses 196.16.0.0/16 as the k8s Pod CIDR; it must not overlap with the host network or the k8s Service CIDR, so adjust it as needed. Also note that Docker's default bridge network is 172.17.0.1/16; do not use that range.
# Run on all master nodes
vi /usr/lib/systemd/system/kube-controller-manager.service
## --cluster-cidr=196.16.0.0/16 : the Pod CIDR; change it to the range you planned

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.conf \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=196.16.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
3.2 Start
# Run on all master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager
4. Configure the scheduler
4.1 Configuration
Configure kube-scheduler.service on all master nodes.
vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.conf
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
4.2 Start
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler
VIII. TLS Bootstrapping
1. Configure bootstrap on master1
Note: for a non-HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the default apiserver port.
# Prepare a random token. A bootstrap token has the form <token-id>.<token-secret> (6 + 16 hex characters)
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# Example output: 737b177d9823531a433e368fcdb16f5f
# Generate 16 hex characters for the token secret
head -c 8 /dev/urandom | od -An -t x | tr -d ' '
# Example output: d683399b7a553977
# Set the cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.250:6443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Set the credentials (the token is <token-id>.<token-secret>)
kubectl config set-credentials tls-bootstrap-token-user \
  --token=l6fy8c.d683399b7a553977 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Set the context
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
# Use the context
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
2. Set up kubectl access on master1
Whether kubectl can operate the cluster depends on whether there is a config file under /root/.kube; that config is the admin.conf we generated earlier, which carries admin permissions.
# Only on master1: in production we want just one machine able to operate the cluster, which is easier to control
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config
# Verify
kubectl get nodes
# The load balancer's port 6443 must be reachable from this machine (open it in the network rules if necessary)
[root@k8s-master1 ~]# kubectl get nodes
No resources found
# This means we can already reach the apiserver and query resources
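Optionally (not part of the original steps), componentstatuses gives a one-line health view of the scheduler, controller-manager and etcd; the API is deprecated in v1.21 but still works:
kubectl get cs
# Every entry should report Healthy / ok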
3. Create the cluster bootstrap permissions file
# Prepare this file on the master
vi /etc/kubernetes/bootstrap.secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-l6fy8c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: l6fy8c
  token-secret: d683399b7a553977
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
# Apply the resources in this file
kubectl create -f /etc/kubernetes/bootstrap.secret.yaml
IX. Bootstrapping the Node Components
The kubelet on every node is brought up through TLS bootstrapping.
1. Send the core certificates to the nodes
master1 sends the core certificates to the other nodes.
cd /etc/kubernetes/
# Copy the certificates and the bootstrap kubeconfig to the other nodes
for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2; do
  ssh $NODE mkdir -p /etc/kubernetes/pki/etcd
  for FILE in ca.pem etcd.pem etcd-key.pem; do
    scp /etc/kubernetes/pki/etcd/$FILE $NODE:/etc/kubernetes/pki/etcd/
  done
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.conf; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done
2. Configure the kubelet on all nodes
# Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
## Every node must have kubelet and kube-proxy
for NODE in k8s-master2 k8s-master3 k8s-node3 k8s-node1 k8s-node2; do
  scp -r /etc/kubernetes/* root@$NODE:/etc/kubernetes/
done
2.1 Create kubelet.service
# Configure the kubelet service on all nodes
vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

# Configure the kubelet service drop-in on all nodes
vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.4.1"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
2.2 Create the kubelet-conf.yml file
# Create the kubelet config file on all nodes
vi /etc/kubernetes/kubelet-conf.yml
# clusterDNS is the 10th IP of the Service CIDR; change it to your own, e.g. 10.96.0.10
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s    # shrink these settings as appropriate for small machines
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
2.3 Start the kubelet on all nodes
systemctl daemon-reload && systemctl enable --now kubelet
systemctl status kubelet
The kubelet will log "Unable to update cni config".
That is expected for now; the CNI network is configured in the following sections.
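An optional way to watch the bootstrap progress with standard tooling: the only recurring complaint in the kubelet logs should be that CNI warning, and the nodes should already register, just in NotReady state, until calico is deployed.
# Follow the kubelet logs
journalctl -u kubelet -f | grep -i cni
# Nodes appear but stay NotReady until a CNI plugin is installed
kubectl get nodes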
3. Configure kube-proxy
Note: for a non-HA cluster, replace 192.168.0.250:6443 with master1's address; 6443 is the default apiserver port.
3.1 Generate kube-proxy.conf
Run the following on master1.
# Create the kube-proxy ServiceAccount
kubectl -n kube-system create serviceaccount kube-proxy
# Bind it to the system:node-proxier cluster role
kubectl create clusterrolebinding system:kube-proxy \
  --clusterrole system:node-proxier \
  --serviceaccount kube-system:kube-proxy
# Export variables for the following commands
SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
# Generate the kube-proxy kubeconfig
# --server: your apiserver or load balancer address
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.0.250:6443 \
  --kubeconfig=${K8S_DIR}/kube-proxy.conf
# kube-proxy credentials (the ServiceAccount token)
kubectl config set-credentials kubernetes \
  --token=${JWT_TOKEN} \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf
kubectl config use-context kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.conf
# Send the generated kube-proxy.conf to every node
for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3; do
  scp /etc/kubernetes/kube-proxy.conf $NODE:/etc/kubernetes/
done
3.2 Configure kube-proxy.service
# Configure the kube-proxy.service unit on all nodes; it will be enabled at boot in a moment
vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
3.3 Prepare kube-proxy.yaml
Be sure to change the Pod CIDR to your own range.
# Run on every machine
vi /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.conf   # the kube-proxy kubeconfig
  qps: 5
clusterCIDR: 196.16.0.0/16   # change to your own Pod CIDR
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
3.4 Start kube-proxy
Start it on all nodes.
systemctl daemon-reload && systemctl enable --now kube-proxy
systemctl status kube-proxy
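Because kube-proxy runs in ipvs mode here, the ipvsadm tool installed at the very beginning can confirm that virtual servers are being programmed (an optional check):
# Once kube-proxy is running, Service virtual servers such as 10.96.0.1:443 show up here
ipvsadm -Ln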
X. Deploy calico
You can follow calico's on-premises deployment guide.
# Download the official calico manifest (etcd datastore variant)
curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico.yaml
## Optionally switch its images to a domestic mirror
# Customize: set the etcd cluster endpoints (the manifest ships with an empty placeholder)
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.10:2379,https://192.168.0.11:2379,https://192.168.0.12:2379"#g' calico.yaml
# Base64-encode the etcd certificates so they can go into the Secret in the yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.pem | base64 -w 0 `
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 -w 0 `
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 -w 0 `
# Substitute the base64-encoded certificates into the yaml
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico.yaml
# Enable the etcd_ca etc. settings (the files are mounted under /calico-secrets once calico starts)
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico.yaml
# Set our Pod CIDR, 196.16.0.0/16
POD_SUBNET="196.16.0.0/16"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@value: '"${POD_SUBNET}"'@g' calico.yaml
# Make sure the CIDR really was replaced
grep "CALICO_IPV4POOL_CIDR" calico.yaml -A 1
# Apply the calico manifest
kubectl apply -f calico.yaml
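Pulling the calico images can take a few minutes; the standard kubectl commands below let you watch the rollout:
# Wait for the calico-node and calico-kube-controllers pods to reach Running
kubectl get pods -n kube-system -w
# Once calico is up, the nodes should flip from NotReady to Ready
kubectl get nodes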
XI. Deploy CoreDNS
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# 10.96.0.10 is the 10th IP of the Service CIDR; change it to yours
./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
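To confirm CoreDNS came up (the deploy script labels the pods k8s-app=kube-dns, so the selector below relies on that assumption):
# CoreDNS pods should be Running in kube-system
kubectl get pods -n kube-system -l k8s-app=kube-dns
# The cluster DNS Service should hold the clusterDNS IP configured for the kubelet (10.96.0.10)
kubectl get svc -n kube-system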
XII. Label the Nodes with Roles
kubectl label node k8s-master1 node-role.kubernetes.io/master=''
kubectl label node k8s-master2 node-role.kubernetes.io/master=''
kubectl label node k8s-master3 node-role.kubernetes.io/master=''
# Taint master1 so ordinary workloads are not scheduled there (the full command is repeated in the verification section)
kubectl taint nodes k8s-master1 node-role.kubernetes.io/master=:NoSchedule
XIII. Cluster Verification
Verify Pod network connectivity:
Pods in the same or in different namespaces can reach each other by IP
Pods scheduled on different machines can also reach each other
Verify Service network connectivity:
Cluster machines can reach the Service IP and get load-balanced responses
Pods can reach a Service by its domain name, serviceName.namespace
Pods can reach Services in other namespaces
# Deploy the following to test
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-01
  namespace: default
  labels:
    app: nginx-01
spec:
  selector:
    matchLabels:
      app: nginx-01
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-01
    spec:
      containers:
      - name: nginx-01
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx-01
  type: ClusterIP
  ports:
  - name: nginx-svc
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Namespace
metadata:
  name: hello
spec: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello
  namespace: hello
  labels:
    app: nginx-hello
spec:
  selector:
    matchLabels:
      app: nginx-hello
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      containers:
      - name: nginx-hello
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-hello
  namespace: hello
spec:
  selector:
    app: nginx-hello
  type: ClusterIP
  ports:
  - name: nginx-svc-hello
    port: 80
    targetPort: 80
    protocol: TCP
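A hedged set of commands matching the checklist above; the Service and namespace names come from the test manifest, while the throwaway busybox client pod is an assumption, not part of the original text:
# Note the pod IPs / nodes and the Service cluster IPs
kubectl get pods -o wide -A | grep nginx
kubectl get svc -A | grep nginx-svc
# From any cluster machine, the Service cluster IP should return the nginx welcome page
curl <nginx-svc-cluster-ip>
# From inside a pod, test Service DNS in the same and in another namespace
kubectl run -it --rm dns-test --image=busybox:1.28 -- sh
#   wget -qO- nginx-svc.default
#   wget -qO- nginx-svc-hello.hello
#   nslookup kubernetes.default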
# Mark these nodes as workers
kubectl label node k8s-node3 node-role.kubernetes.io/worker=''
kubectl label node k8s-master3 node-role.kubernetes.io/worker=''
kubectl label node k8s-node1 node-role.kubernetes.io/worker=''
kubectl label node k8s-node2 node-role.kubernetes.io/worker=''
# Taint master1. In a binary-deployed cluster the masters carry no taint by default and can schedule any workload;
# it is best to taint at least one master so a minimal control plane is always available
kubectl label node k8s-master3 node-role.kubernetes.io/master=''
kubectl taint nodes k8s-master1 node-role.kubernetes.io/master=:NoSchedule