
Deploying a Kubernetes 1.18 Cluster from Binaries

Linux · 七月流星雨


1. Prerequisites

1.1 Two ways to deploy a Kubernetes cluster in production

  • There are currently two main ways to deploy a production Kubernetes cluster:

    • kubeadm

      Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.

    • Binary packages

      Download the release binaries from GitHub and deploy each component by hand to assemble the cluster.

      Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to debug. If you want more control, deploying from binary packages is recommended: it is more manual work, but you learn a lot about how the components fit together along the way, which also helps with later maintenance.

1.2 Installation requirements

  • Before starting, the machines need to satisfy the following conditions:
    • One or more machines running CentOS 7.x x86_64
    • Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more
    • Network connectivity between all machines in the cluster
    • Internet access to pull images; if the servers are offline, download the images in advance and import them onto the nodes
    • Swap disabled

1.3 Environment preparation

1.3.1 Software versions

Software     Version
OS           CentOS 7.8_x64 (minimal)
Docker       19 (docker-ce)
Kubernetes   1.18

1.3.2 Single-master architecture diagram

1.3.3 Overall server layout

Role        IP             Components
k8s-master  192.168.0.201  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1   192.168.0.202  kubelet, kube-proxy, docker, etcd
k8s-node2   192.168.0.203  kubelet, kube-proxy, docker, etcd
1.4 Operating system initialization
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to the layout above
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.0.201 k8s-master
192.168.0.202 k8s-node1
192.168.0.203 k8s-node2
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
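
A quick sanity check (optional, not in the original walkthrough) confirms the settings above took effect:

systemctl is-active firewalld               # expect: inactive or unknown
getenforce                                  # expect: Permissive (Disabled after a reboot)
free -m | grep -i swap                      # expect all-zero swap totals
sysctl net.bridge.bridge-nf-call-iptables   # expect: = 1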

2. Deploy the Etcd Cluster

Etcd is a distributed key-value store that Kubernetes uses as its data store, so we prepare the Etcd database first. To avoid a single point of failure, Etcd should be deployed as a cluster: here we use 3 machines, which tolerates 1 failure; 5 machines would tolerate 2 (in general, n members tolerate floor((n-1)/2) failures).

Node name  IP
etcd-1     192.168.0.201
etcd-2     192.168.0.202
etcd-3     192.168.0.203

Note: to save machines, etcd here shares the K8s node machines. It can also be deployed separately from the K8s cluster, as long as the apiserver can reach it.

2.1 Prepare the cfssl certificate generation tool

cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

Run this on any one server; here we use the master node.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
2.1.1 Install the tools on the master
[root@k8s-master ~]# mkdir -p ~/cfssl/
[root@k8s-master ~]# cd cfssl/
[root@k8s-master cfssl]# pwd
/root/cfssl
[root@k8s-master cfssl]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master cfssl]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master cfssl]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master cfssl]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master cfssl]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master cfssl]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master cfssl]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
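
A quick check (optional) confirms the tools are on the PATH:

cfssl version                     # prints version information
which cfssljson cfssl-certinfo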

2.2 Generate Etcd Certificates

2.2.1 Create the working directory
[root@k8s-master ~]# mkdir -p ~/TLS/{etcd,k8s}

[root@k8s-master ~]# cd /root/TLS/etcd/
2.2.2 Self-sign a CA

Write the JSON configuration files:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
2.2.2.1 Generate the CA certificate
[root@k8s-master etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/11/17 20:53:48 [INFO] generating a new CA key and certificate from CSR
2020/11/17 20:53:48 [INFO] generate received request
2020/11/17 20:53:48 [INFO] received CSR
2020/11/17 20:53:48 [INFO] generating key: rsa-2048
2020/11/17 20:53:48 [INFO] encoded CSR
2020/11/17 20:53:48 [INFO] signed certificate with serial number 101950529088026535677297860863057856432140076739

2.2.2.2 Verify the CA certificate
[root@k8s-master etcd]# ls *pem
ca-key.pem  ca.pem
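
Optionally, openssl (preinstalled on CentOS) can decode the new CA certificate to confirm its subject and validity period:

openssl x509 -in ca.pem -noout -subject -issuer -dates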
2.2.3 Issue the Etcd HTTPS certificate with the self-signed CA

Create the certificate signing request file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.0.201",
    "192.168.0.202",
    "192.168.0.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Note: the IPs in the hosts field above are the cluster-internal communication IPs of all etcd nodes; not one may be missing! To ease future scale-out, you can also list a few reserved IPs.

2.2.3.1 Generate the Etcd certificate
[root@k8s-master etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/11/17 21:03:05 [INFO] generate received request
2020/11/17 21:03:05 [INFO] received CSR
2020/11/17 21:03:05 [INFO] generating key: rsa-2048
2020/11/17 21:03:05 [INFO] encoded CSR
2020/11/17 21:03:05 [INFO] signed certificate with serial number 134705649830183343899987337527377566420156796503
2020/11/17 21:03:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

2.2.3.2 Verify the Etcd certificate
[root@k8s-master etcd]# ls server*pem
server-key.pem  server.pem
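
It is worth confirming that the etcd node IPs actually landed in the certificate's SAN list (optional):

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"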

3. Download the Etcd Binaries from GitHub

  • Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master etcd]# mkdir -p ~/tools
[root@k8s-master etcd]# cd /root/tools/
[root@k8s-master tools]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master tools]# ll
total 16960
-rw-r--r--. 1 root root 17364053 Nov 17 21:09 etcd-v3.4.9-linux-amd64.tar.gz

4. Deploy the Etcd Cluster

The following is done on node 1; to keep things simple, all files generated on node 1 will be copied to node 2 and node 3 later.

4.1 Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Run the commands above from the download directory:

[root@k8s-master etcd]# cd /root/tools/
[root@k8s-master tools]# ll
total 16960
-rw-r--r--. 1 root root 17364053 Nov 17 21:09 etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master tools]# tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master tools]# mv etcd-v3.4.9-linux-amd64/etcd /opt/etcd/bin/
[root@k8s-master tools]# mv etcd-v3.4.9-linux-amd64/etcdctl /opt/etcd/bin/

4.2 Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.201:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.201:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.201:2380,etcd-2=https://192.168.0.202:2380,etcd-3=https://192.168.0.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
  • ETCD_NAME: node name, unique within the cluster

  • ETCD_DATA_DIR: data directory

  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address

  • ETCD_LISTEN_CLIENT_URLS: client listen address

  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address

  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address

  • ETCD_INITIAL_CLUSTER: addresses of the cluster members

  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token

  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
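  • Only ETCD_NAME and the four URL fields differ from node to node. A small helper script (hypothetical, not part of the original walkthrough) makes the per-node edits of section 4.6 mechanical:

    #!/bin/bash
    # gen-etcd-conf.sh -- usage: sh gen-etcd-conf.sh etcd-2 192.168.0.202
    NAME=$1   # node name, e.g. etcd-2
    IP=$2     # this server's IP
    cat > /opt/etcd/cfg/etcd.conf << EOF
    #[Member]
    ETCD_NAME="${NAME}"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://${IP}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://${IP}:2379"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${IP}:2379"
    ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.201:2380,etcd-2=https://192.168.0.202:2380,etcd-3=https://192.168.0.203:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF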


4.3 Manage the etcd service with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

4.4 Copy the generated certificates

Copy the CA and etcd certificates into place:

[root@k8s-master ~]# cp ~/TLS/etcd/ca*pem  /opt/etcd/ssl/
[root@k8s-master ~]# cp -a ~/TLS/etcd/server*pem /opt/etcd/ssl/
[root@k8s-master ~]# ll /opt/etcd/ssl/
total 16
-rw-------. 1 root root 1675 Nov 19 14:53 ca-key.pem
-rw-r--r--. 1 root root 1265 Nov 19 14:53 ca.pem
-rw-------. 1 root root 1675 Nov 17 21:03 server-key.pem
-rw-r--r--. 1 root root 1338 Nov 17 21:03 server.pem

4.5 Copy all files generated on the master to node 2 and node 3

[root@k8s-master ~]# scp -r /opt/etcd/ root@192.168.0.202:/opt
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/etcd.service root@192.168.0.202:/usr/lib/systemd/system/
[root@k8s-master ~]# scp -r /opt/etcd/ root@192.168.0.203:/opt
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/etcd.service root@192.168.0.203:/usr/lib/systemd/system/

4.6 On node 2 and node 3, change the node name and server IPs in etcd.conf

4.6.1 Edit etcd.conf on the k8s-node1 node
[root@k8s-node1 ~]# vim /opt/etcd/cfg/etcd.conf 
[root@k8s-node1 ~]# cat /opt/etcd/cfg/etcd.conf 
#[Member]
ETCD_NAME="etcd-2"         # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.202:2380"      # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.202:2379"    # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.202:2380"    # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.202:2379"      # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.201:2380,etcd-2=https://192.168.0.202:2380,etcd-3=https://192.168.0.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

4.6.2 Edit etcd.conf on the k8s-node2 node
[root@k8s-node2 ~]# vim /opt/etcd/cfg/etcd.conf 
[root@k8s-node2 ~]# cat /opt/etcd/cfg/etcd.conf 
#[Member]
ETCD_NAME="etcd-3"         # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.203:2380"     # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.203:2379"    # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.203:2380"    # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.203:2379"      # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.201:2380,etcd-2=https://192.168.0.202:2380,etcd-3=https://192.168.0.203:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

4.7 Start etcd and enable it at boot

Note: the first member you start will appear to hang until a second member joins the cluster, so start all three nodes in quick succession.

  • Start on the master node

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start etcd
    [root@k8s-master ~]# systemctl enable etcd
    Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

  • Start on node1

    [root@k8s-node1 ~]# systemctl daemon-reload
    [root@k8s-node1 ~]# systemctl start etcd
    [root@k8s-node1 ~]# systemctl enable etcd
    Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

  • Start on node2

    [root@k8s-node2 ~]# systemctl daemon-reload
    [root@k8s-node2 ~]# systemctl start etcd
    [root@k8s-node2 ~]# systemctl enable etcd
    Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

4.8 Check the cluster status

[root@k8s-master ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379" endpoint health
https://192.168.0.201:2379 is healthy: successfully committed proposal: took = 40.294783ms
https://192.168.0.203:2379 is healthy: successfully committed proposal: took = 40.593516ms
https://192.168.0.202:2379 is healthy: successfully committed proposal: took = 21.798951ms

If you see the output above, the cluster was deployed successfully. If something fails, check the logs first: /var/log/messages or journalctl -u etcd
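
The member list can also be inspected with the same credentials (optional):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.201:2379" member list --write-out=table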

5. Install Docker

The following is done on all nodes, installing via yum.

5.1 Install the Docker dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2
5.1.1 On the k8s master
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
5.1.2 On k8s node1
[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
5.1.3 On k8s node2
[root@k8s-node2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

5.2 Add the Docker yum repository

5.2.1 On the k8s master
[root@k8s-master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
5.2.2 On k8s node1
[root@k8s-node1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
5.2.3 On k8s node2
[root@k8s-node2 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo

5.3 Install Docker

5.3.1 Install Docker on the k8s master
[root@k8s-master ~]# yum install -y docker-ce
5.3.2 Install Docker on k8s node1
[root@k8s-node1 ~]# yum install -y docker-ce
5.3.3 Install Docker on k8s node2
[root@k8s-node2 ~]# yum install -y docker-ce

5.4 Configure a registry mirror

mkdir -p  /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

5.5 Start Docker and enable it at boot

 /bin/systemctl daemon-reload
 /bin/systemctl start docker
 /bin/systemctl enable docker 
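
A quick check (optional) that Docker is running and picked up the mirror:

docker info | grep -A 1 "Registry Mirrors"   # should list the mirror configured above
docker version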

6. Deploy the Master Node

6.1 Generate the kube-apiserver certificate

6.1.1 Generate the CA certificate
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • On the master, create the certificate working directory and run the two heredocs above from it:

    [root@k8s-master ~]# mkdir -p /root/TLS/apiserver
    [root@k8s-master ~]# cd /root/TLS/apiserver/
    
  • Generate the CA certificate

    [root@k8s-master apiserver]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    2020/11/19 21:02:47 [INFO] generating a new CA key and certificate from CSR
    2020/11/19 21:02:47 [INFO] generate received request
    2020/11/19 21:02:47 [INFO] received CSR
    2020/11/19 21:02:47 [INFO] generating key: rsa-2048
    2020/11/19 21:02:47 [INFO] encoded CSR
    2020/11/19 21:02:47 [INFO] signed certificate with serial number 618964693704774402914754546857528123070512384496
    [root@k8s-master apiserver]# ll *pem
    -rw-------. 1 root root 1675 Nov 19 21:02 ca-key.pem
    -rw-r--r--. 1 root root 1359 Nov 19 21:02 ca.pem
    
6.1.2 Generate the kube-apiserver HTTPS certificate
  • Create the certificate signing request file

    cat > server-csr.json << EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "192.168.0.201",
          "192.168.0.202",
          "192.168.0.203",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    

    Note: the IPs in the hosts field above must cover every Master/LB/VIP IP; not one may be missing! To ease future scale-out, you can also list a few reserved IPs.

  • Generate the certificate

    [root@k8s-master apiserver]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    2020/11/19 21:16:00 [INFO] generate received request
    2020/11/19 21:16:00 [INFO] received CSR
    2020/11/19 21:16:00 [INFO] generating key: rsa-2048
    2020/11/19 21:16:01 [INFO] encoded CSR
    2020/11/19 21:16:01 [INFO] signed certificate with serial number 61289883131760633497559745925872614733825752323
    2020/11/19 21:16:01 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    
    [root@k8s-master apiserver]# ls server*pem
    server-key.pem  server.pem
    

6.2 Download the Kubernetes Binaries

  • Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183

  • Note: the page lists many packages; the server package alone is enough, as it contains the binaries for both the master and the worker nodes.

6.3 Unpack the binaries

[root@k8s-master /]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
[root@k8s-master apiserver]# cd /root/tools/
[root@k8s-master tools]# wget https://storage.useso.com/kubernetes-release/release/v1.18.3/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master tools]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master tools]# cd kubernetes/server/bin/
[root@k8s-master bin]# ll
total 1087376
-rwxr-xr-x. 1 root root  48128000 May 20 2020 apiextensions-apiserver
-rwxr-xr-x. 1 root root  39813120 May 20 2020 kubeadm
-rwxr-xr-x. 1 root root 120668160 May 20 2020 kube-apiserver
-rw-r--r--. 1 root root         8 May 20 2020 kube-apiserver.docker_tag
-rw-------. 1 root root 174558720 May 20 2020 kube-apiserver.tar
-rwxr-xr-x. 1 root root 110059520 May 20 2020 kube-controller-manager
-rw-r--r--. 1 root root         8 May 20 2020 kube-controller-manager.docker_tag
-rw-------. 1 root root 163950080 May 20 2020 kube-controller-manager.tar
-rwxr-xr-x. 1 root root  44032000 May 20 2020 kubectl
-rwxr-xr-x. 1 root root 113283800 May 20 2020 kubelet
-rwxr-xr-x. 1 root root  38379520 May 20 2020 kube-proxy
-rw-r--r--. 1 root root         8 May 20 2020 kube-proxy.docker_tag
-rw-------. 1 root root 119099392 May 20 2020 kube-proxy.tar
-rwxr-xr-x. 1 root root  42950656 May 20 2020 kube-scheduler
-rw-r--r--. 1 root root         8 May 20 2020 kube-scheduler.docker_tag
-rw-------. 1 root root  96841216 May 20 2020 kube-scheduler.tar
-rwxr-xr-x. 1 root root   1687552 May 20 2020 mounter
[root@k8s-master bin]# cp kube-apiserver /opt/kubernetes/bin/
[root@k8s-master bin]# cp kube-scheduler /opt/kubernetes/bin/
[root@k8s-master bin]# cp kube-controller-manager /opt/kubernetes/bin/
[root@k8s-master bin]# cp kubectl /usr/bin/

6.4 Deploy kube-apiserver

6.4.1 Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379 \\
--bind-address=192.168.0.201 \\
--secure-port=6443 \\
--advertise-address=192.168.0.201 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Note: in each doubled backslash above, the first backslash is an escape character and the second is the line-continuation backslash; the escape is needed so that the heredoc (EOF) writes a literal backslash-newline into the file.
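
A minimal demo of that escaping rule, runnable in any shell:

cat << EOF
OPTS="--flag-a \\
--flag-b"
EOF
# The output keeps a single trailing backslash on the first line; systemd's
# EnvironmentFile= later treats that backslash as a line continuation.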

  • --logtostderr: logging to stderr (false here, so logs go to --log-dir)
  • --v: log verbosity level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization modes; enables RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificate the apiserver uses to access kubelets
  • --tls-xxx-file: apiserver HTTPS certificates
  • --etcd-xxxfile: certificates for connecting to the Etcd cluster
  • --audit-log-xxx: audit log settings
6.4.2 Write the configuration
Run the heredoc above on the master; it writes /opt/kubernetes/cfg/kube-apiserver.conf.
6.4.3 Copy the apiserver certificates generated above
[root@k8s-master /]# cp -a /root/TLS/apiserver/ca*pem    /opt/kubernetes/ssl/
[root@k8s-master /]# cp -a /root/TLS/apiserver/server*pem /opt/kubernetes/ssl/
[root@k8s-master /]# ll /opt/kubernetes/ssl/
total 16
-rw-------. 1 root root 1675 Nov 19 21:02 ca-key.pem
-rw-r--r--. 1 root root 1359 Nov 19 21:02 ca.pem
-rw-------. 1 root root 1679 Nov 19 21:16 server-key.pem
-rw-r--r--. 1 root root 1627 Nov 19 21:16 server.pem
6.4.4 Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on each node must present valid CA-signed client certificates to talk to kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To streamline this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privilege user and requests a certificate from the apiserver, which signs the kubelet's certificate dynamically. This approach is strongly recommended on worker nodes; it is currently used mainly for the kubelet, while for kube-proxy we still issue a single certificate ourselves.

  • TLS bootstrapping workflow:

    (diagram: TLS bootstrapping workflow, image omitted)

  • Create the token file referenced in the configuration above (format: token,user,uid,"user groups"):

    cat > /opt/kubernetes/cfg/token.csv << EOF
    c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
    EOF
    
  • To generate a token of your own to substitute:

    [root@k8s-master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    d2251ee6e9f478ef53f768d2873a3a7a
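
  • If you substitute your own token, token.csv and the TOKEN variable in bootstrap-kubeconfig.sh (section 7.2.3) must stay in sync. A minimal sketch (not part of the original walkthrough) that regenerates both in one go:

    TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    cat > /opt/kubernetes/cfg/token.csv << EOF
    ${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
    EOF
    echo "use this token in bootstrap-kubeconfig.sh: ${TOKEN}"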
    
6.4.5 Manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6.4.6 Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
  • Start the apiserver:

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start kube-apiserver
    [root@k8s-master ~]# systemctl enable kube-apiserver
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    
  • Verify the service is listening:

    [root@k8s-master ~]# netstat -lntup|grep kube-apiserver
    tcp        0      0 192.168.0.201:6443      0.0.0.0:*               LISTEN      2701/kube-apiserver 
    tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      2701/kube-apiserver 
    [root@k8s-master ~]# ps -ef |grep kube-apiserver
    root       2701      1  1 16:03 ?        00:01:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379 --bind-address=192.168.0.201 --secure-port=6443 --advertise-address=192.168.0.201 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
    root       2768   2053  0 17:07 pts/0    00:00:00 grep --color=auto kube-apiserver
    
6.4.7 Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
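
A quick check (optional) that the binding exists:

kubectl describe clusterrolebinding kubelet-bootstrap   # should show role system:node-bootstrapper and user kubelet-bootstrap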

6.5 Deploy kube-controller-manager

6.5.1 Create the kube-controller-manager configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
  • Notes

    • --master: connect to the apiserver over the local insecure port 8080.
    • --leader-elect: automatic leader election when multiple instances of this component run (HA).
    • --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must match the apiserver's CA.
6.5.2 Manage kube-controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6.5.3 Start kube-controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
  • Run on the master:

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start kube-controller-manager
    [root@k8s-master ~]# systemctl enable kube-controller-manager
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    

6.6 Deploy kube-scheduler

6.6.1 Create the kube-scheduler configuration file on the master
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
  • Notes

    • --master: connect to the apiserver over the local insecure port 8080.
    • --leader-elect: automatic leader election when multiple instances of this component run (HA).
6.6.2 Manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
6.6.3 Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
  • Run on the master:

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start kube-scheduler
    [root@k8s-master ~]# systemctl enable kube-scheduler
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    
6.6.4 Check the cluster status
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
  • Note

    Output like the above means the master components are running normally.

7. Deploy the Worker Nodes

The steps below are still performed on the master node, which here doubles as a worker node.

7.1 Create the working directory and copy the binaries

Create the working directory on all worker nodes:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

Copy the kubelet and kube-proxy binaries from the unpacked server package:

[root@k8s-master /]# cd /root/tools/kubernetes/server/bin/
[root@k8s-master bin]# cp -a kubelet kube-proxy /opt/kubernetes/bin/    # local copy

7.2 Deploy kubelet

7.2.1 Create the kubelet configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
  • Notes

    • --hostname-override: display name, unique within the cluster
    • --network-plugin: enable CNI
    • --kubeconfig: empty path; generated automatically and later used to talk to the apiserver
    • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
    • --config: configuration parameter file
    • --cert-dir: directory where kubelet certificates are generated
    • --pod-infra-container-image: image for the container that manages the Pod network
7.2.2 Create the kubelet-config.yml file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
7.2.3 Generate the bootstrap.kubeconfig file
  • Create the bootstrap-kubeconfig.sh script

    [root@k8s-master kubernetes]# cd ~
    [root@k8s-master ~]# ll
    total 4
    drwxr-xr-x. 2 root root    6 Nov 17 20:39 -
    -rw-------. 1 root root 1658 Nov 11 05:16 anaconda-ks.cfg
    drwxr-xr-x. 2 root root    6 Nov 17 20:52 cfssl
    drwxr-xr-x. 5 root root   46 Nov 19 20:59 TLS
    drwxr-xr-x. 4 root root  137 Nov 19 21:46 tools
    [root@k8s-master ~]# vim bootstrap-kubeconfig.sh
    [root@k8s-master ~]# cat bootstrap-kubeconfig.sh
    #!/bin/bash
    
    KUBE_APISERVER="https://192.168.0.201:6443" # apiserver IP:PORT
    TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
    
    # Generate the kubelet bootstrap kubeconfig file
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-credentials "kubelet-bootstrap" \
      --token=${TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user="kubelet-bootstrap" \
      --kubeconfig=bootstrap.kubeconfig
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    
  • Run the script to generate the bootstrap kubeconfig

    [root@k8s-master ~]# sh  bootstrap-kubeconfig.sh
    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" created.
    Switched to context "default".
    [root@k8s-master ~]# ll
    total 12
    drwxr-xr-x. 2 root root    6 Nov 17 20:39 -
    -rw-------. 1 root root 1658 Nov 11 05:16 anaconda-ks.cfg
    -rw-------  1 root root 2167 Nov 23 21:51 bootstrap.kubeconfig
    -rw-r--r--  1 root root  693 Nov 23 21:50 bootstrap-kubeconfig.sh
    drwxr-xr-x. 2 root root    6 Nov 17 20:52 cfssl
    drwxr-xr-x. 5 root root   46 Nov 19 20:59 TLS
    drwxr-xr-x. 4 root root  137 Nov 19 21:46 tools
    
  • Copy bootstrap.kubeconfig into place

    [root@k8s-master ~]# cp -a bootstrap.kubeconfig /opt/kubernetes/cfg/
    
7.2.4 Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
7.2.5 Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
  • Run on the master:

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start kubelet
    [root@k8s-master ~]# systemctl enable kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    

7.3 Approve the kubelet certificate request and join the cluster

# Check kubelet certificate requests
[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-EDYoKN4sH6vlVUx7HrCH7i1lBMDTqNiTx9oRU3e3xM4   2m4s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending


# Approve the request
[root@k8s-master ~]# kubectl certificate approve  node-csr-EDYoKN4sH6vlVUx7HrCH7i1lBMDTqNiTx9oRU3e3xM4
certificatesigningrequest.certificates.k8s.io/node-csr-EDYoKN4sH6vlVUx7HrCH7i1lBMDTqNiTx9oRU3e3xM4 approved

# List the node (it stays NotReady until the CNI network plugin is deployed in section 7.5)
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   17s   v1.18.3

7.4 Deploy kube-proxy

7.4.1 Create the kube-proxy configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
7.4.2 Create the kube-proxy-config.yml file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
7.4.3 Generate the kube-proxy certificate
# Switch to the apiserver certificate directory
[root@k8s-master ~]# cd /root/TLS/apiserver/

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Verify the certificate
ls kube-proxy*pem
  • Run the certificate generation command

    [root@k8s-master apiserver]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    2020/11/23 22:16:56 [INFO] generate received request
    2020/11/23 22:16:56 [INFO] received CSR
    2020/11/23 22:16:56 [INFO] generating key: rsa-2048
    2020/11/23 22:16:57 [INFO] encoded CSR
    2020/11/23 22:16:57 [INFO] signed certificate with serial number 135691991395151702368398150414714985360518061883
    2020/11/23 22:16:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    
  • Verify the certificates

    [root@k8s-master apiserver]# ls kube-proxy*pem
    kube-proxy-key.pem  kube-proxy.pem
    
7.4.4 Generate the kube-proxy.kubeconfig file
  • Create the kubeconfig.sh script

    [root@k8s-master ~]# vim kubeconfig.sh
    [root@k8s-master ~]# cat kubeconfig.sh 
    #!/bin/bash
    
    KUBE_APISERVER="https://192.168.0.201:6443"     # change to your apiserver address
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=/opt/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-credentials kube-proxy \
      --client-certificate=./kube-proxy.pem \
      --client-key=./kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    
    
  • Run the script

    [root@k8s-master ~]# sh kubeconfig.sh 
    Cluster "kubernetes" set.
    error: error reading client-certificate data from ./kube-proxy.pem: open ./kube-proxy.pem: no such file or directory
    Context "default" created.
    Switched to context "default".

    Note: the error above occurs because kube-proxy.pem was generated in /root/TLS/apiserver, not in the script's directory. Copy kube-proxy.pem and kube-proxy-key.pem next to the script (or run it from /root/TLS/apiserver) and re-run it, so the client certificate is actually embedded before you copy the kubeconfig.

  • Copy the kube-proxy.kubeconfig file into place

    [root@k8s-master ~]# cp -a kube-proxy.kubeconfig /opt/kubernetes/cfg/
    
    
7.4.5 Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
    
7.4.6 Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
  • Run on the master:

    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl start kube-proxy
    [root@k8s-master ~]# systemctl enable kube-proxy
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
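
  • Optionally confirm kube-proxy is serving:

    netstat -lntup | grep kube-proxy   # expect the metrics port 10249 and the health port 10256 listening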
    

7.5 Deploy the CNI Network

7.5.1 Download the CNI plugin binaries
  • Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master ~]# cd /root/tools/
[root@k8s-master tools]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master tools]# ll
total 408108
-rw-r--r--  1 root      root       36878412 Nov 24 12:55 cni-plugins-linux-amd64-v0.8.6.tgz
drwxr-xr-x. 3 630384594 600260513        96 Nov 19 13:59 etcd-v3.4.9-linux-amd64
-rw-r--r--. 1 root      root       17364053 Nov 17 21:09 etcd-v3.4.9-linux-amd64.tar.gz
drwxr-xr-x. 4 root      root             79 May 20 2020 kubernetes
-rw-r--r--. 1 root      root      363654483 Nov 19 21:41 kubernetes-server-linux-amd64.tar.gz
7.5.2 Unpack the plugins into the default working directory:
[root@k8s-master tools]# mkdir -p /opt/cni/bin
[root@k8s-master tools]# tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
./
./flannel
./ptp
./host-local
./firewall
./portmap
./tuning
./vlan
./host-device
./bandwidth
./sbr
./static
./dhcp
./ipvlan
./macvlan
./loopback
./bridge
7.5.3 Deploy the CNI network:
[root@k8s-master tools]# cd /root
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
  • Note: the default image registry may be unreachable, so the image is replaced with a Docker Hub mirror.
# Deploy the plugin
kubectl apply -f kube-flannel.yml

# Check the plugin pod
kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-2pc95   1/1     Running   0          72s

# Check the node
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   41m   v1.18.3

With the network plugin deployed, the node is Ready.
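
A quick smoke test (optional, assuming the node can pull images from Docker Hub) verifies that scheduling and Service networking work end to end:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods,svc            # note the NodePort assigned to the web Service
# then: curl http://192.168.0.201:<nodePort>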

7.6 Authorize the apiserver to access kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

# Apply the manifest
kubectl apply -f apiserver-to-kubelet-rbac.yaml
  • Apply it on the master:

    [root@k8s-master ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
    clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
    clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
    

7.7 Add New Worker Nodes

7.7.1 Copy the deployed node files from the master to the new nodes
[root@k8s-master ~]# scp -r /opt/kubernetes root@192.168.0.202:/opt
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.0.202:/usr/lib/systemd/system

[root@k8s-master ~]# scp -r /opt/kubernetes root@192.168.0.203:/opt
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.0.203:/usr/lib/systemd/system

7.7.2 Delete the copied kubelet certificate and kubeconfig
Note: these were generated when the master's kubelet bootstrapped; they are unique per node and must be regenerated on each new node.
[root@k8s-node1 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
[root@k8s-node1 ~]# rm -f /opt/kubernetes/ssl/kubelet*

[root@k8s-node2 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig 
[root@k8s-node2 ~]# rm -f /opt/kubernetes/ssl/kubelet*

7.7.3 Change the hostname override
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kubelet.conf 
[root@k8s-node1 ~]# cat /opt/kubernetes/cfg/kubelet.conf 
--hostname-override=k8s-master
 # change the line above to:
--hostname-override=k8s-node1 

[root@k8s-node2 ~]# vim /opt/kubernetes/cfg/kubelet.conf
[root@k8s-node2 ~]# cat /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master
 # change the line above to:
--hostname-override=k8s-node2 

Also change hostnameOverride in /opt/kubernetes/cfg/kube-proxy-config.yml on each node to the same name.

7.7.4 Start the services and enable them at boot (run on both nodes)
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
  • On k8s-node1

    [root@k8s-node1 ~]# systemctl daemon-reload
    [root@k8s-node1 ~]# systemctl start kubelet
    [root@k8s-node1 ~]# systemctl enable kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@k8s-node1 ~]# systemctl start kube-proxy
    [root@k8s-node1 ~]# systemctl enable kube-proxy
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    
  • On k8s-node2

    [root@k8s-node2 ~]# systemctl daemon-reload
    [root@k8s-node2 ~]# systemctl start kubelet
    [root@k8s-node2 ~]# systemctl enable kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    [root@k8s-node2 ~]# systemctl start kube-proxy
    [root@k8s-node2 ~]# systemctl enable kube-proxy
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    
7.7.5 Approve the new nodes' kubelet certificate requests on the master
# Check pending kubelet CSRs
[root@k8s-master ~]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-dtumcoSnXgyUaZCgSflcsFXHx4dkXLwN9RHZispUKb8   34s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-oLimBxMuWXYX0e0o0ddQ66n4er3niq7hRWHF7NXx6b8   28s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending



# Approve the requests
[root@k8s-master ~]# kubectl certificate approve node-csr-dtumcoSnXgyUaZCgSflcsFXHx4dkXLwN9RHZispUKb8
certificatesigningrequest.certificates.k8s.io/node-csr-dtumcoSnXgyUaZCgSflcsFXHx4dkXLwN9RHZispUKb8 approved
[root@k8s-master ~]# kubectl certificate approve node-csr-oLimBxMuWXYX0e0o0ddQ66n4er3niq7hRWHF7NXx6b8
certificatesigningrequest.certificates.k8s.io/node-csr-oLimBxMuWXYX0e0o0ddQ66n4er3niq7hRWHF7NXx6b8 approved


# List the nodes
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      <none>   20h   v1.18.3
k8s-node1    NotReady   <none>   32s   v1.18.3
k8s-node2    NotReady   <none>   19s   v1.18.3
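
When several nodes join at once, the pending requests can be approved in one go (a convenience, not from the original walkthrough). The new nodes switch from NotReady to Ready once their flannel pods are scheduled and running:

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve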

8. Deploy the Dashboard (Graphical Management UI)

8.1 Download the Dashboard package

  • Official download page: https://github.com/kubernetes/kubernetes/releases

