Welcome to 范阳布衣's ops blog
Sharing knowledge and techniques from work and study

Kubernetes binary deployment


Highly available Kubernetes binary deployment, latest v1.15.1

Environment:

Hostname   Role            IP
master1    master          192.9.200.191
master2    master          192.9.200.194
node1      node            192.9.200.192
node2      node            192.9.200.193
k8s-LB1    load balancer   192.9.200.187
k8s-LB2    load balancer   192.9.200.186
VIP        virtual IP      192.9.200.188

Disable swap

swapoff -a    # temporary
To disable it permanently, comment out the swap entry in /etc/fstab:
vim /etc/fstab

Disable the firewall and SELinux. Disable the firewall:

systemctl stop firewalld 
systemctl disable firewalld 
iptables -F

Disable SELinux

$ sed -i 's/enforcing/disabled/' /etc/selinux/config    # permanent
$ setenforce 0                                          # temporary

Background notes:

  • For a highly available k8s cluster, only kube-apiserver needs keepalived-based HA.
  • controller-manager and scheduler only need --leader-elect=true in their configuration to perform automatic leader election.
  • Otherwise a single pod-creation request could be handled by all three controller instances, each producing its own pod replica. So at any given moment only one kube-controller-manager instance does real work; the rest stay in standby (waiting) state.
  • Note that every instance must enable --leader-elect=true.
  • Leader election is an application of a distributed lock: the lock state is kept in a Kubernetes resource object. Initially the controller-manager instances compete to claim the designated Endpoints object, and the winner becomes the leader (a quick way to inspect the current leader is shown below).
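A quick way to see which instance currently holds the lock (a sketch, assuming the default endpoints-based election record that kube-controller-manager 1.15 keeps in the kube-system namespace; holderIdentity names the active leader):
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity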

1. Issuing certificates

Prepare the etcd CA JSON files used for signing. They drive the certificate generation below; I collected everything into one script and will run it in stages.

[root@k8s-master1 ~]# mkdir k8s
[root@k8s-master1 ~]# cd k8s
[root@k8s-master1 k8s]# mkdir etcd-cert
[root@k8s-master1 k8s]# mkdir k8s-cert
[root@k8s-master1 k8s]# cd etcd-cert/
[root@k8s-master1 etcd-cert]# ls
cfssl.sh  etcd-cert.sh
  • I uploaded the cfssl toolchain here. There are two common ways to sign certificates, openssl and cfssl; we use cfssl.
  • cfssl is a certificate tool driven by JSON definitions; cfssljson writes out the generated files. Download the binaries and make them executable.
The commands can also be run directly; I put them into a script and executed that:
[root@k8s-master1 etcd-cert]# cat cfssl.sh 
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@k8s-master1 etcd-cert]# sh cfssl.sh

The CA is now ready to issue certificates. I set the validity to 10 years (87600h); adjust it to your needs.
These files describe the CA itself:

[root@k8s-master1 etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

[root@k8s-master1 etcd-cert]# cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
After writing these, the directory contains the two JSON files:
[root@k8s-master1 etcd-cert]# ls 
ca-config.json ca-csr.json cfssl.sh etcd-cert.sh
Generate the self-signed root certificate: initialize the CA with cfssl and pipe the JSON output through cfssljson:
[root@habor etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/14 09:56:19 [INFO] generating a new CA key and certificate from CSR
2020/01/14 09:56:19 [INFO] generate received request
2020/01/14 09:56:19 [INFO] received CSR
2020/01/14 09:56:19 [INFO] generating key: rsa-2048
2020/01/14 09:56:20 [INFO] encoded CSR
2020/01/14 09:56:20 [INFO] signed certificate with serial number 598218397381984963873576092981722229959596709051
The generated PEM files:
[root@habor etcd-cert]# ls *.pem
ca-key.pem  ca.pem
  • Now issue the server certificate that secures etcd's HTTPS traffic. List every host IP that will run etcd and use the RSA algorithm:
[root@k8s-master1 etcd-cert]# cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.9.200.191",
    "192.9.200.192",
    "192.9.200.193",
    "192.9.200.194",
    "192.9.200.195"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
  • This produces the server certificate and key; the certificates are now ready to use:
[root@habor etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/14 10:29:19 [INFO] generate received request
2020/01/14 10:29:19 [INFO] received CSR
2020/01/14 10:29:19 [INFO] generating key: rsa-2048
2020/01/14 10:29:20 [INFO] encoded CSR
2020/01/14 10:29:20 [INFO] signed certificate with serial number 579313813132544632030603444417591789641875728047
2020/01/14 10:29:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@habor etcd-cert]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

2. Deploying the etcd cluster

Create a soft directory to hold the packages
[root@k8s-master1 ~]# mkdir soft
[root@k8s-master1 ~]# cd soft
[root@k8s-master1 soft]# rz -E
rz waiting to receive.
[root@k8s-master1 soft]# ls
etcd-v3.3.10-linux-amd64.tar.gz
  • Unpack the etcd package; it can be downloaded from GitHub (use the linux-amd64 build).
[root@k8s-master1 soft]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz 
[root@k8s-master1 soft]# ls etcd-v3.3.10-linux-amd64 etcd-v3.3.10-linux-amd64.tar.gz 
[root@k8s-master1 soft]# cd etcd-v3.3.10-linux-amd64/
  • etcd is the server binary; etcdctl is the management tool.
[root@k8s-master1 etcd-v3.3.10-linux-amd64]# ls 
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
Create the etcd directory layout for easier management later
[root@k8s-master1 soft]# mkdir /opt/etcd/{cfg,bin,ssl} -p 
[root@k8s-master1 soft]# cd etcd-v3.3.10-linux-amd64/ 
[root@k8s-master1 etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin 
[root@k8s-master1 etcd-v3.3.10-linux-amd64]# ls /opt/etcd/bin/ 
etcd etcdctl
  • I uploaded a script that writes the etcd configuration shown below; let's walk through it.
[root@master k8s-cert]# cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd


[root@k8s-master1 k8s]# chmod +x etcd.sh 
[root@k8s-master1 k8s]# ls
etcd-cert  etcd.sh  k8s-cert
Run the script with our etcd parameters; it errors out:
[root@k8s-master1 k8s]# ./etcd.sh etcd01 192.168.30.21 etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
  • /usr/lib/systemd/system/etcd.service references three certificates we generated earlier (ca.pem, server.pem, server-key.pem); the paths the unit file loads must match where the files actually live.
Copy the three files into /opt/etcd/ssl, the path used in the unit:
[root@k8s-master1 ~]# cp /root/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl
[root@k8s-master1 ~]# cd /opt/etcd/ssl
[root@k8s-master1 ssl]# ls
ca.pem  server-key.pem  server.pem
  • server.pem secures the client port 2379 that we expose.
  • The peer certificate settings are used for traffic inside the cluster.
cat /usr/lib/systemd/system/etcd.service
--initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
  • The certificates are now in place, so we can simply start etcd. Because the configuration lists the other etcd members,
  • which cannot be found yet, the unit stays in a starting state until those nodes join; ps -ef confirms the etcd process itself is running fine.
[root@k8s-master1 ~]# systemctl restart etcd
[root@k8s-master1 ~]# ps -ef |grep etcd
root       2332   1633  0 10:03 pts/0    00:00:00 systemctl restart etcd
root       2338      1  2 10:03 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.30.21:2380 --listen-client-urls=https://192.168.30.21:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.30.21:2379 --initial-advertise-peer-urls=https://192.168.30.21:2380 --initial-cluster=etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root       2402   2351  0 10:03 pts/1    00:00:00 grep --color=auto etcd
  • Copy /opt/etcd (configuration and binaries, recursively with -r) from the master to the other two hosts, along with the systemd unit that references the cluster certificates:
[root@k8s-master1 ~]# scp -r /opt/etcd root@192.9.200.191:/opt
[root@k8s-master1 ~]# scp -r /opt/etcd root@192.9.200.192:/opt

[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.9.200.191:/usr/lib/systemd/system
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/etcd.service root@192.9.200.192:/usr/lib/systemd/system
Change the IP addresses and the ETCD_NAME on each node.
  • 2379 is the client (data) port, 2380 is the peer (cluster) port.
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.22:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.22:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.22:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.22:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-master2 ~]# systemctl start etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.23:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.23:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.23:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.23:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node1 ~]# systemctl start etcd

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.30.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.30.24:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.30.24:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.30.24:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.30.21:2380,etcd02=https://192.168.30.22:2380,etcd03=https://192.168.30.23:2380,etcd04=https://192.168.30.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node2 ~]# systemctl start etcd
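As a quick sanity check (a sketch) you can confirm both ports are listening on each member:
ss -lntp | grep -E '2379|2380'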
Check cluster health. Because the certificates are self-signed, they have to be passed explicitly:
[root@master1 ~]# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379" \
cluster-health
member 7c6184a1559940b6 is healthy: got healthy result from https://192.9.200.192:2379
member 7d8ccd08e835f80f is healthy: got healthy result from https://192.9.200.191:2379
member ff359cfff45a5758 is healthy: got healthy result from https://192.9.200.193:2379
cluster is healthy

3. Install Docker on all node machines

  • These are the CentOS 7 install steps; docker-ce is the latest community edition.
Install the dependencies
$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
Add the Docker package repository
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Install docker-ce
$ sudo yum install docker-ce
Start Docker
$ sudo systemctl start docker
The default registry is overseas and downloads are slow; configuring a domestic registry mirror is recommended
#vim /etc/docker/daemon.json
{
"registry-mirrors": [ "https://registry.docker-cn.com" ]
}
$ systemctl enable docker
The daocloud accelerator is also recommended
  • Its script adds --registry-mirror to your Docker configuration file /etc/docker/daemon.json
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
Restart Docker
$ sudo systemctl start docker

4. Deploying the flannel network

  • Kubernetes pod networking can be implemented with two families of solutions: tunnel (overlay) schemes and routing schemes.
  • Flannel is the usual choice for smaller clusters (roughly up to 100 nodes). It supports many encapsulation and transport types and works across different network environments, even across the internet, because it builds an overlay: each packet is wrapped inside another packet and unwrapped on the far side, which adds some performance overhead for the encapsulate/decapsulate work at both ends. Calico, common in larger deployments, takes the routing approach instead: it uses BGP, forwards by routing table at layer 3 with no encapsulation, performs better, but requires a BGP-capable network and does not span arbitrary network environments.

  • Overlay Network: a virtual network layered on top of the underlying network, in which hosts are connected by virtual links.

  • Flannel: one overlay implementation. It encapsulates the original packet inside another network packet for routing, forwarding and communication, and currently supports UDP, VXLAN, AWS VPC and GCE routing as data-forwarding backends.

Write the network configuration into etcd under flannel's key: define the cluster-wide network, out of which flannel assigns a smaller per-node subnet to each node. The data-forwarding backend is vxlan:
[root@master1 ~]# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379" \
set /coreos.com/network/config '{ "Network": "172.168.0.0/16", "Backend": {"Type": "vxlan"}}'
​
{ "Network": "172.168.0.0/16", "Backend": {"Type": "vxlan"}}
Use get to check the configured network:
[root@master1 ~]# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379" \
get /coreos.com/network/config
​
{ "Network": "172.168.0.0/16", "Backend": {"Type": "vxlan"}}
Download the binary package and configure it
https://github.com/coreos/flannel/releases
[root@k8s-node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  flannel.sh
[root@k8s-node1 ~]# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
[root@k8s-node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
[root@k8s-node1 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
[root@k8s-node1 ~]# cat flannel.sh
#!/bin/bash
​
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
​
cat <<EOF >/opt/kubernetes/cfg/flanneld
​
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
​
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/docker.service
​
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
​
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
​
[Install]
WantedBy=multi-user.target
​
EOF
​
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker
​
[root@k8s-node1 ~]# chmod +x flannel.sh 
[root@k8s-node1 ~]# ./flannel.sh https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379
[root@k8s-node1 ~]# systemctl start flanneld
Check ip a:
  • docker0 and flannel.1 are now on the same subnet
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:be:3d:91:69 brd ff:ff:ff:ff:ff:ff
    inet 172.17.97.1/24 brd 172.17.97.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 72:1d:9e:dd:4f:a2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.97.0/32 scope global flannel.1
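For context, a rough illustration of what happens under the hood (values borrowed from the node2 output further below; they differ per node): flanneld leases a /24 out of 172.168.0.0/16, and mk-docker-opts.sh turns that lease into DOCKER_NETWORK_OPTIONS, which docker.service reads via EnvironmentFile, which is why docker0 is re-addressed onto the flannel subnet. The env file ends up with something like:
cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.168.99.1/24 --ip-masq=false --mtu=1450"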
Deploy flannel on node2 as well
[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.9.200.193:/opt
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{flanneld,docker}.service 192.9.200.193:/usr/lib/systemd/system/
​
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl restart docker
[root@node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:96:4c:2c brd ff:ff:ff:ff:ff:ff
    inet 192.9.200.192/24 brd 192.9.200.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:92:77:aa brd ff:ff:ff:ff:ff:ff
    inet 172.168.99.1/24 brd 172.168.99.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 0a:6c:c4:7a:9e:a6 brd ff:ff:ff:ff:ff:ff
    inet 172.168.99.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
​
​
[root@node2 etcd]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:96:65:b3 brd ff:ff:ff:ff:ff:ff
    inet 192.9.200.193/24 brd 192.9.200.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ae:4e:42:45 brd ff:ff:ff:ff:ff:ff
    inet 172.168.72.1/24 brd 172.168.72.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 1a:6f:76:b1:98:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.168.72.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
On node2, create a test container and confirm its network is allocated by flannel
[root@node2 ~]#  docker run -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
bdbbaa22dec6: Pull complete
Digest: sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
Status: Downloaded newer image for busybox:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:ac:a8:48:02 brd ff:ff:ff:ff:ff:ff
    inet 172.168.72.2/24 brd 172.168.72.255 scope global eth0
       valid_lft forever preferred_lft forever
From node1, test that the pod container on node2 is reachable; it is
[root@node1 ~]# ping 172.168.72.2
PING 172.168.72.2 (172.168.72.2) 56(84) bytes of data.
64 bytes from 172.168.72.2: icmp_seq=1 ttl=63 time=0.884 ms
64 bytes from 172.168.72.2: icmp_seq=2 ttl=63 time=0.590 ms
  • Likewise, test from node2 that pod containers on node1 are reachable; they are.

5. Deploying the master

  • These are configuration scripts I wrote myself
[root@k8s-master1 k8s ~]# unzip master.zip 
Archive:  master.zip
  inflating: apiserver.sh            
  inflating: controller-manager.sh   
  inflating: scheduler.sh  
​
[root@k8s-master1 k8s]# ls
apiserver.sh  controller-manager.sh  etcd-cert  etcd.sh  flannel.sh  k8s-cert  scheduler.sh
​
[root@k8s-master1 ~]# cd soft/
Bring in the server binary tarball, unpack it and copy the binaries into our working directory
[root@k8s-master1 soft]# tar zxvf kubernetes-server-linux-amd64.tar.gz 
[root@k8s-master1 soft]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@k8s-master1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
The script takes the local apiserver address and the etcd endpoints:
[root@master master]# cat apiserver.sh
#!/bin/bash
​
MASTER_ADDRESS=$1
ETCD_SERVERS=$2
​
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
​
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@k8s-master1 k8s]# ./apiserver.sh 192.9.200.191 \
https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379
Create the log directory referenced by the apiserver options
[root@k8s-master1 k8s]# mkdir /opt/kubernetes/logs
Change the log location in apiserver.sh (and in /opt/kubernetes/cfg/kube-apiserver):

[root@k8s-master1 k8s]# vim apiserver.sh 
KUBE_APISERVER_OPTS="--logtostderr=false \\
--log-dir=/opt/kubernetes/logs  \\
[root@k8s-master1 ~]# cd k8s/k8s-cert/
[root@k8s-master1 k8s-cert]# ls
k8s-cert.sh
Edit the IPs in the script, adding our host IPs, then run it to generate the certificates
[root@master k8s-cert]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
​
cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
              "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Generate the kubernetes CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
​
#-----------------------
# Generate the API server certificate
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.9.209.191",
      "192.9.209.194",
      "192.9.200.188",
      "192.9.200.189",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
​
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
​
#-----------------------
​
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
​
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
# Create the Kubernetes Proxy certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Run the script
bash k8s-cert.sh
Check the certificates
[root@habor k8s-cert]# ll
total 72
-rw-r--r-- 1 root root 1009 Jan 14 13:00 admin.csr
-rw-r--r-- 1 root root  229 Jan 14 12:58 admin-csr.json
-rw------- 1 root root 1675 Jan 14 13:00 admin-key.pem
-rw-r--r-- 1 root root 1399 Jan 14 13:00 admin.pem
-rw-r--r-- 1 root root  294 Jan 14 12:53 ca-config.json
-rw-r--r-- 1 root root 1001 Jan 14 12:56 ca.csr
-rw-r--r-- 1 root root  266 Jan 14 12:53 ca-csr.json
-rw------- 1 root root 1679 Jan 14 12:56 ca-key.pem
-rw-r--r-- 1 root root 1359 Jan 14 12:56 ca.pem
-rw-r--r-- 1 root root 1009 Jan 14 13:40 kube-proxy.csr
-rw-r--r-- 1 root root  230 Jan 14 13:39 kube-proxy-csr.json
-rw------- 1 root root 1679 Jan 14 13:40 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Jan 14 13:40 kube-proxy.pem
-rw-r--r-- 1 root root 1269 Jan 14 12:58 server.csr
-rw-r--r-- 1 root root  582 Jan 14 12:56 server-csr.json
-rw------- 1 root root 1675 Jan 14 12:58 server-key.pem
-rw-r--r-- 1 root root 1635 Jan 14 12:58 server.pem
-rw-r--r-- 1 root root   84 Jan 14 13:05 token.csv
Copy the certificates into place
[root@k8s-master1 k8s-cert]# cp ca.pem ca-key.pem server.pem server-key.pem /opt/kubernetes/ssl/

6. Deploying the apiserver: generating the token file

  • Pull in kubeconfig.sh
  • Copy in its first section to generate the token file
[root@habor k8s-cert]# BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-master1 k8s-cert]# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@k8s-master1 k8s-cert]# cat token.csv 
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
​
[root@k8s-master1 k8s-cert]# mv token.csv /opt/kubernetes/cfg
[root@k8s-master1 k8s-cert]# systemctl start kube-apiserver
[root@k8s-master1 ~]# ps -ef |grep kube
root      59260      1 99 15:26 ?        00:00:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 --bind-address=192.168.30.21 --secure-port=6443 --advertise-address=192.168.30.21 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      59275  13922  0 15:26 pts/2    00:00:00 grep --color=auto kube
Check where the apiserver logs are stored
[root@k8s-master1 cfg]# ls /opt/kubernetes/logs
kube-apiserver.ERROR
kube-apiserver.INFO
kube-apiserver.k8s-master1.unknownuser.log.ERROR.20190713-195308.66108
kube-apiserver.k8s-master1.unknownuser.log.ERROR.20190713-195313.66130
The apiserver listens on its default ports
[root@master1 ~]# netstat -lntup  |grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      20608/kube-apiserve
[root@master1 ~]# netstat -lntup  |grep 6443
tcp        0      0 192.9.200.191:6443      0.0.0.0:*               LISTEN      20608/kube-apiserve
Deploy controller-manager
[root@master master]# cat controller-manager.sh
#!/bin/bash
​
MASTER_ADDRESS=$1
​
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
​
​
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
​
[root@k8s-master k8s]# chmod +x controller-manager.sh
  • Point it at the local insecure endpoint, 127.0.0.1
[root@k8s-master1 k8s]# ./controller-manager.sh 127.0.0.1
Deploy the scheduler
[root@master master]# cat scheduler.sh
#!/bin/bash
​
MASTER_ADDRESS=$1
​
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
​
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
​
EOF
​
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
​
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
​
[root@master1 ~]# chmod +x scheduler.sh
[root@master1 ~]# ./scheduler.sh 127.0.0.1
Copy kubectl into /usr/bin so it can be run directly
[root@k8s-master1 ~]# cp /root/soft/kubernetes/server/bin/kubectl /usr/bin/
List the API resources and their short names
[root@k8s-master1 ~]# kubectl api-resources
[root@master1 bin]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

7. Deploying the nodes

[root@k8s-master1 cfg]# cat token.csv 
aa70bb385b5a864e477b8c641fbef3d0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  • Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
​
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
2. Create the kubeconfig files
  • These carry the credentials; only with them can a client access the apiserver.
  • Delete what was generated above and start again from here.
[root@k8s-master1 k8s-cert]# vim kubeconfig.sh 
BOOTSTRAP_TOKEN=aa70bb385b5a864e477b8c641fbef3d0
APISERVER=$1
SSL_DIR=$2
​
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
​
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
​
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
​
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
--user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
​
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
​
#----------------------
​
# Create the kube-proxy kubeconfig file
​
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
​
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Run the script
[root@master1 ~]# ./kubeconfig.sh 192.9.200.191 /root/api-cert
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" modified.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" modified.
Switched to context "default".
Confirm the token made it into bootstrap.kubeconfig
[root@k8s-master1 k8s-cert]# cat bootstrap.kubeconfig 
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: aa70bb385b5a864e477b8c641fbef3d0
3. Deploy the kubelet and kube-proxy components
  • bootstrap.kubeconfig is used to deploy kubelet
  • kube-proxy.kubeconfig is used to deploy kube-proxy
[root@k8s-master1 k8s-cert]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.9.200.192:/opt/kubernetes/cfg
[root@k8s-master1 k8s-cert]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.9.200.193:/opt/kubernetes/cfg

[root@k8s-node1 ~]# unzip node.zip 
[root@node01 ~]# cat kubelet.sh
#!/bin/bash
​
NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}
​
cat <<EOF >/opt/kubernetes/cfg/kubelet
​
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
​
EOF
​
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
​
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
​
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
​
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
​
[Install]
WantedBy=multi-user.target
EOF
​
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@k8s-node1 ~]# bash kubelet.sh 192.168.30.23   # just pass the node IP
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
  • Clear any existing kube-proxy ipvs rules
  • Create the log directory and point the kubelet options at it (make sure the directory you create matches the --log-dir value)
[root@k8s-node1 cfg]# mkdir /opt/kubernetes/logs
[root@k8s-node1 kubernetes]# vim cfg/kubelet
KUBELET_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/log \
--v=4 \
  • Copy the kubelet and kube-proxy binaries into this directory on each node
[root@k8s-master1 ~]# scp soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.23:/opt/kubernetes/bin/
[root@k8s-master1 ~]# scp /root/soft/kubernetes/server/bin/{kubelet,kube-proxy} root@192.168.30.24:/opt/kubernetes/bin/
​
[root@k8s-node1 kubernetes]# systemctl restart kubelet
[root@k8s-node1 kubernetes]# ps -ef |grep kube
root      10953      1  0 16:31 ?        00:00:06 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.30.21:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      34160      1  6 20:58 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.168.30.23 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      34183  16147  0 20:58 pts/1    00:00:00 grep --color=auto kube
Check the certificate signing requests on the master
[root@master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-vv0VG2YgoxIUXsFzcrzrpwpCqAKM6Vy5Fv7aMnQ6dkE   34s   kubelet-bootstrap   Pending
Approve the certificate: kubectl certificate approve followed by the CSR name
[root@master1 ~]# kubectl certificate approve node-csr-vv0VG2YgoxIUXsFzcrzrpwpCqAKM6Vy5Fv7aMnQ6dkE
certificatesigningrequest.certificates.k8s.io/node-csr-vv0VG2YgoxIUXsFzcrzrpwpCqAKM6Vy5Fv7aMnQ6dkE approved
​
[root@k8s-master1 ~]# kubectl get node
NAME            STATUS     ROLES    AGE   VERSION
192.168.30.23   NotReady   <none>   7s    v1.15.1
[root@k8s-master1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.30.23   Ready    <none>   11s   v1.15.1
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   8m15s   kubelet-bootstrap   Approved,Issued
[root@k8s-node1 ~]# vim proxy.sh 
[root@k8s-node1 ~]# bash proxy.sh 192.168.30.23
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node1 ~]# ps -ef |grep kube-proxy
root      35841      1  0 21:14 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.23 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      36005  16147  0 21:15 pts/1    00:00:00 grep --color=auto kube-proxy

7.2 Deploying the second node

[root@k8s-node1 ~]# scp -r /opt/kubernetes/ root@192.9.200.193:/opt
[root@k8s-node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.9.200.193:/usr/lib/systemd/system/
Now work on node2
[root@k8s-node2 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node2 cfg]# ls
bootstrap.kubeconfig  kubelet         kubelet.kubeconfig  kube-proxy.kubeconfig
flanneld              kubelet.config  kube-proxy
[root@k8s-node2 cfg]# cd ../ssl
[root@k8s-node2 ssl]# ls
kubelet-client-2019-07-13-21-06-07.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
Delete the certificates issued to node1; they were copied over and are not valid for node2
[root@k8s-node2 ssl]# rm -rf *
Change the IPs: find them in the configuration files and switch them to the second node's address
[root@k8s-node2 cfg]# grep 23 *
kubelet:--hostname-override=192.9.200.193 \
kubelet.config:address: 192.9.200.193
kube-proxy:--hostname-override=192.9.200.193 \
Clear kube-proxy's ipvs rules here as well.
After changing all of these to node2's own IP, start the services:
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# systemctl restart kube-proxy.service
[root@k8s-node2 cfg]# ps -ef |grep kube
root      62846      1  0 16:49 ?        00:00:07 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.9.200.191:2379,https://192.9.200.193:2379,https://192.9.200.192:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      86738      1  6 21:27 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/log --v=4 --hostname-override=192.9.200.191 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      86780      1 35 21:28 ?        00:00:02 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.30.24 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
root      86923  66523  0 21:28 pts/1    00:00:00 grep --color=auto kube
On the master we can see a new CSR from the freshly joined node
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   90s   kubelet-bootstrap   Pending
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   31m   kubelet-bootstrap   Approved,Issued
Approve the certificate
[root@k8s-master1 ~]# kubectl certificate approve node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo
certificatesigningrequest.certificates.k8s.io/node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo approved
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-eH_jPNUBXJF6sIii9SvNz9fW71543MLjPvOYWeDteqo   3m18s   kubelet-bootstrap   Approved,Issued
node-csr-xLNLbvb3cibW-fyr_5Qyd3YuUYAX9DJgDwViu3AyXMk   33m     kubelet-bootstrap   Approved,Issued
Check the node status
[root@master1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.9.200.192   Ready    <none>   16m   v1.15.2
192.9.200.193   Ready    <none>   31s   v1.15.2

8. Creating a test instance

[root@k8s-master1 ~]# kubectl run nginx --image=nginx
[root@k8s-master1 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-wb228   1/1     Running   0          49s
Expose an external port for client access
[root@k8s-master1 ~]#  kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
[root@k8s-master1 ~]# kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        20h
nginx        NodePort    10.0.0.27    <none>        88:44364/TCP   20h
  • The service is reachable from both inside and outside the cluster (see the quick check below).
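A quick check, reusing the ClusterIP and NodePort from the svc output above together with a node IP from the environment table (adjust to your own values):
curl -I http://10.0.0.27:88          # from inside the cluster, via the service port
curl -I http://192.9.200.192:44364   # from outside, via the NodePort on any node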
View the pod logs
[root@k8s-master1 ~]# kubectl logs nginx-7cdbd8cdc9-2qrcw 
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-7cdbd8cdc9-2qrcw)

If you hit this error, a default cluster role binding for anonymous access is missing.
Make sure /opt/kubernetes/cfg/kubelet.config allows anonymous authentication:
authentication:
  anonymous:
    enabled: true
Then grant the anonymous user a cluster role:
[root@k8s-master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

[root@k8s-master1 ~]# kubectl logs nginx-7cdbd8cdc9-2qrcw 
172.17.55.0 - - [18/Jul/2019:08:08:22 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.55.0 - - [18/Jul/2019:08:08:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.55.0 - - [18/Jul/2019:08:08:27 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.46.1 - - [18/Jul/2019:08:14:37 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"
172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"
172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0" "-"
172.17.46.1 - - [18/Jul/2019:08:52:25 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10

9. Deploying master high availability

[root@k8s-master1 ~]# scp -r /opt/kubernetes/ root@192.9.200.194:/opt
[root@k8s-master1 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.9.200.194:/usr/lib/systemd/system
Edit the IPs in the kube-apiserver configuration

[root@k8s-master2 cfg]# grep 191 *
kube-apiserver:--etcd-servers=https://192.9.200.191:2379,https://192.9.200.192:2379,https://192.9.200.193:2379 \
kube-apiserver:--bind-address=192.168.30.21 \
kube-apiserver:--advertise-address=192.168.30.21 \
Start kube-apiserver, kube-scheduler and kube-controller-manager

[root@k8s-master2 ~]# systemctl start kube-apiserver
[root@k8s-master2 ~]# systemctl start kube-scheduler.service
[root@k8s-master2 ~]# systemctl start kube-controller-manager.service
[root@k8s-master2 ~]# ps -ef |grep kube
root       6840      1 12 14:10 ?        00:00:13 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.30.21:2379,https://192.168.30.22:2379,https://192.168.30.23:2379,https://192.168.30.24:2379 --bind-address=192.168.30.22 --secure-port=6443 --advertise-address=192.9.200.192 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root       6913      1  9 14:12 ?        00:00:01 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       6945      1 14 14:12 ?        00:00:01 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
root       6953   3519 10 14:12 pts/1    00:00:00 grep --color=auto kube
Copy kubectl over from master1 and check the cluster state

[root@k8s-master1 ~]# scp /usr/bin/kubectl root@192.9.200.194:/usr/bin
  • Because the etcd cluster is shared, master2 sees the same cluster state
[root@k8s-master2 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-3               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
[root@k8s-master2 ~]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.9.200.193   Ready    <none>   41m   v1.15.1
192.9.200.192   Ready    <none>   37m   v1.15.1

10. Installing nginx for load balancing

Prerequisites, installed on both k8s-LB1 and k8s-LB2:

sudo yum install yum-utils
Set up the yum repository by creating /etc/yum.repos.d/nginx.repo with the following contents:
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
​
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
Install nginx by running the command below, then:
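With the repo above enabled, the install itself is a single command:
sudo yum install -y nginx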
  • Raise the worker process count to 4 (see the snippet after the stream block below).
  • Configure the load balancer: the stream upstream pool lists the addresses to balance, i.e. the master apiserver IPs.
  • The LB proxies to them, so clients access the LB and it distributes requests across the backend master hosts.
stream {
    upstream k8s-apiserver {
        server 192.9.200.191:6443;
        server 192.9.200.194:6443;
    }
    server {
        listen 192.9.200.187:6443;
        proxy_pass k8s-apiserver;
    }
}
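To go with the worker-process note above, the top (main) context of /etc/nginx/nginx.conf would carry a line like the following; the stream{} block also sits at top level, outside http{}:
worker_processes 4;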
[root@k8s-LB1 ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@k8s-LB1 ~]# systemctl restart nginx
[root@k8s-LB1 ~]# ps -ef |grep nginx
root       2394      1  0 14:56 ?        00:00:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx      2395   2394  0 14:56 ?        00:00:00 nginx: worker process
nginx      2396   2394  0 14:56 ?        00:00:00 nginx: worker process
nginx      2397   2394  0 14:56 ?        00:00:00 nginx: worker process
nginx      2398   2394  0 14:56 ?        00:00:00 nginx: worker process
root       2414   1912  0 14:56 pts/0    00:00:00 grep --color=auto nginx
Confirm that we are listening on port 6443
[root@k8s-LB1 ~]# netstat -anpt |grep 6443
tcp        0      0 192.9.200.187:6443      0.0.0.0:*               LISTEN      2394/nginx: master
On the node, change the server addresses to point at the load balancer IP 192.9.200.187
[root@k8s-node1 cfg]# vim bootstrap.kubeconfig 
server: https://192.9.200.187:6443
​
[root@k8s-node1 cfg]# vim kubelet.kubeconfig 
    server: https://192.9.200.187:6443
​
[root@k8s-node1 cfg]# vim kube-proxy.kubeconfig  
server: https://192.9.200.187:6443
Restart kubelet
[root@k8s-node1 cfg]# systemctl restart kubelet
[root@k8s-node1 cfg]# ps -ef |grep kubelet
root      39714      1  7 15:05 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.9.200.193 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      39903   4302  0 15:05 pts/2    00:00:00 grep --color=auto kubelet
​
[root@k8s-node2 cfg]# vim kubelet.kubeconfig 
 server: https://192.9.200.187:6443
​
[root@k8s-node2 cfg]# vim kube-proxy.kubeconfig 
 server: https://192.9.200.187:6443
​
[root@k8s-node2 cfg]# vim bootstrap.kubeconfig 
 server: https://192.9.200.187:6443
Restart kubelet
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# ps -ef |grep kubelet
root     101094      1  7 15:09 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.9.200.193 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root     101283  13585  0 15:09 pts/1    00:00:00 grep --color=auto kubelet
Enable nginx access logging so the requests coming from the nodes are recorded
[root@k8s-LB1 ~]# vim /etc/nginx/nginx.conf 
stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
[root@k8s-LB1 ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
​
[root@k8s-LB1 ~]# systemctl reload nginx
[root@k8s-LB1 ~]# ls /var/log/nginx/
access.log  error.log  k8s-access.log
Test whether logging works
  • Restart kubelet on node2 and watch k8s-LB1's log
  • The log shows requests arriving and being proxied for both nodes, so the nginx load balancing is working
[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log 
192.9.200.193 192.9.200.187:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.192 192.9.200.187:6443 - 15/Jan/2020:11:14:51 +0800 200

11. Deploying active/standby LBs with keepalived for a highly available VIP

  • nginx was installed on both LBs earlier, so just scp the configuration over
[root@k8s-LB1 ~]# scp /root/etc/nginx/nginx.conf root@192.9.200.186:/etc/nginx/nginx.conf
Change the listen address of the proxy to 192.9.200.186
[root@k8s-LB2 yum.repos.d]# vim /etc/nginx/nginx.conf 
stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.9.200.191:6443;
        server 192.9.200.192:6443;
    }
    server {
        listen 192.9.200.186:6443;
        proxy_pass k8s-apiserver;
    }
}
​
[root@k8s-LB2 yum.repos.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf
Restart; logging is working here as well

[root@k8s-LB2 yum.repos.d]# systemctl restart nginx
[root@k8s-LB2 ~]# tail /var/log/nginx/k8s-access.log 
Install keepalived on both LB nodes
[root@k8s-LB1 ~]# yum install keepalived
[root@k8s-LB2 ~]# yum install keepalived
Edit the configuration file on the master LB
[root@k8s-LB1 ~]# rz -E
rz waiting to receive.
[root@k8s-LB1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf 
cp: overwrite "/etc/keepalived/keepalived.conf"? y
​
[root@k8s-LB1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
​
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
​
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
​
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID, unique per instance
    priority 100    # priority; the backup server uses 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
 virtual_ipaddress {
        192.9.200.188/24
    }
    track_script {
        check_nginx
    }
}
​
/usr/local/nginx/sbin/check_nginx.sh
​
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
​
if [ "$count" -eq 0 ];then
    /etc/init.d/keepalived stop
fi
  • Write a script that checks the nginx process; if nginx is not running, stop keepalived. It is referenced by the vrrp_script block in the configuration above.
[root@k8s-LB1 ~]# vim /etc/keepalived/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
​
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@k8s-LB1 keepalived]# chmod +x /etc/keepalived/check_nginx.sh
[root@k8s-LB1 keepalived]# systemctl restart keepalived
[root@k8s-LB1 keepalived]# ps -ef |grep keepalived
root       4085      1  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D
root       4086   4085  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D
root       4087   4085  0 16:15 ?        00:00:00 /usr/sbin/keepalived -D
root       4111   1912  0 16:15 pts/0    00:00:00 grep --color=auto keepalived
The VIP defined in the configuration is now bound to the interface
[root@k8s-LB1 keepalived]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:12:31:53 brd ff:ff:ff:ff:ff:ff
    inet 192.9.200.187/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.9.200.188/24 scope global secondary ens33
Copy LB1's configuration to LB2, change the state from MASTER to BACKUP and set the priority to 90
[root@k8s-LB1 ~]# scp /etc/keepalived/keepalived.conf root@192.9.200.186:/etc/keepalived
! Configuration File for keepalived
​
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
​
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
​
vrrp_instance VI_1 {
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP router ID, unique per instance
    priority 90    # priority; the backup server is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {  
        auth_type PASS
        auth_pass 1111
    }   
 virtual_ipaddress {
        192.9.200.188/24
    }
    track_script {
        check_nginx
    }
}
​
/usr/local/nginx/sbin/check_nginx.sh
​
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
​
if [ "$count" -eq 0 ];then
    /etc/init.d/keepalived stop
fi
Copy the check script over as well
[root@k8s-LB1 ~]# scp /etc/keepalived/check_nginx.sh root@192.9.200.186:/etc/keepalived
[root@k8s-LB2 keepalived]# ls
check_nginx.sh  keepalived.conf
[root@k8s-LB2 keepalived]# chmod +x check_nginx.sh 
[root@k8s-LB2 ~]# systemctl start keepalived
[root@k8s-LB2 ~]# ps -ef |grep keepalived
root      58283      1  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D
root      58285  58283  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D
root      58286  58283  0 16:32 ?        00:00:00 /usr/sbin/keepalived -D
root      58360   2184  0 16:33 pts/0    00:00:00 grep --color=auto keepalived
Test whether keepalived fails over correctly
  • Stop nginx on LB1 and the VIP floats over to LB2
[root@k8s-LB1 ~]# systemctl stop nginx
[root@k8s-LB2 ~]# ip a
 valid_lft 1613sec preferred_lft 1613sec
    inet 192.9.200.186/24 brd 192.168.30.255 scope global secondary noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.9.200.188/24 scope global secondary ens33
  • Because the check script stops keepalived when nginx dies, the VIP will not float back on its own; once nginx is fixed, restart keepalived on LB1.
  • Stopping keepalived from the script is deliberate: combined with alerting, it tells us that a service actually failed and needs attention. Without the script the VIP would still fail over, but we would not know the service had a problem. After LB1 is repaired and keepalived restarted, the VIP returns to LB1, because its priority (100) is higher and it preempts.
[root@k8s-LB1 ~]# systemctl start nginx
[root@k8s-LB1 ~]# systemctl restart keepalived
[root@k8s-LB1 ~]# ip a
  inet 192.9.200.187/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.9.200.188/24 scope global secondary ens33

12. Pointing the cluster at the VIP

  • All that is left is to change the server address in the node kubeconfigs to the virtual IP in front of the masters, i.e. the VIP 192.9.200.188 (a one-liner for the edits is shown after the per-node commands below)
[root@k8s-node2 cfg]# vim bootstrap.kubeconfig 
[root@k8s-node2 cfg]# vim kubelet.kubeconfig 
[root@k8s-node2 cfg]# vim kube-proxy.kubeconfig 
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-node2 cfg]# systemctl restart kube-proxy
​
[root@k8s-node1 cfg]# vim bootstrap.kubeconfig 
[root@k8s-node1 cfg]# vim kubelet.kubeconfig 
[root@k8s-node1 cfg]# vim kube-proxy.kubeconfig 
[root@k8s-node1 cfg]# systemctl restart kubelet
[root@k8s-node1 cfg]# systemctl restart kube-proxy
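Editing the three kubeconfigs by hand works; equivalently, the server: line can be switched in one shot with sed. A sketch, assuming the kubeconfigs on this node currently point at master1 (192.9.200.191); adjust the old address if yours point at master2:

cd /opt/kubernetes/cfg
# swap the apiserver address for the VIP in all three kubeconfigs, then verify
sed -i 's#https://192.9.200.191:6443#https://192.9.200.188:6443#g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
grep server: bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig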
Checking the access log, no requests are arriving through the VIP yet, so the IP that nginx listens on needs to be changed:
[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log 
192.9.200.193 192.9.200.191:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.192 192.9.200.194:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.192 192.9.200.191:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.193 192.9.200.194:6443 - 15/Jan/2020:11:14:51 +0800 200
[root@k8s-LB1 ~]# vim /etc/nginx/nginx.conf 
 server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
​
[root@k8s-LB1 ~]# systemctl restart nginx
[root@k8s-LB2 ~]# vim /etc/nginx/nginx.conf 
 server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
​
[root@k8s-LB2 ~]# systemctl restart nginx
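For reference, the listen directive sits inside the stream block that load-balances to both apiservers; a sketch of that block after the change (the upstream and log_format shown here are assumed to match what was configured when nginx was first set up):

stream {
    log_format main '$remote_addr $upstream_addr - $time_local $status';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.9.200.191:6443;
        server 192.9.200.194:6443;
    }

    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}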
Restart a node to test, then check the log again:
[root@k8s-node2 cfg]# systemctl restart kubelet
[root@k8s-LB1 ~]# tail /var/log/nginx/k8s-access.log 
192.9.200.193 192.9.200.191:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.192 192.9.200.194:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.192 192.9.200.191:6443 - 15/Jan/2020:11:14:51 +0800 200
192.9.200.193 192.9.200.194:6443 - 15/Jan/2020:11:14:51 +0800 200
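To confirm the VIP path itself, you can also hit the apiserver through it directly. A quick check (-k skips certificate verification; depending on whether anonymous auth is enabled you may get a version string or a 401/403, but any HTTP response proves the VIP -> nginx -> apiserver chain works):

curl -k https://192.9.200.188:6443/version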
Connecting to the kubernetes cluster remotely
  • Set up the kubectl tool on the host you want to manage the cluster from
[root@habor ~]# scp /opt/kubernetes/bin/kubectl root@192.9.200.189:/usr/bin/

Generate the kubeconfig file

[root@habor ~]# cat cluster-config.sh
### cluster entry: point kubectl at the VIP and embed the CA cert
kubectl config set-cluster kubernetes \
--server=https://192.9.200.188:6443 \
--embed-certs=true \
--certificate-authority=/root/k8s/k8s-cert/ca.pem \
--kubeconfig=config

### admin user: client certificate and key (the CA is already embedded in the cluster entry)
kubectl config set-credentials cluster-admin \
--embed-certs=true \
--client-key=/root/k8s/k8s-cert/admin-key.pem \
--client-certificate=/root/k8s/k8s-cert/admin.pem \
--kubeconfig=config

### bind user and cluster into a context and switch to it
kubectl config set-context default --cluster=kubernetes --user=cluster-admin --kubeconfig=config
kubectl config use-context default --kubeconfig=config
​
[root@habor ~]# ./cluster-config.sh
Cluster "kubernetes" set.
User "cluster-admin" set.
Context "default" modified.
Switched to context "default".
Verify:
[root@habor ~]# kubectl --kubeconfig=./config get node
NAME            STATUS   ROLES    AGE   VERSION
192.9.200.192   Ready    <none>   22h   v1.15.2
192.9.200.193   Ready    <none>   22h   v1.15.2
Set an alias to make it easier to use:
[root@habor ~]# vi ~/.bashrc
# kubectl
alias kubectl='kubectl --kubeconfig=/root/config'
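Reload the shell configuration so the alias takes effect, then kubectl can be used without the --kubeconfig flag:

[root@habor ~]# source ~/.bashrc
[root@habor ~]# kubectl get node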

Some problems encountered when deploying kubernetes

范阳布衣阅读(1093)

Deploying ingress

After deploying, checking the ingress controller's status shows it cannot connect to port 10254:

Liveness probe failed: Get http://192.9.201.151:10254/healthz: dial tcp 192.9.201.151:10254: connect: connection refused
  Normal   Killing    6m16s (x2 over 6m56s)  kubelet, 192.9.201.151  Killing container with id docker://nginx-ingress-controller:Container failed liveness probe.. Container will be killed and recreated.

Solution

Add the --masquerade-all=true flag to the kube-proxy configuration file (/opt/kubernetes/cfg/kube-proxy), restart kube-proxy, then recreate the ingress-controller, as sketched below.
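A sketch of what the change looks like, assuming the binary-deployment kube-proxy config file uses a KUBE_PROXY_OPTS variable; the existing flag values and the kubeconfig path are placeholders for your own:

# /opt/kubernetes/cfg/kube-proxy (existing flags kept, --masquerade-all added)
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.9.200.192 \
--cluster-cidr=10.0.0.0/24 \
--masquerade-all=true \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

# apply the change
systemctl restart kube-proxy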

Deploy a Kubernetes Cluster in 30 Minutes [1.15]

范阳布衣阅读(405)

Author: 李振良

kubeadm is a tool released by the official community for quickly deploying a kubernetes cluster.

It can stand up a kubernetes cluster with just two commands:

# Create a Master node
$ kubeadm init

# Join a Node into the current cluster
$ kubeadm join <Master node IP and port>

1. Installation requirements

Before starting, the machines used to deploy the Kubernetes cluster must meet the following conditions:

  • One or more machines running CentOS7.x-86_x64
  • Hardware: 2GB+ RAM, 2+ CPUs, 30GB+ disk
  • Full network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled

2. Learning objectives

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy the container network add-on
  4. Deploy Kubernetes Nodes and join them to the cluster
  5. Deploy the Dashboard web UI to view Kubernetes resources visually

3. Prepare the environment

Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld

Disable selinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config 
$ setenforce 0

Disable swap:
$ swapoff -a        # temporary
$ vim /etc/fstab    # permanent: comment out the swap line

Add hostname-to-IP mappings to /etc/hosts (remember to set the hostnames):
$ cat /etc/hosts
192.168.31.61 k8s-master
192.168.31.62 k8s-node1
192.168.31.63 k8s-node2

Pass bridged IPv4 traffic to the iptables chains:
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system

4. Install Docker/kubeadm/kubelet on all nodes

Kubernetes' default CRI (container runtime) is Docker, so install Docker first.

4.1 Install Docker

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a

4.2 Add the Aliyun YUM repository

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.3 Install kubeadm, kubelet and kubectl

Since new versions are released frequently, pin the version here:

$ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
$ systemctl enable kubelet

5. Deploy the Kubernetes Master

Run on 192.168.31.61 (the Master).

$ kubeadm init \
  --apiserver-advertise-address=192.168.31.61 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

Because the default image registry k8s.gcr.io is not reachable from mainland China, the Aliyun registry mirror is specified here.

Set up kubectl for use:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes

6. Install the Pod network add-on (CNI)

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Make sure the quay.io registry is reachable.

If the image pull fails, you can switch to this mirror image: lizhenliang/flannel:v0.11.0-amd64 (see the sketch below).
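A sketch of doing that swap locally before applying (assuming the manifest at that commit references quay.io/coreos/flannel:v0.11.0-amd64):

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml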

7. Join Kubernetes Nodes

Run on 192.168.31.62/63 (the Nodes).

To add a new node to the cluster, run the kubeadm join command that was printed in the kubeadm init output:

$ kubeadm join 192.168.31.61:6443 --token esce21.q6hetwm8si29qxwn \
    --discovery-token-ca-cert-hash sha256:00603a05805807501d7181c3d60b478788408cfe6cedefedb1f97569708be9c5

8. Test the kubernetes cluster

Create a pod in the Kubernetes cluster to verify it is running correctly:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc

Access URL: http://NodeIP:Port

9. Deploy the Dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

The default image is not reachable from mainland China; change the image to lizhenliang/kubernetes-dashboard-amd64:v1.10.1 (see the sketch below).
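A sketch of the same swap for the Dashboard manifest (assuming it references k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1), downloading it locally so the Service change below can be made in the same file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1#lizhenliang/kubernetes-dashboard-amd64:v1.10.1#g' kubernetes-dashboard.yaml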

By default the Dashboard can only be accessed from inside the cluster; change its Service to type NodePort to expose it externally:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
$ kubectl apply -f kubernetes-dashboard.yaml

Access URL: https://NodeIP:30001

Create a service account and bind it to the default cluster-admin cluster role:

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Use the token from that output to log in to the Dashboard.