K8s 1.16.2 Deployment

The new K8s 1.16 release brings notable improvements in stability and usability; in particular, the API for expanding backend PVs has been promoted to beta, which makes managing stateful Pods with persistent storage more convenient and better suited to production needs. Below is a quick deployment guide for K8s 1.16.2.


Configuration Information

  1. Host list:
Hostname        IP
k8s-master      192.168.20.70
k8s-worker-1    192.168.20.71
k8s-worker-2    192.168.20.72
  2. Component versions:
Component       Version
CentOS 7        kernel 4.4.178
docker          18.09.5
k8s             1.16.2

System Initialization on All Nodes

  1. Disable the firewall and SELinux, upgrade the kernel to 4.4 or newer, set up hostname resolution, and synchronize time across the cluster.
  2. Configure kernel parameters:
# cat /etc/sysctl.d/k8s.conf 
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
  3. Install docker:
# ll /tmp/
total 33512
-rw-r--r-- 1 root root 19623520 Apr 18  2019 docker-ce-18.09.5-3.el7.x86_64.rpm
-rw-r--r-- 1 root root 14689524 Apr 18  2019 docker-ce-cli-18.09.5-3.el7.x86_64.rpm

mv docker-ce.repo  /etc/yum.repos.d/
yum install docker-ce-*
  4. Configure docker (the body of daemon.json was truncated in the source; a minimal configuration consistent with the systemd cgroup driver set for the kubelet below would be):
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
  5. Set iptables on each node to 'legacy' mode, or build and install iptables 1.8.0 or newer:
update-alternatives --set iptables /usr/sbin/iptables-legacy
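
The node preparation above can be sanity-checked with a short script. This is a sketch only: the temp file stands in for /etc/sysctl.d/k8s.conf, and the version string stands in for the real `iptables --version` output on a node.

```shell
# Sketch: verify the sysctl keys kubeadm's preflight checks care about,
# and detect whether iptables runs on the nft or legacy backend.
# $conf is a stand-in for /etc/sysctl.d/k8s.conf on a real node.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

ok=0
for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables; do
  grep -q "^$key = 1" "$conf" && ok=$((ok + 1))
done
echo "$ok/2 required sysctl keys set"

# Stand-in for `iptables --version`; iptables >= 1.8 reports its backend.
ver="iptables v1.8.4 (nf_tables)"
case "$ver" in
  *nf_tables*) mode=nft ;;
  *)           mode=legacy ;;
esac
echo "iptables backend: $mode"
rm -f "$conf"
```

On a real node you would feed in the actual file and the actual `iptables --version` output instead of the stand-ins.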
Configure Mirrors

  1. Configure a domestic (Aliyun) mirror repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  2. Install the corresponding components:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
  3. Change the kubelet's default cgroup driver to systemd:
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" >  /etc/sysconfig/kubelet

Install the k8s Master

Master Node Initialization

Run the following steps on the master.

  1. Run the following script to download the required images:
#!/bin/bash
images=(kube-apiserver-amd64:v1.16.2 kube-controller-manager-amd64:v1.16.2  kube-scheduler-amd64:v1.16.2 kube-proxy-amd64:v1.16.2 pause-amd64:3.1  coredns-amd64:1.6.2 etcd:3.3.15-0 )
for image in ${images[@]} ; do
  imageName=`echo $image |sed 's/-amd64//g'`
  docker pull mirrorgooglecontainers/$image
  docker tag mirrorgooglecontainers/$image k8s.gcr.io/$imageName
  docker rmi mirrorgooglecontainers/$image
done
  2. Initialize the master node, using the pod network CIDR required by flannel:
kubeadm  init --pod-network-cidr=10.244.0.0/16   --ignore-preflight-errors=ImagePull
  3. After initialization succeeds, all services on the master are up and listening:
[root@k8s-master ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6175/sshd           
tcp        0      0 127.0.0.1:21925         0.0.0.0:*               LISTEN      6188/containerd     
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      8681/kubelet        
tcp        0      0 127.0.0.1:19944         0.0.0.0:*               LISTEN      8681/kubelet        
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      9253/kube-proxy     
tcp        0      0 192.168.20.70:2379      0.0.0.0:*               LISTEN      9053/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      9053/etcd           
tcp        0      0 192.168.20.70:2380      0.0.0.0:*               LISTEN      9053/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      9053/etcd           
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      9006/kube-controlle 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      8989/kube-scheduler 
tcp6       0      0 :::22                   :::*                    LISTEN      6175/sshd           
tcp6       0      0 :::10250                :::*                    LISTEN      8681/kubelet        
tcp6       0      0 :::10251                :::*                    LISTEN      8989/kube-scheduler 
tcp6       0      0 :::6443                 :::*                    LISTEN      9098/kube-apiserver 
tcp6       0      0 :::10252                :::*                    LISTEN      9006/kube-controlle 
tcp6       0      0 :::10256                :::*                    LISTEN      9253/kube-proxy  
  4. Configure kubectl:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Before the network plugin is installed, the master node remains in the NotReady state:

# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   16m   v1.16.2
  5. Install the flannel network plugin:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

The node now reports Ready:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   25m   v1.16.2
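
The tag-rewriting logic inside the download script from step 1 can be sketched in isolation; the image list below is a subset of the one the script uses:

```shell
# Sketch: strip the `-amd64` suffix so the locally tagged names match what
# kubeadm expects under k8s.gcr.io (same sed as in the download script).
images=(kube-apiserver-amd64:v1.16.2 pause-amd64:3.1 etcd:3.3.15-0)
for image in "${images[@]}"; do
  imageName=$(echo "$image" | sed 's/-amd64//g')
  echo "mirrorgooglecontainers/$image -> k8s.gcr.io/$imageName"
done
```

Images without the suffix (such as etcd) pass through unchanged, which is why the script can mix both naming styles in one list.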

Add Nodes

Download the Required Images

On each worker node, pull the corresponding images:

docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2
docker tag  registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker tag  registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
Join the Node Hosts
  1. Get the token on the master; the token is also printed when kubeadm init runs:
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
orof8e.2u2qtt10j4p4lnx9   20h       2019-10-25T16:28:28+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
  2. Get the corresponding CA certificate hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

Note: if the join command reports that the token has expired, run kubeadm token create on the master, as prompted, to generate a new one.

  3. On each worker node to be added, run the join command with the corresponding token and hash filled in:
kubeadm join 192.168.20.70:6443 --token orof8e.2u2qtt10j4p4lnx9 --discovery-token-ca-cert-hash sha256:c752a1110d36d9bda79672d0d31425dfe113b9691bf3d2dc7123ac36b271e858

Running this same command on several nodes adds several workers.

  4. Once joining succeeds, check the node status on the master:
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   3h55m   v1.16.2
k8s-worker-1   Ready    <none>   27s     v1.16.2
  5. Add a ROLES label to the worker nodes:
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker-2 node-role.kubernetes.io/worker=worker

Check the node information again:

[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   4h2m    v1.16.2
k8s-worker-1   Ready    worker   7m19s   v1.16.2
k8s-worker-2   Ready    worker   4m48s   v1.16.2
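
The --discovery-token-ca-cert-hash value used when joining can be reproduced against any certificate. The sketch below generates a throwaway self-signed certificate as a stand-in for /etc/kubernetes/pki/ca.crt (which only exists on a real master) and runs the same openssl pipeline from step 2 over it:

```shell
# Sketch: compute a discovery-token-ca-cert-hash for a throwaway cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
  -out "$dir/ca.crt" -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Same pipeline as step 2: public key -> DER encoding -> sha256 hex digest.
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$dir"
```

On the master, point the pipeline at /etc/kubernetes/pki/ca.crt to get the hash kubeadm join expects.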

Custom Cluster Configuration

Install Metric-server

  1. Download the metric-server yaml files from: https://github.com/AndySkyL/k8s/tree/master/k8s_deploy/k8s-1.16/kubeadm-deploy/metric-server

  2. From that directory, apply all the yaml files:
kubectl create -f ./
  3. Once the pods are running, check that the apiservices entry and the top command work:
# kubectl get apiservices |grep 'metrics'
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        20m

# kubectl  top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master     250m         12%    1230Mi          48%       
k8s-worker-1   80m          4%     575Mi           14%       
k8s-worker-2   77m          3%     524Mi           13%  
Install the Dashboard

==K8s 1.16 requires Dashboard v2, which uses Metric-server by default; the v1 dashboard will fail with errors==

  1. The CA certificate the dashboard references is not created in the cluster by default, so the secrets in the stock dashboard deployment carry no actual certificate and key; logins then fail and the browser cannot validate the request. So first create a self-signed certificate:
# Create a directory for the certificate files
mkdir key && cd key

# Create the dashboard namespace
kubectl create namespace kubernetes-dashboard

# Generate a private key
openssl genrsa -out dashboard.key 2048

# Generate a certificate signing request
openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=k8s/CN=dashboard"

# Generate a self-signed certificate (-days applies here, not on the CSR)
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt -days 3650
  2. This creates three files:
[root@k8s-master key]# ll
total 12
-rw-r--r-- 1 root root  993 Oct 25 14:23 dashboard.crt
-rw-r--r-- 1 root root  899 Oct 25 14:23 dashboard.csr
-rw-r--r-- 1 root root 1679 Oct 25 14:21 dashboard.key
  3. Create a secret from the self-signed certificate and key:
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key \
--from-file=dashboard.crt -n kubernetes-dashboard
  4. Deploy the dashboard; first edit the NodePort it exposes externally:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml

# In the Service section, set the following:
...
spec:
  ports:
  - nodePort: 30727
    port: 443
    protocol: TCP
    targetPort: 8443
...

# Deploy the dashboard
kubectl create -f recommended.yaml
  5. Create an administrator account with admin privileges:
# cat admin-account.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  6. Log in directly with a node IP plus the NodePort.
  7. Get the admin token:
kubectl describe secret -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret|grep admin|awk '{print $1}')
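
The describe output ends with a token: line; extracting just the token value can be sketched as below. The sample output is illustrative, not from a real cluster:

```shell
# Sketch: pull the `token:` field out of `kubectl describe secret` output.
# $describe_output is a trimmed, made-up sample of that output.
describe_output='Name:         admin-user-token-abcde
Namespace:    kubernetes-dashboard
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkFCQyJ9.sample.payload'
token=$(echo "$describe_output" | awk '$1 == "token:" {print $2}')
echo "$token"
```

Paste the extracted token into the dashboard login form.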
Enable ipvs Mode

If ipvs is not installed, the cluster falls back to the iptables mode; the kube-proxy configuration needs to be customized as follows.

  • For a comparison of IPVS and legacy iptables, see: https://kubernetes.io/zh/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
  • For the different ways to configure ipvs, see: https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs
  1. Install the ipvs packages:
yum install -y ipvsadm ipset conntrack
  2. Change the kube-proxy mode in the default ConfigMap to ipvs:
kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
  3. Delete the old kube-proxy pods; replacements are created automatically in ipvs mode:
for i in $(kubectl get po -n kube-system | awk '/kube-proxy/ {print $1}'); do
  kubectl delete po $i -n kube-system
done
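
What the sed pipeline in step 2 actually changes can be sketched against a minimal stand-in for the kube-proxy ConfigMap (the real one carries a full KubeProxyConfiguration):

```shell
# Sketch: the same substitution applied to a minimal stand-in document for
# the kube-proxy ConfigMap.
cm='apiVersion: v1
kind: ConfigMap
data:
  config.conf: |
    mode: ""'
patched=$(echo "$cm" | sed 's/mode: ""/mode: "ipvs"/')
echo "$patched" | grep 'mode:'
```

The empty mode string is kube-proxy's "auto" default (iptables in practice); the substitution pins it to ipvs before the pods are recreated.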

Troubleshooting

Metric-server Cannot Reach the Host Nodes

If metric-server starts and its apiservices entry looks healthy, but the pod logs show errors like the following (the node hostnames cannot be resolved):

unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-worker-1: unable to fetch metrics from Kubelet k8s-worker-1 (k8s-worker-1): Get https://k8s-worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-worker-2: unable to fetch metrics from Kubelet k8s-worker-2 (k8s-worker-2): Get https://k8s-worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-2 on 10.96.0.10:53: no such host]

Resolution:
Confirm that the metric-server deployment includes the --kubelet-preferred-address-types=InternalIP argument, so nodes are scraped by IP instead of by hostname:

        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP 
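
A quick way to check whether a downloaded metric-server manifest already carries the flag is to grep for it; the fragment below is a stand-in for the deployment's command section:

```shell
# Sketch: detect a missing --kubelet-preferred-address-types flag in a
# metric-server deployment manifest ($manifest is a stand-in fragment).
manifest='        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls'
if echo "$manifest" | grep -q -- '--kubelet-preferred-address-types'; then
  status="flag present"
else
  status="flag missing"
fi
echo "$status"
```

If the flag is missing, add it to the container args and re-apply the deployment.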
