3. Deploying a Kubernetes Cluster

Foreword

This chapter is the second article in the Kubernetes tutorial series. Studying Kubernetes in depth requires a working Kubernetes environment first; given the network restrictions in mainland China, the cluster below is deployed from Alibaba Cloud mirrors.

1. Environment Preparation

1.1 Environment description and preparation

OS                           Hostname    IP address      Role    Spec
ubuntu-16.04.5-server-amd64  k8s-master  192.168.91.128  master  2 cores / 4 GB
ubuntu-16.04.5-server-amd64  k8s-node1   192.168.91.129  worker  2 cores / 4 GB
ubuntu-16.04.5-server-amd64  k8s-node2   192.168.91.131  worker  2 cores / 4 GB

1.1.1 Set the hostname (repeat on the other two nodes with their respective names)

➜  ~# hostnamectl set-hostname k8s-master
➜  ~# hostname
k8s-master

1.1.2 Configure the hosts file (identical content on all three nodes)

root@node-1 ~# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain 
192.168.91.128 k8s-master
192.168.91.129 k8s-node1
192.168.91.131 k8s-node2
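
If provisioning is scripted, the hosts entries can be appended idempotently so that re-running the step does not duplicate lines. A minimal sketch; the add_host helper and the scratch file are illustrative, and on a real node the target would be /etc/hosts:

```shell
# Idempotently append the cluster's hosts entries.
# This sketch works on a scratch copy; on a real node use /etc/hosts instead.
HOSTS_FILE=$(mktemp)

add_host() {
  # $1 = IP, $2 = hostname; append only when the exact entry is missing
  grep -qF "$1 $2" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 192.168.91.128 k8s-master
add_host 192.168.91.129 k8s-node1
add_host 192.168.91.131 k8s-node2
add_host 192.168.91.128 k8s-master   # duplicate call is a no-op

cat "$HOSTS_FILE"
```

grep -qF matches the literal entry, so the append is skipped when the line already exists.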

1.1.3 Disable the firewall

Ubuntu ships AppArmor rather than SELinux, so only the ufw firewall needs to be turned off:

sudo ufw disable

1.1.4 Disable swap

The kubelet refuses to run while swap is enabled, so turn it off permanently:

sudo sed -i '/swap/ s/^/#/' /etc/fstab
sudo swapoff -a
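
The sed expression comments out every line in /etc/fstab that mentions swap, which keeps swap disabled across reboots. Its effect can be previewed on a scratch copy (the sample fstab content below is illustrative):

```shell
# Show the effect of the sed expression above on a scratch copy of fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Same expression as above: prefix '#' to every line mentioning swap
sed -i '/swap/ s/^/#/' "$FSTAB"
cat "$FSTAB"
```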

1.2 Install Docker

1.2.1 Update the apt index and add HTTPS transport support (all hosts)

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common

1.2.2 Add the GPG key from the USTC mirror (all hosts)

curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

1.2.3 Add the Docker CE stable repository (all hosts)

sudo add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

1.2.4 Install docker-ce (all hosts)

sudo apt-get update
sudo apt install docker-ce=18.06.1~ce~3-0~ubuntu
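
Note: kubeadm init later warns that Docker is using the cgroupfs cgroup driver while systemd is the recommended one. Optionally, the driver can be switched before initializing the cluster by creating /etc/docker/daemon.json with the content below and restarting Docker (sudo systemctl restart docker). This is a suggested tweak, not part of the original procedure:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```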

2. Installing the Kubernetes Cluster

2.1 Add the apt key and repository (all hosts)

sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

2.2 Install kubelet, kubeadm, and kubectl (all hosts)

Version 1.15.2 is currently well supported, so we pin (hold) the packages at that version.

sudo apt update
sudo apt install -y kubelet=1.15.2-00 kubeadm=1.15.2-00 kubectl=1.15.2-00
sudo apt-mark hold kubelet kubeadm kubectl

2.3 Initialize the Kubernetes cluster (master only)

sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.2 --pod-network-cidr=192.169.0.0/16

Output:

[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.104]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.104 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.10.104 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 41.503569 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 767b6y.incfuyom78fl6j88
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
  
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.104:6443 --token 767b6y.incfuyom78fl6j88 \
    --discovery-token-ca-cert-hash sha256:941807715378bcbd5bd1cbe244c4bdbf00dee4e45c3b0ff3555eea746607a672 
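
The flags passed to kubeadm init above can equivalently be written as a kubeadm configuration file and applied with kubeadm init --config kubeadm.yaml. A sketch, assuming the v1beta2 config API shipped with kubeadm 1.15; the values mirror the command line above:

```yaml
# kubeadm.yaml: equivalent of the kubeadm init flags used above
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.169.0.0/16
```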


2.4 Generate the kubectl configuration file (master only)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
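
Alternatively, instead of copying admin.conf, root can point kubectl at it directly via the KUBECONFIG environment variable. This is a convenience for quick tests; the copy above remains the recommended setup for regular users:

```shell
# Point kubectl at the admin kubeconfig without copying it (run as root)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```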

2.5 Install a network plugin so pods can communicate (master only)

Kubernetes supports any network plugin that implements CNI (the Container Network Interface). The pod network must provide:

  • node-to-node connectivity
  • pod-to-pod connectivity
  • node-to-pod connectivity

Different CNI plugins support different feature sets. Common open-source choices include flannel, calico, canal, and weave. flannel is an overlay network model that builds a tunnel network between nodes with VXLAN; it is covered in a later chapter. The installation steps here use calico:

kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/rbac-kdd.yaml
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/calico.yaml
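
One caveat: calico's manifest defaults its pod pool to 192.168.0.0/16, while the cluster above was initialized with --pod-network-cidr=192.169.0.0/16. When the two differ, edit the CALICO_IPV4_POOL_CIDR environment variable in calico.yaml to match before applying it, e.g.:

```yaml
# In the calico-node container env section of calico.yaml
- name: CALICO_IPV4_POOL_CIDR
  value: "192.169.0.0/16"
```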


2.6 Check pod status in the kube-system namespace (master only)

kubectl get pod -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-h5c7l                  1/1     Running   0          5m29s
coredns-bccdc95cf-spwmh                  1/1     Running   0          5m29s
etcd-vm-0-12-ubuntu                      1/1     Running   0          4m27s
kube-apiserver-vm-0-12-ubuntu            1/1     Running   0          4m33s
kube-controller-manager-vm-0-12-ubuntu   1/1     Running   0          4m43s
kube-flannel-ds-amd64-96528              1/1     Running   0          2m18s
kube-proxy-gkldm                         1/1     Running   0          5m29s
kube-scheduler-vm-0-12-ubuntu            1/1     Running   0          4m43s

2.7 Add the worker nodes

Join the other two nodes to the cluster by running the kubeadm join command printed by kubeadm init on each of them. If the token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be generated on the master with kubeadm token create --print-join-command.

kubeadm join 192.168.10.104:6443 --token 767b6y.incfuyom78fl6j88     --discovery-token-ca-cert-hash sha256:941807715378bcbd5bd1cbe244c4bdbf00dee4e45c3b0ff3555eea746607a672


3. Verifying the Kubernetes Components

3.1 Check node status

List the registered nodes to see each one's status, role, age, and version:

kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   44m   v1.15.2
k8s-node1    Ready    <none>   42m   v1.15.2
k8s-node2    Ready    <none>   42m   v1.15.2

3.2 Check the status of the Kubernetes service components

These include the scheduler, the controller-manager, and etcd:

kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

3.3 Check the pods that make up the master

kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and coredns are all deployed as pods in the cluster, and kube-proxy on the worker nodes runs as pods as well. These pods are in turn managed by higher-level controllers such as DaemonSets.

➜  ~ kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-h5c7l                  1/1     Running   0          5m29s
coredns-bccdc95cf-spwmh                  1/1     Running   0          5m29s
etcd-vm-0-12-ubuntu                      1/1     Running   0          4m27s
kube-apiserver-vm-0-12-ubuntu            1/1     Running   0          4m33s
kube-controller-manager-vm-0-12-ubuntu   1/1     Running   0          4m43s
kube-flannel-ds-amd64-96528              1/1     Running   0          2m18s
kube-proxy-gkldm                         1/1     Running   0          5m29s
kube-scheduler-vm-0-12-ubuntu            1/1     Running   0          4m43s

3.4 Configure kubectl command completion

When interacting with Kubernetes, kubectl accepts both full and abbreviated resource names: kubectl get nodes and kubectl get no produce the same result. To work faster still, enable shell command completion.

1. Generate the kubectl bash completion script

kubectl completion bash >/etc/kubernetes/kubectl.sh
echo "source /etc/kubernetes/kubectl.sh" >>/root/.bashrc 
cat /root/.bashrc 
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
source /etc/kubernetes/kubectl.sh # load kubectl completion

2. Source the script so the configuration takes effect

[root@node-1 ~]# source /etc/kubernetes/kubectl.sh 

3. Verify completion: type kubectl get co and press TAB to autocomplete

[root@node-1~]# kubectl get co componentstatuses configmaps  controllerrevisions.apps   
[root@node-1~]# kubectl get componentstatuses 

4. For zsh completion, add the following two lines to ~/.zshrc:

plugins=(kubectl)

source <(kubectl completion zsh)

Besides completion, kubectl also supports short resource names. Below are some common examples; run kubectl api-resources for the full list, where the SHORTNAMES column shows each resource's abbreviation.

  • kubectl get componentstatuses, abbreviated kubectl get cs: component status
  • kubectl get nodes, abbreviated kubectl get no: node list
  • kubectl get services, abbreviated kubectl get svc: service list
  • kubectl get deployments, abbreviated kubectl get deploy: deployment list
  • kubectl get statefulsets, abbreviated kubectl get sts: stateful set list

3.5 Allowing pods to run on the master

By default pods are not scheduled on the master, because it carries the node-role.kubernetes.io/master:NoSchedule taint. To allow scheduling there, remove the taint (k8s-master is the master's node name):

kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-

4. Removing Kubernetes

Run kubeadm reset on each node to tear the cluster down. Note that reset does not remove $HOME/.kube, CNI configuration, or iptables rules; clean those up manually if needed.

➜  ~ sudo kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "vm-0-12-ubuntu" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0302 22:05:10.200243   30513 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"

References

  1. Container runtime installation: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
  2. Installing kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  3. Creating a cluster with kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
  4. https://blog.csdn.net/shykevin/article/details/98811021
  5. Offline kubeadm deployment of a 1.14.1 cluster: https://cloud.tencent.com/developer/article/1479625?s=original-sharing&from=10680