1. Cluster Planning

1.1 Machine Allocation

| Node name          | IP address   |
|--------------------|--------------|
| k8s-master.nnv5.cn | 192.168.1.41 |
| k8s-node01.nnv5.cn | 192.168.1.42 |
| k8s-node02.nnv5.cn | 192.168.1.43 |
1.2 Software Versions

| Component | Version                              |
|-----------|--------------------------------------|
| Linux     | CentOS Linux release 7.6.1810 (Core) |
| Kubeadm   | v1.16.11                             |
| Docker    | 18.09.9                              |
2. Environment Configuration

Note: every step in this chapter must be run on all nodes; k8s-master.nnv5.cn is used as the example throughout.
2.1 Set the hostname and hosts entries

```shell
/]$ hostnamectl set-hostname k8s-master.nnv5.cn   # also writes /etc/hostname
/]$ cat >> /etc/sysconfig/network <<EOF
HOSTNAME=k8s-master.nnv5.cn
EOF
/]$ hostname k8s-master.nnv5.cn && bash           # pick up the new name in the current shell
/]$ cat >> /etc/hosts <<EOF
192.168.1.41  k8s-master.nnv5.cn
192.168.1.42  k8s-node01.nnv5.cn
192.168.1.43  k8s-node02.nnv5.cn
192.168.1.252 hub.nnv5.cn
151.101.0.133 raw.githubusercontent.com
EOF
```
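After editing /etc/hosts, it is worth confirming that every name in the cluster plan actually resolves on this node. A small check, using the hostnames from the table above:

```shell
# Print the resolved address for each cluster hostname,
# or a MISSING marker if local resolution (hosts file / DNS) fails.
for h in k8s-master.nnv5.cn k8s-node01.nnv5.cn k8s-node02.nnv5.cn hub.nnv5.cn; do
  getent hosts "$h" || echo "MISSING: $h"
done
```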
2.2 Disable firewalld and SELinux

```shell
/]$ systemctl stop firewalld && systemctl disable firewalld
/]$ setenforce 0
/]$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
/]$ yum -y install iptables-services \
    && systemctl start iptables && systemctl enable iptables \
    && iptables -F && service iptables save
```
2.3 Install dependency packages

```shell
/]$ yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
```
2.4 Tune kernel parameters

Note that sysctl does not support trailing comments on a `key=value` line, so the comments below sit on their own lines.

```shell
/]$ cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid swap; the system should only page out as a last resort before OOM
vm.swappiness=0
# do not check whether enough physical memory is available before committing
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
/]$ modprobe br_netfilter
/]$ cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
/]$ sysctl -p /etc/sysctl.d/kubernetes.conf
```
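One thing `vm.swappiness=0` does not do is turn swap off, and kubeadm's preflight checks fail while swap is active. A minimal sketch to disable it now and across reboots (assumes a standard /etc/fstab layout):

```shell
# kubeadm's preflight checks refuse to run while swap is active;
# vm.swappiness=0 only discourages swapping, it does not disable it.
swapoff -a 2>/dev/null || true                  # turn swap off for this boot
if [ -f /etc/fstab ]; then                      # keep it off after reboots
  sed -ri 's@^([^#].*[[:space:]]swap[[:space:]].*)$@#\1@' /etc/fstab
fi
grep -E '^[^#].*[[:space:]]swap[[:space:]]' /etc/fstab 2>/dev/null \
  || echo "no active swap entries"
```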
2.5 Set the system time zone and sync the clock

```shell
/]$ timedatectl set-timezone Asia/Shanghai
/]$ timedatectl set-local-rtc 0              # keep the hardware clock in UTC
/]$ systemctl restart rsyslog && systemctl restart crond
/]$ ntpdate ntp1.aliyun.com
```
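A one-shot `ntpdate` drifts over time; since crond is already running, a periodic sync is a common companion. A sketch of a crontab entry (the 30-minute interval is an assumption; any NTP source works):

```
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1
```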
2.6 Stop unneeded system services

```shell
/]$ systemctl stop postfix && systemctl disable postfix
```
2.7 Load the IPVS modules so kube-proxy can schedule with IPVS

```shell
/]$ modprobe br_netfilter
/]$ modprobe -- ip_vs
/]$ modprobe -- ip_vs_rr
/]$ modprobe -- ip_vs_wrr
/]$ modprobe -- ip_vs_sh
/]$ modprobe -- nf_conntrack_ipv4
/]$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
nf_conntrack          133095  3 ip_vs,xt_conntrack,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
```
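`modprobe` does not survive a reboot. A common companion step on CentOS 7 is to drop the module list into a script under /etc/sysconfig/modules so it is reloaded at boot (the `ipvs.modules` file name is a convention, not a requirement; `/etc/modules-load.d` via systemd-modules-load is an alternative):

```shell
# Persist the IPVS module list so the modules are reloaded after a reboot.
mkdir -p /etc/sysconfig/modules
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
```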
2.8 Install Docker

```shell
/]$ cd /etc/yum.repos.d/ && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
/]$ yum install -y docker-ce-18.09.9 && systemctl enable docker && systemctl start docker
/]$ cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://ub816mdv.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["hub.nnv5.cn"]
}
EOF
/]$ systemctl daemon-reload && systemctl restart docker
```
2.9 Disable Transparent Huge Pages (THP)

Leaving THP enabled can drive system CPU usage very high for workloads such as Hadoop.

```shell
/]$ echo never > /sys/kernel/mm/transparent_hugepage/enabled
/]$ echo never > /sys/kernel/mm/transparent_hugepage/defrag
/]$ cat >> /etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
/]$ chmod +x /etc/rc.local
```
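On systemd-based CentOS 7, /etc/rc.local runs only if it is executable, and some images ship it disabled. An alternative sketch is a small oneshot unit (the `disable-thp.service` name is assumed):

```ini
# /etc/systemd/system/disable-thp.service  (hypothetical unit name)
[Unit]
Description=Disable Transparent Huge Pages
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now disable-thp.service`.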
3. Install the Kubernetes Cluster

3.1 Initialize the k8s Master Node

3.1.1 Configure the Kubernetes yum repository and install kubeadm

```shell
[root@k8s-master ~]$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]$ yum -y install kubeadm-1.16.11 kubectl-1.16.11 kubelet-1.16.11
[root@k8s-master ~]$ systemctl enable kubelet && systemctl start kubelet
```
3.1.2 Initialize the Master node

Generate a default configuration, then edit it:

```shell
[root@k8s-master ~]$ kubeadm config print init-defaults > kubeadm-config.yaml
[root@k8s-master ~]$ vim kubeadm-config.yaml
```

The edited kubeadm-config.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master.nnv5.cn
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.16.11
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
```

Run the initialization (trailing output shown):

```shell
[root@k8s-master ~]$ kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
...
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.41:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:a7935bd3a3f61616674bf00dc473555d23f8dfb44fe9651f957a8c267a3924d9
```
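If kubeadm-init.log is lost, the discovery hash can be recomputed from the cluster CA certificate using the openssl pipeline documented for kubeadm. The sketch below runs against a throwaway self-signed certificate in /tmp so it works anywhere; on the master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Throwaway stand-in CA cert; on a real master use /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# SHA-256 of the DER-encoded public key, the format that
# kubeadm join's --discovery-token-ca-cert-hash expects.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:${hash}"
```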
3.1.3 Post-initialization configuration

```shell
[root@k8s-master ~]$ mkdir -p $HOME/.kube
[root@k8s-master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# enable kubectl bash completion
[root@k8s-master ~]$ yum install -y bash-completion
[root@k8s-master ~]$ source /usr/share/bash-completion/bash_completion
[root@k8s-master ~]$ source <(kubectl completion bash)
[root@k8s-master ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc
```
3.1.4 Check pod and node status

```shell
[root@k8s-master ~]$ kubectl get pods -o wide -A
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE     IP             NODE                 NOMINATED NODE   READINESS GATES
kube-system   coredns-5644d7b6d9-mfll6                     0/1     Pending   0          2m42s   <none>         <none>               <none>           <none>
kube-system   coredns-5644d7b6d9-z29p7                     0/1     Pending   0          2m42s   <none>         <none>               <none>           <none>
kube-system   etcd-k8s-master.nnv5.cn                      1/1     Running   0          119s    192.168.1.41   k8s-master.nnv5.cn   <none>           <none>
kube-system   kube-apiserver-k8s-master.nnv5.cn            1/1     Running   0          119s    192.168.1.41   k8s-master.nnv5.cn   <none>           <none>
kube-system   kube-controller-manager-k8s-master.nnv5.cn   1/1     Running   0          2m3s    192.168.1.41   k8s-master.nnv5.cn   <none>           <none>
kube-system   kube-proxy-8vrrf                             1/1     Running   0          2m42s   192.168.1.41   k8s-master.nnv5.cn   <none>           <none>
kube-system   kube-scheduler-k8s-master.nnv5.cn            1/1     Running   0          2m7s    192.168.1.41   k8s-master.nnv5.cn   <none>           <none>

[root@k8s-master ~]$ kubectl get nodes
NAME                 STATUS     ROLES    AGE    VERSION
k8s-master.nnv5.cn   NotReady   master   3m7s   v1.16.11
```

The CoreDNS pods stay Pending and the node reports NotReady until a pod network add-on is deployed, which is the next step.
3.2 Deploy the flannel network plug-in

```shell
[root@k8s-master ~]$ cat >> /etc/hosts <<EOF
151.101.0.133 raw.githubusercontent.com
EOF
[root@k8s-master ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
```

flannel's default Network (10.244.0.0/16) matches the podSubnet set in kubeadm-config.yaml; if you change one, change the other to match.
3.3 Initialize the Node Machines and Join the Cluster

Note: run the following on every node; k8s-node01 is used as the example.
3.3.1 Configure the Kubernetes yum repository and install kubeadm

```shell
[root@k8s-node01 ~]$ cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-node01 ~]$ yum -y install kubeadm-1.16.11 kubectl-1.16.11 kubelet-1.16.11
[root@k8s-node01 ~]$ systemctl enable kubelet && systemctl start kubelet
```
3.3.2 Join the node to the cluster

Run the `kubeadm join` command printed by `kubeadm init` on the master:

```shell
[root@k8s-node01 yum.repos.d]$ kubeadm join 192.168.1.41:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:07087d54c9ffb805a55230a4823fe24609d40761868f9008b5fa53d39a651a16
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

If the original token has expired (its ttl above is 24h), generate a fresh join command on the master with `kubeadm token create --print-join-command`.