Setting Up Kubernetes 1.15 with kubeadm

Installing Kubernetes using kubeadm.

[TOC]

Environment Preparation

Run these steps on all nodes. In this setup every node runs CentOS 7.6.

1. Configure hosts resolution

# cat /etc/hosts
10.122.17.206 k8s-node3
10.122.17.205 k8s-node2
10.122.17.204 k8s-node1
10.122.17.200 k8s-master
# Set up passwordless SSH from the master host to the nodes
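The comment above mentions passwordless SSH from the master to the nodes but omits the commands. A minimal sketch, assuming root on k8s-master and the hostnames from /etc/hosts:

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair with no passphrase
# ssh-copy-id root@k8s-node1                 # prompts once for each node's password
# ssh-copy-id root@k8s-node2
# ssh-copy-id root@k8s-node3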

2. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
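If disabling firewalld outright is not acceptable in your environment, an alternative sketch (not part of the original steps) is to open only the ports kubeadm documents as required:

# firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
# firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
# firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, controller-manager, scheduler
# firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
# firewall-cmd --reload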

3. Disable SELinux

# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# cat /etc/selinux/config
SELINUX=disabled

4. Create /etc/sysctl.d/k8s.conf with the following content, then apply it

# cat /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf

5. Install IPVS

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required kernel modules are loaded automatically whenever a node reboots. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the modules loaded correctly.
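On a systemd-based system such as CentOS 7 there is also a native mechanism for this; as an alternative sketch (not in the original), a drop-in under /etc/modules-load.d/ gives the same persistence:

# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
# systemctl restart systemd-modules-load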
Next, make sure the ipset package is installed on every node:

# yum install ipset -y

To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm:

# yum install ipvsadm -y

6. Install a time synchronization service

# yum install chrony -y
# systemctl enable chronyd
# systemctl start chronyd
# chronyc sources

7. Disable swap

# swapoff -a

Edit /etc/fstab to comment out the swap auto-mount entry, and confirm with free -m that swap is off. To tune the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:
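A typical value (assumed here, since the original omits the line) is:

vm.swappiness = 0

A minimal sketch of the accompanying steps (comment out the swap entry, reload sysctl, verify):

# sed -i '/ swap / s/^/#/' /etc/fstab
# sysctl -p /etc/sysctl.d/k8s.conf
# free -m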

8. Install docker-ce

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum makecache fast
# yum list docker-ce --showduplicates | sort -r
# Pick a version to install; here we install the latest one
# yum install docker-ce-19.03.1-3.el7 -y

9. Configure a Docker registry mirror

# mkdir -p /etc/docker   # ensure the directory exists before Docker's first start
cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://ot2k4d59.mirror.aliyuncs.com/"
  ]
}
EOF

10. Start Docker

# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
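To confirm that the daemon picked up the systemd cgroup driver configured in step 9 (a verification step, not in the original):

# docker info | grep -i cgroup   # should report: Cgroup Driver: systemd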

11. Install kubeadm, kubelet, and kubectl

Configure the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl:

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Enable kubelet at boot:

# systemctl enable kubelet.service

Everything up to this point must be performed on all nodes.

Initializing the Cluster

1. Create the kubeadm init configuration file on the master node

# kubeadm config print init-defaults > kubeadm.yaml
# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.122.17.200 # internal IP of the apiserver node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun registry mirror so images can be pulled without a proxy
kind: ClusterConfiguration
kubernetesVersion: v1.15.3 # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16 # we will install the calico network plugin, so networking.podSubnet must be set to 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # kube-proxy mode
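Optionally (not part of the original steps), kubeadm can pre-pull the control-plane images using the same config file, which surfaces registry problems before init runs:

# kubeadm config images pull --config kubeadm.yaml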

2. Initialize the master with the configuration file

# kubeadm init --config kubeadm.yaml
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.122.17.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:44a4ed41c1f69a23139b5d1e2bb6ec158d835257c6aab0a7607a122e77f51c04

3. Copy the kubeconfig file

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. kubectl auto-completion

# yum install -y bash-completion
# locate bash_completion
/etc/bash_completion.d
/etc/bash_completion.d/git
/etc/bash_completion.d/iprutils
/etc/bash_completion.d/oscap
/etc/bash_completion.d/redefine_filedir
/etc/bash_completion.d/scl.bash
/etc/bash_completion.d/yum-utils.bash
/etc/profile.d/bash_completion.sh
/root/naftis/vendor/github.com/spf13/cobra/bash_completions.go
/usr/share/bash-completion/bash_completion
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
# echo 'source /usr/share/bash-completion/bash_completion' >> .bashrc
# echo 'source <(kubectl completion bash)' >>.bashrc

Adding Worker Nodes

1. Add worker nodes with kubeadm

Remember to complete all of the preparation steps above on each node first; kubeadm and kubelet must already be installed. Then run the following command on each node:

# kubeadm join 10.122.17.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:44a4ed41c1f69a23139b5d1e2bb6ec158d835257c6aab0a7607a122e77f51c04

If you lose the join command above, you can regenerate it on the master node with kubeadm token create --print-join-command, as shown below.

# kubeadm token create --print-join-command
kubeadm join 10.122.17.200:6443 --token dkb75n.dxkqjiv96eu7ky2s \
--discovery-token-ca-cert-hash sha256:44a4ed41c1f69a23139b5d1e2bb6ec158d835257c6aab0a7607a122e77f51c04

2. After the join succeeds, run kubectl get nodes on the master node

# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   3m    v1.15.3
k8s-node1    NotReady   <none>   48m   v1.15.3
k8s-node2    NotReady   <none>   24m   v1.15.3
k8s-node3    NotReady   <none>   13m   v1.15.3

The nodes show NotReady because no network plugin has been installed yet; we install one next.

Installing the Calico Network Plugin

Run the following commands on the control-plane node.

1. Install the calico network plugin

# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# If a node has multiple NICs, specify the internal NIC in the manifest
# vi calico.yaml
......
    spec:
      containers:
      - env:
        - name: DATASTORE_TYPE
          value: kubernetes
        - name: IP_AUTODETECTION_METHOD # add this environment variable to the DaemonSet
          value: interface=eth0         # specify the internal NIC
        - name: WAIT_FOR_DATASTORE
          value: "true"
......
# kubectl apply -f calico.yaml # install the calico network plugin

Wait a while, then check the pod status.
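A convenience not in the original: the -w flag streams status changes, so you can watch the calico pods come up without re-running the command:

# kubectl get pods -n kube-system -w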

2. Check the pod status

# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-65b8787765-hx9cr   1/1     Running   0          3h50m
calico-node-2rlsz                          1/1     Running   0          3h18m
calico-node-5vz46                          1/1     Running   0          3h50m
calico-node-6szsd                          1/1     Running   0          3h50m
calico-node-s5nmr                          1/1     Running   0          3h29m
coredns-bccdc95cf-9jf8w                    1/1     Running   0          4h8m
coredns-bccdc95cf-gk27l                    1/1     Running   0          4h8m
etcd-k8s-master                            1/1     Running   0          4h7m
kube-apiserver-k8s-master                  1/1     Running   0          4h7m
kube-controller-manager-k8s-master         1/1     Running   0          4h7m
kube-proxy-5jbft                           1/1     Running   0          3h29m
kube-proxy-d9p8w                           1/1     Running   0          3h18m
kube-proxy-h6n88                           1/1     Running   0          3h54m
kube-proxy-mqgj7                           1/1     Running   0          4h8m
kube-scheduler-k8s-master                  1/1     Running   0          4h7m

The network plugin is running, and the node status is now normal:

3. Check the node status

# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   4h10m   v1.15.3
k8s-node1    Ready    <none>   3h55m   v1.15.3
k8s-node2    Ready    <none>   3h31m   v1.15.3
k8s-node3    Ready    <none>   3h20m   v1.15.3
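Since kube-proxy was configured with mode: ipvs, you can additionally verify (not shown in the original) that IPVS rules were created, using the ipvsadm tool installed during preparation:

# ipvsadm -Ln   # virtual servers for the service VIPs (e.g. 10.96.0.1:443) should be listed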

4. Install calicoctl

Install calicoctl as a Kubernetes pod, using the Kubernetes API datastore:

# kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calicoctl.yaml

You can then run calicoctl commands through kubectl, as shown below:

# kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide
NAME LABELS
kns.default map[]
kns.kube-node-lease map[]
kns.kube-public map[]
kns.kube-system map[]
ksa.default.default map[]
ksa.kube-node-lease.default map[]
ksa.kube-public.default map[]
ksa.kube-system.attachdetach-controller map[]
ksa.kube-system.bootstrap-signer map[]
ksa.kube-system.calico-kube-controllers map[]
ksa.kube-system.calico-node map[]
ksa.kube-system.calicoctl map[]
ksa.kube-system.certificate-controller map[]
ksa.kube-system.clusterrole-aggregation-controller map[]
ksa.kube-system.coredns map[]
ksa.kube-system.cronjob-controller map[]
ksa.kube-system.daemon-set-controller map[]
ksa.kube-system.default map[]
ksa.kube-system.deployment-controller map[]
ksa.kube-system.disruption-controller map[]
ksa.kube-system.endpoint-controller map[]
ksa.kube-system.expand-controller map[]
ksa.kube-system.generic-garbage-collector map[]
ksa.kube-system.horizontal-pod-autoscaler map[]
ksa.kube-system.job-controller map[]
ksa.kube-system.kube-proxy map[]
ksa.kube-system.namespace-controller map[]
ksa.kube-system.node-controller map[]
ksa.kube-system.persistent-volume-binder map[]
ksa.kube-system.pod-garbage-collector map[]
ksa.kube-system.pv-protection-controller map[]
ksa.kube-system.pvc-protection-controller map[]
ksa.kube-system.replicaset-controller map[]
ksa.kube-system.replication-controller map[]
ksa.kube-system.resourcequota-controller map[]
ksa.kube-system.service-account-controller map[]
ksa.kube-system.service-controller map[]
ksa.kube-system.statefulset-controller map[]
ksa.kube-system.token-cleaner map[]
ksa.kube-system.ttl-controller map[]

5. Set an alias

# alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- "
# echo 'alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- "' >> .bashrc
# tail -n 1 .bashrc
alias calicoctl="kubectl exec -i -n kube-system calicoctl /calicoctl -- "

6. Run commands via the alias

# calicoctl get nodes
NAME
k8s-master
k8s-node1
k8s-node2
k8s-node3
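With the alias in place, other read-only checks work the same way; for example (assuming calicoctl v3.8 with the Kubernetes datastore):

# calicoctl get ippool -o wide   # should show a default pool covering 192.168.0.0/16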

7. Test CoreDNS

# kubectl run curl --image=radial/busyboxplus:curl -it 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl:/ ]$

Installing the Dashboard

1. Download the kubernetes-dashboard.yaml file

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

2. Work around the blocked image registry

# vim kubernetes-dashboard.yaml
# Change the image name
......
      containers:
      - args:
        - --auto-generate-certificates
        image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 # switch to the Aliyun mirror
        imagePullPolicy: IfNotPresent
......
# Change the Service to the NodePort type
......
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
......

3. Install the Dashboard

# kubectl apply -f kubernetes-dashboard.yaml
# kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-5dc4c54b55-c7sv2   1/1     Running   0          62s
# kubectl get pods,svc -n kube-system -l k8s-app=kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-5dc4c54b55-c7sv2   1/1     Running   0          81s

NAME                           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.107.132.165   <none>        443:30114/TCP   80s

4. Create a user with cluster-wide admin permissions to log in to the Dashboard

# vim rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Create it directly:

# kubectl apply -f rbac.yml
# kubectl get secret -n kube-system|grep admin
admin-user-token-wpqck   kubernetes.io/service-account-token   3      2m38s

5. Get the login token

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')|grep token:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXdwcWNrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwZmUzNTRiYS1mODkxLTQ0M2UtOWUxMy0wNjE3NWEzNWI5NzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Xqq3V8MmFp5ssCw4Hhzxy9YmdiVxMLsMSXnMaklOkVjHiX4B4kVGED2IDmwTh3wEi2VqYxjkRVm-b_JY2AmCfoaPMUtb4R9OZj414volWgAH1CX4VbFB0o1yr6aP3i8SKmQlQfIweEqYUWDGB5zv3ml1u0WjUGyVJeiXY0GxDAtiROelsKaK4gHGsUlIFZ8izW0xmuGDnT9hvCuztqbGsb78bFmQfAL2dKNB12rLOg01EwF0g88jvtfHiv80yX18ulP46ZK0MFx4YhTrLRjq09rAqCSnMGWPHtBSogKUQhpDkwCze61ByKlG6KM_DhFE7elQkBEj-jwvygDwvn7tMQ

6. Log in to the Dashboard with Firefox
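The original shows a screenshot at this point. Based on the NodePort from step 3 (443:30114/TCP), the Dashboard should be reachable at a URL like the one below; choose the Token option and paste the token from the previous step. Firefox is suggested presumably because it lets you accept the Dashboard's self-signed certificate:

https://10.122.17.200:30114/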

