install k8s

Master node steps

Initial environment: Red Hat Enterprise Linux 7.5, minimal install

1. Preparation

1.1 local yum repo

cat <<EOF >/etc/yum.repos.d/rhel.repo
[RHEL75]
name=RHEL 7.5
baseurl=file:///mnt
gpgcheck=0 
enabled=1
EOF

mount /dev/sr0 /mnt

yum repolist
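
Optionally, to keep the DVD mounted across reboots, an fstab entry along these lines can be added (a sketch; adjust the device name if the DVD is not /dev/sr0):

# echo "/dev/sr0 /mnt iso9660 ro 0 0" >> /etc/fstab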

1.2 Install required tools

# yum install perl wget

1.3 Disable SELinux and the firewall

# systemctl disable firewalld

# systemctl stop firewalld

# perl -p -i -e "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
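
To also put SELinux into permissive mode immediately, without waiting for a reboot:

# setenforce 0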

1.4 Make sure the MAC address and product_uuid are unique on every node.

The MAC addresses of the network interfaces can be listed with:

# ifconfig -a

The product_uuid can be read with:

# sudo cat /sys/class/dmi/id/product_uuid

Generally, hardware devices have unique addresses, but some virtual machines may share them. Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique across the cluster, the installation may fail.

1.5 Disable swap

# swapoff -a

To make this permanent, comment out the swap entry in /etc/fstab.
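
For example, a one-line edit that comments out the swap entry (a sketch, assuming no other lines in fstab contain the word "swap"):

# sed -i '/swap/ s/^/#/' /etc/fstab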

2. Install Docker

2.1 Dependencies

Search the web for and download the container-selinux package:

# wget ftp://bo.mirror.garr.it/1/slc/centos/7.1.1503/extras/x86_64/Packages/container-selinux-2.9-4.el7.noarch.rpm

# yum install container-selinux-2.9-4.el7.noarch.rpm (this will also install the other dependencies from the Linux DVD)

2.2 Install via yum

# yum install -y yum-utils device-mapper-persistent-data lvm2

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# yum install docker-ce

2.3 Install from rpm packages

Download from: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

# yum install <rpm file>

Three packages are needed:

  • docker-ce-cli.x86_64 1:18.09.2-3.el7
  • containerd.io.x86_64 0:1.2.2-3.3.el7
  • docker-ce.x86_64 3:18.09.2-3.el7

The remaining dependencies are on the Linux DVD.

2.4 Start the service

# systemctl enable docker

# systemctl start docker

Verify:

# docker run hello-world

// this pulls the image and runs it

2.5 Registry mirror

On systems using systemd, write the following into /etc/docker/daemon.json (create the file if it does not exist).

Note: if you use Alibaba Cloud images frequently, their registry mirror is recommended; the example here uses the official Docker China mirror.

# cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
EOF

# systemctl daemon-reload

# systemctl restart docker

3. Install kubectl, kubeadm, and kubelet

3.1 Install via yum

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# systemctl enable kubelet (kubelet cannot be started yet; it will fail because its configuration file has not been generated by kubeadm init)

3.2 Adjust kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
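
If these two keys are reported as unknown, the br_netfilter kernel module may not be loaded yet; a quick check and load (a sketch):

# lsmod | grep br_netfilter || modprobe br_netfilter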

3.3 Confirm the cgroup driver

Make sure the value in the EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env file matches what

# docker info | grep Cgroup

returns, i.e. cgroupfs.

As of 1.13.3, the default already matches Docker's value, so unlike earlier versions, no change is needed.
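
A quick way to compare the two values side by side (a sketch; kubeadm-flags.env is only generated after kubeadm init has run, and the cgroup-driver flag may be absent if kubeadm chose not to write it):

# docker info 2>/dev/null | grep -i cgroup

# grep cgroup-driver /var/lib/kubelet/kubeadm-flags.env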

4. Prepare the images

List the Docker images that will be needed:

# kubeadm config images list

Pull them locally, then retag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3

docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3

docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3

docker pull mirrorgooglecontainers/kube-proxy:v1.13.3

docker pull mirrorgooglecontainers/pause:3.1

docker pull mirrorgooglecontainers/etcd:3.2.24

docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3

docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3

docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3

docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3

docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
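
The pull-and-tag sequence above can also be scripted. A minimal sketch that loops over the same image names and versions reported by kubeadm config images list (coredns is handled separately because it lives in its own repository):

for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 kube-scheduler:v1.13.3 kube-proxy:v1.13.3 pause:3.1 etcd:3.2.24; do
    docker pull mirrorgooglecontainers/$img
    docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img
done
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6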

5. Initialize the k8s cluster

# kubeadm init --pod-network-cidr=10.10.0.0/16

Following the hint printed at the end of the previous step, run:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
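
Alternatively, when working as root, kubectl can be pointed at the admin kubeconfig for the current session:

# export KUBECONFIG=/etc/kubernetes/admin.conf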

6. Install the Calico network plugin

# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Then start the kubelet service:

# systemctl start kubelet

If there are errors, the Calico images may not have finished downloading yet; check with docker images:

# docker images |grep calico

calico/node v3.3.4 74004ba60cc5 3 days ago 75.3MB

calico/cni v3.3.4 d97e9e8e263d 3 days ago 75.4MB

Also make sure the corresponding files have been generated under /etc/cni/net.d.
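
A quick check, for example:

# ls /etc/cni/net.d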

7. Check the cluster status

# kubectl cluster-info

Kubernetes master is running at https://192.168.0.41:6443

KubeDNS is running at https://192.168.0.41:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system calico-node-nsd5f 2/2 Running 0 13m

kube-system coredns-86c58d9df4-c7mmm 1/1 Running 0 22m

kube-system coredns-86c58d9df4-cnmbg 1/1 Running 0 22m

kube-system etcd-rh75vm41 1/1 Running 0 21m

kube-system kube-apiserver-rh75vm41 1/1 Running 0 21m

kube-system kube-controller-manager-rh75vm41 1/1 Running 0 21m

kube-system kube-proxy-xq9dk 1/1 Running 0 22m

kube-system kube-scheduler-rh75vm41 1/1 Running 0 21m

Worker node steps

Steps 1 through 4 are the same as on the master node.

5. Join the worker node to the cluster

On the worker node, run:

# kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

For example:

# kubeadm join 192.168.0.41:6443 --token dn85nz.uj7zf8a4qzsz5mev --discovery-token-ca-cert-hash sha256:adfabaf4fe85218d443d46da0577e3769028d8a9e4eb6fd4972555f2ede7228f

After joining successfully, the worker node will automatically pull the network (Calico) images. Until those images finish downloading, the node status will be NotReady; once they are downloaded and kubelet is healthy, the status changes to Ready:

# kubectl get nodes

NAME STATUS ROLES AGE VERSION

rh75vm41 Ready master 28h v1.13.3

rh75vm42 Ready <none> 6m v1.13.3


# kubectl get pods -n kube-system -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

calico-node-bxvz7 2/2 Running 0 31m 192.168.0.42 rh75vm42 <none> <none>

calico-node-nsd5f 2/2 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

coredns-86c58d9df4-c7mmm 1/1 Running 0 28h 10.10.0.2 rh75vm41 <none> <none>

coredns-86c58d9df4-cnmbg 1/1 Running 0 28h 10.10.0.3 rh75vm41 <none> <none>

etcd-rh75vm41 1/1 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

kube-apiserver-rh75vm41 1/1 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

kube-controller-manager-rh75vm41 1/1 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

kube-proxy-bw2r2 1/1 Running 0 31m 192.168.0.42 rh75vm42 <none> <none>

kube-proxy-xq9dk 1/1 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

kube-scheduler-rh75vm41 1/1 Running 0 28h 192.168.0.41 rh75vm41 <none> <none>

Miscellaneous

1. Tokens

List tokens:

# kubeadm token list

Regenerate a token

A token is valid for 24 hours by default; once it has expired, generate a new one with:

# kubeadm token create
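
kubeadm can also print the complete join command, including the discovery-token-ca-cert-hash, in one step, which is convenient when adding new worker nodes:

# kubeadm token create --print-join-command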

2. Deploy the k8s dashboard

Prepare the image and deploy

# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Check the file for the Docker image it needs:
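
For example, the image references can be pulled straight out of the manifest:

# grep 'image:' kubernetes-dashboard.yaml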

In version 1.10.1, the required image is k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1.

The image can be obtained like this:

# docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

# docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Then deploy the dashboard:

# kubectl apply -f kubernetes-dashboard.yaml

Check the status:

# kubectl get deployments --all-namespaces

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE

default curl 1/1 1 1 97m

kube-system calico-typha 0/0 0 0 30h

kube-system coredns 2/2 2 2 30h

kube-system kubernetes-dashboard 1/1 1 1 87m

Start the proxy and open the login page

# kubectl proxy

Then, on the master node itself, open a browser and access the dashboard at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The login page opens, but you cannot log in yet.

Create a user and role

Create the file admin-user.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
# kubectl apply -f admin-user.yaml

Create the file admin-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# kubectl apply -f admin-role.yaml

Get the token

# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Name: admin-user-token-vsgwz

Namespace: kube-system

Labels: <none>

Annotations: kubernetes.io/service-account.name: admin-user

kubernetes.io/service-account.uid: 392a2ee9-3aa5-11e9-9732-080027fe91e0

Type: kubernetes.io/service-account-token

Data

====

ca.crt: 1025 bytes

namespace: 11 bytes

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXZzZ3d6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzOTJhMmVlOS0zYWE1LTExZTktOTczMi0wODAwMjdmZTkxZTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Xfu0COCQZU4ze3DJvGJM6vs10pnTFOj0L5XhMGO_iktvuMsVDv5iQEoxj_cUfen1kh_9gIMNw7zgCzeQnBb-BA21F7_1kmC-MoyQDUfgCsG6hOJBG5qVP4402KqbBW8c89xp3i05lx8UPyxYkk-L4ElDBw7dmHlruFhbw15hnv4_6DeaGeF5EfBZfwdQ_zpQe_6d1I1k83Qs1P63HS52IxwhcURPLV1_oF-YlgGjONPhORRhQr6qs-q4jo_eXoRpN-Il56o7nT-2Nt5p6fvxkS0Y74CDxOr_36u3jEE8Mb_sqYvOp8tCkEzZtkgTtA2Zb4ml4nityEm0gqVhkCqXiw

Log in using the token method: paste the token into the login page.

Click "Sign in" to log in.
