A Hands-On Record of Deploying a Kubernetes 1.13.1 Cluster with kubeadm

    hansonwang99 · 2018-12-27 08:10:22 +08:00 · 2926 views



    Overview

    There are several ways to set up a Kubernetes cluster. For instance, my earlier article [《利用 K8S 技术栈打造个人私有云(连载之:K8S 集群搭建)》]( https://www.codesheep.cn/2018/02/01/利用 K8S 技术栈打造个人私有云(连载之:K8S 集群搭建)/) used the binary installation method. That approach helps you understand how the pieces of a k8s cluster fit together, but it is tedious. kubeadm, the tool Kubernetes officially provides for quickly bootstrapping clusters, has matured considerably; deploying a cluster with it is easy to pick up and far less work, so this article walks through the process in detail.


    Node Planning

    This article deploys a three-node cluster with one master and two workers, planned as follows:

    Hostname   | IP            | Role
    -----------|---------------|----------------
    k8s-master | 192.168.39.79 | k8s master node
    k8s-node-1 | 192.168.39.77 | k8s worker node
    k8s-node-2 | 192.168.39.78 | k8s worker node

    The software versions on each node:

    • OS: CentOS-7.4-64Bit
    • Docker: 1.13.1
    • Kubernetes: 1.13.1

    All nodes need the following components installed:

    • Docker: needs no introduction
    • kubelet: runs on every Node; responsible for starting containers and Pods
    • kubeadm: bootstraps and initializes the cluster
    • kubectl: the k8s command-line tool, used to deploy and manage applications and to CRUD resources of every kind

    Preparation

    • Disable the firewall on all nodes
    systemctl disable firewalld.service
    systemctl stop firewalld.service
    
    • Disable SELinux
    setenforce 0
    
    vi /etc/selinux/config
    SELINUX=disabled
    
    • Turn off swap on all nodes
    swapoff -a
    
    • Set the hostname of each node
    hostnamectl --static set-hostname  k8s-master
    hostnamectl --static set-hostname  k8s-node-1
    hostnamectl --static set-hostname  k8s-node-2
    
    • Add the hostname/IP mappings to hosts resolution on all nodes (a quick verification sketch follows this list)

    Edit the /etc/hosts file and append:

    192.168.39.79 k8s-master
    192.168.39.77 k8s-node-1
    192.168.39.78 k8s-node-2
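
    None of these settings come from kubeadm itself, so it is easy to miss one. A minimal sanity check to run on each node might look like this (my own sketch, standard commands only):

    systemctl is-active firewalld   # expect "inactive"
    getenforce                      # expect "Permissive" now, "Disabled" after a reboot
    free -m | grep -i swap          # the Swap line should be all zeros
    hostname                        # should print the name set above
    grep 'k8s-' /etc/hosts          # all three mappings should be present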
    

    Component Installation

    0x01. Install Docker (all nodes)

    No need to belabor this one!

    0x02. Install kubelet, kubeadm, and kubectl (all nodes)

    • First prepare the repo
    cat > /etc/yum.repos.d/kubernetes.repo <<EOF
    [kubernetes]
    name=Kubernetes Repo
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    EOF
    
    • Then run the following to install
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    
    yum install -y kubelet kubeadm kubectl
    systemctl enable kubelet && systemctl start kubelet
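
    One caveat: the repo installs whatever is newest, which may be ahead of the 1.13.1 this article targets. Pinning explicit versions is safer (a sketch, assuming the Aliyun mirror still carries the 1.13.1 packages):

    # Pin the exact versions this article targets
    yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
    
    # Verify what actually got installed
    kubeadm version -o short
    kubectl version --client --short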
    



    Master Node Configuration

    0x01. Initialize the k8s cluster

    To cope with unreliable network access from within mainland China, we have to pull the required images from mirrors ahead of time and re-tag them:

    docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
    docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
    docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
    docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd:3.2.24
    docker pull coredns/coredns:1.2.6
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
    
    docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
    docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
    docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
    docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
    docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    
    docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1           
    docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1  
    docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1           
    docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1               
    docker rmi mirrorgooglecontainers/pause:3.1                        
    docker rmi mirrorgooglecontainers/etcd:3.2.24                      
    docker rmi coredns/coredns:1.2.6
    docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
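
    Since the pull/tag/rmi pattern is identical for every k8s.gcr.io image, a small loop can stand in for most of the typing above (my own sketch; coredns and flannel come from other registries and keep their separate commands):

    for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
               kube-scheduler:v1.13.1 kube-proxy:v1.13.1 \
               pause:3.1 etcd:3.2.24; do
        docker pull "mirrorgooglecontainers/${img}"                       # pull from the mirror
        docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"   # re-tag for kubeadm
        docker rmi  "mirrorgooglecontainers/${img}"                       # drop the mirror tag
    done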
    


    Then run the following on the Master node to initialize the k8s cluster:

    kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
    
    • --kubernetes-version: specifies the k8s version
    • --apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface with the default gateway
    • --pod-network-cidr: specifies the Pod network range. The right value depends on the network add-on you choose; this article uses the classic flannel solution.
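
    The same options can also be supplied through a config file, which is what the log below actually shows (kubeadm init --config kubeadm-config.yaml); the warning at the top of that log comes from an invisible non-breaking space that had crept in before podSubnet. A minimal sketch of an equivalent v1beta1 config file (my reconstruction, not the author's exact file):

    cat > kubeadm-config.yaml <<EOF
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.39.79
    ---
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.1
    networking:
      podSubnet: "10.244.0.0/16"
    EOF
    # Beware of invisible characters (e.g. non-breaking spaces) when editing
    # this file; they cause "unknown field" warnings like the one below.
    kubeadm init --config kubeadm-config.yaml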

    The console then prints the detailed cluster initialization log:

    [root@localhost ~]# kubeadm init --config kubeadm-config.yaml
    W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0podSubnet"
    [init] Using Kubernetes version: v1.13.1
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 24.005638 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123
    
    [root@localhost ~]#
    

    0x02. Configure kubectl

    On the Master, run the following as root to configure kubectl:

    echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
    source /etc/profile 
    echo $KUBECONFIG
    

    0x03. Install a Pod network

    Installing a Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network solutions; here we once again choose the classic flannel.

    • First set the required kernel parameter (a note on making this persistent follows these steps):
    sysctl net.bridge.bridge-nf-call-iptables=1
    
    • Then run the following on the Master node:
    kubectl apply -f kube-flannel.yaml
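
    Note that a sysctl set this way does not survive a reboot. Persisting it through /etc/sysctl.d is one common approach (my own sketch, not part of the original steps):

    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system    # reload all sysctl configuration files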
    

    The kube-flannel.yaml file is available here

    Once the Pod network is installed, run the following to check whether the CoreDNS Pod is up and running; as soon as it is, you can continue with the remaining steps:

    kubectl get pods --all-namespaces -o wide
    

    Checking that all Pods started normally

    We can also see that the master node is now ready: kubectl get nodes

    Master node status


    Adding the Slave Nodes

    Run the following on each of the two Slave nodes to join them to the k8s cluster that is now up on the Master:

    kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
    

    If you have lost the token, retrieve it on the Master with:

    kubeadm token list
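
    If the discovery hash has been lost as well, kubeadm can simply print a complete, ready-to-run join command (this subcommand does exist in 1.13):

    kubeadm token create --print-join-command

    Alternatively, the CA cert hash can be recomputed on the Master from the CA certificate itself:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
      openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'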
    

    The kubeadm join command above produces output like this:

    [root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
    [preflight] Running pre-flight checks
    [discovery] Trying to connect to API Server "192.168.39.79:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
    [discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
    [discovery] Successfully established connection with API Server "192.168.39.79:6443"
    [join] Reading configuration from the cluster...
    [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the master to see this node join the cluster.
    

    Verification

    • Check node status
    kubectl get nodes
    

    Node status

    • Check the status of all Pods
    kubectl get pods --all-namespaces -o wide
    

    Status of all Pods

    Good, the cluster is now up and running. Next, let's look at how to tear it down properly.


    Tearing Down the Cluster

    First drain and delete each node:

    kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
    kubectl delete node <node name>
    

    Once the nodes have been removed, the cluster can be reset with:

    kubeadm reset
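
    kubeadm reset itself warns that it does not clean up CNI configuration or the iptables/IPVS rules that kube-proxy created. A cleanup sketch for each node (my own addition; destructive, so run it only on nodes you really are tearing down):

    rm -rf /etc/cni/net.d                      # remove CNI (flannel) config
    iptables -F && iptables -t nat -F && \
      iptables -t mangle -F && iptables -X     # flush leftover rules
    # ipvsadm --clear                          # only if kube-proxy ran in IPVS mode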
    

    Installing the Dashboard

    Just as you would pair elasticsearch with a visual management tool, it is a good idea to give the k8s cluster one as well, to make it easier to administer.

    So next we install kubernetes-dashboard v1.10.0 for visual cluster management.

    • First pull the image manually and re-tag it (all nodes):
    docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
    docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
    
    • Install the dashboard:
    kubectl create -f dashboard.yaml
    

    The dashboard.yaml file is available here

    • Check whether the dashboard Pod started normally; if it did, the installation succeeded:
     kubectl get pods --namespace=kube-system
    
    [root@k8s-master ~]# kubectl get pods --namespace=kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-86c58d9df4-4rds2                1/1     Running   0          81m
    coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
    etcd-k8s-master                         1/1     Running   0          80m
    kube-apiserver-k8s-master               1/1     Running   0          80m
    kube-controller-manager-k8s-master      1/1     Running   0          80m
    kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
    kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
    kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
    kube-proxy-crr7k                        1/1     Running   0          81m
    kube-proxy-gk5vf                        1/1     Running   0          78m
    kube-proxy-ktr27                        1/1     Running   0          77m
    kube-scheduler-k8s-master               1/1     Running   0          80m
    kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s
    
    • Look up the port the dashboard exposes externally:
    kubectl get service --namespace=kube-system
    
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
    kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
    
    • Generate the private key and certificate signing request:
    openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
    openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
    rm dashboard.pass.key
    openssl req -new -key dashboard.key -out dashboard.csr  [just press Enter at every prompt]
    
    • Generate the SSL certificate (a one-shot alternative is sketched below):
    openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
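
    If you don't need the intermediate CSR, the four commands above collapse into a single self-signed-certificate invocation (an equivalent sketch; the -subj value is a placeholder):

    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout dashboard.key -out dashboard.crt \
      -subj "/CN=kubernetes-dashboard"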
    
    • Then place the generated dashboard.key and dashboard.crt under /home/share/certs; that path is referenced in the dashboard-user-role.yaml file used in the next step.

    • Create the dashboard user
     kubectl create -f dashboard-user-role.yaml
    

    The dashboard-user-role.yaml file is available here

    • Fetch the login token
    kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
    
    [root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
    Name:         admin-token-9d4vl
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: admin
                  kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw
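
    The grep/awk pipeline above works, but jsonpath extracts the raw token a little more precisely (a sketch, assuming the ServiceAccount created by dashboard-user-role.yaml is named admin):

    kubectl -n kube-system get secret \
      $(kubectl -n kube-system get sa admin -o jsonpath='{.secrets[0].name}') \
      -o jsonpath='{.data.token}' | base64 -d; echo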
    

    With the token generated, we can now open a browser and enter the token to log in to the cluster management page:

    Cluster management login page

    Cluster overview


    Afterword

    My abilities are limited, so if there are mistakes or anything ill-considered here, please point them out so we can learn from each other!



    6 replies    2019-01-23 11:57:09 +08:00

    0312birdzhang · #1 · 2018-12-27 08:21:22 +08:00 via iPhone
    Set up a test cluster with kubeadm just a few days ago; these are exactly the steps.

    lestat · #2 · 2018-12-27 08:29:32 +08:00 via Android
    Bookmarking this for now.

    yuedingwangji · #3 · 2018-12-27 09:07:40 +08:00 via Android
    I put together a simple cluster before too. Is there a follow-up tutorial? What should I do with it next?

    br00k · #4 · 2018-12-27 09:09:02 +08:00
    I use rancher; the official docs are thorough and deployment is easy. 😂

    ghos · #5 · 2018-12-27 09:14:24 +08:00 via Android
    rancher+1

    yuedingwangji · #6 · 2019-01-23 11:57:09 +08:00
    Hey, which Docker version are you running, 18.09 or 18.06?