K8S Master Clustering Installation and Configuration

Ops/Kubernetes

by 크리두 2020. 5. 28. 10:20

  1. Environment

    kubeadm : v1.12.3
    kubelet : v1.12.3
    docker : 17.03.2-ce
    Ubuntu 16.04

  2. IP addresses used

    The IPs below are examples; adjust them to your environment.
    The VIP exists so the masters can run in an Active / Standby configuration.

192.168.10.25   master1   master server
192.168.10.26   master2   master server
192.168.10.27   master3   master server
192.168.10.28   VIP
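The kubeadm configs later in this post refer to the masters by the short names m1/m2/m3 (e.g. in `initial-cluster`). A minimal /etc/hosts sketch keeps name resolution consistent across nodes; the alias `k8s-vip` is my own choice, not from the original:

```shell
# Run on every master; m1/m2/m3 match the etcd initial-cluster names below
cat <<EOF >> /etc/hosts
192.168.10.25 master1 m1
192.168.10.26 master2 m2
192.168.10.27 master3 m3
192.168.10.28 k8s-vip
EOF
```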

 

 

  • Installation
    • k8s cluster install (run on master #1, #2, #3)
sudo apt-get update   && sudo apt-get install -qy docker.io

sudo apt-get update   && sudo apt-get install -y apt-transport-https   && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main"   | sudo tee -a /etc/apt/sources.list.d/kubernetes.list   && sudo apt-get update

sudo apt-get install -y kubelet=1.12.3-00 kubeadm=1.12.3-00 kubernetes-cni=0.6.0-00 keepalived

- sysctl settings

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

swapoff -a

# Comment out the swap entry in /etc/fstab so swap stays off after reboot

systemctl enable docker

systemctl start docker
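The bridge-nf-call-* sysctl keys above only exist while the br_netfilter kernel module is loaded. A small sketch to load it and persist it across reboots (the file name k8s.conf is my choice):

```shell
# bridge-nf-call-* keys appear only after br_netfilter is loaded;
# "|| true" ignores the error on kernels that build the module in
modprobe br_netfilter 2>/dev/null || true
mkdir -p /etc/modules-load.d
echo br_netfilter > /etc/modules-load.d/k8s.conf   # persist across reboots
```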

 

  • HAProxy configuration (m1, m2, m3)

    Create the directory, then save the YAML below as haproxy.yaml
mkdir -p /etc/haproxy

kind: Pod
apiVersion: v1
metadata:                   
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    component: haproxy
    tier: control-plane
  name: kube-haproxy
  namespace: kube-system
spec:                    
  hostNetwork: true
  priorityClassName: system-cluster-critical
  containers:
  - name: kube-haproxy
    image: docker.io/haproxy:1.7-alpine
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: haproxy-cfg
      readOnly: true
      mountPath: /usr/local/etc/haproxy/haproxy.cfg
  volumes:                    
  - name: haproxy-cfg
    hostPath:
      path: /etc/haproxy/haproxy.cfg
      type: FileOrCreate
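The pod above mounts /etc/haproxy/haproxy.cfg, but the post never shows that file. A minimal sketch, assuming HAProxy listens on the 8443 port used by controlPlaneEndpoint below and balances across the three apiservers on 6443 (the timeouts and balance algorithm are my assumptions):

```shell
# Minimal haproxy.cfg sketch: listen on 8443, forward to the three apiservers
mkdir -p /etc/haproxy
cat <<EOF > /etc/haproxy/haproxy.cfg
global
    daemon
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s
frontend k8s-api
    bind 0.0.0.0:8443
    default_backend k8s-masters
backend k8s-masters
    balance roundrobin
    server m1 192.168.10.25:6443 check
    server m2 192.168.10.26:6443 check
    server m3 192.168.10.27:6443 check
EOF
```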

  • keepalived configuration


cat <<EOF > /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    interface enp0s3
    state MASTER
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 1

    unicast_peer {
        192.168.10.25
        192.168.10.26
        192.168.10.27
    }
    track_interface {
        enp0s3
    }

    virtual_ipaddress {
        192.168.10.28
    }
}

virtual_server 192.168.10.28 8080 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP

    real_server 192.168.10.25 6443 {
        weight 1
        TCP_CHECK {
            connect_port 6443
            connect_timeout 3
        }
    }
    real_server 192.168.10.26 6443 {
        weight 1
        TCP_CHECK {
            connect_port 6443
            connect_timeout 3
        }
    }
    real_server 192.168.10.27 6443 {
        weight 1
        TCP_CHECK {
            connect_port 6443
            connect_timeout 3
        }
    }
}
EOF

systemctl start keepalived
systemctl enable keepalived

 

 

  • master 1

    Save the configuration below as kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
apiServerCertSANs:
- "192.168.10.28"
api:
  controlPlaneEndpoint: "192.168.10.28:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.10.25:2379"
      advertise-client-urls: "https://192.168.10.25:2379"
      listen-peer-urls: "https://192.168.10.25:2380"
      initial-advertise-peer-urls: "https://192.168.10.25:2380"
      initial-cluster: "m1=https://192.168.10.25:2380"
    serverCertSANs:
      - m1
      - 192.168.10.25
    peerCertSANs:
      - m1
      - 192.168.10.25
networking:
  podSubnet: "10.244.0.0/16"

kubeadm init --config kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Because the other masters are authenticated and enrolled via the CA key, the CA key (and related certificates) must be copied to each master.

If scp is cumbersome, managing this with ansible is convenient.

 

(On m2 and m3, first create the directory: mkdir -p /etc/kubernetes/pki/etcd)

# The example below copies to m3 (192.168.10.27); repeat for m2 (192.168.10.26)
scp /etc/kubernetes/pki/ca.crt 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt 192.168.10.27:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt 192.168.10.27:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key 192.168.10.27:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf 192.168.10.27:/etc/kubernetes/
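The scp list above only targets one host. A small helper sketch (the function name is my own) that prints the same copies for every standby master; drop the echo, or pipe the output to sh, to actually run them:

```shell
# Dry-run sketch: print the scp commands for each standby master
gen_pki_copy_cmds() {
  for host in "$@"; do
    for f in pki/ca.crt pki/ca.key pki/sa.pub pki/sa.key \
             pki/front-proxy-ca.crt pki/front-proxy-ca.key \
             pki/etcd/ca.crt pki/etcd/ca.key admin.conf; do
      echo "scp /etc/kubernetes/$f ${host}:/etc/kubernetes/$(dirname "$f")/"
    done
  done
}

gen_pki_copy_cmds 192.168.10.26 192.168.10.27
```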

 

  • master2

    Save the configuration below as /root/kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
apiServerCertSANs:
- "192.168.10.28"
api:
  controlPlaneEndpoint: "192.168.10.28:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.10.26:2379"
      advertise-client-urls: "https://192.168.10.26:2379"
      listen-peer-urls: "https://192.168.10.26:2380"
      initial-advertise-peer-urls: "https://192.168.10.26:2380"
      initial-cluster: "m1=https://192.168.10.25:2380,m2=https://192.168.10.26:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - m2
      - 192.168.10.26
    peerCertSANs:
      - m2
      - 192.168.10.26
networking:
  podSubnet: "10.244.0.0/16"

kubeadm alpha phase certs all --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml
systemctl restart kubelet

export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl exec -n kube-system etcd-m1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.10.25:2379 member add m2 https://192.168.10.26:2380

kubeadm alpha phase etcd local --config kubeadm-config.yaml

kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml
kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

  • master3

    Save the configuration below as /root/kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
apiServerCertSANs:
- "192.168.10.28"
api:
  controlPlaneEndpoint: "192.168.10.28:8443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://192.168.10.27:2379"
      advertise-client-urls: "https://192.168.10.27:2379"
      listen-peer-urls: "https://192.168.10.27:2380"
      initial-advertise-peer-urls: "https://192.168.10.27:2380"
      initial-cluster: "m1=https://192.168.10.25:2380,m2=https://192.168.10.26:2380,m3=https://192.168.10.27:2380"
      initial-cluster-state: existing
    serverCertSANs:
      - m3
      - 192.168.10.27
    peerCertSANs:
      - m3
      - 192.168.10.27
networking:
  podSubnet: "10.244.0.0/16"

 

kubeadm alpha phase certs all --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml
kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml
kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml
kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml
systemctl restart kubelet

export KUBECONFIG=/etc/kubernetes/admin.conf

# As on master2, add this node to the etcd cluster before starting its local etcd
kubectl exec -n kube-system etcd-m1 -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://192.168.10.25:2379 member add m3 https://192.168.10.27:2380

kubeadm alpha phase etcd local --config kubeadm-config.yaml

kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml
kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml
kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

  • Apply flannel

    flannel provides the pod overlay network (its default subnet matches the podSubnet 10.244.0.0/16 above).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

 

  • Reference

    Check the etcd member list:

etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key -C https://192.168.10.25:2379,https://192.168.10.26:2379,https://192.168.10.27:2379 member list

 

< Troubleshooting >

 

  • If keepalived throws IPVS-related errors:
modprobe -r ip_vs
modprobe ip_vs
systemctl restart keepalived

 

  • The host name must match the name in kubeadm.conf.

  • Errors when using VirtualBox + vagrant:

    1. If the docker version check fails:

    export VERSION=18.03 && curl -sSL get.docker.com | sh

  • https://medium.com/@yunhochung/%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4-k8s-%EC%84%A4%EC%B9%98-%EB%B0%8F-%ED%81%B4%EB%9F%AC%EC%8A%A4%ED%84%B0-%EA%B5%AC%EC%84%B1%ED%95%98%EA%B8%B0-dc64e2fb44ae



    This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    Change the Docker cgroup driver from cgroupfs to systemd:

# Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker

  • [preflight] running pre-flight checks
    [WARNING Hostname]: hostname "k8s-test1" could not be reached
    [WARNING Hostname]: hostname "k8s-test1" lookup k8s-test1 on 8.8.8.8:53: no such host

     

Edit /etc/hosts:

    127.0.0.1 localhost
    127.0.1.1 k8s-test1

  • Installing a specific Docker version on Ubuntu
    https://pyjaru.github.io/20180808_01/

  • [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

    rm -rf /var/lib/etcd

  • kubeadm init --config kubeadm-config.yaml

    Files generated by a previous init must be reset before init can run again.
    Reset with the command below:

    kubeadm reset

  • When a master token is needed to join a node:

    kubeadm token list


  • Check the CA cert hash


    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
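The token (from `kubeadm token list`) and the hash above feed the worker join command; a sketch with placeholder values, using the VIP endpoint from controlPlaneEndpoint above:

```
kubeadm join 192.168.10.28:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```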


 

 
