Cluster Maintenance

"drain" node and move pods:

kubectl drain node-1

This "cordons" a node, to uncordon:

kubectl uncordon node-1

cordon marks a node unschedulable but leaves its existing pods running:

kubectl cordon node-1

Cluster Upgrade Introduction

Components should stay roughly in sync, within the supported version skew.

kube-apiserver is the reference component: the controller manager and the kube scheduler should be at the same minor version or at most one minor version lower. The kubelet and kube-proxy should be at most two minor versions lower than the API server and never newer than it.

kubectl may be one minor version higher or lower than the API server (+/-1). For example, with kube-apiserver at v1.28: controller manager and scheduler at v1.27-v1.28, kubelet and kube-proxy at v1.26-v1.28, kubectl at v1.27-v1.29.

k8s supports the last 3 minor versions.

Upgrade the master (control plane) first; pods on the workers stay up in the meantime.

Next we do the workers, either all at once (with downtime) or one node at a time.

Alternatively, create new nodes with the higher version, move workloads over, and remove the old nodes.

We need to upgrade kubeadm first with apt.

Then upgrade kubelet with apt.

Upgrade the master (control plane):
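(A sketch: 1.28.1-00 stands in for the target package version, v1.28.1 for the target release, and "controlplane" for the node name; Debian/Ubuntu apt packages assumed.)

kubectl drain controlplane --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.28.1-00
kubeadm upgrade plan
kubeadm upgrade apply v1.28.1
apt-get install -y kubelet=1.28.1-00 kubectl=1.28.1-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon controlplane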

Upgrade the workers:
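(Same placeholders as above; run kubectl drain/uncordon from a machine with cluster access, the rest on the worker itself.)

kubectl drain node-1 --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.28.1-00
kubeadm upgrade node
apt-get install -y kubelet=1.28.1-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon node-1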

Reference: "Upgrading kubeadm clusters" in the official Kubernetes documentation.

Backup and Restore

Save the YAML for all cluster resources via:
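(For example; the output file name is arbitrary, and note that "kubectl get all" only covers the common workload and service resource kinds.)

kubectl get all --all-namespaces -o yaml > all-resources.yaml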

Back up etcd via:
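(For example, with etcdctl v3; /opt/snapshot.db is a placeholder path and the TLS flags use the default kubeadm certificate locations.)

ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key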

To restore:
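(A sketch; the data directory is just an example, any new empty directory works.)

ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot.db --data-dir=/var/lib/etcd-from-backup

Then point etcd at the new data directory (see the etcd section below).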

Operating etcd clusters for Kubernetes

Usually etcd runs as a static pod, so to change its configuration we edit its manifest (typically /etc/kubernetes/manifests/etcd.yaml).

Look at the etcd pod:
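(Assuming the control-plane node is named "controlplane", so the static pod is etcd-controlplane.)

kubectl describe pod etcd-controlplane -n kube-system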

Find the advertise IP, trusted-ca-file, cert-file, and key-file in the pod spec, then test connectivity via:
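(For example; the endpoint and certificate paths below are the kubeadm defaults and should match what the pod spec shows.)

ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key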

Snapshot to /opt/snapshot-pre-boot.db:
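(Same endpoint and certificate flags as above.)

ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot-pre-boot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key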

Restore to /etcd-backup:
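(Restore reads the snapshot file directly from disk, so no TLS flags are needed here.)

ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot-pre-boot.db --data-dir=/etcd-backup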

We will edit the static pod manifest and point the etcd-data hostPath at the new data directory.
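(Roughly, in /etc/kubernetes/manifests/etcd.yaml; the volume name etcd-data and the DirectoryOrCreate type are the kubeadm defaults.)

  volumes:
  - hostPath:
      path: /etcd-backup
      type: DirectoryOrCreate
    name: etcd-data

The kubelet picks up the manifest change and recreates the etcd pod.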

Multi-Cluster

List all contexts:
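kubectl config get-contexts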

Switch to another context:
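(<context-name> is a placeholder for one of the names shown by get-contexts.)

kubectl config use-context <context-name>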
