#StackBounty: #kubernetes #kubeadm Changing Kubernetes cluster IP address

Bounty: 50

I have a single-node Kubernetes cluster whose IP address I need to change. The current IP appears in many configuration files, which is not a big problem. The bigger problem is that after I changed the address, I got an error saying the certificate is valid only for the old IP, not the new one.

How should I perform the change? I assume I can replace the IPs in the config files with a DNS-resolvable hostname, so the configs will be valid for both IPs. Is it possible to regenerate the certificate for a hostname so that it works regardless of the IP?
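For context, kubeadm can bake extra Subject Alternative Names into the API server certificate via the `apiServer.certSANs` field of its ClusterConfiguration, and a DNS name listed there stays valid across IP changes. A minimal sketch, assuming the hostname is a placeholder and reusing the IPs from the command below:

```yaml
# kubeadm ClusterConfiguration fragment (apiVersion v1beta2 matches kubeadm 1.16)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "k8s-master.example.com"   # placeholder DNS name that survives IP changes
  - "10.161.233.80"            # old IP, optional
  - "114.215.201.87"           # new IP
```

Pass this file to the cert-regeneration phase with `--config` so the new certificate covers both the hostname and the addresses.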

Edit:

I’ve found a command to regenerate the certificate:

kubeadm alpha phase certs selfsign --apiserver-advertise-address=0.0.0.0 --cert-altnames=10.161.233.80 --cert-altnames=114.215.201.87

Unfortunately, it doesn’t work:

Error: unknown flag: --apiserver-advertise-address
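That flag error is expected: in recent kubeadm releases the `alpha phase` subcommands were promoted into `kubeadm init phase`, and the extra-SANs flag is now `--apiserver-cert-extra-sans`. A hedged equivalent of the command above for kubeadm 1.16 (IPs taken from the attempt, flags as documented for the `init phase certs apiserver` subcommand):

```shell
# Regenerate only the API server serving certificate, adding both IPs as SANs.
# --apiserver-cert-extra-sans accepts a comma-separated list of IPs/hostnames.
sudo kubeadm init phase certs apiserver \
  --apiserver-advertise-address=0.0.0.0 \
  --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
```

Delete (or move aside) the existing `/etc/kubernetes/pki/apiserver.crt` and `.key` first, otherwise kubeadm will keep the existing pair.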

I’ve also tried to run these commands:

sudo mv /etc/kubernetes/pki /etc/kubernetes/pki_bak
sudo kubeadm init phase certs all

It recreated the certs with the correct IP address, but /etc/kubernetes/admin.conf still has the wrong one. Any kubectl --kubeconfig=/etc/kubernetes/admin.conf call without --insecure-skip-tls-verify fails. The same goes for kubeadm config view:
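The kubeconfig files are generated by a separate kubeadm phase, so after regenerating the PKI they have to be regenerated too. A sketch of that step, assuming kubeadm 1.16's `init phase kubeconfig` subcommand and with `<new-ip>` as a placeholder for the new address:

```shell
# Move the stale kubeconfig files aside; kubeadm reuses existing ones,
# and these still embed the old address and old CA.
sudo mkdir -p /etc/kubernetes/conf_bak
sudo mv /etc/kubernetes/*.conf /etc/kubernetes/conf_bak/

# Regenerate admin.conf, kubelet.conf, controller-manager.conf and
# scheduler.conf against the new certs and address.
sudo kubeadm init phase kubeconfig all \
  --apiserver-advertise-address=<new-ip>

# Refresh the copy that kubectl uses by default.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

The control-plane components also need restarting (or the kubelet bouncing) to pick up the new certs and kubeconfigs.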

sudo kubeadm config view --v=5
I0330 04:32:11.754422   21907 config.go:296] [config] retrieving ClientSet from file
I0330 04:32:11.769423   21907 config.go:400] [config] getting the cluster configuration
Get https://10.202.91.41:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: x509: certificate is valid for 10.96.0.1, 10.202.91.129, not 10.202.91.41
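When debugging a mismatch like this, it helps to check which SANs the API server certificate actually contains rather than inferring them from the error. A quick check with openssl, assuming the default kubeadm certificate path:

```shell
# Print the Subject Alternative Name entries of the API server cert;
# the new IP must appear here for TLS verification to succeed.
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'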

My cluster configuration is quite distributed, so I’d like to avoid recreating the cluster from scratch; that would be quite hard to do.

I’m currently on version 1.16.

