How to renew the certificates on your Kubernetes cluster using kubeadm

I was preparing for an interview and needed to refresh my Kubernetes knowledge. I switched my Kubernetes cluster back on to practice some scenarios and was greeted by the following error message.

petru@ukubemaster:~$ k get po
E1013 09:09:04.884064    2465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://172.16.10.106:6443/api?timeout=32s\": dial tcp 172.16.10.106:6443: connect: connection refused - error from a previous attempt: read tcp 172.16.10.106:44448->172.16.10.106:6443: read: connection reset by peer"
E1013 09:09:04.886435    2465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://172.16.10.106:6443/api?timeout=32s\": dial tcp 172.16.10.106:6443: connect: connection refused"
E1013 09:09:04.888453    2465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://172.16.10.106:6443/api?timeout=32s\": dial tcp 172.16.10.106:6443: connect: connection refused"
E1013 09:09:04.890803    2465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://172.16.10.106:6443/api?timeout=32s\": dial tcp 172.16.10.106:6443: connect: connection refused"
E1013 09:09:04.892893    2465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://172.16.10.106:6443/api?timeout=32s\": dial tcp 172.16.10.106:6443: connect: connection refused"
The connection to the server 172.16.10.106:6443 was refused - did you specify the right host or port?
petru@ukubemaster:~$ 
Kubernetes error message

The error message alone does not make the root cause obvious: kubectl simply cannot reach the API server on 172.16.10.106:6443 because the connection is refused.
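
Before digging into the container runtime, it is worth confirming that the kubelet is running and whether anything is listening on the API server port. A minimal sketch, assuming a systemd-based host with the ss utility installed:

# is the kubelet service up? (the control-plane static pods depend on it)
systemctl status kubelet
# is anything listening on the API server port 6443?
sudo ss -tlnp | grep 6443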

Check if the pods responsible for the Kubernetes control plane are up and running

I used kubeadm to install Kubernetes on the VMs. My first thought was to check if the pods responsible for the Kubernetes control plane are running. For this, I used the command sudo crictl pods.

According to the output, the most recent control-plane pods appear to be in the Ready state; the NotReady entries are just older instances of the same pods.

petru@ukubemaster:~$ sudo crictl pods
POD ID              CREATED             STATE               NAME                                               NAMESPACE           ATTEMPT             RUNTIME
f7d333a2688a0       About an hour ago   Ready               kube-apiserver-ukubemaster.degulian.com            kube-system         82                  (default)
7c1b201c7471c       About an hour ago   Ready               etcd-ukubemaster.degulian.com                      kube-system         86                  (default)
991f3ccd5928d       About an hour ago   Ready               kube-controller-manager-ukubemaster.degulian.com   kube-system         86                  (default)
956d05e968481       About an hour ago   Ready               kube-scheduler-ukubemaster.degulian.com            kube-system         86                  (default)
0b3c289cddff0       2 days ago          NotReady            kube-scheduler-ukubemaster.degulian.com            kube-system         85                  (default)
a03f2f42b8109       2 days ago          NotReady            kube-controller-manager-ukubemaster.degulian.com   kube-system         85                  (default)
38f110e1d83fb       2 days ago          NotReady            etcd-ukubemaster.degulian.com                      kube-system         85                  (default)
f4691dc37b5f1       4 weeks ago         NotReady            calico-kube-controllers-7f764f4f68-prnvv           kube-system         81                  (default)
1d539e445d3c1       4 weeks ago         NotReady            coredns-7c65d6cfc9-82rww                           kube-system         81                  (default)
576de3148592f       4 weeks ago         NotReady            coredns-7c65d6cfc9-g6wd8                           kube-system         81                  (default)
560dead2ec0c4       4 weeks ago         NotReady            calico-node-pmk8b                                  kube-system         80                  (default)
101f4eac7eab4       4 weeks ago         NotReady            kube-proxy-gccnn                                   kube-system         80                  (default)
6343e97e10767       4 weeks ago         NotReady            coredns-7c65d6cfc9-g6wd8                           kube-system         80                  (default)
50071c26f268c       4 weeks ago         NotReady            coredns-7c65d6cfc9-82rww                           kube-system         80                  (default)
1094219009713       4 weeks ago         NotReady            calico-kube-controllers-7f764f4f68-prnvv           kube-system         80                  (default)
3049b09fdabea       12 months ago       NotReady            kube-bench-master-rcdjf                            default             0                   (default)
petru@ukubemaster:~$ 
Output from sudo crictl pods command
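
The full list also contains plenty of stale NotReady sandboxes. If you only care about the control-plane pods that are currently up, crictl can filter by namespace and state; a small sketch using standard crictl flags:

# show only the kube-system pod sandboxes that are currently Ready
sudo crictl pods --namespace kube-system --state Ready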

Check the logs for the containers to find out more information

crictl does not expose logs at the pod level. However, you can use it to list the running containers with sudo crictl ps and then read the logs of the individual containers with sudo crictl logs.

petru@ukubemaster:~$ sudo crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
3857efcf45aa1       2e96e5913fc06       About an hour ago   Running             etcd                      368                 7c1b201c7471c       etcd-ukubemaster.degulian.com
3b709a06499cc       045733566833c       About an hour ago   Running             kube-controller-manager   88                  991f3ccd5928d       kube-controller-manager-ukubemaster.degulian.com
fb3f945f57bda       1766f54c897f0       About an hour ago   Running             kube-scheduler            88                  956d05e968481       kube-scheduler-ukubemaster.degulian.com
petru@ukubemaster:~$ 
Output of the command crictl ps
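
One thing stands out: kube-apiserver is not among the running containers, which lines up with the connection refused errors. crictl can also list containers that have already exited, so you can still read the API server's logs. A quick sketch, where <container-id> is a placeholder you copy from the output of the first command:

# list every kube-apiserver container, including exited ones
sudo crictl ps -a --name kube-apiserver
# read the logs of the most recent one
sudo crictl logs <container-id>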

So only three containers are in a running state: etcd, kube-controller-manager and kube-scheduler. Let’s check the logs of the etcd container with the command below.

petru@ukubemaster:~$ sudo crictl logs 3857efcf45aa1 
{"level":"warn","ts":"2025-10-13T08:08:34.472695Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2025-10-13T08:08:34.472927Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.16.10.106:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.16.10.106:2380","--initial-cluster=ukubemaster.degulian.com=https://172.16.10.106:2380","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.16.10.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.16.10.106:2380","--name=ukubemaster.degulian.com","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2025-10-13T08:08:34.673295Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"warn","ts":"2025-10-13T08:08:34.673490Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2025-10-13T08:08:34.673534Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.16.10.106:2380"]}
{"level":"info","ts":"2025-10-13T08:08:34.673666Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-10-13T08:08:34.799628Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.16.10.106:2379"]}
{"level":"info","ts":"2025-10-13T08:08:34.819070Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":true,"name":"ukubemaster.degulian.com","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.16.10.106:2380"],"listen-peer-urls":["https://172.16.10.106:2380"],"advertise-client-urls":["https://172.16.10.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.16.10.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2025-10-13T08:08:41.351983Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"6.527098587s"}
{"level":"info","ts":"2025-10-13T08:08:53.513216Z","caller":"etcdserver/server.go:511","msg":"recovered v2 store from snapshot","snapshot-index":34314062,"snapshot-size":"39 kB"}
{"level":"info","ts":"2025-10-13T08:08:53.513851Z","caller":"etcdserver/server.go:524","msg":"recovered v3 backend from snapshot","backend-size-bytes":143138816,"backend-size":"143 MB","backend-size-in-use-bytes":54341632,"backend-size-in-use":"54 MB"}
{"level":"info","ts":"2025-10-13T08:08:53.557122Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"96ec802e1e378685","local-member-id":"4800ade1f4197616","commit-index":34314651}
{"level":"info","ts":"2025-10-13T08:08:53.558627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 switched to configuration voters=(5188337956705367574)"}
{"level":"info","ts":"2025-10-13T08:08:53.558741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 became follower at term 84"}
{"level":"info","ts":"2025-10-13T08:08:53.558837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4800ade1f4197616 [peers: [4800ade1f4197616], term: 84, commit: 34314651, applied: 34314062, lastindex: 34314651, lastterm: 84]"}
{"level":"info","ts":"2025-10-13T08:08:53.559203Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-10-13T08:08:53.559273Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"96ec802e1e378685","local-member-id":"4800ade1f4197616","recovered-remote-peer-id":"4800ade1f4197616","recovered-remote-peer-urls":["https://172.16.10.106:2380"]}
{"level":"info","ts":"2025-10-13T08:08:53.559311Z","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"}
{"level":"warn","ts":"2025-10-13T08:08:53.561985Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2025-10-13T08:08:53.565804Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":24814545}
{"level":"info","ts":"2025-10-13T08:08:53.652301Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":24824132}
{"level":"info","ts":"2025-10-13T08:08:53.774481Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2025-10-13T08:08:53.779756Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4800ade1f4197616","timeout":"7s"}
{"level":"info","ts":"2025-10-13T08:08:53.799998Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4800ade1f4197616"}
{"level":"info","ts":"2025-10-13T08:08:53.800200Z","caller":"etcdserver/server.go:858","msg":"starting etcd server","local-member-id":"4800ade1f4197616","local-server-version":"3.5.15","cluster-id":"96ec802e1e378685","cluster-version":"3.5"}
{"level":"info","ts":"2025-10-13T08:08:53.800716Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"4800ade1f4197616","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2025-10-13T08:08:53.800738Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2025-10-13T08:08:53.800919Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2025-10-13T08:08:53.800950Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2025-10-13T08:08:53.820297Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-10-13T08:08:53.824073Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-10-13T08:08:53.824280Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.16.10.106:2380"}
{"level":"info","ts":"2025-10-13T08:08:53.824421Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.16.10.106:2380"}
{"level":"info","ts":"2025-10-13T08:08:53.824749Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4800ade1f4197616","initial-advertise-peer-urls":["https://172.16.10.106:2380"],"listen-peer-urls":["https://172.16.10.106:2380"],"advertise-client-urls":["https://172.16.10.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.16.10.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2025-10-13T08:08:53.824835Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2025-10-13T08:08:54.259872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 is starting a new election at term 84"}
{"level":"info","ts":"2025-10-13T08:08:54.260008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 became pre-candidate at term 84"}
{"level":"info","ts":"2025-10-13T08:08:54.260093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 received MsgPreVoteResp from 4800ade1f4197616 at term 84"}
{"level":"info","ts":"2025-10-13T08:08:54.260129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 became candidate at term 85"}
{"level":"info","ts":"2025-10-13T08:08:54.260162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 received MsgVoteResp from 4800ade1f4197616 at term 85"}
{"level":"info","ts":"2025-10-13T08:08:54.260193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4800ade1f4197616 became leader at term 85"}
{"level":"info","ts":"2025-10-13T08:08:54.260243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4800ade1f4197616 elected leader 4800ade1f4197616 at term 85"}
{"level":"info","ts":"2025-10-13T08:08:54.303028Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4800ade1f4197616","local-member-attributes":"{Name:ukubemaster.degulian.com ClientURLs:[https://172.16.10.106:2379]}","request-path":"/0/members/4800ade1f4197616/attributes","cluster-id":"96ec802e1e378685","publish-timeout":"7s"}
{"level":"info","ts":"2025-10-13T08:08:54.303056Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-10-13T08:08:54.303091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-10-13T08:08:54.303639Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-10-13T08:08:54.303748Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-10-13T08:08:54.331446Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-10-13T08:08:54.331462Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-10-13T08:08:54.333727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.16.10.106:2379"}
{"level":"info","ts":"2025-10-13T08:08:54.333895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2025-10-13T08:08:54.349433Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57850","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2025-10-13T08:08:54.349433Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2025-10-13T08:08:54.349992Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"remote error: tls: bad certificate"}
{"level":"warn","ts":"2025-10-13T08:08:54.356644Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"172.16.10.106:33706","server-name":"","error":"tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T08:08:54Z is after 2025-09-30T14:29:04Z"}
Output of crictl logs command
"error":"remote error: tls: bad certificate"}

The output finally provides a clue to the kubectl errors: etcd is rejecting client connections with "remote error: tls: bad certificate", and the last line even states that a certificate expired on 2025-09-30. It looks like the cluster certificates have expired.

Check if the certificates are expired

To verify this, we can inspect the API server certificate with openssl.

petru@ukubemaster:~$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 2284902737035841905 (0x1fb59aeef356d171)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Sep 30 14:24:04 2024 GMT
            Not After : Sep 30 14:29:04 2025 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b9:46:e4:a5:f9:46:65:35:d1:36:72:8b:2a:58:
                    38:06:46:10:25:e5:68:34:74:00:e1:41:92:d3:52:
                    d0:61:8e:a2:7b:d2:8e:38:30:ae:09:de:b1:c4:63:
                    cb:3c:24:77:29:4b:21:d1:24:52:06:a7:48:dd:61:
                    79:1b:b0:eb:2c:96:c4:6d:e0:71:6f:94:79:0a:e9:
                    00:26:a2:3e:67:45:a8:15:61:c2:4a:fb:a6:95:9c:
                    38:f4:f2:6e:0d:ea:d6:07:8e:c4:c0:86:53:3c:dc:
                    f9:ca:25:84:80:4c:06:de:ff:a1:55:32:74:72:44:
                    27:1e:0d:aa:a8:fe:0a:67:28:3b:b0:3a:c8:67:47:
                    61:b9:4f:6e:54:72:7b:e7:a6:51:a6:98:8c:2b:09:
                    5c:9c:27:3c:e5:64:e7:d8:0b:07:ab:95:92:cc:79:
                    bf:98:e5:1b:d5:0b:ac:dd:7d:be:f1:8f:0d:2a:ed:
                    7f:1d:a1:3c:5a:1a:4d:b8:5c:c9:e2:52:cc:0a:e0:
                    b5:76:f3:41:d3:70:b1:34:bd:15:e6:43:74:28:0f:
                    22:45:c9:6f:6a:03:e2:22:56:09:a3:e2:d0:de:d2:
                    8b:43:a4:57:3b:c0:93:26:cf:f9:6a:5a:35:d7:7e:
                    54:dd:8f:ff:5c:3b:38:2a:c9:70:7f:b1:91:14:f1:
                    77:eb
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier: 
                F5:4C:20:28:B8:9A:7F:2C:F3:1E:D6:E1:CA:A5:36:B4:EA:9D:C4:70
            X509v3 Subject Alternative Name: 
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:ukubemaster.degulian.com, IP Address:10.96.0.1, IP Address:172.16.10.106
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        75:ca:d8:c7:27:1e:02:4c:f5:ac:66:6c:87:c2:85:06:01:45:
        eb:94:4c:c1:80:a7:33:80:4a:46:8c:9c:89:dc:c3:80:85:f6:
        d5:5c:e2:97:8f:16:d5:33:06:33:59:51:1d:cf:fa:d2:22:0a:
        bd:9b:f1:87:04:73:92:25:3c:60:18:31:cc:a9:60:04:a0:7d:
        77:cc:a1:73:19:eb:5d:d9:9a:ae:90:a2:82:07:ac:51:f5:fb:
        f5:43:21:ee:22:68:10:bc:f7:71:d9:63:5f:70:74:1c:40:c4:
        47:f4:a9:38:cd:99:8a:80:3b:5c:e8:de:79:1b:31:6c:1a:12:
        5d:04:b1:5c:43:2b:40:54:6c:65:ac:71:8f:a3:43:ae:04:a7:
        20:8a:d9:00:27:fc:8c:fe:60:82:96:64:8c:1d:ce:93:a6:81:
        e7:ca:50:69:d8:9b:29:14:a4:33:a6:0d:22:4c:3c:98:e3:a6:
        fb:47:03:d4:e0:25:3f:af:74:d7:7e:3d:5e:e7:6b:6d:3d:aa:
        8d:a8:ae:ec:28:39:5a:5e:43:b2:11:16:61:95:94:d2:28:b7:
        70:97:48:65:fd:a0:66:aa:6e:04:20:ad:c6:91:78:b0:6a:21:
        ec:33:68:94:b9:ca:d1:ac:ef:1d:24:ba:45:78:35:23:c9:e9:
        1a:c3:cb:d4
petru@ukubemaster:~$ 
Output of the command: openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout

Indeed, the certificate expired on 30 September 2025. Let’s use kubeadm to check the status of all the certificates used in the Kubernetes cluster.

petru@ukubemaster:~$ sudo kubeadm certs check-expiration
[sudo] password for petru: 
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      
apiserver                  Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      
apiserver-etcd-client      Sep 30, 2025 14:29 UTC   <invalid>       etcd-ca                 no      
apiserver-kubelet-client   Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      
controller-manager.conf    Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      
etcd-healthcheck-client    Sep 30, 2025 14:29 UTC   <invalid>       etcd-ca                 no      
etcd-peer                  Sep 30, 2025 14:29 UTC   <invalid>       etcd-ca                 no      
etcd-server                Sep 30, 2025 14:29 UTC   <invalid>       etcd-ca                 no      
front-proxy-client         Sep 30, 2025 14:29 UTC   <invalid>       front-proxy-ca          no      
scheduler.conf             Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      
super-admin.conf           Sep 30, 2025 14:29 UTC   <invalid>       ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 28, 2034 14:29 UTC   8y              no      
etcd-ca                 Sep 28, 2034 14:29 UTC   8y              no      
front-proxy-ca          Sep 28, 2034 14:29 UTC   8y              no      
petru@ukubemaster:~$ 
Output of kubeadm certs check-expiration command

Every leaf certificate has expired; only the three certificate authorities (ca, etcd-ca and front-proxy-ca) are still valid.
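
If you prefer to double-check with openssl rather than rely on kubeadm, a quick loop over the PKI directory gives the same picture. A minimal sketch, assuming the default /etc/kubernetes/pki layout:

# print the expiry date of every certificate kubeadm manages on this node
sudo find /etc/kubernetes/pki -name '*.crt' | while read -r crt; do
  printf '%s: ' "$crt"
  sudo openssl x509 -in "$crt" -noout -enddate
done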

Renew the certificates with the kubeadm command

To renew the certificates, we use the kubeadm certs renew command. You can see which certificates can be renewed individually by running:

kubeadm certs renew --help

I will use all to renew every certificate in one go.

petru@ukubemaster:~$ sudo kubeadm certs renew all
[sudo] password for petru: 
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
certificate embedded in the kubeconfig file for the super-admin renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
petru@ukubemaster:~$ 
Output of the command: sudo kubeadm certs renew all
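
The output ends with a reminder to restart kube-apiserver, kube-controller-manager, kube-scheduler and etcd so that they pick up the new certificates. On a kubeadm cluster these components run as static pods managed by the kubelet, so one common way to bounce them is to move their manifests out of the manifests directory and back again. A sketch, assuming the default /etc/kubernetes/manifests path:

# temporarily move the static pod manifests so the kubelet tears the pods down
sudo mkdir -p /etc/kubernetes/manifests.bak
sudo mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
# give the kubelet a moment to stop the static pods
sleep 30
# move the manifests back; the kubelet recreates the pods with the new certificates
sudo mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/

If the components were already crash-looping, as the API server was here, they will pick up the new certificates on their next restart anyway.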

Confirm that the certificates have been renewed

Let’s confirm now that the certificates have been renewed.

petru@ukubemaster:~$ sudo kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Oct 13, 2026 10:55 UTC   364d            ca                      no      
apiserver                  Oct 13, 2026 10:55 UTC   364d            ca                      no      
apiserver-etcd-client      Oct 13, 2026 10:55 UTC   364d            etcd-ca                 no      
apiserver-kubelet-client   Oct 13, 2026 10:55 UTC   364d            ca                      no      
controller-manager.conf    Oct 13, 2026 10:55 UTC   364d            ca                      no      
etcd-healthcheck-client    Oct 13, 2026 10:55 UTC   364d            etcd-ca                 no      
etcd-peer                  Oct 13, 2026 10:55 UTC   364d            etcd-ca                 no      
etcd-server                Oct 13, 2026 10:55 UTC   364d            etcd-ca                 no      
front-proxy-client         Oct 13, 2026 10:55 UTC   364d            front-proxy-ca          no      
scheduler.conf             Oct 13, 2026 10:55 UTC   364d            ca                      no      
super-admin.conf           Oct 13, 2026 10:55 UTC   364d            ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 28, 2034 14:29 UTC   8y              no      
etcd-ca                 Sep 28, 2034 14:29 UTC   8y              no      
front-proxy-ca          Sep 28, 2034 14:29 UTC   8y              no      
petru@ukubemaster:~$ 
Confirm that the certificates were renewed

The certificates have been renewed. They are valid for one more year.

If you run a kubectl command now, you will see output similar to the one below.

petru@ukubemaster:~$ kubectl get po
E1013 14:08:49.795813   97521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E1013 14:08:49.818185   97521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E1013 14:08:49.836559   97521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E1013 14:08:49.855477   97521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
E1013 14:08:49.873995   97521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: the server has asked for the client to provide credentials"
error: You must be logged in to the server (the server has asked for the client to provide credentials)
petru@ukubemaster:~$ 
wrong credentials

Basically, kubectl is still trying to authenticate to the API server with the old client certificate embedded in ~/.kube/config. You need to switch to the freshly generated admin.conf, which contains the renewed certificate.
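
You can confirm this by decoding the client certificate embedded in your current kubeconfig and checking its expiry date. A sketch, assuming a single user entry in ~/.kube/config:

# extract the base64-encoded client certificate from the kubeconfig and print its expiry
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -enddate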

Copy the new config in your home directory

To use kubectl again, copy the file /etc/kubernetes/admin.conf over ~/.kube/config in your home directory, then change the ownership of the copied file to your own user and group.

petru@ukubemaster:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/home/petru/.kube/config'? y  
petru@ukubemaster:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
petru@ukubemaster:~$ 
Copy the admin.conf to your home directory

Confirm that the kubectl command is running without any error

The last step is to confirm that you can run kubectl commands without any errors.

petru@ukubemaster:~$ kubectl get po
NAME                        READY   STATUS             RESTARTS        AGE
firstginx-d7d55ffc9-drg9j   1/1     Running            79 (3d3h ago)   377d
firstginx-d7d55ffc9-mlq2k   1/1     Running            70 (3d3h ago)   361d
firstginx-d7d55ffc9-qfnvf   1/1     Running            75 (3d3h ago)   368d
kube-bench-master-rcdjf     0/1     Completed          0               362d
medusa                      0/1     ImagePullBackOff   0               182d
myapp-b6579598c-2vpc9       2/2     Running            37 (3d3h ago)   258d
mydaemon-p8lr9              1/1     Running            79 (3d3h ago)   377d
mydaemon-zvdtb              1/1     Running            78 (3d3h ago)   377d
nginx                       0/1     Pending            0               256d
tester-567d8d577-w9bn7      1/1     Running            72 (3d3h ago)   361d
petru@ukubemaster:~$ 
kubectl command running without any error
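
Besides listing pods, it is worth double-checking that the node and the control-plane components are healthy again; nothing fancy, just a couple of standard kubectl commands:

# the node should report Ready
kubectl get nodes
# the control-plane pods in kube-system should all be Running
kubectl get pods -n kube-system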

My Kubernetes cluster is reachable again and ready for me to continue experimenting. That’s all. If you ever run into a similar error message, you now know the likely cause and how to fix it: renew the certificates used by Kubernetes.
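
One last tip: to avoid being caught out again, you can periodically check how close the certificates are to expiring, for example with openssl's -checkend option. A small sketch that warns when the API server certificate expires within the next 30 days (2592000 seconds):

# exits non-zero and prints a warning if the certificate expires within 30 days
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -checkend 2592000 \
  || echo "WARNING: apiserver certificate expires within 30 days"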
