Can’t find kubeadm token after initializing master
The instructions for Kubernetes 1.9.x (and above) can be found here. The commands I used are:

kubeadm token generate
kubeadm token create <generated-token> --print-join-command --ttl=0
That namespace exists in clusters created with kubeadm for now. It contains a single ConfigMap object, cluster-info, that aids discovery and security bootstrap (basically, it contains the CA for the cluster and such). This object is readable without authentication. If you are curious: $ kubectl get configmap -n kube-public cluster-info -o yaml There are more details … Read more
When I used Calico as the CNI, I faced a similar issue. The container remained in the ContainerCreating state. I checked for /etc/cni/net.d and /opt/cni/bin on the master; both are present, but I am not sure whether these are required on the worker node as well.

root@KubernetesMaster:/opt/cni/bin# kubectl get pods
NAME                  READY  STATUS             RESTARTS  AGE
nginx-5c7588df-5zds6  0/1    ContainerCreating  0         21m

… Read more
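The kubelet on every node (master and worker alike) invokes the CNI plugins locally, so both directories need to exist on each node. A minimal check, assuming the standard kubeadm default paths:

```shell
# Check that the CNI directories kubeadm expects are present on this node.
# Run on each node; /etc/cni/net.d holds configs, /opt/cni/bin the plugin binaries.
for d in /etc/cni/net.d /opt/cni/bin; do
  if [ -d "$d" ]; then
    echo "present: $d"
  else
    echo "missing: $d"
  fi
done
```

If either directory is missing on a worker, reinstalling the CNI plugin (or rerunning the Calico manifest) usually recreates it.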
One option is to tell kubectl that you don’t want the certificate to be validated. Obviously this raises security issues, but I guess you are only testing, so here you go: kubectl --insecure-skip-tls-verify --context=employee-context get pods The better option is to fix the certificate. The easiest way is to reinitialize the cluster by running kubeadm reset … Read more
Add pod network add-on – Weave Net kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
I faced a similar issue recently. The problem was the cgroup driver: the kubelet’s cgroup driver was set to systemd but Docker’s was set to cgroupfs. So I created /etc/docker/daemon.json and added the following: { "exec-opts": ["native.cgroupdriver=systemd"] } Then: sudo systemctl daemon-reload sudo systemctl restart docker sudo systemctl restart kubelet Run kubeadm init or kubeadm join again.
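The steps above can be sketched as a script. This writes the config to a temp file for safety; on a real node the target is /etc/docker/daemon.json (written as root), followed by the systemctl restarts:

```shell
# Sketch of the cgroup-driver fix described above. Writes to a temp file here;
# on a real node, write /etc/docker/daemon.json as root and then run:
#   sudo systemctl daemon-reload && sudo systemctl restart docker kubelet
DAEMON_JSON="$(mktemp)"
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Sanity-check the JSON before restarting Docker -- a syntax error here
# prevents the Docker daemon from starting at all.
python3 -m json.tool "$DAEMON_JSON" >/dev/null && echo "daemon.json OK"
```

Validating the file before restarting is worth the extra line: Docker refuses to start on malformed daemon.json, which takes the node down harder than the original mismatch.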
That’s the beauty of Docker Compose and Docker Swarm: their simplicity. We came across this same Kubernetes shortcoming when deploying the ELK stack. We solved it by using a side-car (initContainer), which is just another container in the same pod that is run first; when it completes, Kubernetes automatically starts the [main] container. We made … Read more
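The pattern above can be sketched as a pod manifest whose initContainer blocks until a dependency is reachable. The image, pod, and service names below are placeholders for illustration, not the answerer’s actual ELK manifests:

```shell
# Hypothetical example of the initContainer ordering pattern described above.
# "elasticsearch" and the images are placeholder names, not a real deployment.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-elasticsearch
    image: busybox:1.36
    # Block until the dependency answers on its port; only then does
    # Kubernetes start the main container.
    command: ["sh", "-c", "until nc -z elasticsearch 9200; do sleep 2; done"]
  containers:
  - name: main
    image: nginx:1.25
EOF
echo "wrote manifest to $MANIFEST"
# Apply with: kubectl apply -f "$MANIFEST"
```

All initContainers must exit successfully, in order, before the regular containers start, which is what gives you the Compose-style "depends_on" ordering inside a single pod.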
Another option you have is to remove the Validating Webhook entirely: kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission I found I had to do that on another issue, but the workaround/solution works here as well. This isn’t the best answer; the best answer is to figure out why this doesn’t work. But at some point, you live … Read more
This worked for me: kubectl label node cb2.4xyz.couchbase.com node-role.kubernetes.io/worker=worker

NAME                    STATUS  ROLES          AGE  VERSION
cb2.4xyz.couchbase.com  Ready   custom,worker  35m  v1.11.1
cb3.5xyz.couchbase.com  Ready   worker         29m  v1.11.1

I could not delete/update the old label, but I can live with it.
kubeadm token create --print-join-command