Kubeconfig

controlplane ~ ➜  kubectl config use-context research --kubeconfig=/root/my-kube-config
Switched to context "research".

We don't want to pass the --kubeconfig option on every kubectl command.

Task: set my-kube-config as the default kubeconfig file, without overwriting the existing ~/.kube/config, and make the change persist across reboots and new shell sessions.

  1. Open your shell configuration file: vi ~/.bashrc
  2. Add the following line to export the variable: export KUBECONFIG=/root/my-kube-config
  3. Reload the shell configuration so the change takes effect in the current session: source ~/.bashrc
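The steps above can be collapsed into one short, idempotent snippet (a sketch; the grep guard is an addition to avoid appending a duplicate line on repeated runs):

```shell
# Make /root/my-kube-config the default kubeconfig for the current session...
export KUBECONFIG=/root/my-kube-config

# ...and persist it for future shells by appending the export to ~/.bashrc.
# The grep guard keeps the line from being added twice on repeated runs.
grep -qxF 'export KUBECONFIG=/root/my-kube-config' ~/.bashrc 2>/dev/null \
  || echo 'export KUBECONFIG=/root/my-kube-config' >> ~/.bashrc
```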
controlplane ~ ➜  kubectl config view   # run before exporting KUBECONFIG, so it still shows the default ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://controlplane:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

controlplane ~ ➜  cat my-kube-config 
apiVersion: v1
kind: Config

clusters:
- name: production
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: development
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: kubernetes-on-aws
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

- name: test-cluster-1
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443

contexts:
- name: test-user@development
  context:
    cluster: development
    user: test-user

- name: aws-user@kubernetes-on-aws
  context:
    cluster: kubernetes-on-aws
    user: aws-user

- name: test-user@production
  context:
    cluster: production
    user: test-user

- name: research
  context:
    cluster: test-cluster-1
    user: dev-user

users:
- name: test-user
  user:
    client-certificate: /etc/kubernetes/pki/users/test-user/test-user.crt
    client-key: /etc/kubernetes/pki/users/test-user/test-user.key
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt       # note the mismatched file name (the key below is dev-user.key)
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
- name: aws-user
  user:
    client-certificate: /etc/kubernetes/pki/users/aws-user/aws-user.crt
    client-key: /etc/kubernetes/pki/users/aws-user/aws-user.key

current-context: test-user@development
preferences: {}
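Each user stanza above references certificate and key files on disk; a quick sanity check can flag any referenced path that doesn't exist. A sketch, demonstrated on a stand-in fragment (point cfg at /root/my-kube-config to check the real file):

```shell
# Extract every cert/key path referenced in a kubeconfig and report
# whether each file exists on disk. Stand-in fragment used here.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
users:
- name: dev-user
  user:
    client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt
    client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key
EOF

awk '/certificate-authority:|client-certificate:|client-key:/ {print $2}' "$cfg" |
while read -r f; do
  [ -f "$f" ] && echo "ok:      $f" || echo "missing: $f"
done
```

A path reported as missing here is exactly what produces the "unable to read client-cert" error seen further down.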

---

controlplane ~ ➜  cat .kube/config  # this config is no longer used, since KUBECONFIG now points to /root/my-kube-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tL
    server: https://controlplane:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tL
    client-key-data: LS0tL

---

controlplane ~ ➜  kubectl get po
error: unable to read client-cert /etc/kubernetes/pki/users/dev-user/developer-user.crt for dev-user due to open /etc/kubernetes/pki/users/dev-user/developer-user.crt: no such file or directory

controlplane ~ ✖ ls -al /etc/kubernetes/pki/users
total 20
drwxr-xr-x 5 root root 4096 Jun 23 00:00 .
drwxr-xr-x 4 root root 4096 Jun 23 00:00 ..
drwxr-xr-x 2 root root 4096 Jun 23 00:00 aws-user
drwxr-xr-x 2 root root 4096 Jun 23 00:00 dev-user
drwxr-xr-x 2 root root 4096 Jun 23 00:00 test-user

controlplane ~ ➜  vi /root/my-kube-config   # fix dev-user's client-certificate path
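The vi edit corrects dev-user's mismatched client-certificate path so the next kubectl call succeeds. A non-interactive equivalent is sketched below on a stand-in copy; it assumes the correct file name is dev-user.crt, matching the key's naming (the real edit targets /root/my-kube-config):

```shell
# Rewrite the broken certificate path in place with sed.
# Stand-in file used here; substitute /root/my-kube-config for the real fix.
cfg=$(mktemp)
echo 'client-certificate: /etc/kubernetes/pki/users/dev-user/developer-user.crt' > "$cfg"

sed -i 's|dev-user/developer-user.crt|dev-user/dev-user.crt|' "$cfg"
cat "$cfg"
```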

controlplane ~ ➜  kubectl get po
No resources found in default namespace.