# [CKA][re] Mock Exam - 3 Solutions (Review of Incorrect Answers)

By starseat, 2023-05-05 13:50:24

Tags: udemy, practice problems

Questions and review notes for **Mock Exam - 3** of [Certified Kubernetes Administrator (CKA) with Practice Tests](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/).

# Q1 - check

Create a new **service account** with the name `pvviewer`. Grant this service account access to `list` all **PersistentVolumes** in the cluster by creating an appropriate cluster role called `pvviewer-role` and a **ClusterRoleBinding** called `pvviewer-role-binding`.

Next, create a **pod** called `pvviewer` with the image `redis` and serviceAccount `pvviewer` in the **default namespace**.

- ServiceAccount: **pvviewer**
- ClusterRole: **pvviewer-role**
- ClusterRoleBinding: **pvviewer-role-binding**
- Pod: **pvviewer**
- Pod configured to use ServiceAccount pvviewer?

## note

```text
$ kubectl create serviceaccount pvviewer
serviceaccount/pvviewer created
// verify with: k get sa

$ kubectl create clusterrole pvviewer-role --verb=list --resource=persistentvolumes
clusterrole.rbac.authorization.k8s.io/pvviewer-role created
// verify with: k get clusterrole pvviewer-role

$ kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
clusterrolebinding.rbac.authorization.k8s.io/pvviewer-role-binding created
```

## solution

Pods authenticate to the API server using `ServiceAccounts`. If no ServiceAccount name is specified, the default service account of the namespace is used when the pod is created.

Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/

Now, **create a service account `pvviewer`**:

```text
$ kubectl create serviceaccount pvviewer
serviceaccount/pvviewer created
```

To create a clusterrole:

```text
$ kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
```

To create a clusterrolebinding:

```text
$ kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
```

Solution manifest file to create a new pod called `pvviewer`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvviewer
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  # Add the service account name
  serviceAccountName: pvviewer  # important!
```

# Q2 - check

List the `InternalIP` of all nodes of the cluster. Save the result to a file `/root/CKA/node_ips`.

Answer should be in the format: **InternalIP of controlplane**(space)**InternalIP of node01** (in a single line)

## note

```text
$ kubectl get nodes -o wide
NAME           STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
controlplane   Ready    control-plane   43m   v1.26.0   192.6.224.6   <none>        Ubuntu 20.04.5 LTS   5.4.0-1104-gcp   containerd://1.6.6
node01         Ready    <none>          42m   v1.26.0   192.6.224.9   <none>        Ubuntu 20.04.5 LTS   5.4.0-1104-gcp   containerd://1.6.6
```

```text
$ kubectl get nodes -o json | jq
$ kubectl get nodes -o json | jq -c 'paths'
```

## solution

Use the JSONPath query below to extract the InternalIP of each node:

```text
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
```

```text
$ cat /root/CKA/node_ips
192.6.224.6 192.6.224.9
```
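As a cross-check (not part of the original solution), the same InternalIP values can be pulled with `jq`, which the note above already uses to explore the node JSON. Note that this prints one address per line, whereas the graded answer file expects a single space-separated line, so the JSONPath command remains the one to use for the answer:

```text
$ kubectl get nodes -o json | jq -r '.items[].status.addresses[] | select(.type=="InternalIP") | .address'
```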
# Q3

Create a pod called `multi-pod` with two containers.

- Container 1 - name: `alpha`, image: `nginx`
- Container 2 - name: `beta`, image: `busybox`, command: `sleep 4800`

Environment Variables:

- Container 1:
  - name: alpha
- Container 2:
  - name: beta

- Pod Name: `multi-pod`
- Container 1: `alpha`
- Container 2: `beta`
- Container beta commands set correctly?
- Container 1 Environment Value Set
- Container 2 Environment Value Set

## solution

```yaml
# multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: "name"
      value: "alpha"
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: "name"
      value: "beta"
```

```yaml
# command: ["sleep", "4800"] can also be written as:
command:
- "sleep"
- "4800"
```

# Q4

Create a Pod called `non-root-pod`, image: `redis:alpine`

**runAsUser: 1000**
**fsGroup: 2000**

- Pod non-root-pod fsGroup configured
- Pod non-root-pod runAsUser configured

## solution

```yaml
# non-root-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: redis
    image: redis:alpine
```

# Q5 - check

We have deployed a new pod called `np-test-1` and a service called `np-test-service`. Incoming connections to this service are not working. Troubleshoot and fix it.

Create a **NetworkPolicy** named `ingress-to-nptest` that allows incoming connections to the service over port 80.

Important: Don't delete any current objects deployed.

- Important: Don't Alter Existing Objects!
- NetworkPolicy: Applied to All sources (Incoming traffic from all pods)?
- NetworkPolicy: Correct Port?
- NetworkPolicy: Applied to correct Pod?

## solution

```yaml
# vi ingress-to-nptest.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
```

- Connectivity check:

```text
$ kubectl run curl-test --image=alpine/curl --rm -it --command -- sh
> curl np-test-service
```

# Q6

Taint the worker node `node01` to be **Unschedulable**. Once done, create a pod called `dev-redis`, image `redis:alpine`, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called `prod-redis` with image `redis:alpine` and a **toleration so that it can be scheduled on node01**.

key: `env_type`, value: `production`, operator: `Equal` and effect: `NoSchedule`

- Key = env_type
- Value = production
- Effect = NoSchedule
- pod 'dev-redis' (no tolerations) is not scheduled on node01?
- Create a pod 'prod-redis' to run on node01

## solution

```text
$ kubectl taint --help
$ kubectl taint nodes node01 env_type=production:NoSchedule
```

```text
// sample pod
$ kubectl run dev-redis --image=redis:alpine
$ kubectl run prod-redis --image=redis:alpine --dry-run=client -o yaml > prod-redis.yaml
```

```yaml
# tolerations-pod-redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dev-redis
spec:
  containers:
  - name: dev-redis
    image: redis:alpine
---
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
  labels:
    env_type: production
spec:
  containers:
  - name: prod-redis
    image: redis:alpine
  tolerations:
  - key: "env_type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
```
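The following checks are not in the original notes, but they are a quick way to confirm that the taint is in place and that scheduling behaves as required:

```text
// the taint should appear on node01
$ kubectl describe node node01 | grep -i taint

// dev-redis should not be running on node01; prod-redis should be
$ kubectl get pods -o wide
```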
# Q7

Create a pod called `hr-pod` in the `hr` namespace belonging to the **`production` environment and `frontend` tier**. image: `redis:alpine`

Use appropriate labels and create all the required objects if they do not exist in the system already.

- hr-pod labeled with environment production?
- hr-pod labeled with tier frontend?

## solution

```text
$ kubectl create ns hr
namespace/hr created
```

```text
$ kubectl run hr-pod -n hr --image=redis:alpine \
    --labels="environment=production,tier=frontend"
```

or

```yaml
# hr-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hr-pod
  namespace: hr
  labels:
    environment: production
    tier: frontend
spec:
  containers:
  - name: redis
    image: redis:alpine
```
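Not part of the original write-up, but a quick way to confirm the pod exists in the `hr` namespace with the expected labels:

```text
// LABELS column should show environment=production,tier=frontend
$ kubectl get pod hr-pod -n hr --show-labels
```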
# Q8 - failed

A kubeconfig file called `super.kubeconfig` has been created under `/root/CKA`. There is something wrong with the configuration. Troubleshoot and fix it.

- Fix /root/CKA/super.kubeconfig

## original file

```yaml
# /root/CKA/super.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EVXdOVEEwTWpJek0xb1hEVE16TURVd01qQTBNakl6TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFJkCk9pNlVqMWd1THRYYkRRRDRKeGNLNVplTWRpTWpBYmIzUkdpcEVuclpRbXkxM2daSXc2SFZXazlvQ0kxRjB3R2IKajdKT3M0Yk5oVXF5bWtCc2pOVXNoU2J6TlI4RHo0NzBuVVdkNXpuQkttWXZUUW5wT2FuTTBTZ3VLbFNOM0kvZwovOWNlS1FRMExxdWZxdWE3ZElOL1hQN09QR3V4OTcvSVpKZjh6MmJVTnpEVGR6eERuYnNEWHh4YUFoUTd3NzNhCklpbU9kR1ZEdk8zSitRUEhOYXBXVkVpNTUvQlhOaW9IRUZNOUtxY2xhKy9aVEFPTmtPdU0zNTZJSjExM0tCWmUKQlJxV3lsMUlFdk9aSTgrd09YcU95WFRDUlZxNllzODkva0R5U21oRW53b0hocnh2TXNoanJEeHJVMHo4am1HeQptOFNlTnU4MFZDUWNNVXNIQ1AwQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZJYjJ6L3JDc1BRTTlDZ3JGTnFobnNBS3pNMHRNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBR3lneGdUMlU5RWExenZQTENNWAoxNCtMYWk5RmpWYk5YbGc4OUZQdGh6SVAwZUM2SDRFMXlUSlBlUitDaWlaRTljUW05YXFkMEs1Sk9tK3Z0Wkd2CjF2by96VTVOcVhEb1hNRmRXNFhKNk05RkdIRTZ1U0I3anhzc3U4dVg5VkZyNzJBM2haTFdBT25YQ0FRN2daMjcKZTc4RkJTM2pkeXJLbFpDOThTZjlLYk9iM2lUVkxQcTFFZSs4S0NtdVB2VytJWUhSMmZuMllmSUxCVVlXNGpvRApBMWp2c2gzMU50bW4wUkdxMWZ5MW9VbFNXY2lQK2JMY2gybGpwS094MXF4eGg4QWN0OFFTaUhGTjVsalUyN0dkCnR3b2dldUlHMWhuZjhnWFE3WWJSay9vY21XOGpIcXEwQ3FodlVUMHBkZ1dOTHE3ak9vKzdrZ3ZXckNaYytaVGEKenBnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://controlplane:9999
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJU0JQK1RIUEp0bEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBMU1EVXdOREl5TXpOYUZ3MHlOREExTURRd05ESXlNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTIrTGJRcGtmcEpIWkM0OHQKc0pwdU52dUZVbjRiSXRoZ05vRjZXNUFoZlBnakFzaGluek9pMTRxWE5kb2xHbGlnZGh2ZWFQdXZ1TFZzKzlRNgp1ay82d0Y3aEJmUC9FWHpvMWhhTjBabXptUzE1dEYwSTdsMXJKMnpreEVCMGdnclZxZHBydGZXd0RUSk1kYjdZClVaWUZvc3prWGtmbkt3eVFhSzd2OHY3Yk1YUmdnNmU5dm9ZZEpFa2REMDdUbXBxanpwYU5JWTdIVkhsdTN0SFgKcURuMDU4RnF5WHNkQU9xMjg1ckl0STNZYzcweHExbXg0MytSaWRpaEc2UmNsaWV6Q2IyNjA5ZWRLZXRRRmxyZQpub2tMVWx5dzRsWDlWc2h2OEE3bll2NjNRQ0JBNnVDTWJuSDBCSVZYaGxELzJEZFY1OVR3S0RUOEUvWTNlUnJPCklLSGt3UUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTRzlzLzZ3ckQwRFBRb0t4VGFvWjdBQ3N6TgpMVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBWmVibWtKbC9Dd2kwZDlpelQvdkZmWCtuaU1CeGFEQWZwWVZiCkNJMVE2M1BvUDMwakwvOW9aVE9Rd0hRMmJtU0w0QndTRG13WmRvaVZwQlBGcGZrR1FVTDVNemJIeXNYU1psNmIKampMejNXaE5sY0pNQ1B0ZXgrelVabVNuZXBiWWxDdEh5M2xSWmhoQi83S2huV2NtR29hQi8xd1BHTmRtWEg5UApsVENLMHZLYmFMWGJrbTlkNnVjUWhJQmVOQUpRLzJYUnlMT1RjMUlrL0JmZDF4clozU2RKckQzRnpjdnFRamJCCjJZdUVET1lqQk5pbnVWRUIrRzRWUEJWSjN6ZmFERjIxNGJoQXVzWDBJa0Y3QUxRNVErVEZjRldSdDhqYTBBZkgKYzJRbkFpd2pRc2dOYnppY0JHaFNaT2IwYkd4YSt3U1gyODBYUkltYzRiUmZVbFBQNGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
```

## solution

Verify that the host and port for the kube-apiserver are correct. Open `super.kubeconfig` in a text editor, change the port **9999 to 6443**, and run the commands below to verify:

```text
// check: reproduce the failure, then compare against the working kubeconfig
$ kubectl get nodes --kubeconfig /root/CKA/super.kubeconfig
$ cat .kube/config

// verify after changing the port to 6443
$ kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
```

# Q9 - check

We have created a new deployment called `nginx-deploy`. Scale the deployment to `3` replicas. Have the replicas increased? Troubleshoot the issue and fix it.

- deployment has 3 replicas

## note

```text
$ kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           2m48s
```

```text
$ kubectl scale deployment nginx-deploy --replicas=3
```

## solution

Use the command kubectl scale to increase the replica count to 3.

```text
$ kubectl get pods -n kube-system
```

The command configured in the controller-manager static pod manifest is incorrect (it is misspelled as `kube-contro1ler-manager`). Fix the values in the file and wait for the controller-manager pod to restart.

Alternatively, you can run a sed command to change all occurrences at once:

```text
$ sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
```

This will fix the issue in the controller-manager manifest. Finally, inspect the deployment:

```text
$ kubectl get deploy
```

## check

```text
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml
```
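As a final sanity check (not in the original notes), confirm that the controller-manager is healthy again and that the deployment has scaled:

```text
// the controller-manager pod should be back to Running
$ kubectl get pods -n kube-system | grep controller-manager

// READY should now show 3/3
$ kubectl get deploy nginx-deploy
```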