[CKA][re] Mock Exam - 2 Solutions (Review of Missed Questions)

By starseat, 2023-04-10 17:52:31

# Udemy Practice Exam

Questions and review notes for **Mock Exam - 2** of the Udemy course [Certified Kubernetes Administrator (CKA) with Practice Tests](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/).

# Q1

Take a backup of the etcd cluster and save it to `/opt/etcd-backup.db`.

```text
// check the etcd process for the certificate paths
$ ps -ef | grep etcd

$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /opt/etcd-backup.db
```

# Q2

Create a Pod called `redis-storage` with image `redis:alpine` and a Volume of type `emptyDir` that lasts for the life of the Pod. Specs below.

- Pod named 'redis-storage' created
- Pod 'redis-storage' uses Volume type of emptyDir
- Pod 'redis-storage' uses volumeMount with mountPath = /data/redis

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - mountPath: /data/redis
      name: redis-volume
  volumes:
  - name: redis-volume
    emptyDir:
      sizeLimit: 500Mi
```

# Q3

Create a new pod called `super-user-pod` with image `busybox:1.28`. Allow the pod to be able to set `system_time`. The container should sleep for 4800 seconds.

- Pod: super-user-pod
- Container Image: busybox:1.28
- SYS_TIME capabilities for the container?

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: super-user-pod
spec:
  containers:
  - name: super-user-pod
    image: busybox:1.28
    args: ["sh", "-c", "sleep 4800"]
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
```

# Q4 - failed

A pod definition file is created at `/root/CKA/use-pv.yaml`. Make use of this manifest file and mount the persistent volume called `pv-1`. Ensure the pod is running and the PV is bound.

- mountPath: **/data**
- persistentVolumeClaim Name: **my-pvc**
- persistentVolume Claim configured correctly
- pod using the correct mountPath
- pod using the persistent volume claim?

## solution

Add a `persistentVolumeClaim` definition to the pod definition file.

Solution manifest file to create the PVC `my-pvc`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
```

Then update the pod definition file as follows:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    volumeMounts:
    - mountPath: "/data"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: my-pvc
```

Finally, create the pod by running:

```text
kubectl create -f /root/CKA/use-pv.yaml
```

# Q5

Create a new deployment called `nginx-deploy`, with image `nginx:1.16` and `1 replica`. Next, upgrade the deployment to version `1.17` using a **rolling update**.

- Deployment: nginx-deploy. Image: nginx:1.16
- Image: nginx:1.16
- Task: Upgrade the version of the deployment to 1.17
- Task: Record the changes for the image upgrade

```yaml
# nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
```

```text
kubectl apply -f nginx-deploy.yaml
...
kubectl set image deployment/nginx-deploy nginx=nginx:1.17
```
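Not part of the original write-up: a quick way to record and verify the rolling update, assuming the `nginx-deploy` deployment created above. The `--record` flag (deprecated, but still accepted by recent kubectl versions) covers the "record the changes" check item, and `rollout status` / `rollout history` confirm the image upgrade went through.

```text
$ kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
$ kubectl rollout status deployment/nginx-deploy
$ kubectl rollout history deployment/nginx-deploy
```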
# Q6 - failed

Create a new user called `john`. Grant him access to the cluster. John should have permission to `create, list, get, update and delete pods` in the `development` namespace. The private key exists at `/root/CKA/john.key` and the CSR at `/root/CKA/john.csr`.

**Important Note:** As of Kubernetes 1.19, the CertificateSigningRequest object expects a **signerName**. Please refer to the documentation to see an example. The documentation tab is available at the top right of the terminal.

- CSR: `john-developer` Status: `Approved`
- Role Name: `developer`, namespace: `development`, Resource: `Pods`
- Access: User `john` has appropriate permissions

## Solution

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUt2Um1tQ0h2ZjBrTHNldlF3aWVKSzcrVVdRck04ZGtkdzkyYUJTdG1uUVNhMGFPCjV3c3cwbVZyNkNjcEJFRmVreHk5NUVydkgyTHhqQTNiSHVsTVVub2ZkUU9rbjYra1NNY2o3TzdWYlBld2k2OEIKa3JoM2prRFNuZGFvV1NPWXBKOFg1WUZ5c2ZvNUpxby82YU92czFGcEc3bm5SMG1JYWpySTlNVVFEdTVncGw4bgpjakY0TG4vQ3NEb3o3QXNadEgwcVpwc0dXYVpURTBKOWNrQmswZWhiV2tMeDJUK3pEYzlmaDVIMjZsSE4zbHM4CktiSlRuSnY3WDFsNndCeTN5WUFUSXRNclpUR28wZ2c1QS9uREZ4SXdHcXNlMTdLZDRaa1k3RDJIZ3R4UytkMEMKMTNBeHNVdzQyWVZ6ZzhkYXJzVGRMZzcxQ2NaanRxdS9YSmlyQmxVQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQ1VKTnNMelBKczB2czlGTTVpUzJ0akMyaVYvdXptcmwxTGNUTStsbXpSODNsS09uL0NoMTZlClNLNHplRlFtbGF0c0hCOGZBU2ZhQnRaOUJ2UnVlMUZnbHk1b2VuTk5LaW9FMnc3TUx1a0oyODBWRWFxUjN2SSsKNzRiNnduNkhYclJsYVhaM25VMTFQVTlsT3RBSGxQeDNYVWpCVk5QaGhlUlBmR3p3TTRselZuQW5mNm96bEtxSgpvT3RORStlZ2FYWDdvc3BvZmdWZWVqc25Yd0RjZ05pSFFTbDgzSkljUCtjOVBHMDJtNyt0NmpJU3VoRllTVjZtCmlqblNucHBKZWhFUGxPMkFNcmJzU0VpaFB1N294Wm9iZDFtdWF4bWtVa0NoSzZLeGV0RjVEdWhRMi80NEMvSDIKOWk1bnpMMlRST3RndGRJZjAveUF5N05COHlOY3FPR0QKLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  usages:
  - digital signature
  - key encipherment
  - client auth
```

To approve this certificate, run:

```text
$ kubectl certificate approve john-developer
```

Next, create a role `developer` and a rolebinding `developer-role-binding`:

```text
$ kubectl create role developer --resource=pods --verb=create,list,get,update,delete --namespace=development
$ kubectl create rolebinding developer-role-binding --role=developer --user=john --namespace=development
```

To verify the permission with the kubectl utility tool:

```text
$ kubectl auth can-i update pods --as=john --namespace=development
```
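Not shown in the original notes: the long `request` value above is simply the base64-encoded contents of the provided CSR file. A minimal sketch of producing that value, assuming the CSR sits at `/root/CKA/john.csr` as stated in the question:

```text
// encode the CSR file as a single base64 line for the request: field
$ cat /root/CKA/john.csr | base64 | tr -d '\n'
```

Paste the output into the `request:` field of the CertificateSigningRequest manifest, apply it with `kubectl apply -f`, and then approve it as above.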
# Q7 - failed

Create an nginx pod called `nginx-resolver` using image `nginx`, and expose it internally with a service called `nginx-resolver-service`. Test that you are able to look up the service and pod names from within the cluster. Use the image `busybox:1.28` for the **dns lookup**. Record the results in `/root/CKA/nginx.svc` and `/root/CKA/nginx.pod`.

- Pod: nginx-resolver created
- Service DNS Resolution recorded correctly
- Pod DNS resolution recorded correctly

## Solution

Use `kubectl run` to create the nginx pod and a busybox pod, then resolve the nginx service and its pod name from the busybox pod.

To create the pod `nginx-resolver` and expose it internally:

```text
$ kubectl run nginx-resolver --image=nginx
$ kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
```

To create a pod `test-nslookup` and test that you are able to look up the service and pod names from within the cluster:

```text
$ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service
$ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service > /root/CKA/nginx.svc
```

Get the IP of the `nginx-resolver` pod and replace the dots (.) with hyphens (-), which will be used below.

```text
$ kubectl get pod nginx-resolver -o wide
$ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup <pod-ip-with-hyphens>.default.pod > /root/CKA/nginx.pod
```

## nslookup

Get the pod IP from the pod information:

```text
$ kubectl get pods -o wide
// nginx-resolver pod ip: 10.50.192.4
```

Run a test pod:

```text
$ kubectl run test-busybox --image=busybox:1.28 -- sleep 4800
// everything after -- is executed as the command inside the container
// with the option above, the pod runs with a yaml like the following:

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-busybox
  name: test-busybox
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    name: test-busybox
```

Check the Service DNS from the test pod `test-busybox`:

```text
$ kubectl exec test-busybox -- nslookup nginx-resolver-service > /root/CKA/nginx.svc
```

Check the Pod DNS from the test pod `test-busybox`:

```text
$ kubectl exec test-busybox -- nslookup 10-50-192-4.default.pod.cluster.local > /root/CKA/nginx.pod
```

# Q8

Create a static pod on `node01` called `nginx-critical` with image `nginx` and make sure that it is recreated/restarted automatically in case of a failure. Use `/etc/kubernetes/manifests` as the `Static Pod` path, for example.

- static pod configured under /etc/kubernetes/manifests ?
- Pod `nginx-critical-node01` is up and running

```text
$ ssh node01
$ cd /etc/kubernetes/manifests
$ vi nginx-critical.yaml
```

```yaml
# nginx-critical.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-critical
spec:
  containers:
  - name: nginx
    image: nginx
```
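A couple of extra checks, not in the original notes, assuming the default kubeadm layout on `node01`: confirm the kubelet's static pod path, then, back on the controlplane node, confirm the pod appears with the node name appended (the `nginx-critical-node01` name the grader looks for).

```text
// on node01 (assumption: default kubeadm kubelet config location)
$ grep staticPodPath /var/lib/kubelet/config.yaml

// back on the controlplane node
$ exit
$ kubectl get pod nginx-critical-node01
```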