CKS - Mock test 1
CKS mock test 1 covering AppArmor profiles, seccomp, Trivy vulnerability scanning, and Falco runtime security rules.
The following pod manifest shows how to apply an AppArmor profile annotation to restrict container behavior. The pod uses a custom service account and mounts a host path volume.
controlplane $ cat 1.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: frontend-site
  namespace: omni
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/restricted-frontend
spec:
  containers:
  - image: nginx:alpine
    imagePullPolicy: IfNotPresent
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: fe-token-5xxvl
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: frontend-default
  serviceAccountName: frontend-default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /data/pages
      type: Directory
    name: test-volume
  - name: fe-token-5xxvl
    secret:
      defaultMode: 420
      secretName: fe-token-5xxvl
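The annotation only takes effect if the referenced profile is already loaded on the node; otherwise the pod is blocked. A quick check, assuming the profile file ships in the lab's CKS directory on node01 (the file path used here is illustrative):

```shell
# on node01: load the profile into the kernel and confirm it is active
ssh node01
apparmor_parser -q /etc/apparmor.d/restricted-frontend   # illustrative path
aa-status | grep restricted-frontend

# back on the controlplane: create the pod and confirm the profile is applied
kubectl apply -f 1.yaml
kubectl exec -n omni frontend-site -- cat /proc/1/attr/current
```

If the profile is active, the last command reports the profile name together with its mode (e.g. enforce).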
This second pod manifest demonstrates a pod in the orion namespace that mounts a secret volume for database credentials at a specific path.
controlplane $ cat 2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-xyz
  namespace: orion
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: app-xyz
    ports:
    - containerPort: 3306
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-8rm6q
      readOnly: true
    - mountPath: /mnt/connector/password
      name: a-safe-secret
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-8rm6q
    secret:
      defaultMode: 420
      secretName: default-token-8rm6q
  - name: a-safe-secret
    secret:
      secretName: a-safe-secret
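For completeness, the a-safe-secret referenced above could be created as follows; the key name and value are illustrative, not part of the exam setup:

```shell
# create the secret the pod mounts (key and value are illustrative)
kubectl -n orion create secret generic a-safe-secret \
  --from-literal=password=Str0ngP4ss

# each secret key appears as a file under the volume's mountPath
kubectl -n orion exec app-xyz -- ls /mnt/connector/password
```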
- A number of pods have been created in the delta namespace. Using the trivy tool, which has been installed on the controlplane, identify all the pods that have HIGH or CRITICAL level vulnerabilities and delete the corresponding pods.
Note: Do not modify the objects in any way other than deleting the ones that have high or critical vulnerabilities.
The following commands use kubectl JSONPath output to generate a Trivy scan command for each pod’s container image. The first command prints each pod name next to its scan command, so you can map images back to pods; the second appends the bare, runnable commands, so the file can then be executed as a shell script.
# pod name followed by the trivy command for its image (reference listing)
k get pods -n delta -ojsonpath='{range .items[*]}{.metadata.name}{" trivy image --severity=HIGH,CRITICAL "}{.spec.containers[*].image}{" | grep Total\n"}{end}' > 3.yaml
# runnable commands only, appended to the same file
k get pods -n delta -ojsonpath='{range .items[*]}{" trivy image --severity=HIGH,CRITICAL "}{.spec.containers[*].image}{" | grep Total\n"}{end}' >> 3.yaml
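The generated commands can then be run, and any pod whose image reports a non-zero HIGH/CRITICAL total deleted (the pod name below is a placeholder, not an actual answer):

```shell
# run the generated scan commands and review each image's vulnerability totals
bash 3.yaml

# delete only the pods whose images were flagged, one per finding
kubectl -n delta delete pod <vulnerable-pod-name>
```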
This block copies the seccomp audit profile to the node and creates a pod that uses it. Afterward, you can verify that syscalls are being logged by checking the syslog on the node.
ssh node01
cp CKS/audit.json /var/lib/kubelet/seccomp/profiles/
# executed at controlplane
controlplane $ cat 4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: audit-nginx
  name: audit-nginx
spec:
  nodeName: node01
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - image: nginx
    name: audit-nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# check syscalls at node01
ssh node01
cat /var/log/syslog | grep audit --color
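Note that localhostProfile is resolved relative to the kubelet's seccomp root (/var/lib/kubelet/seccomp), which is why the path above is profiles/audit.json. A minimal audit profile logs every syscall without blocking anything; a sketch of what CKS/audit.json may contain (the actual file is provided by the lab):

```json
{
  "defaultAction": "SCMP_ACT_LOG"
}
```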
The following Falco configuration overrides the default “Write below binary dir” rule to change its output format. The custom rule in falco_rules.local.yaml will persist across Falco updates.
vim +/"File below a known binary directory opened for writing" /etc/falco/falco_rules.yaml
cat /etc/falco/falco_rules.local.yaml
# Or override/append to any rule, macro, or list from the Default Rules
- rule: Write below binary dir
  desc: an attempt to write to any file below a set of binary directories
  condition: >
    bin_dir and evt.dir = < and open_write
    and not package_mgmt_procs
    and not exe_running_docker_save
    and not python_running_get_pip
    and not python_running_ms_oms
    and not user_known_write_below_binary_dir_activities
  output: >
    CRITICAL File below a known binary directory opened for writing (user=%user.name file_updated=%fd.name command=%proc.cmdline)
  priority: ERROR
  tags: [filesystem, mitre_persistence]
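To confirm the override takes effect, the rules file can be validated before restarting Falco, and the rule can be triggered deliberately by writing under a binary directory (the pod used for the trigger is illustrative):

```shell
# validate the local rules file, then restart Falco to pick it up
falco -V /etc/falco/falco_rules.local.yaml
systemctl restart falco

# trigger the rule from any running container, then look for the new output format
kubectl exec -n omni frontend-site -- touch /bin/trigger-test
journalctl -u falco | grep "CRITICAL File below a known binary directory"
```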
To enable Falco file output for security incident alerts, edit the Falco configuration and set the file output path. After saving, restart the Falco service to apply changes.
vim /etc/falco/falco.yaml
...
file_output:
  enabled: true
  keep_alive: false
  filename: /opt/security_incidents/alerts.log
...
systemctl restart falco
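The target directory must exist before Falco can open the log file, and a quick tail confirms alerts are landing where expected:

```shell
# create the directory first, or Falco cannot write the file
mkdir -p /opt/security_incidents
systemctl restart falco

# new alerts should now accumulate here
tail /opt/security_incidents/alerts.log
```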