CKAD series part 3: Configuration

This is part 3 in a multi-part series on my CKAD learning experience. For the other parts in the series, please check out the following links:

As I mentioned in the intro to this blog series, my study plan was to cover the topics in the official curriculum for the CKAD one by one – and share my learnings with you. In this post we’ll walk through configuration and these 5 topics:

  • Understand ConfigMaps
  • Understand SecurityContexts
  • Define an application’s resource requirements
  • Create and consume secrets
  • Understand ServiceAccounts

Let’s cover each of those five topics:

Configmaps

ConfigMaps are a mechanism within Kubernetes to pass configuration data to the containers in your pods. That data could be command-line arguments, environment variables, ports… By using ConfigMaps, you keep configuration data outside of your pod definitions. Within a ConfigMap, you define key-value pairs of configuration data.

You can create ConfigMaps in a couple of ways:

  • From a literal value (in kubectl)
  • From a configuration file
  • From a yaml file
  • Using kustomization.yaml (not covered in this post)

From a literal

We’ll start off by creating our ConfigMap, adding it to our pod, and then looking at the value.

kubectl create configmap literal --from-literal=MYNAME=Nills
apiVersion: v1
kind: Pod
metadata:
  name: config-literal
spec:
  containers:
    - name: basic
      image: nginx
      env:
        - name: MYNAMEFROMLITERAL
          valueFrom:
            configMapKeyRef:
              name: literal
              key: MYNAME

We can then exec into our container to get the environment variable:

kubectl create -f simplepodconfigliteral.yaml
kubectl exec -it config-literal /bin/bash
echo $MYNAMEFROMLITERAL #FROM WITHIN THE CONTAINER
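
Just to double-check what landed in the ConfigMap itself, you can also look at it directly. A trimmed example of what that looks like (your metadata will differ):

kubectl get configmap literal -o yaml
apiVersion: v1
data:
  MYNAME: Nills
kind: ConfigMap
metadata:
  name: literal
  namespace: default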

From a configuration file

Let’s now do the same, but with a configuration file rather than a literal:

echo 'MYNAME=JohnSmith' > name.config

kubectl create configmap fromfile --from-file=name.config
apiVersion: v1
kind: Pod
metadata:
  name: config-file
spec:
  containers:
    - name: basic
      image: nginx
      env:
        - name: MYNAMEFROMFILE
          valueFrom:
            configMapKeyRef:
              name: fromfile
              key: name.config
kubectl apply -f simplepodconfigfile.yaml

Then exec into the container to get the environment variable:

kubectl exec -it config-file /bin/bash
echo $MYNAMEFROMFILE

The output here is

MYNAME=JohnSmith

Which is actually the content of the whole file, not the value of the variable itself.

In other words, we couldn’t set the environment variable directly; we got the content of the file instead. If you want the actual values, you should import the ConfigMap with the --from-env-file flag.

kubectl create configmap fromenvfile --from-env-file=name.config
apiVersion: v1
kind: Pod
metadata:
  name: env-file
spec:
  containers:
    - name: basic
      image: nginx
      env:
        - name: MYNAMEFROMFILE
          valueFrom:
            configMapKeyRef:
              name: fromenvfile
              key: MYNAME
kubectl apply -f simplepodenvfile.yaml
kubectl exec -it env-file /bin/bash
echo $MYNAMEFROMFILE

You then see the actual value, not the whole content of the file.

To see the difference between the --from-file and --from-env-file flags, you can describe (and compare) both ConfigMaps (if you created both along with me):

kubectl describe configmap/fromfile configmap/fromenvfile
Name:         fromfile
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
name.config:
----
MYNAME=JohnSmith 
Events:  <none>


Name:         fromenvfile
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
MYNAME:
----
JohnSmith 
Events:  <none>

You can see the real impact of --from-file and --from-env-file if you have multiple values to set. To test this out, just add another value to name.config, and recreate the configmaps:

echo 'SECONDNAME=Test' >> name.config
kubectl delete configmap/fromfile configmap/fromenvfile
kubectl create configmap fromfile --from-file=name.config
kubectl create configmap fromenvfile --from-env-file=name.config
kubectl describe configmap/fromfile configmap/fromenvfile
Name:         fromfile
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
name.config:
----
MYNAME=JohnSmith
SECONDNAME=Test
Events:  <none>


Name:         fromenvfile
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
MYNAME:
----
JohnSmith
SECONDNAME:
----
Test
Events:  <none>

Using --from-file stores the whole file as a single key in the ConfigMap, while --from-env-file parses the file and stores each key-value pair individually.

From a YAML file

You can also create ConfigMaps directly as YAML files. Here’s an example:

kind: ConfigMap 
apiVersion: v1 
metadata:
  name: config-from-yaml
data:
  myname: this-is-yaml

You can create this via:

kubectl apply -f configmap.yaml

We can reference this configmap in the following way:

apiVersion: v1
kind: Pod
metadata:
  name: config-yaml
spec:
  containers:
    - name: basic
      image: nginx
      env:
        - name: MYNAMEFROMYAML
          valueFrom:
            configMapKeyRef:
              name: config-from-yaml
              key: myname
kubectl apply -f simplepodyaml.yaml

kubectl exec -it config-yaml /bin/bash
echo $MYNAMEFROMYAML

The output should be

this-is-yaml

Summarizing configmaps

ConfigMaps are a way to store non-secret configuration data in the Kubernetes API server. There are multiple ways to create them. When creating one from a file, be careful to distinguish between --from-file and --from-env-file.
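
One consumption pattern we haven’t shown yet is mounting a ConfigMap as a volume, where every key becomes a file under the mount path. A minimal sketch, reusing the config-from-yaml ConfigMap from above (the pod name and mount path are just illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: config-as-volume
spec:
  containers:
    - name: basic
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/myconfig
          readOnly: true
  volumes:
    - name: config-volume
      configMap:
        name: config-from-yaml

Inside that container, /etc/myconfig/myname should then contain this-is-yaml.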

Understand SecurityContexts

A securitycontext describes the privileges and access control that a pod will use. Some settings here include (from the kubernetes docs):

  • Discretionary Access Control: Permission to access an object, like a file, is based on user ID (UID) and group ID (GID).
  • Security Enhanced Linux (SELinux): Objects are assigned security labels.
  • Running as privileged or unprivileged.
  • Linux Capabilities: Give a process some privileges, but not all the privileges of the root user.
  • AppArmor: Use program profiles to restrict the capabilities of individual programs.
  • Seccomp: Filter a process’s system calls.
  • AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the no_new_privs flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has CAP_SYS_ADMIN.

This is actually a very important topic. As we’re prepping for the exam, though, I don’t want to go too deep into the details of all these settings. Let’s have a look at a few of them.

We’ll start off by creating a simple pod, without a security context.

apiVersion: v1
kind: Pod
metadata:
  name: simple-security
spec:
  containers:
    - name: basic
      image: busybox
      command:
        - sleep
        - "3600"

Let’s create this with the command kubectl create -f simple-security.yaml – and then exec into our container with kubectl exec -it simple-security sh

Once in our container, let’s have a look at our current processes and user. You’ll see everything is running as root, and you’re logged in as root:

/ # ps aux
PID   USER     TIME  COMMAND
    1 root      0:00 sleep 3600
   59 root      0:00 sh
   64 root      0:00 ps aux
/ # id
uid=0(root) gid=0(root) groups=10(wheel)
/ # exit

Let’s now change our pod definition, to include a user and group id:

apiVersion: v1
kind: Pod
metadata:
  name: user-security
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: basic
      image: busybox
      command:
        - sleep
        - "3600"

Let’s now do the same: create the container, exec into it, and look at our processes and user.

kubectl create -f user-security.yaml
pod/user-security created
kubectl exec -it user-security sh
/ $ ps aux
PID   USER     TIME  COMMAND
    1 1000      0:00 sleep 3600
    6 1000      0:00 sh
   11 1000      0:00 ps aux
/ $ id
uid=1000 gid=3000 groups=2000

Let’s have a look at what we did here: we defined a securityContext at the pod level. This means all containers in our pod will run as that user. Let’s now have a look at what happens when we combine both a pod-level securityContext and one specifically on our container:

apiVersion: v1
kind: Pod
metadata:
  name: container-security
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: basic
      image: busybox
      command:
        - sleep
        - "3600"
      securityContext:
        runAsUser: 666
kubectl create -f container-security.yaml
pod/container-security created
kubectl exec -it container-security sh
/ $ ps aux
PID   USER     TIME  COMMAND
    1 666       0:00 sleep 3600
    6 666       0:00 sh
   11 666       0:00 ps aux
/ $ id
uid=666 gid=3000 groups=2000

The outcome is as expected: the group ID and groups come from the pod-level definition, but the more fine-grained container-level setting wins for the user ID.
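
User and group IDs are only part of what a securityContext can do. As a quick sketch of some of the other container-level settings from the list above (the pod name and the specific capability choices are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: capabilities-security
spec:
  containers:
    - name: basic
      image: busybox
      command:
        - sleep
        - "3600"
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
          add: ["NET_ADMIN"]

Note that capabilities can only be set at the container level, not at the pod level.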

Define an application’s resource requirements

Application resource requirements – and constraints – are a critical part of a Kubernetes application definition. They tell the Kubernetes system how many resources should be ‘reserved’ for each container in the pods of a deployment – and they can set maxima as well. Additionally, Kubernetes also allows you to set quotas at the namespace level, so a certain namespace cannot exceed a certain amount of resources.
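
As a quick illustration of such a namespace-level quota (we won’t use this further in this post; the name and numbers are purely illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: default
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi

With this in place, the combined requests and limits of all pods in the default namespace cannot exceed these values. Back to the per-container settings: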

There are four definitions for resource requirements:

  • CPU requests: Expressed as the fraction of a core that is reserved for the container. This can be expressed as a decimal number (0.1) or as a ‘milli’ value (100m). 0.1 and 100m are equivalent.
  • Memory requests: Expressed as the number of bytes of memory to be reserved for the container. You can express this either as an actual number of bytes, as an E, P, T, G, M, K value or as an Ei, Pi, Ti, Gi, Mi, Ki value. Wondering about the difference between M and Mi? M is 10-based and Mi is 2-based, meaning M is 1,000,000 and Mi is 1,048,576 (see the small example after this list).
  • CPU limits: The fraction of a core that the container can actually be scheduled for. Enforcement works in periods of 100ms: the limit, expressed as a fraction of a core, is multiplied by 100ms to give the CPU time the container may use per period (100m means this container can be scheduled on the CPU for 10ms per 100ms).
  • Memory limits: The maximum amount of memory that a container is allowed to use. If this value is exceeded, your container might be terminated.
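
To make the difference between M and Mi concrete, here is a small illustrative resources snippet with the byte values spelled out in comments:

resources:
  requests:
    memory: "500M"   # 500 x 1,000,000 bytes = 500,000,000 bytes
    cpu: "500m"      # equivalent to 0.5
  limits:
    memory: "500Mi"  # 500 x 1,048,576 bytes = 524,288,000 bytes (roughly 5% more than 500M)
    cpu: "1"         # equivalent to 1000m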

The kubernetes scheduler will work with the requests and limits to schedule pods across your cluster. The sum of the requests for containers in the pods running on a node will never exceed the total available resources on that node. This means that on a 2 core machine, a maximum of 4 pods with a request of 500m will be scheduled.

Please take into account that requests are reservations, and are not linked to the actual usage on your cluster. With requests set too high, you could end up running a very underutilized cluster.
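
You can check how much of a node’s capacity is already reserved by requests with kubectl describe node. The node name is specific to your cluster, and the output below is trimmed and purely illustrative:

kubectl describe node <your-node-name>
...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource   Requests      Limits
  --------   --------      ------
  cpu        1100m (57%)   2 (105%)
  memory     500Mi (9%)    1Gi (19%)
...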

CPU settings

Let’s start playing around a little with CPU requirements first:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"

What you see in the YAML above is that we create a stress container. We limit it to 1 CPU, with a reservation of 0.5 CPU, while we’re running stress with 2 CPUs. Let’s look at how much CPU is actually being used:

kubectl create -f cpu-requests.yaml
pod/cpu-demo created
kubectl top pods
NAME       CPU(cores)   MEMORY(bytes)
cpu-demo   990m         1Mi

One thing I want you to notice here: when exceeding the CPU limit, the pod kept running (i.e. wasn’t killed) and was simply throttled. This is different with memory usage, as we’ll see next.

Memory settings

Let’s start by creating a pod that will try to allocate 150MB of memory, while being allowed 200Mi. This should work without any glitches:

apiVersion: v1
kind: Pod
metadata:
  name: memory-low
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

We can create this and then watch the memory utilization:

kubectl create -f memory-low.yaml
kubectl top pods
NAME         CPU(cores)   MEMORY(bytes)
memory-low   40m          151Mi

And if we run kubectl get pods, we’ll see zero restarts (as expected):

NAME         READY   STATUS    RESTARTS   AGE
memory-low   1/1     Running   0          3m56s

We can now create a second pod definition with a memory limit of 125Mi in place. This will create issues, and will cause Kubernetes to kill and restart our pod.

apiVersion: v1
kind: Pod
metadata:
  name: memory-high
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "125Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

We can create this pod, and we’ll quickly see that it gets killed and restarted:

kubectl create -f memory-high.yaml
kubectl get pods --watch

NAME          READY   STATUS      RESTARTS   AGE
memory-high   0/1     OOMKilled   0          6s
memory-low    1/1     Running     0          6m
memory-high   0/1     OOMKilled   1          7s
memory-high   0/1     CrashLoopBackOff   1          8s
memory-high   0/1     OOMKilled          2          21s

As you can see, the container is killed for exceeding its memory limit. Notice how this is different from our earlier CPU scenario?
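
If you want to confirm why the container was killed, kubectl describe pod shows the reason in the container’s last state. Trimmed, illustrative output:

kubectl describe pod memory-high
...
    Last State:     Terminated
      Reason:       OOMKilled
...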

Create and consume secrets

If you’ve followed along with our ConfigMaps examples earlier, you’ll notice a lot of similarities to secrets. In essence, secrets are ConfigMaps whose values are base64 obfuscated. There are a couple of ways to create secrets, the key question being who does the base64 encoding:

  • Using kubectl, referencing a file. Kubernetes will do the base64 encoding
  • Using kubectl, using literal values. Kubernetes will do the base64 encoding
  • Using a YAML file, you need to do the base64 encoding.

Afterwards you can consume secrets in your pods; we’ll check that out after we’ve created a few. Let’s have a look at all three mechanisms:

Using kubectl, referencing a file:

Let’s quickly create a file with the value of a secret:

echo 'love' >secretingredient.txt

We can now create a secret referencing that file:

kubectl create secret generic recipe --from-file=./secretingredient.txt 

We can then look at the secret, which will look like this:

kubectl get secret recipe -o yaml
apiVersion: v1
data:
  secretingredient.txt: bG92ZQo=
kind: Secret
metadata:
  creationTimestamp: "2019-07-17T04:17:49Z"
  name: recipe
  namespace: default
  resourceVersion: "3404565"
  selfLink: /api/v1/namespaces/default/secrets/recipe
  uid: d63c536a-a849-11e9-aedb-0aee3919cf84
type: Opaque

As you can see, the text itself is obfuscated. We can decode this using base64:

echo "bG92ZQo=" | base64 -d -
love

Using kubectl, using literal values

We can do the same using a literal-value, using the following kubectl:

kubectl create secret generic literalsecret --from-literal=mysecret=ilovekubernetes

We can then do the same as before, get the secret and decode the value:

kubectl get secret literalsecret -o yaml
apiVersion: v1
data:
  mysecret: aWxvdmVrdWJlcm5ldGVz
kind: Secret
metadata:
  creationTimestamp: "2019-07-17T04:22:12Z"
  name: literalsecret
  namespace: default
  resourceVersion: "3404979"
  selfLink: /api/v1/namespaces/default/secrets/literalsecret
  uid: 731c8496-a84a-11e9-aedb-0aee3919cf84
type: Opaque

echo "aWxvdmVrdWJlcm5ldGVz" | base64 -d -
ilovekubernetes

Using a YAML file

The last way to create a secret is via a YAML file. This YAML file expects your secret to be base64 encoded already, so let’s first base64 encode a string:

echo "heavymetal" | base64 -
aGVhdnltZXRhbAo=

We can then create this secret, and do the same as before, trying to decode it:

apiVersion: v1
kind: Secret
metadata:
  name: music
type: Opaque
data:
  myfavorite: aGVhdnltZXRhbAo=
kubectl create -f secret.yaml
secret/music created
kubectl get secret music -o yaml
apiVersion: v1
data:
  myfavorite: aGVhdnltZXRhbAo=
kind: Secret
metadata:
  creationTimestamp: "2019-07-17T04:26:20Z"
  name: music
  namespace: default
  resourceVersion: "3405365"
  selfLink: /api/v1/namespaces/default/secrets/music
  uid: 070d7b14-a84b-11e9-aedb-0aee3919cf84
type: Opaque

echo "aGVhdnltZXRhbAo=" | base64 -d -
heavymetal

Let’s now have a look at how we can use these secrets in our pods:

Using secrets

Let’s have a look at how we can use the three secrets we created:

  • Using a file: recipe
  • Using a literal: literalsecret
  • Using a YAML file: music

We’ll consume all three secrets in a single pod. We’ll consume the file secret twice, once as an environment variable and once as a volume.

apiVersion: v1
kind: Pod
metadata:
  name: secretsconsumed
spec:
  containers:
    - name: basic
      image: busybox
      command:
        - sleep
        - "3600"
      env:
        - name: FROMFILE
          valueFrom:
            secretKeyRef:
              name: recipe
              key: secretingredient.txt
        - name: FROMLITERAL
          valueFrom:
            secretKeyRef:
              name: literalsecret
              key: mysecret
        - name: FROMYAML
          valueFrom:
            secretKeyRef:
              name: music
              key: myfavorite
      volumeMounts:
        - name: secret-volume
          mountPath: "/tmp/supersecret"
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: recipe

We can then create our pod, exec into it, have a look at the environment variables that were set, and read our file from the volume.

kubectl create -f secretsconsumed.yaml
kubectl exec -it secretsconsumed sh
/ # echo $FROMFILE
love
/ # echo $FROMLITERAL
ilovekubernetes
/ # echo $FROMYAML
heavymetal
/ # cat /tmp/supersecret/secretingredient.txt
love
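
And since the recipe secret is also mounted as a volume, every key in that secret should show up as a file under the mount path:

/ # ls /tmp/supersecret
secretingredient.txt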

Understand ServiceAccounts

A service account provides an identity for the processes that run in a pod. Among other things, this allows processes in a pod to communicate with the API server. By default, each pod gets assigned the default service account.
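
You can see the service accounts in your namespace with a quick kubectl get (the output will differ per cluster, but you should at least see the default one):

kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         42d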

Let’s go ahead and test this out (instructions partly from this stackoverflow post):

apiVersion: v1
kind: Pod
metadata:
  name: simple-pod
spec:
  containers:
    - name: basic
      image: radial/busyboxplus:curl
      command:
        - sleep
        - "3600"

We’ll create this pod and connect to the kubernetes API:

kubectl create -f simplepod.yaml
kubectl exec -it simple-pod sh
KUBEHOST="aksworksho-akschallenge-d19ddd-c565b193.hcp.westus2.azmk8s.io" #change this to your cluster
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
      https://$KUBEHOST/api/v1/namespaces/default/pods/$HOSTNAME

The response to my curl (as you will see as well if your cluster is RBAC enabled) is:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods \"simple-pod\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "simple-pod",
    "kind": "pods"
  },
  "code": 403

Let’s try to solve this by giving our default service account access to the API server. As I am lazy, I’ll show you something new: how you can create multiple objects from a single YAML file by using --- between the objects.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default 
  apiGroup: ""
roleRef:
  kind: Role 
  name: pod-reader  
  apiGroup: ""

What we’ve done here is create a Role called pod-reader that allows its subjects to get, watch and list pods in the default namespace. Next, we create a RoleBinding that links our default ServiceAccount to that Role.

Let’s create this and see what it looks like from our pod. By the way, notice how we didn’t kill our pod; we only made API-level changes.

kubectl create -f solution.yaml
kubectl exec -it simple-pod sh
KUBEHOST="aksworksho-akschallenge-d19ddd-c565b193.hcp.westus2.azmk8s.io" #change this to your cluster
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
      https://$KUBEHOST/api/v1/namespaces/default/pods/$HOSTNAME

And our response is the full json definition of our pod. #SUCCESS
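
Now that the RoleBinding is in place, you can also verify this kind of access from outside the pod with kubectl auth can-i, impersonating the service account:

kubectl auth can-i get pods --as=system:serviceaccount:default:default -n default
yes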

We can also create our own service accounts, and then give them permissions. Let’s try the same with a new service account, the same Role, a new RoleBinding and a new pod.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: thanos-read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: thanos 
  apiGroup: ""
roleRef:
  kind: Role 
  name: pod-reader  
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: thanos
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: thanos-pod
spec:
  serviceAccountName: thanos
  containers:
    - name: basic
      image: radial/busyboxplus:curl
      command:
        - sleep
        - "3600"

Then, we’ll exec into our new pod, and try our curl again:

kubectl exec -it thanos-pod sh
KUBEHOST="aksworksho-akschallenge-d19ddd-c565b193.hcp.westus2.azmk8s.io" #change this to your cluster
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
      https://$KUBEHOST/api/v1/namespaces/default/pods/$HOSTNAME

And again we get a full JSON definition of our pod. #SUCCESS

Summary of configuration

In part 3 of my CKAD series we dove into configuration. We touched on configmaps and secrets (both pretty similar), securitycontext, resource constraints and service accounts.

Up to part 4?
