This is part 2 in a multi-part series on my CKAD learning experience. For the other parts in the series, please check out the following links:
- Part 1: intro, exam topics and my study plan
- Part 3: Configuration
- Part 4: multi-container pods
- Part 5: Observability
- Part 6: Pod Design
- Part 7: Networking
As I mentioned in the intro to this blog series, my study plan was to cover the topics in the official curriculum for the CKAD one by one – and share my learnings with you. Topic number 1 is Core Concepts, with 2 learning objectives:
- Understand Kubernetes API primitives
- Create and configure basic Pods
Let’s cover each of those two topics.
Kubernetes API primitives
Kubernetes is a container orchestrator. To orchestrate those containers, Kubernetes essentially behaves like a desired state controller: you define your desired state to the Kubernetes system, and the system takes the necessary actions to achieve that desired state.
Communication with Kubernetes happens through the Kubernetes REST API. Client tools – such as kubectl – convert your commands into REST calls to the Kubernetes API. That API is continuously updated, and you might have come across the following ‘versions’ of the API (or versions of API groups):
- Alpha: this API is in development; changes to the API and bugs are highly likely, and the feature might even be dropped in the future.
- Beta: the code for this API is well tested, and the feature is considered safe. However, the schema and/or semantics might still change and bugs might occur.
- Stable: Stable versions of features will appear in released software for many subsequent versions.
The API itself evolves with every Kubernetes release, and the architecture of the API has also evolved with the introduction of API groups. The initial Kubernetes API, which is available on the path /api/v1, wasn’t easy to extend. API groups make it easier to extend the Kubernetes API and to evolve certain functionality without changing the main (core) API. This also makes it possible to extend the API with custom resources through CRDs (CustomResourceDefinitions).
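If you have kubectl pointed at a cluster, you can see these groups and versions for yourself; the exact output depends entirely on your cluster version, so take this as an illustrative sketch:

```shell
# List all API group/version pairs the cluster serves (requires a working kubeconfig).
kubectl api-versions
# The core API shows up as plain "v1"; named groups appear as "group/version",
# e.g. "apps/v1", or alpha/beta variants such as "autoscaling/v2beta2".

# The same information, together with the resources that live in each group:
kubectl api-resources
```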
Kubernetes Objects are persistent entities in the Kubernetes system. By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s desired state. An object could be a pod, service, deployment…
Every kubernetes object has two nested object fields:
- The spec, which the end-user provides.
- The status, which the kubernetes control plane generates.
It is Kubernetes’s job to make sure status matches spec.
One thing I learned reading these docs is that the Kubernetes API itself talks JSON (not YAML). While you typically write YAML to interface with kubectl, kubectl converts this YAML into JSON when talking to the API. The docs describe three object management techniques:
- Imperative commands, e.g. `kubectl run nginx --image nginx`
- Imperative object configuration, e.g. the `kubectl create/delete/replace` commands
- Declarative object configuration, i.e. the `kubectl apply` command
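You can actually watch this YAML-to-JSON conversion happen: raising kubectl’s log verbosity makes it print the HTTP requests it sends, including the JSON request body. A sketch (the exact log format may differ slightly between kubectl versions):

```shell
# Apply a YAML manifest while logging the HTTP traffic kubectl generates.
# At verbosity level 8, kubectl prints each request's URL and its JSON body.
kubectl apply -f simplepod.yaml -v=8
# Look for lines resembling:
#   POST https://<api-server>/api/v1/namespaces/default/pods
#   Request Body: {"apiVersion":"v1","kind":"Pod",...}
```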
I never paid close attention to the difference between the last two. I’ve done kubectl create and kubectl apply interchangeably, and have gotten errors when I ran create against an object that I wanted to change. The kubernetes docs on this topic list a number of pros/cons for each approach if you want to learn more.
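The difference between the last two is easy to see on a live cluster; the file name here is just the example used later in this post:

```shell
# Imperative object configuration: fails if the object already exists.
kubectl create -f simplepod.yaml   # first run: pod created
kubectl create -f simplepod.yaml   # second run: "AlreadyExists" error

# Declarative object configuration: computes a diff and patches in place,
# so re-running it after editing the file simply applies your changes.
kubectl apply -f simplepod.yaml
```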
Object names need to be unique per object type per namespace. You cannot have two pods with the same name in the same namespace, but you can across namespaces. Deleting an object and recreating it with the same name is allowed. However, the Kubernetes API also generates a UID per object that is historically unique.
If you want to get the UID of a resource, you can do so with the following command:

```shell
kubectl get pod captureorder-5d6fd597d4-jhdqf -o json | jq .metadata.uid
```
Namespaces are a mechanism to create ‘virtual clusters’ on top of one Kubernetes cluster. As I mentioned before, names only need to be unique within one namespace. You can also attach resource quotas to namespaces.
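As an illustration of attaching a resource quota to a namespace, here’s a sketch of a ResourceQuota for a hypothetical `dev` namespace (the name and limits are made up):

```yaml
# Caps the number of pods and the total CPU requests in the "dev" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
```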
To use namespaces, just add the `--namespace` (or `-n`) flag to your command, e.g. `kubectl get pods -n kube-system`.
Three cool tricks with namespaces that might come in handy:
- How to get resources in all namespaces: via the `--all-namespaces` flag
- How to set the default namespace: `kubectl config set-context --current --namespace=xxx`
- How to view the default namespace in your kube config: `kubectl config view | grep namespace`
A final note on namespaces: they are also important for DNS resolution. Every service gets a DNS entry in the form service.namespace.svc.cluster.local. If you need to reach a service in a different namespace, you’ll need to use the FQDN rather than just the plain service name.
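To see this in action, you can resolve a service name from inside the cluster; the service name `azure-vote-back` and the `default` namespace below are just examples:

```shell
# Run a throwaway pod with DNS tools and resolve a service name from inside it.
kubectl run -it dnstest --image=busybox --rm -- sh
# then, inside the container:
nslookup azure-vote-back                            # short name: same namespace only
nslookup azure-vote-back.default.svc.cluster.local  # FQDN: works from any namespace
```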
Labels, selectors and annotations
Within Kubernetes, you can use labels and selectors to organize and select sets of objects. If you need to create a service and link that service to a certain deployment, you label the deployment, and use a selector in the service to select the pods that deployment generated. For example:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
```
As you can see in the example above, we create a deployment with the label app: azure-vote-front. Afterwards we create a service, which selects the pods of that deployment with the selector app: azure-vote-front.
Some facts about labels and selectors:
- They are by design not unique (multiple pods/deployments/services can have the same labels attached to them)
- When selecting a label through a selector, you can use an equals (through = or ==, they have the same effect) or a does-not-equal (through !=). You can also use sets, such as the following examples:
  - app in (web, db, backend)
  - app notin (monitoring, backup)
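These selectors work on the kubectl command line too; the label values here are hypothetical:

```shell
# Equality-based selectors
kubectl get pods -l app=azure-vote-front
kubectl get pods -l app!=monitoring

# Set-based selectors (quote them so the shell leaves the parentheses alone)
kubectl get pods -l 'app in (web, db, backend)'
kubectl get pods -l 'app notin (monitoring, backup)'
```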
On the other hand you have annotations, which are arbitrary metadata. That doesn’t mean, however, that this metadata can’t have a technical impact. For example, if you create a service with an Azure load balancer, you use an annotation to make the load balancer internal:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
```
Create and configure basic Pods
A pod is the basic execution unit within Kubernetes. A pod can contain multiple containers. Within a pod, containers share networking and can share storage. A pod is scheduled as a whole on one of the nodes in your cluster, meaning all containers in a pod run on the same node. You can, however, run multiple instances of a pod, and those instances can be spread across multiple nodes.
By design, pods are ephemeral. They can be killed or moved to another node by the system, meaning any state not stored on a persistent store will be lost. Typically, you won’t create pods directly; rather, you’ll create a deployment, which will create pods for you.
A simple, basic pod definition looks like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: basic-pod
spec:
  containers:
  - name: basic
    image: nginx
```
You can save this snippet as a file (e.g. simplepod.yaml), and then create the pod in your Kubernetes cluster using the following command:

```shell
kubectl apply -f simplepod.yaml
```

If you then run kubectl get pods, you’ll see that the pod was created. If you want to verify that nginx is running, you can run the following commands to verify everything works in the pod:
```shell
kubectl exec -it basic-pod -- /bin/bash
# then, inside the container:
apt update
apt install curl -y
curl localhost
```
When monitoring a pod – and more specifically traffic to a pod – Kubernetes uses readinessProbes and livenessProbes: a readinessProbe decides whether a pod should receive traffic, while a livenessProbe decides whether a container should be restarted. As we haven’t created a service yet, a readinessProbe isn’t very interesting, but we can play around with a livenessProbe. Let’s try the following with our basic-pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: basic-pod
spec:
  containers:
  - name: basic
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
```
As adding a livenessProbe changes a field that cannot be updated on a running pod, we have to delete and recreate the pod:
```shell
kubectl delete -f simplepod.yaml
kubectl apply -f simplepod.yaml
kubectl get pods --watch
```
Now open up a second terminal, and do the following:
```shell
kubectl exec -it basic-pod -- /bin/bash
# then, inside the container:
mv /usr/share/nginx/html/index.html /usr/share/nginx/html/index.html.backup
```
This last bit will make our livenessProbe fail. You will notice in the first terminal (where we are watching the pods in the cluster) that Kubernetes will restart our pod. In that restarted pod, we’ll have the original filesystem again (our move command will have been ‘undone’), and the livenessProbe will be successful again. If you want even more info on the restart of your pod, execute the following command:
```shell
kubectl describe pod/basic-pod
```
Its output should look like this, indicating the failed probe and the pod restart:
```
Type     Reason     Age               From                                         Message
----     ------     ----              ----                                         -------
Normal   Scheduled  62s               default-scheduler                            Successfully assigned default/basic-pod to aks-nodepool1-14406582-vmss000001
Warning  Unhealthy  9s (x3 over 29s)  kubelet, aks-nodepool1-14406582-vmss000001   Liveness probe failed: HTTP probe failed with statuscode: 403
Normal   Killing    9s                kubelet, aks-nodepool1-14406582-vmss000001   Container basic failed liveness probe, will be restarted
Normal   Pulling    8s (x2 over 60s)  kubelet, aks-nodepool1-14406582-vmss000001   Pulling image "nginx"
Normal   Pulled     7s (x2 over 58s)  kubelet, aks-nodepool1-14406582-vmss000001   Successfully pulled image "nginx"
Normal   Created    6s (x2 over 58s)  kubelet, aks-nodepool1-14406582-vmss000001   Created container basic
Normal   Started    6s (x2 over 58s)  kubelet, aks-nodepool1-14406582-vmss000001   Started container basic
```
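You can also read the restart count straight from the pod’s status field – the same status object the control plane maintains for every object, as discussed earlier:

```shell
# Show the restart count of the first container in basic-pod.
kubectl get pod basic-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'
```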
I believe this covers our first exam topic, Core Concepts. We briefly touched on the core API primitives, and then created a basic pod, in which we also played around with a livenessProbe.
Up to our next topic in part 3: Configuration!