Examples

Running a Job

The simplest workload you can set up in RAIL is running a Job. It will schedule a Pod that runs its container to completion and then stops.

In this example we will run a Job that runs a Python program computing a large number of digits of 𝜋. We will use the standard Python container from Docker Hub and a ConfigMap to provide the Python script we want to run. We will also use the kustomize tool as a simpler way to create the ConfigMap.
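For orientation, a kustomization.yaml for this kind of setup typically looks something like the sketch below; the resource file name and the ConfigMap name are illustrative, and the actual file lives in the repository referenced next.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Include the Job resource and generate a ConfigMap containing the script.
resources:
- job.yaml

configMapGenerator:
- name: app
  files:
  - pi.py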

We have prepared the example in the <https://git.app.uib.no/gisle/k8s-pi-job> repo. Here you can inspect the content of the kustomization.yaml file, which results in two resources: a Job and a ConfigMap. You can view them by running this on your workstation:

kubectl kustomize git@git.app.uib.no:gisle/k8s-pi-job.git?ref=f1f411f93f953c01b24b9b0e0b172afd2185d743

This will output the Kubernetes resources you need for this job.

This output can then be piped to kubectl apply -f - to create the objects in the cluster.
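For example, combining the two commands above:

kubectl kustomize git@git.app.uib.no:gisle/k8s-pi-job.git?ref=f1f411f93f953c01b24b9b0e0b172afd2185d743 | kubectl apply -f -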

You can also apply this directly with:

kubectl apply -k git@git.app.uib.no:gisle/k8s-pi-job.git?ref=f1f411f93f953c01b24b9b0e0b172afd2185d743

Then inspect the state of the job with:

kubectl describe job pi
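You can also list the pods that the job creates; the Job controller labels them with job-name, so for example:

kubectl get pods -l job-name=pi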

When the job finishes you can read the output generated with:

kubectl logs job/pi

The job and the pod are automatically deleted after 10 minutes (as specified by the ttlSecondsAfterFinished setting). If you want to clean up before then, run this command:

kubectl delete -k git@git.app.uib.no:gisle/k8s-pi-job.git?ref=f1f411f93f953c01b24b9b0e0b172afd2185d743

For reference, this is the resource specification file describing the job above:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      securityContext:
        runAsUser: 10000
      restartPolicy: Never
      containers:
      - name: pi
        image: python:3
        command: ["python", "/app/pi.py", "10000"]
        volumeMounts:
        - name: app
          mountPath: /app

        # RAIL requires us to specify how resource hungry each container is
        resources:
          requests:
            cpu: 200m
            memory: 5Mi
          limits:
            cpu: 200m
            memory: 20Mi

        # This states the defaults for the securityContext and will get rid of
        # the warning that you should set these values.  These values can not be
        # set at the Pod-level, so they need to be specified here.
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault

      # The "app" volume provides the pi.py script from the generated ConfigMap.
      # (The ConfigMap name here is illustrative; see the repository for the
      # actual name, which kustomize rewrites to the generated name with a
      # content hash suffix.)
      volumes:
      - name: app
        configMap:
          name: app

In order to comply with the demands of the RAIL platform we had to use the ":3" tag for the image instead of defaulting to ":latest", which is not allowed, and we had to override the user that it runs under with securityContext.runAsUser since this container by default runs as root. The id 10000 is just conventional; any number other than 0 will do. You can put the securityContext.runAsUser specification either at the Pod-level, as done here, or inside the container record.
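For example, moving it into the container record would look like this fragment:

      containers:
      - name: pi
        securityContext:
          runAsUser: 10000    # merged into the container's existing securityContext block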

The resources block must be specified for each container in RAIL. It declares the resources the container is expected to consume (requests) as well as upper limits (limits); for example, cpu: 200m means 0.2 of a CPU core and memory: 20Mi means 20 mebibytes. More information is available in Container Resources Specification.

Another approach

Instead of using the base Python container and providing the script using a ConfigMap, we might want to build a custom container that contains all we need and run that. This requires a Dockerfile and a system that can build and publish the image to a container registry – and our git.app.uib.no service provides just this. The <https://git.app.uib.no/gisle/k8s-pi-job> repository is set up with both the Dockerfile and the build instructions for producing the image in the .gitlab-ci.yml file.

This is the Dockerfile required:

FROM python:3-alpine
RUN mkdir /app
COPY pi.py /app
WORKDIR /app
USER 10000
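If you want to test the image locally before letting GitLab CI build it, you can for example build and run it with Docker; the pi-job tag here is arbitrary:

docker build -t pi-job .
docker run --rm pi-job python pi.py 100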

The essence of the .gitlab-ci.yml is simply:

build_container:
  stage: build
  variables:
    dir: .
    image_name: pi-job
    dockerfile: Dockerfile
  extends:
  - .build_container_image
  rules:
  - if: '$REUSE_CONTAINER == null'

This will cause the container image pi-job to be built and published to the git.app.uib.no:4567 registry. The full address of the image in the registry will then be git.app.uib.no:4567/gisle/k8s-pi-job/pi-job:<tag>.
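If the project is public (as assumed further below for pulling without authentication), you can verify that the image is available by pulling it, for example with the tag used in the next resource file:

docker pull git.app.uib.no:4567/gisle/k8s-pi-job/pi-job:f32c04f6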

To actually run this job we now require this one Kubernetes resource file:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: git.app.uib.no:4567/gisle/k8s-pi-job/pi-job:f32c04f6
        command: ["python", "pi.py", "10000"]

        # RAIL requires us to specify how resource hungry each container is
        resources:
          requests:
            cpu: 200m
            memory: 5Mi
          limits:
            cpu: 200m
            memory: 20Mi

        # This states the defaults for the securityContext and will get rid of
        # the warning that you should set these values.  These values can not be
        # set at the Pod-level, so they need to be specified here.
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault

It’s the same as above, but simplified: we no longer have to mess with volumes to mount the ConfigMap, and we don’t have to override securityContext.runAsUser either, since our custom container has already overridden the default user id.

To run this job, just pipe this file to kubectl apply -f -, and to clean up the job, pipe it to kubectl delete -f -.
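For example, if you saved the specification as pi-job.yaml (the filename is arbitrary):

kubectl apply -f pi-job.yaml
kubectl delete -f pi-job.yaml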

The resource file above works when the registry is connected to a public Git project, which also gives you a public registry that can be fetched from without providing authentication information. If you publish from a protected or private project, then you need to provide authentication information by setting up the imagePullSecrets list. We have a separate document on UiB GitLab as Container Image Registry that explains how to get an access token and configure authentication.
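As a sketch, assuming you have created a registry secret named regcred in the namespace (the name is arbitrary), the reference goes into the pod template like this:

spec:
  template:
    spec:
      imagePullSecrets:
      - name: regcred  # a kubernetes.io/dockerconfigjson secret in the same namespace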

Running a Service

A Service is a Kubernetes abstraction for an interface that clients connect to in order to reach an application. The most common interface is the HTTP protocol that web servers provide.

In this example we will set up a service in front of an instance of the nginx application as an example of a web server. This is the Service resource specification required for that:

apiVersion: v1
kind: Service
metadata:
  name: xyzzy
spec:
  selector:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: xyzzy
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP

Save this to a file and pass it to kubectl apply. This makes the service available inside the current namespace in the cluster, which means that client code running in containers in pods in the namespace can simply fetch the front page content from http://xyzzy. The address is now available, but it’s not functional yet, as we have not provided any backends that actually implement the server side component.

Let’s first explain what this service specification expresses.

  • metadata.name is the handle to reference this service as well as the “hostname” that clients connect to.

  • spec.ports[].port is the port that clients use. Here it’s just the standard HTTP port of 80.

  • spec.selector is a set of labels that together will determine which pods will be used as backends to implement this service. Each web request will be routed to one of the pods selected. The label names here are from the well-known labels where app.kubernetes.io/name is the application name and app.kubernetes.io/instance is the name given to this instance.

  • spec.ports[].targetPort is the name of the port on the selected Pod that the request will be forwarded to. Using a name instead of a number here allows the pod itself to declare where it wants to be contacted. This could be port 80, but some backend servers choose to use a different port.
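Once backends exist (they are created by the Deployment further below), you can also test the service from your own workstation without going through an Ingress by port-forwarding; the local port 8080 here is arbitrary, and the two commands go in separate terminals since port-forward keeps running in the foreground:

kubectl port-forward service/xyzzy 8080:80
curl -i http://localhost:8080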

Ingress: Exposing a service to the outside world

Most web servers also want to be available to the world outside of the cluster. In the scary world outside it’s also a requirement to use secure HTTP, aka https or HTTP over TLS. In RAIL we can set this up with an Ingress resource that looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyzzy
  annotations:
    cert-manager.io/cluster-issuer: harica-temp
    cert-manager.io/private-key-algorithm: ECDSA
    cert-manager.io/usages: "digital signature"
    cert-manager.io/private-key-rotation-policy: Always
    #nginx.ingress.kubernetes.io/whitelist-source-range: 129.177.0.0/16,2001:700:200::/48
spec:
  ingressClassName: nginx
  rules:
  - host: xyzzy.osl1.prod.rail.uib.no
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: xyzzy
            port:
              name: http
  tls:
  - hosts:
    - xyzzy.osl1.prod.rail.uib.no
    secretName: xyzzy.osl1.prod.rail.uib.no-tls

This will register the requested DNS domain name and create a valid TLS certificate for it. It will also forward all traffic received for that domain name to the service xyzzy set up above. The example here assumes that we run in the osl1-prod RAIL cluster. The domain names for other RAIL clusters will have to be adjusted accordingly.

Attention

The cluster-issuer sectigo-clusterwide is defunct. Certificates already issued by Sectigo remain valid in all clusters until January 2026, but you can not deploy new TLS certificates from Sectigo.

Important

At this point in time there is only one available cluster-issuer, harica-temp. This issuer only provides TLS certificates for cluster-specific URLs. Thus, you can not issue certificates for domains like .uib.no directly; you need to include the cluster name, for example myapp.bgo1.prod.rail.uib.no, or pre-register your application in DNS to point to a specific cluster, since harica-temp requires an HTTP challenge for every name in the certificate request.

The nginx.ingress.kubernetes.io/whitelist-source-range annotation is commented out in the example above. You can’t enable it until after the certificate has been issued. Harica must be able to reach the server to verify the certificate.

Save this specification to a file and pass it to kubectl apply. Then wait a minute for the certificate and hostname to be set up. You can inspect the output of kubectl get ingress xyzzy to see when the address has been allocated.
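For example, with the resources in this example (cert-manager normally names the Certificate resource after the secretName in the tls block):

kubectl get ingress xyzzy
kubectl describe certificate xyzzy.osl1.prod.rail.uib.no-tls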

At this point you can test out the service from a host on the Internet by running:

curl -i http://xyzzy.osl1.prod.rail.uib.no

You will see that this just redirects the client to https://, so let’s try that instead:

curl -i https://xyzzy.osl1.prod.rail.uib.no

and this should then output something like this:

HTTP/2 503
date: Sun, 21 Apr 2024 22:12:50 GMT
content-type: text/html
content-length: 190
strict-transport-security: max-age=15724800; includeSubDomains

<html>
<head><title>503 Service Temporarily Unavailable</title></head>
...

which is as expected, since we have still not provided any backends to actually implement this service.

Deployment: Pods that run the backends

Finally, here is the specification of the Deployment that will create the Pods that run the backends. We also need to set up a matching NetworkPolicy so that the Pods are able to receive incoming traffic on port 8080, which is the port that the container we run here listens on.

For a discussion of the resources and securityContext blocks, see the article about setting up a simple example Job.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-xyzzy
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        app.kubernetes.io/instance: xyzzy
    spec:
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged:1.25
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

        # RAIL requires us to specify how resource hungry each container is
        resources:
          requests:
            cpu: 100m
            memory: 20Mi
          limits:
            cpu: 500m
            memory: 100Mi

        # This states the defaults for the securityContext and will get rid of
        # the warning that you should set these values.  These values can not be
        # set at the Pod-level, so they need to be specified here.
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-xyzzy
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: http
    from: