Running a Service
A Service is a Kubernetes abstraction for an interface that clients can connect to in order to reach an application. The most common interface is HTTP, the protocol that web servers provide.
In this example we will set up a service that talks to an instance of the nginx application as an example of a web server. This is the Service resource specification required for that:
apiVersion: v1
kind: Service
metadata:
  name: xyzzy
spec:
  selector:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: xyzzy
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
Save this to a file and pass it to kubectl apply. This makes the service available inside the current namespace in the cluster, which means that client code that runs in containers in pods in the namespace can simply fetch the front page content from http://xyzzy. The address is now available, but it is not functional yet, as we have not provided any backends that actually implement the server side component.
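For example, a minimal sketch, assuming the specification was saved as service.yaml (a hypothetical filename; any name works) and using a throwaway pod to test the in-cluster address:
# service.yaml is whatever filename you saved the Service specification as
kubectl apply -f service.yaml
# curl-test is a throwaway pod name; curlimages/curl is a small image that ships curl
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -si http://xyzzy
Until backends exist, the curl command will fail to connect, exactly as described above.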
Let’s first explain what this service specification expresses.
metadata.name
    is the handle used to reference this service, as well as the “hostname” that clients connect to.
spec.ports[].port
    is the port that clients use. Here it is just the standard HTTP port, 80.
spec.selector
    is a set of labels that together determine which pods will be used as backends to implement this service. Each web request will be routed to one of the selected pods. The label names here come from the well-known labels, where app.kubernetes.io/name is the application name and app.kubernetes.io/instance is the name given to this instance.
spec.ports[].targetPort
    is the name of the port on the selected pod that the request will be forwarded to. Using a name instead of a number here allows the pod itself to declare where it wants to be contacted. This could be port 80, but some backend servers choose to use a different port.
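Once the backends from the Deployment further below exist, one way to verify which pods the selector has matched (assuming the Service name used above) is:
# lists the IP addresses and ports of the pods currently matched by the selector
kubectl get endpoints xyzzy
While the ENDPOINTS column is empty, requests to the service have nowhere to go.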
Ingress: Exposing a service to the outside world
Most web servers also want to be available to the world outside of the cluster. In the scary world outside it is also a requirement to use secure HTTP, aka HTTPS or HTTP over TLS. In RAIL we can set this up with an Ingress resource that looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyzzy
  annotations:
    cert-manager.io/cluster-issuer: harica-temp
    cert-manager.io/private-key-algorithm: ECDSA
    cert-manager.io/usages: "digital signature"
    cert-manager.io/private-key-rotation-policy: Always
    #nginx.ingress.kubernetes.io/whitelist-source-range: 129.177.0.0/16,2001:700:200::/48
spec:
  ingressClassName: nginx
  rules:
    - host: xyzzy.osl1.prod.rail.uib.no
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: xyzzy
                port:
                  name: http
  tls:
    - hosts:
        - xyzzy.osl1.prod.rail.uib.no
      secretName: xyzzy.osl1.prod.rail.uib.no-tls
This will register the requested DNS domain name and create a valid TLS certificate for it. It will also forward all traffic received for that domain name to the service xyzzy set up above. The example here assumes that we run from the osl1-prod RAIL cluster; the domain names for other RAIL clusters will have to be adjusted accordingly.
Attention
The cluster-issuer sectigo-clusterwide is defunct. Certificates already issued by Sectigo in all clusters remain valid until January 2026, but you cannot deploy new TLS certificates from Sectigo.
Important
At this point in time, there is only one available cluster-issuer, harica-temp. This issuer only provides TLS certificates for cluster-specific URLs. Thus, you cannot issue certificates for domains like .uib.no; you need to include the cluster name, for example myapp.bgo1.prod.rail.uib.no, or pre-register your application in DNS to point to a specific cluster, since harica-temp requires an HTTP challenge for every name in the certificate request.
The nginx.ingress.kubernetes.io/whitelist-source-range annotation is commented out in the example above. You cannot enable it until after the certificate has been issued, since Harica must be able to reach the server to complete the HTTP challenge that verifies the domain.
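Once the certificate has been issued, you can enable the restriction by removing the comment marker and re-applying the Ingress; the relevant fragment would then read:
metadata:
  annotations:
    # same CIDR ranges as in the commented-out line above
    nginx.ingress.kubernetes.io/whitelist-source-range: 129.177.0.0/16,2001:700:200::/48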
Save this specification to a file and pass it to kubectl apply. Then wait a minute for the certificate and hostname to be set up. You can inspect the output of kubectl get ingress xyzzy to see when the address has been allocated.
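For example, assuming the specification was saved as ingress.yaml (again a hypothetical filename):
# ingress.yaml is whatever filename you saved the Ingress specification as
kubectl apply -f ingress.yaml
# ready when the ADDRESS column is filled in
kubectl get ingress xyzzy
# cert-manager creates a Certificate resource named after the TLS secret; READY turns True once issued
kubectl get certificate xyzzy.osl1.prod.rail.uib.no-tls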
At this point you can test out the service from a host on the Internet by running:
curl -i http://xyzzy.osl1.prod.rail.uib.no
You will see that this just redirects the client to https://, so let’s try that instead:
curl -i https://xyzzy.osl1.prod.rail.uib.no
and this should then output something like this:
HTTP/2 503
date: Sun, 21 Apr 2024 22:12:50 GMT
content-type: text/html
content-length: 190
strict-transport-security: max-age=15724800; includeSubDomains
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
...
which is as expected, since we have still not provided any backends to actually implement this service.
Deployment: Pods that run the backends
Finally here is the specification of the Deployment that will create the Pods that run the backends. We also need to set up a matching NetworkPolicy so that the Pods are able to receive incoming traffic on port 8080, which is the port that the container we run here listens on.
For a discussion of the resources and securityContext settings, see the article about setting up a simple example Job.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-xyzzy
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        app.kubernetes.io/instance: xyzzy
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged:1.25
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          # RAIL requires us to specify how resource hungry each container is
          resources:
            requests:
              cpu: 100m
              memory: 20Mi
            limits:
              cpu: 500m
              memory: 100Mi
          # This states the defaults for the securityContext and will get rid of
          # the warning that you should set these values. These values can not be
          # set at the Pod-level, so they need to be specified here.
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-xyzzy
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xyzzy
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: http
      from:
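As a final check, a sketch assuming the two documents above were saved together as backend.yaml (a hypothetical filename), apply them and repeat the curl test from earlier:
# backend.yaml is whatever filename you saved the Deployment and NetworkPolicy as
kubectl apply -f backend.yaml
# wait until both replicas are Running
kubectl get pods -l app.kubernetes.io/instance=xyzzy
# the 503 from before should now be replaced by the nginx welcome page
curl -i https://xyzzy.osl1.prod.rail.uib.no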