MicroK8s: exposing services via NGINX ingress controller
Introduction
MicroK8s is a great way to quickly spin up a production-grade K8s cluster. It's so lightweight that you could easily run it on your local machine and use it for development purposes, for example.
But how do you go about exposing the services you've deployed on your cluster, so that they can be reached over the network from outside the cluster? One way to achieve this is by using an Ingress.
In cloud environments, Ingress is often the preferred way of exposing applications. An alternative would be a Service of type LoadBalancer, but these usually tie into an external load balancer provided by the cloud provider. If you need to expose a lot of different services, this can result in considerable additional costs, since each service typically gets its own external load balancer.
In this post, I'll show you how to enable Ingress on MicroK8s, and we'll also go over an example deployment. Apart from the first step, where we enable Ingress, the same approach applies to pretty much any kind of K8s cluster.
Enabling Ingress
Enabling Ingress on MicroK8s is actually quite easy. It's just a matter of running the following command:
microk8s enable ingress
If you take a closer look, you will notice that a number of objects just got created within the cluster.
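For example, you can list what the addon created. In current MicroK8s releases the ingress controller runs in the ingress namespace, though the exact namespace and object names may differ between versions. Note that on MicroK8s, kubectl is available as microk8s kubectl (you can also alias it to plain kubectl):
microk8s kubectl get all -n ingress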
Creating a demo namespace
For this demo, I've created a new Namespace called nginx-ingress-demo:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress-demo
Save the content above to a .yaml file (I'll assume namespace.yaml here) and apply it with kubectl apply, for example:
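microk8s kubectl apply -f namespace.yaml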
Creating a deployment
Next, we'll create a deployment so that we actually have something to expose. For this, we'll be using the NGINX "hello" image, which serves a simple webpage showing the IP address and the name of the pod that handled the request.
The deployment looks as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  labels:
    app: nginx
  namespace: nginx-ingress-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginxdemos/hello
        ports:
        - containerPort: 80
Note that we're spinning up 3 replicas. This will allow us to actually see the load-balancing behaviour, because our requests will get serviced by different pods.
Once again, apply this YAML file through kubectl, similarly to what we did for the namespace.
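For example, assuming you saved the deployment as deployment.yaml:
microk8s kubectl apply -f deployment.yaml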
Creating a service
Now we can start working on actually exposing our deployment. The first step is creating a service that will be placed in front of our deployment. You can easily do this by running the kubectl expose command, like so:
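microk8s kubectl expose deployment nginx-test --port 80 -n nginx-ingress-demo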
Running this command will create a service of type ClusterIP for us. Don't forget to specify the correct namespace, or you'll get an error because the deployment can't be found in the default namespace.
You can see the service that was just created by running the following command:
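microk8s kubectl get service -n nginx-ingress-demo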
We can also look at the service in more detail, for example with kubectl describe:
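microk8s kubectl describe service nginx-test -n nginx-ingress-demo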
And we can even output to YAML, in case we want to apply this configuration via YAML later on:
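microk8s kubectl get service nginx-test -n nginx-ingress-demo -o yaml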
Creating the ingress
Now that our deployment and service are all set up, it's time to create the ingress. The YAML file to create our ingress looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx-ingress-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: nginx-test
      port:
        number: 80
After applying the YAML file via kubectl, we can see the newly created ingress by running the following command:
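microk8s kubectl get ingress -n nginx-ingress-demo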
As you can see (from the output of the get ingress command), the ingress is bound to IP 127.0.0.1 on port 80, which means that you should now be able to point your browser to http://localhost and see the NGINX demo application.
If you refresh the page a couple of times, you will notice that the requests get served by different pods (see server address and server name fields).
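If you prefer the terminal, a quick curl against the ingress should also return the demo page:
curl http://localhost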
And that's it! Feel free to leave a comment below this post. Any feedback is greatly appreciated.