Practical Guide to Kubernetes Ingress with Nginx
In the first article, you learnt about the concept of Ingress in Kubernetes and how it helps route external traffic to services within the cluster. You saw a list of Ingress controllers, including Traefik, HAProxy and the subject of this article, Nginx.
In this article, you'll delve deeper into the Nginx Ingress controller. You'll explore its key components and understand how it facilitates routing. By the end of this guide, you'll have a practical understanding of using Nginx Ingress to manage traffic within your Kubernetes cluster.
Prerequisites
This guide is a continuation of the EverythingDevOps’ “Getting Started with Kubernetes Ingress” article. It is assumed that you've gone through that article and have a foundational understanding of Ingress in Kubernetes.
Aside from that, you'll need the following:
- A Kubernetes cluster up and running, with `kubectl` installed and configured.
- The Nginx Ingress controller installed in your Kubernetes cluster. You can follow the instructions from the previous article to install it.
Understanding Ingress with Nginx
Once you've deployed the Nginx Ingress controller in your Kubernetes cluster, it becomes the entry point for all incoming traffic. It routes requests to the appropriate services based on the rules defined in the Ingress resource.
The Nginx controller runs inside a pod in your cluster. This pod is responsible for configuring the Nginx server to handle incoming requests. It also watches for changes in the Ingress resources and updates the Nginx configuration accordingly.
Since the controller runs inside the cluster, it has no way of receiving requests directly from the outside world. To bridge this gap, Kubernetes provisions a Layer 4 (TCP/UDP) load balancer, typically through a Service of type `LoadBalancer`, to route external traffic to the Nginx controller.
When a request comes in, the load balancer forwards it to the Nginx controller. The controller then receives the request and determines which service it should be routed to.
After identifying the service, the Nginx Ingress controller forwards the request to that service and returns the response to the client.
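The routing decision described above can be sketched in Python. This is an illustrative model only, not the controller's actual implementation, and the hostnames and service names are hypothetical:

```python
from typing import Optional

# Illustrative rule table: (host, path prefix, backend service).
# More specific prefixes are listed first, as the controller would
# prefer the longest match.
RULES = [
    ("example.com", "/api", "api-service"),
    ("example.com", "/", "web-service"),
]

def resolve(host: str, path: str) -> Optional[str]:
    """Return the backend service for the first matching rule."""
    for rule_host, prefix, backend in RULES:
        if host == rule_host and path.startswith(prefix):
            return backend
    return None  # no match: the request falls through to the default backend
```

A request for `example.com/api/v1` would resolve to `api-service`, while any other path on that host falls back to `web-service`.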
What variation of Nginx are you using?
There are two main variations of the Nginx Ingress controller. One was created by the Kubernetes community and the other by Nginx.
Both variations go by the name "Nginx Ingress Controller", and both are open-source projects. However, there are a few differences between them.
Nginx Ingress Controller by Kubernetes
This version is based on Nginx Open Source, and its GitHub repository can be found at https://github.com/kubernetes/ingress-nginx. It is actively maintained by the Kubernetes community and documented on kubernetes.io.
Nginx Ingress Controller by Nginx
This version is maintained by F5 Nginx, and its GitHub repository can be found at https://github.com/nginxinc/kubernetes-ingress. It is also actively maintained and documented on docs.nginx.com.
The Nginx version has two editions: Nginx Open Source, which is what the Kubernetes version is based on, and Nginx Plus, a commercial edition with additional features.
What's the difference?
Although the two versions are similar enough to implement the same core function, there are still some differences between them.
The main one is that the Nginx version offers HTTP load balancing features that are not available in the Kubernetes version.
The Kubernetes version supports TLS termination and path/host-based routing. With the Nginx version, you can extend these functionalities through annotations, which allow you to configure advanced features like load balancing, session persistence, and more.
Nginx Ingress Rules and Defaults
Ingress resources are used to define the rules that govern how incoming traffic should be routed within the cluster. These routing rules are used by the Ingress controller to determine how to route requests.
The following are some key components of an Ingress resource:
- Host: This is the hostname that the Ingress rule should match.
- Path: The path that the Ingress rule should match.
- Backend: The service that the request should be routed to.
- PathType: The type of path matching to be used (`Exact`, `Prefix`, or `ImplementationSpecific`).
- Default Backend: The service that should handle requests that don't match any of the defined rules.
These are just a few of the components that can be defined in an Ingress resource. To see an extensive list of components, you can check out the official Kubernetes documentation.
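To make the `PathType` values concrete, the following Python sketch models how `Exact` and `Prefix` matching behave according to the Kubernetes spec: `Prefix` matches on `/`-separated path elements, so `/foo` matches `/foo/bar` but not `/foobar`. This is an illustration, not the controller's code:

```python
def matches(path_type: str, rule_path: str, request_path: str) -> bool:
    """Model of Kubernetes Ingress path matching for Exact and Prefix."""
    if path_type == "Exact":
        return request_path == rule_path
    if path_type == "Prefix":
        # Prefix matching compares "/"-separated path elements,
        # so partial element matches like /foobar do not count.
        rule = rule_path.rstrip("/").split("/")
        req = request_path.rstrip("/").split("/")
        return req[: len(rule)] == rule
    raise ValueError("ImplementationSpecific matching is controller-defined")
```

For example, `matches("Prefix", "/foo", "/foo/bar")` is true, while `matches("Prefix", "/foo", "/foobar")` is false.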
Nginx Ingress Rules
The Ingress resource defines routing rules based on the host, path, or both. The host rule is used to route requests based on the hostname in the request. The path rule is used to route requests based on the path in the request.
Below is an example of an Ingress resource that defines a rule based on the host:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
```
In this example, the Ingress resource defines a rule that routes requests with the hostname `example.com` to the `example-service` service.
Note: The `ingressClassName: nginx` field specifies the Ingress class that should handle the Ingress resource. Specifying the `ingressClassName` field is important when you have multiple Ingress controllers running in your cluster.
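For comparison with the host-based rule above, a path-based fan-out routes different URL paths on the same host to different services. The service names below are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```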
Nginx Default Backend
By default, the Nginx Ingress controller has a default backend that handles requests that don't match any of the defined rules.
That means any request that's not mapped with an Ingress will be routed to this default backend.
The Nginx default backend exposes two endpoints:
- `/healthz`: returns a 200 OK response.
- `/404`: returns a 404 Not Found error.
You can customize the default backend by defining a custom service that should handle requests that don't match any of the defined rules.
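As a sketch, a custom default backend can be set through the `defaultBackend` field of an Ingress spec. The `fallback-service` name here is hypothetical; point it at whatever service should catch unmatched requests:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: fallback-service
      port:
        number: 80
```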
Advanced Ingress configuration with annotations
In the first article, you learnt about the basic configuration of Ingress resources with several examples.
In this section, you'll learn how to configure advanced features of the Ingress-NGINX controller using annotations.
As earlier mentioned, annotations are used to extend the functionality of the Ingress controller.
Annotations in the Ingress-NGINX controller let you customize everything from advanced configuration features like URI rewriting down to simple settings like the value of a connection timeout.
You can also apply some of these settings globally using a ConfigMap. However, annotations take precedence over ConfigMap settings.
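For instance, a global connection timeout could be set in the controller's ConfigMap instead of per-Ingress annotations. This is a sketch; the ConfigMap name and namespace depend on how the controller was installed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "10"  # seconds, applied cluster-wide
```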
The following Ingress resource contains annotations that configure the Nginx Ingress controller to rewrite URIs and set a connection timeout:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: example-service
            port:
              number: 80
```
From the Ingress resource above, we have specified two annotations. The first, `nginx.ingress.kubernetes.io/rewrite-target: /$2`, configures the Nginx Ingress controller to rewrite URIs. The second, `nginx.ingress.kubernetes.io/proxy-connect-timeout: "10"`, sets the connection timeout to 10 seconds.
The `$2` in the `nginx.ingress.kubernetes.io/rewrite-target` annotation tells Nginx to pass the second capture group of the path regex to the upstream. That means that when a request comes in with the path `/foo/bar`, the Nginx Ingress controller rewrites the URI to `/bar` and applies a connection timeout of 10 seconds.
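The capture-group behaviour can be sketched with Python's `re` module. This illustrates the regex semantics only, not the controller's implementation:

```python
import re

# Same pattern as the Ingress path above: two capture groups,
# the second holding everything after "/foo/".
PATTERN = re.compile(r"/foo(/|$)(.*)")

def rewrite(path: str) -> str:
    """Rewrite the URI to the second capture group, as /$2 does."""
    m = PATTERN.match(path)
    if m is None:
        return path  # no match: the path is passed through unchanged
    return "/" + m.group(2)
```

For example, `/foo/bar` rewrites to `/bar`, while a non-matching path such as `/other` is left untouched.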
The rewrite annotation is one of the most commonly used annotations in the Nginx Ingress controller. There are several other annotations that you can use to customize the behaviour of your Ingress resources.
To see a list of all available annotations, you can check out the Ingress controller documentation.
Using Nginx Ingress with Applications
Understanding various concepts of the Nginx Ingress controller is great. However, it's more important to see how these concepts can be applied in real-world scenarios.
In this section, you're going to deploy a simple application in your Kubernetes cluster and expose it using the Nginx Ingress controller.
Deploying a Sample Application
The application is a simple word counter that counts the number of words in a given text and is built using JavaScript.
On your terminal, create a file for the deployment manifest:
```shell
nano word-counter-deployment.yaml
```
Add the following content to the file:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: word-counter
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: word-counter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: word-counter
spec:
  replicas: 2
  selector:
    matchLabels:
      app: word-counter
  template:
    metadata:
      labels:
        app: word-counter
    spec:
      containers:
      - name: myapp
        image: aahil13/myapp:main
        ports:
        - containerPort: 8080
```
This configuration defines the Deployment and the Service. The Deployment creates 2 replicas of the image `aahil13/myapp:main`, each listening on container port 8080. The Service exposes the Deployment on port 80, forwarding traffic to the pods' port 8080.
Apply the deployment manifest:
```shell
kubectl apply -f word-counter-deployment.yaml
```
You should get the following output:
```
service/word-counter created
deployment.apps/word-counter created
```
Verify that the service has been created:
```shell
kubectl get service word-counter
```
You should get the following output:
```
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
word-counter   ClusterIP   10.96.45.1   <none>        80/TCP    1m
```
Expose the service with Nginx Ingress
Now that you have deployed the sample application, you can expose it using the Nginx Ingress controller.
This tutorial assumes you've already installed the Nginx Ingress controller in your Kubernetes cluster. If you haven't, you can follow the instructions here.
Create an Ingress resource file:
```shell
nano word-counter-ingress.yaml
```
Add the following content to the file:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: word-counter-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: your_domain_name.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: word-counter
            port:
              number: 80
```
This Ingress resource defines a rule that routes requests with the hostname `your_domain_name.com` to the `word-counter` service.
Replace `your_domain_name.com` with your actual domain name.
Apply the Ingress resource:
```shell
kubectl apply -f word-counter-ingress.yaml
```
Testing the Application
Once you've applied the Ingress resource, you can test the application by visiting the domain name you specified in the Ingress resource.
You should see the word counter application running and ready to count the number of words in a given text.
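If DNS for your domain hasn't propagated yet, you can test through the load balancer's external IP directly by overriding the `Host` header. The IP and hostname below are placeholders; substitute your own values:

```shell
# Send a request to the Ingress controller's external IP (placeholder),
# presenting the hostname configured in the Ingress rule.
curl -H "Host: your_domain_name.com" http://203.0.113.10/
```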
Uninstalling Nginx Ingress Controller
From the previous article, you installed the Nginx Ingress controller in your Kubernetes cluster. In this section, you'll learn how to uninstall it.
Use the following steps to uninstall the Nginx Ingress controller and all its associated resources from your cluster:
- Delete the Nginx Ingress namespace: You need to delete the namespace where the Nginx Ingress controller is deployed. You can do this by running the following command:

```shell
kubectl delete namespace nginx-ingress
```
- Delete the cluster role and cluster role binding: You also need to delete the cluster role and cluster role binding associated with the Nginx Ingress controller. You can do this by running the following commands:

```shell
kubectl delete clusterrole nginx-ingress
kubectl delete clusterrolebinding nginx-ingress
```

You should get the following output:
```
clusterrole.rbac.authorization.k8s.io "nginx-ingress" deleted
clusterrolebinding.rbac.authorization.k8s.io "nginx-ingress" deleted
```
- Delete the custom resource definitions (CRDs): You also need to delete the custom resource definitions (CRDs) associated with the Nginx Ingress controller. You can do this by running the following command:

```shell
kubectl delete -f config/crd/bases/crds.yaml
```
After all this, you should have successfully uninstalled the Nginx Ingress controller from your Kubernetes cluster.
Confirm that the Nginx Ingress controller has been uninstalled by running the following command:
```shell
kubectl get pods -n nginx-ingress
```
You should get the following output:
```
No resources found in nginx-ingress namespace.
```
Conclusion
The Nginx Ingress controller has comprehensive documentation meant to guide you in using all its features. This and many other reasons contribute to why it's one of the most popular Ingress controllers in the Kubernetes ecosystem.
In this article, you've learnt about the two main variations of the Nginx Ingress controller and how to use the controller to manage traffic within your Kubernetes cluster. Specifically, the Nginx Ingress controller utilized here is the one provided by the Kubernetes community.
You've also seen how to configure advanced features of the Nginx Ingress controller using annotations and deployed a sample word counter application in your Kubernetes cluster. With this knowledge, you should be able to effectively use the Nginx Ingress controller to manage traffic within your Kubernetes cluster.