Getting Started with Kubernetes Ingress
Gaining a clear understanding of the Kubernetes ecosystem isn't easy, especially for a beginner. You have to understand various concepts concerning networking, storage, and security, among others.
One important aspect of Kubernetes networking to understand is how to manage external access to services within a cluster. Kubernetes provides several options for this purpose, such as NodePort and LoadBalancer. However, this article will focus on another external traffic management option: Ingress.
In this article, you will learn what Kubernetes Ingress is and why you should use it. You'll also get a breakdown of the inner workings of Kubernetes Ingress and an introduction to the Kubernetes Ingress resource and controller. By the end of this article, you will have covered the important aspects of Kubernetes Ingress and deployed your first Ingress controller.
What is Kubernetes Ingress?
Kubernetes Ingress is an API object that serves as a gateway for external traffic to access services within a Kubernetes cluster. It operates by defining a set of routing rules, typically using HTTP/HTTPS protocols, to efficiently direct incoming requests to the appropriate internal services.
When an external request is sent to the cluster worker nodes, it first encounters the Ingress, which then forwards the traffic to the designated internal service within the cluster. This internal service further directs the request to the specific application running inside a pod.
Compared to other Kubernetes external routing options, such as NodePort or LoadBalancer services, Ingress offers additional functionalities like load balancing, name-based virtual hosting, and SSL termination. These features make Ingress particularly well-suited for managing traffic in production environments.
Encore is the Development Platform for building event-driven and distributed systems. Move faster with purpose-built local dev tools and DevOps automation for AWS/GCP. Get Started for FREE today.
Why Should You Use Kubernetes Ingress?
As you progress through this article, you'll see that Ingress offers substantial capability, which is why most developers rely on it for managing external traffic.
Here are some reasons why you should use Kubernetes Ingress:
- Production-environment compatibility: For production environments, there are certain features you'd want in place, such as TLS configuration. With Ingress, adding a few configuration lines lets you set up TLS for your application.
- Single point of entry for external traffic: Before Ingress, you'd rely on exposing each service individually to allow for external traffic. Using Ingress, you can define a unified entry point, typically through a domain name or IP address, which streamlines the process for users to access your applications.
- Layer 7 (L7) based routing: Ingress offers precise control over traffic distribution to your pods using Layer 7 load balancing. This level of control allows you to finely tune how incoming requests are managed based on various factors like HTTP headers, URL paths, and hostnames. With this granularity, you can effortlessly implement advanced routing strategies such as A/B testing, canary deployments, or blue-green deployments.
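As an illustration of the kind of L7 control this enables, here is a sketch of canary routing using annotations supported by the community ingress-nginx controller (one possible implementation; other controllers use different mechanisms). The service names and weight are hypothetical:

```yaml
# Sketch: send roughly 20% of traffic for myapp.com to a canary service.
# These annotations are specific to the community ingress-nginx controller;
# names and values below are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-canary-service   # hypothetical canary backend
            port:
              number: 80
```

A second, non-canary Ingress for the same host would route the remaining traffic to the stable service.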
Before Kubernetes Ingress
Before the introduction of Kubernetes Ingress, developers had to rely on manual configurations and utilize services such as NodePort and LoadBalancer to manage external traffic.
With NodePort, developers expose their services by assigning a port on each node within the cluster. Incoming traffic directed to that specific port on any node would then be forwarded to the respective service. While this approach was relatively straightforward, it lacked flexibility and necessitated the management of port allocations across the cluster.
Alternatively, LoadBalancer leveraged the load balancer service provided by cloud providers. Kubernetes could automate the provisioning of a cloud load balancer and configure it to route traffic to the designated service. Although more automated than NodePort, this method could incur higher costs depending on the pricing model of the cloud provider.
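For comparison, here is a hedged sketch of what exposing a service each way might look like. The service names, ports, and selector labels are assumptions for illustration:

```yaml
# Sketch: exposing a service via NodePort (names and ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp           # assumed pod label
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 8080     # container port
    nodePort: 30080      # port opened on every node (default range 30000-32767)
---
# Sketch: the same service via LoadBalancer; the cloud provider provisions
# an external load balancer and routes it to this service
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

With NodePort you manage the port allocations yourself; with LoadBalancer the provider handles the external endpoint, usually at extra cost.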
However, both NodePort and LoadBalancer had their limitations. They did not offer capabilities for handling advanced routing requirements, SSL termination, or other features commonly needed in modern applications. This gap in functionality paved the way for the introduction of Kubernetes Ingress.
Ingress brought forth a more sophisticated and flexible approach to managing external traffic. It enabled users to define routing rules and leverage Ingress controllers to handle traffic accordingly, addressing the shortcomings of previous methods and providing a comprehensive solution for managing ingress traffic within Kubernetes clusters.
How Does Kubernetes Ingress Work?
Consider the configuration files shown in the figure below. The first describes a typical Ingress resource file, and the second shows an internal service file.
Figure 1. Ingress resource and internal service configuration
The Ingress resource file, ingress.yaml, defines the routing rules for incoming traffic. It specifies the host, in this case my-app.com, as well as the path and the port of an internal service.
The internal service file, internal-service.yaml, describes the service to which the Ingress will route traffic.
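Since the figure itself may not render here, the two files it describes can be sketched as follows. This is a minimal, hypothetical reconstruction using the names mentioned in the text (my-app.com and myapp-internal-service); the port numbers and pod label are assumptions:

```yaml
# ingress.yaml - routing rules for my-app.com (illustrative sketch)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: my-app.com                       # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-internal-service   # ...are forwarded to this service
            port:
              number: 8080
---
# internal-service.yaml - the service the Ingress routes traffic to (sketch)
apiVersion: v1
kind: Service
metadata:
  name: myapp-internal-service
spec:
  selector:
    app: myapp          # assumed pod label
  ports:
  - port: 8080
    targetPort: 8080
```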
When a user accesses the application via the domain my-app.com, the domain name resolves to the IP address associated with the Ingress controller's service. The request then arrives at the Ingress controller, which monitors the cluster for Ingress resources.
The Ingress controller matches the request to the Ingress resource based on the host header my-app.com. Following the rules defined in ingress.yaml, the Ingress controller forwards the request to the myapp-internal-service service within the cluster.
The myapp-internal-service service receives the request and routes it to one of the pods running the application.
Finally, the pod processes the request, generates a response, and sends it back through the same path: from the pod to the service, to the Ingress controller, and finally back to the client.
In short: a browser request first encounters the Ingress, which directs it to an internal service, which in turn maps to a pod housing the application.
Kubernetes Ingress Resource and Controller
As discussed earlier, the Ingress resource defines rules for routing external HTTP and HTTPS traffic to services within the cluster.
Ingress resources provide a way to configure how external traffic should be directed to different services based on factors like hostnames, paths, or other request attributes.
An Ingress resource typically includes rules and annotations, which are additional configuration settings for the Ingress controller, such as SSL certificate information or load-balancing settings. An example of an Ingress resource is shown in ingress.yaml from Figure 1.
Complementing the Ingress resource is the Ingress controller responsible for implementing the routing rules specified in the Ingress resource.
The Ingress controller continuously monitors the cluster for changes to Ingress resources and dynamically adjusts the routing accordingly. It analyzes the Ingress resource, evaluates the defined rules, and manages the redirection of incoming traffic.
The controller is implemented as a third-party application, and there are several Ingress controllers available, each with its unique features and capabilities. Some popular Ingress controllers include Nginx, Traefik, and HAProxy.
Each of these controllers has its own configuration syntax. However, the Ingress resource acts as a layer of abstraction above those specifics.
This means you generally don't need to know which controller is set up in your cluster: aside from controller-specific annotations, the same Ingress resource produces the same outcome in any Kubernetes cluster, regardless of the Ingress controller.
Ingress & Ingress Controller Architecture
The Ingress controller begins its work by continuously monitoring the Ingress resources within the Kubernetes cluster. Whenever you create, update, or delete an Ingress resource, Kubernetes generates events. The Ingress controller listens to these events to stay informed about any changes made to Ingress configurations.
When it detects any changes, the controller reads the configurations you specified. It then interprets these configurations to understand the rules and requirements for routing. This step involves understanding factors like hostnames, paths, TLS termination settings, and other routing criteria defined in the Ingress resources.
Once the controller understands the configurations, it converts this information into a format that the underlying reverse proxy (such as Nginx or HAProxy) can understand. The transformed configurations are then applied to the reverse proxy.
The Ingress controller works closely with the reverse proxy, which serves as the gateway for incoming external traffic. The controller configures and manages the reverse proxy to ensure that it follows the routing rules defined in the Ingress resources.
This integration involves dynamically updating the configuration of the reverse proxy based on changes in Ingress resources and ensuring that incoming traffic is properly directed to the appropriate services inside the cluster.
List of Kubernetes Ingress Controllers
Kubernetes allows you to deploy multiple Ingress controllers within a cluster. Each controller may use different technologies or have distinct configurations tailored to specific use cases.
The following is a list of popular Ingress controllers you can use in your Kubernetes cluster:
- Nginx Ingress controller: Provided by the Nginx project, this controller is one of the most popular and widely used Ingress controllers. Check out our practical guide to Kubernetes Ingress with Nginx.
- Istio Ingress: This controller is based on the Istio service mesh.
- HAProxy Ingress: This controller is based on the HAProxy load balancer.
- Traefik Ingress: The Traefik project provides this Ingress controller.
Choosing an Ingress controller depends on your specific requirements which might be the need for advanced routing features, SSL termination, or integration with other Kubernetes services.
Kubernetes supports 20+ Ingress controllers; you can find the complete list in the official Kubernetes documentation. Each controller has its unique features and capabilities, so it's essential to evaluate your requirements before selecting one.
Deploy Your First Ingress Controller
You've learnt why you should use Kubernetes Ingress and how it works. Now you will deploy your first Ingress controller.
You'll be deploying one of the most widely used Ingress controllers - Nginx Ingress Controller. There are multiple ways of installing this controller, but for this article, you'll be using the manifest file method.
To see other installation methods, you can visit the official Nginx Ingress Controller documentation.
Prerequisites
To deploy the Nginx Ingress Controller, you need to have a Kubernetes cluster running. If you don't have a cluster, you can create one using a cloud provider like Google Cloud, AWS, or Azure. Alternatively, you can use a local development environment like Minikube.
Minikube
If you're using Minikube, there's a straightforward way of installing the Nginx Controller. You can use the following command:
minikube addons enable ingress
With this single command, Minikube will configure the Controller for you. All you need to do is set up an Ingress resource and you're good to go!
If you’re using another Kubernetes cluster for this demo, follow along.
Step 1: Clone the Nginx Ingress Controller Repository
Once your cluster is running, clone the Nginx Ingress Controller repository using the following command:
git clone https://github.com/nginxinc/kubernetes-ingress.git
This repository contains all the manifest files you'll need to deploy the Nginx Ingress Controller. Once the cloning is complete, navigate to the directory.
Step 2: Configure Namespace and Service Account
The Nginx Ingress Controller should be deployed in its own namespace. To create a new namespace and service account, run the following command in the newly cloned directory:
kubectl apply -f deployments/common/ns-and-sa.yaml
Once it is applied, switch the context to the newly created namespace:
kubectl config set-context --current --namespace=nginx-ingress
Step 3: Setup Role-Based Access Control (RBAC)
The Nginx Ingress Controller requires specific permissions to access resources within the cluster.
This command will create a cluster role and role binding for the service account:
kubectl apply -f deployments/rbac/rbac.yaml
Step 4: Configure ConfigMap and Ingress Class
The Nginx Ingress Controller uses a ConfigMap to store its configuration settings. You can create a ConfigMap using the following command:
kubectl apply -f deployments/common/nginx-config.yaml
Next, you need to define the Ingress class for the Nginx Ingress Controller. Without this, the Nginx Controller won't start.
kubectl apply -f deployments/common/ingress-class.yaml
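For reference, the object created by ingress-class.yaml looks roughly like the sketch below. The class name nginx and the controller string follow the Nginx project's convention, but check the file in the cloned repository for the exact values used by your version:

```yaml
# Sketch of an IngressClass for the Nginx Ingress Controller
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Identifies which controller implementation handles Ingresses of this class
  controller: nginx.org/ingress-controller
```

Ingress resources can then opt into this controller by setting spec.ingressClassName to nginx.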
Step 5: Setup Custom Resources
Without these custom resources, the Ingress pod will be in an unhealthy state. The first command below installs the controller's core custom resource definitions (such as VirtualServer). The other two install custom resources for the NGINX App Protect WAF and DoS modules, respectively.
You can create these resources using the following commands:
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v3.4.3/deploy/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v3.4.3/deploy/crds-nap-waf.yaml
kubectl apply -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/v3.4.3/deploy/crds-nap-dos.yaml
Step 6: Deploy the Nginx Ingress Controller
There are two ways to deploy the Controller: as a Deployment or as a DaemonSet. The Deployment method lets you dynamically change the number of Ingress Controller replicas, while the DaemonSet method runs the Ingress Controller on every node.
For this article, you'll be deploying the controller as a Deployment.
Run the following command to deploy the Controller:
kubectl apply -f deployments/deployment/nginx-ingress.yaml
Step 7: Verify the Deployment
To verify that the Nginx Ingress Controller has been deployed successfully, run the following command:
kubectl get pods --namespace=nginx-ingress
Your output should look similar to this:
NAME READY STATUS RESTARTS AGE
nginx-ingress-755bf8968b-j6cd2 1/1 Running 0 11s
If the pod is in the Running state, the Controller has been successfully deployed.
Step 8: Access the Nginx Ingress Controller
After installation, you can access the Nginx Ingress Controller. However, without an Ingress resource, Nginx will display a 404 page.
To access the Nginx Ingress Controller, you can use a NodePort or LoadBalancer service. The following command creates a NodePort service to access the Controller:
kubectl create -f deployments/service/nodeport.yaml
Once the service is created, you can access the Nginx Ingress Controller using the NodePort service.
To access the Controller, you need to see the NodePort assigned to the service. Get the NodePort using the following command:
kubectl get svc --namespace=nginx-ingress
You should see an output similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress NodePort 10.108.172.170 <none> 80:30080/TCP,443:30443/TCP 2m
From this output, the NodePort for HTTP traffic is 30080, and the NodePort for HTTPS traffic is 30443.
With a node's IP address and the NodePort, you can now reach the Controller:
curl http://<node-ip>:<node-port>
For HTTP traffic, you should see the following output:
<html>
<head>
<title>404 Not Found</title>
</head>
<body>
<center><h1>404 Not Found</h1></center>
<hr />
<center>nginx/1.25.4</center>
</body>
</html>
If you send a plain HTTP request to the HTTPS NodePort, you should see the following output:
<html>
<head>
<title>400 The plain HTTP request was sent to HTTPS port</title>
</head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr />
<center>nginx/1.25.4</center>
</body>
</html>
Routing Traffic to Multiple Paths with Ingress
There are multiple use cases where you would want to utilize Ingress, one of which is routing traffic to multiple paths.
In a case where you have a single domain and you want to route traffic to different services based on the path, Ingress is the perfect solution.
Say you have an application with one domain, myapp.com, but multiple services on offer. With a myapp.com account, you can use the ecommerce service or the payment service. These are separate applications, all accessible through the same domain.
The following Ingress resource configuration demonstrates how to route traffic to the multiple paths of the application:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /ecommerce
        pathType: Prefix
        backend:
          service:
            name: ecommerce-service
            port:
              number: 3000
      - path: /payment
        pathType: Prefix
        backend:
          service:
            name: payment-service
            port:
              number: 5000
In the rules section, the Ingress resource specifies the host myapp.com and defines two paths, /ecommerce and /payment. Each path is associated with a different service within the cluster.
When a user accesses the myapp.com/ecommerce path, the Ingress controller forwards the request to the ecommerce-service service.
Similarly, when the user accesses the myapp.com/payment path, the request is directed to the payment-service service.
This way, a single Ingress for one host can forward traffic to multiple applications using multiple paths.
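Ingress also supports name-based virtual hosting, mentioned earlier as one of its core features: routing by hostname rather than by path. Here is a hedged sketch using the networking.k8s.io/v1 schema; the subdomains are hypothetical, and the service names and ports mirror the path-based example above:

```yaml
# Sketch: route two subdomains to two different services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-vhost-ingress
spec:
  rules:
  - host: shop.myapp.com              # traffic for shop.myapp.com...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ecommerce-service   # ...goes to the ecommerce service
            port:
              number: 3000
  - host: pay.myapp.com               # traffic for pay.myapp.com...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: payment-service     # ...goes to the payment service
            port:
              number: 5000
```

The controller matches on the request's Host header, so both subdomains must resolve to the Ingress controller's address.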
TLS Configuration with Ingress
In production, you don't want to serve your application over plain http, as that signals the application isn't secure. You'll want to set up https instead. This is where TLS configuration comes in.
Configuring TLS using Ingress is a straightforward process. All you need to do is define the TLS settings in the Ingress resource.
The following Ingress resource configuration demonstrates how to configure TLS for the myapp.com domain:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-tls-secret
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-internal-service
            port:
              number: 80
In this configuration, the tls section specifies the domain myapp.com and the secretName of the TLS certificate. The secretName is the name of the Kubernetes Secret that contains the TLS certificate and private key.
This Secret needs to be created in the cluster. The following is an example of a Secret holding a TLS certificate:
apiVersion: v1
kind: Secret
metadata:
  name: myapp-tls-secret
  namespace: default
data:
  tls.crt: base64-encoded-cert
  tls.key: base64-encoded-key
type: kubernetes.io/tls
The name field specifies the name of the Secret, and the namespace field specifies the namespace where the Secret is created. The tls.crt and tls.key fields contain the base64-encoded certificate and private key, respectively.
The values of tls.crt and tls.key are the actual file contents, NOT file paths or locations, and they must be base64-encoded. Alternatively, kubectl create secret tls myapp-tls-secret --cert=tls.crt --key=tls.key creates the same Secret and handles the encoding for you.
Note: You must use the type kubernetes.io/tls when creating a Secret for TLS certificates. Also, the Secret must be in the same namespace as the Ingress resource that references it.
Conclusion
This article covered the most crucial aspects of Kubernetes Ingress: what Ingress is, why you should use it, and how it works.
You've also deployed your first Controller, learned how to route traffic to multiple paths, and configured TLS with Ingress.
Take your time to digest the information in this article, and practice deploying Ingress resources in your Kubernetes cluster. You should also explore the various Ingress controllers available and evaluate which one best suits your requirements.
For a more in-depth guide on using the Nginx Ingress Controller, check out our practical guide to Kubernetes Ingress with Nginx.