# 🌎 Helm & Gateway API
🔥 This section is an alternative to the Helm & Ingress section, but instead of covering the legacy Ingress API, it uses the newer Gateway API, which is still evolving but represents the future of L4/L7 routing in Kubernetes. Do not run through both sections; choose one or the other. 🔥
For this section we'll touch on two slightly more advanced topics: the use of Helm, and the introduction of a gateway data plane to our cluster. The gateway will let us further refine & improve the networking aspects of the app we've deployed.
## 📌 The Gateway API
The Kubernetes Gateway API is an official Kubernetes project focused on L4 and L7 routing in Kubernetes. This project represents the next generation of Kubernetes Ingress, Load Balancing, and Service Mesh APIs. It is designed to improve upon the existing Ingress API by providing more expressive and extensible resource definitions. It is still evolving but is slowly replacing the older Ingress API.
[📚 Learn more about the Gateway API](https://gateway-api.sigs.k8s.io/)
Do not confuse the "Gateway API" with "API Gateway" products like AWS API Gateway, Azure API Management & Azure App Gateway, or even the generic term "API Gateway". The Gateway API is a Kubernetes-native API for managing L4/L7 routing. It would be valid to say "Let's use the Gateway API to route traffic through our API Gateway". Look, I know it's confusing; I wasn't involved in the naming!
## 🧙 Intro to Custom Resources
Kubernetes is designed to be extensible, in fact, extremely so. The core Kubernetes API is made up of a number of pre-defined resources like Pods, Deployments, Services, etc. But Kubernetes also allows anyone to define their own resources, called Custom Resources. The Gateway API is implemented as a set of Custom Resources (until such time as it is merged into the core Kubernetes API set).
Before we can use the Gateway API in our cluster and create instances of the resources, we need to install these Custom Resource Definitions (CRDs) which define the new resources. This is done with a single command:
```bash
kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v2.1.0" | kubectl apply -f -
```
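If you want to sanity check that the CRDs were installed, you can list the new resource types in the Gateway API group:

```bash
# List the Gateway API resource types now available in the cluster
kubectl api-resources --api-group=gateway.networking.k8s.io

# Or look at the CRDs directly
kubectl get crd | grep gateway.networking.k8s.io
```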
## 🗃️ Namespaces
So far we've worked in a single Namespace called `default`, but Kubernetes allows you to create additional Namespaces in order to logically group and separate your resources.

Namespaces do not provide any form of network boundary or isolation of workloads, and the underlying resources (Nodes) remain shared. There are ways to achieve higher degrees of isolation, but that is a matter well beyond the scope of this workshop.

Create a new namespace called `nginx-gateway`:
```bash
kubectl create namespace nginx-gateway
```
Namespaces are a simple idea but they can trip you up; you will have to add `--namespace` or `-n` to any `kubectl` commands you want to run against a particular namespace. The following alias can be helpful to set a namespace as the default for all `kubectl` commands, meaning you don't need to add `-n`. Think of it like a Kubernetes equivalent of the `cd` command.
```bash
# Note the space at the end
alias kubens='kubectl config set-context --current --namespace '
```
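For example, once the alias is defined, you can switch the default namespace and back again:

```bash
# Switch the default namespace for all kubectl commands
kubens nginx-gateway

# No -n needed now, this lists pods in the nginx-gateway namespace
kubectl get pods

# Switch back to the default namespace
kubens default
```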
## 🪖 Introduction to Helm
Helm is a CNCF project which can be used to greatly simplify deploying applications to Kubernetes, either applications written and developed in house, or external 3rd party software & tools.
- To use Helm, the Helm CLI tool `helm` is required.
- Helm simplifies deployment using the concept of a chart; when a chart is deployed it is referred to as a release.
- A chart consists of one or more Kubernetes YAML templates + supporting files.
- Helm charts support dynamic parameters called values. Charts expose a set of default values through their `values.yaml` file, and these values can be set and overridden at release time.
- The use of values is critical for automated deployments and CI/CD.
- Charts can be referenced through the local filesystem, or in a remote repository called a chart repository. They can also be kept in a container registry, but that is an advanced and experimental topic.
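To give a flavour of the CLI, here are a few common Helm commands, using the public Bitnami chart repository purely as an example, there's no need to run these now:

```bash
# Add a remote chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search the repository for charts
helm search repo nginx

# Show the default values a chart exposes
helm show values bitnami/nginx

# List releases installed in the current namespace
helm list
```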
## 🚪 Deploying the NGINX Gateway
For this section we'll be using the NGINX Gateway as our gateway implementation; this is an open source project which supports the Gateway API. There are other options available, including commercial products, but NGINX is a good choice for this workshop as it is free, open source, and widely used.
```bash
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --namespace nginx-gateway
```
This command does the following:
- `helm install` - installs a new Helm chart as a release
- `ngf` - the name of the release, you can choose any name you like
- `oci://ghcr.io/nginx/charts/nginx-gateway-fabric` - the location of the chart, in this case it's a special type of remote URL
- `--namespace nginx-gateway` - the namespace to install the NGINX controller & proxy into
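To verify the release was installed and the controller is running:

```bash
# Show Helm releases in the nginx-gateway namespace
helm list --namespace nginx-gateway

# Check the controller pod(s) have started
kubectl get pods --namespace nginx-gateway
```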
## 🛠️ Configuring the Gateway
Now we have the NGINX Gateway installed, we need to configure it to route traffic to our application. This is done using a `Gateway` resource. This would typically be set up once by a cluster administrator, not the application developer, but today we are wearing multiple hats!

Create a file called `gateway.yaml` with the following content:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: nanomon-gateway
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      # This is required to allow routes from other namespaces
      allowedRoutes:
        namespaces:
          from: All
```
Normally the `hostname` field would be set to a specific hostname, or some wildcard, but for this workshop we are using the external IP address of the Load Balancer without any DNS, so we leave it unset.
Apply the configuration but place it in the `nginx-gateway` namespace with the following command. This allows us to separate the gateway resources from the application resources:
```bash
kubectl apply -f gateway.yaml -n nginx-gateway
```
This will actually create the gateway instance (the Helm install you ran previously didn't do that), i.e. the NGINX proxy which will route traffic, and along with it a Service of type `LoadBalancer` which will give it an external IP address.
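You can check the Gateway has been accepted and programmed by the NGINX controller with:

```bash
# Check the status of the Gateway resource
kubectl get gateway -n nginx-gateway

# Describe it for more detail, including listener status & conditions
kubectl describe gateway nanomon-gateway -n nginx-gateway
```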
To get this external IP address, run `kubectl get svc -n nginx-gateway` and look for the service called `nanomon-gateway-nginx`. It may take a few minutes for the external IP to be assigned, but once it is, make a note of it.
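If you prefer the command line, you can pull the IP out with a JSONPath query and keep it in a shell variable; this assumes your load balancer reports an IP rather than a hostname (which is the case on Azure):

```bash
# Grab the external IP of the gateway's LoadBalancer service
GATEWAY_IP=$(kubectl get svc nanomon-gateway-nginx -n nginx-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo "Gateway external IP: $GATEWAY_IP"
```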
If you go to this external IP address in your browser, you should see a 404 error; this is because we haven't yet configured any routes to our application.
## 🔒 Reconfiguring The App
Now we need to modify the app we've deployed to route through our new gateway, but a few simple changes are required first. As the gateway will be fronting all requests, the services in front of the deployments should be switched back to internal, i.e. `ClusterIP`.
- Edit both the data API & frontend Service YAML manifests, change the service type to `ClusterIP`.
- Change the ports to `8000` for the API and `8001` for the frontend, these are the ports the containers are listening on. Remove the `targetPort` field, so they look like this:

  ```yaml
  ports:
    - protocol: TCP
      port: 8000
  ```

- Edit the frontend Deployment YAML manifest, change the `API_ENDPOINT` environment variable to use the same origin URI `/api`, no need for a scheme or host.
Apply these three changes with `kubectl` and the app will now be temporarily unavailable. Note, if you have changed namespace with `kubens` you should switch back to the default namespace before running the apply!
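The commands will look something like the following, though the exact file names depend on what you called your manifests in earlier sections, so treat these names as placeholders:

```bash
# Switch back to the default namespace if you used the kubens alias earlier
kubens default

# Re-apply the updated manifests (file names are examples, use your own)
kubectl apply -f api-service.yaml -f frontend-service.yaml -f frontend-deployment.yaml
```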
If you run `kubectl get svc` you should see both services are now of type `ClusterIP` and have no external IP associated.
## 🔀 Configuring Routes
Routes are configured using two new resources, `HTTPRoute` and `TLSRoute`. For this workshop we'll just be using `HTTPRoute`.
[📚 HTTPRoute Resource](https://gateway-api.sigs.k8s.io/api-types/httproute/)
Create a file called `http-route.yaml` with the following content:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nanomon-routes
spec:
  parentRefs:
    - namespace: nginx-gateway
      name: nanomon-gateway
      sectionName: http
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: frontend
          port: 8001
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api
          port: 8000
```
This configuration does the following:
- Creates an `HTTPRoute` resource
- Links the route to the `nanomon-gateway` gateway we created earlier, which is in the `nginx-gateway` namespace
- Defines two routing rules:
  - Requests with a path prefix of `/` are routed to the `frontend` service on port `8001`
  - Requests with a path prefix of `/api` are routed to the `api` service on port `8000`
Apply the configuration with:

```bash
kubectl apply -f http-route.yaml
```
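To check the route was created and accepted by the gateway:

```bash
# List HTTPRoutes in the default namespace
kubectl get httproute

# Describe the route to see its status conditions and parent gateway
kubectl describe httproute nanomon-routes
```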
You could have defined two separate `HTTPRoute` resources, each with a single rule, but this is more concise and it's unlikely you'd want to manage them separately.
If you go to the external IP address of the gateway in your browser, you should now see the frontend of the app. Also if you go to `http://<external-ip>/api/status` you should see the API status endpoint.
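You can also test from the command line, substituting the external IP you noted earlier (or the `$GATEWAY_IP` variable if you set it above):

```bash
# Fetch the frontend via the gateway
curl -i http://$GATEWAY_IP/

# Call the API status endpoint via the gateway
curl -i http://$GATEWAY_IP/api/status
```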
## 🖼️ Cluster & Architecture Diagram
We've reached the final state of the application deployment, yes I promise this time! The resources deployed into the cluster & in Azure at this stage can be visualized as follows:
This is a slightly simplified view compared to the previous diagrams in order to fit everything in, so things like the Deployment resources have been omitted.