🌎 Helm & Ingress
🔥 At this point in the workshop you have a choice:
- If you want to learn about the legacy Ingress API, continue with this section. It is widely supported and in wide use, but it has been superseded in functionality by the newer Gateway API.
- If you want to learn about the new Gateway API, which is still evolving but represents the future of L4/L7 routing in Kubernetes, go to the Helm & Gateway API section.
Only go through one of these two sections, not both!
For this section we'll touch on two slightly more advanced topics: the use of Helm, and introducing an ingress controller to our cluster. The ingress will let us further refine & improve the networking aspects of the app we've deployed.
🗃️ Namespaces
So far we've worked in a single Namespace called `default`, but Kubernetes allows you to create additional Namespaces in order to logically group and separate your resources.

Namespaces do not provide any form of network boundary or isolation of workloads, and the underlying resources (Nodes) remain shared. There are ways to achieve higher degrees of isolation, but that is a matter well beyond the scope of this workshop.
Create a new namespace called `ingress`:

```bash
kubectl create namespace ingress
```
Namespaces are a simple idea but they can trip you up: you will have to add `--namespace` or `-n` to any `kubectl` commands you want to run against a particular namespace. The following alias can be helpful to set a namespace as the default for all `kubectl` commands, meaning you don't need to add `-n`. Think of it as a Kubernetes equivalent of the `cd` command.

```bash
# Note the space at the end
alias kubens='kubectl config set-context --current --namespace '
```
🪖 Introduction to Helm
Helm is a CNCF project which can be used to greatly simplify deploying applications to Kubernetes, whether applications written and developed in house, or external 3rd party software & tools.
- To use Helm, the Helm CLI tool `helm` is required.
- Helm simplifies deployment using the concept of a chart; when a chart is deployed it is referred to as a release.
- A chart consists of one or more Kubernetes YAML templates plus supporting files.
- Helm charts support dynamic parameters called values. Charts expose a set of default values through their `values.yaml` file, and these values can be set and overridden at release time.
- The use of values is critical for automated deployments and CI/CD.
- Charts can be referenced through the local filesystem, or in a remote repository called a chart repository. They can also be kept in a container registry, but that is an advanced and experimental topic.
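To make this concrete, a chart's `values.yaml` might declare defaults like the following. This is a hypothetical sketch - the actual keys depend entirely on the chart, and the image name shown is made up:

```yaml
# values.yaml - hypothetical defaults exposed by a chart
replicaCount: 1
image:
  repository: ghcr.io/example/myapp # hypothetical image name
  tag: "latest"
service:
  type: ClusterIP
  port: 80
```

Any of these could then be overridden at release time, e.g. with `--set replicaCount=3`, or by passing a file of overrides with `-f my-overrides.yaml`.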
We'll add the Helm chart repository for the ingress we will be deploying; this is done with the `helm repo` command. This is the public repo & chart of the extremely popular NGINX ingress controller (more on that below). The repo name `ingress-nginx` can be any name you wish to pick, but the URL has to point to the correct place.

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```
🚀 Ingress & Ingress Controller
An Ingress is a Kubernetes resource that manages external HTTP(S) access to services within a cluster. It provides routing rules to manage how requests are directed to various services based on the request's host or path. An Ingress Controller is a reverse proxy that implements the rules defined in Ingress resources, handling the actual routing of traffic.
📚 Kubernetes Docs: Ingress
📚 Kubernetes Docs: Ingress Controllers
- The controller is simply an instance of a HTTP reverse proxy running in one or more Pods, with a Service in front of it.
- It implements the Kubernetes controller pattern: it watches for Ingress resources being created in the cluster, and when it finds one, it reconfigures itself based on the rules and configuration it finds in that Ingress, in order to route traffic.
- There are MANY ingress controllers available, but we will use a very common and simple one: the NGINX ingress controller maintained by the Kubernetes project.
- Often TLS is terminated by the ingress controller, and sometimes other tasks such as JWT validation for authentication can be done at this level. For the sake of this workshop no TLS & HTTPS will be used, due to the dependencies it requires (such as DNS, cert management etc).
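For reference only (we won't use TLS in this workshop), terminating TLS at the ingress is done by adding a `tls` section to the Ingress resource, referencing a Kubernetes Secret holding the certificate. The host name and secret name below are hypothetical:

```yaml
# Fragment of an Ingress spec showing TLS termination (not used in this workshop)
spec:
  tls:
    - hosts:
        - myapp.example.com # requires DNS pointing at the controller IP
      secretName: myapp-tls-cert # a Secret of type kubernetes.io/tls you would create
```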
Helm greatly simplifies deploying the NGINX ingress controller, down to a single command:

```bash
helm install my-ingress ingress-nginx/ingress-nginx \
  --namespace ingress \
  --set controller.replicaCount=2
```
- The release name is `my-ingress`, which can be anything you wish; it's used by the chart templates to prefix the names of created resources.
- The second parameter is a reference to the chart, in the form `repo-name/chart-name`. If we wanted to use a local chart on disk, we'd simply reference the path to the chart directory.
- The `--set` part is where we can pass in values to the release; in this case we increase the replicas to two, purely as an example.
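As an aside, the same override could be kept in a small values file instead of using `--set`, which is easier to keep in source control (the file name here is just an example):

```yaml
# ingress-values.yaml - equivalent to passing --set controller.replicaCount=2
controller:
  replicaCount: 2
```

It would then be applied with `helm install my-ingress ingress-nginx/ingress-nginx --namespace ingress -f ingress-values.yaml`.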
Check the status of both the pods and services with `kubectl get svc,pods --namespace ingress`, and ensure the pods are running and the service has an external public IP.

You can also use the `helm` CLI to query the status; here are some simple and common commands:

- `helm ls` or `helm ls -A` - List releases, or list releases in all namespaces.
- `helm upgrade {release-name} {chart}` - Upgrade/update a release to apply changes. Add `--install` to perform an install if the release doesn't exist.
- `--dry-run` - Add this switch to install or upgrade commands to get a view of the resources and YAML that would be created, without applying them to the cluster.
- `helm get values {release-name}` - Get the values that were used to deploy a release.
- `helm delete {release-name}` - Remove the release and all its resources.
🔀 Reconfiguring The App With Ingress
Now we can modify the app we've deployed to route through our new ingress controller, but a few simple changes are required first. As the ingress controller will be fronting all requests, the services in front of the deployments should be switched back to internal, i.e. `ClusterIP`.

- Edit both the data API & frontend Service YAML manifests, changing the service type to `ClusterIP`.
- Edit the frontend Deployment YAML manifest, changing the `API_ENDPOINT` environment variable to use the same-origin URI `/api` - no need for a scheme or host.
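The relevant fragments of those edits would look something like this (excerpts only; the surrounding fields in your manifests stay unchanged):

```yaml
# In both Service manifests: switch the type back to internal
spec:
  type: ClusterIP
---
# In the frontend Deployment, the env var under the container spec becomes:
env:
  - name: API_ENDPOINT
    value: /api
```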
Apply these three changes with `kubectl`; the app will now be temporarily unavailable. Note: if you have changed namespace with `kubens`, you should switch back to the `default` namespace before running the apply!

If you run `kubectl get svc` you should see both services are now of type `ClusterIP` and have no external IP associated.
The next thing is to configure the ingress by creating an Ingress resource. This can be a fairly complex resource to set up, but it boils down to a set of HTTP path mappings (routes) and which backend Service should serve them. Here is the completed manifest file `ingress.yaml`:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nanomon
  labels:
    name: nanomon
spec:
  ingressClassName: nginx
  rules:
    # Important: we leave the host out, as we don't have DNS configured
    # No host means these rules will match ALL HTTP requests hitting the controller IP
    - http:
        paths:
          # Routing for the frontend
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: frontend
                port:
                  number: 80
          # Routing for the API
          - pathType: Prefix
            path: "/api"
            backend:
              service:
                name: api
                port:
                  number: 80
```
Apply it the same as before with `kubectl`, and validate the status with:

```bash
kubectl get ingress
```

It may take a minute for it to be assigned an address; note the address will be the same as the external IP of the ingress controller. You can check this with:

```bash
kubectl get svc -n ingress | grep LoadBalancer
```
Visit this IP in your browser. If you check the "About" screen and click the "More Details" link, it should take you to the API, which should now be served from the same IP as the frontend.
🖼️ Cluster & Architecture Diagram
We've reached the final state of the application deployment, yes I promise this time! The resources deployed into the cluster & in Azure at this stage can be visualized as follows. This is a slightly simplified version of the previous diagrams in order to fit everything in, so things like the Deployment resources have been omitted.

Note the addition of the ingress controller Deployment and Service in the `ingress` namespace, and the Ingress resource alongside the other resources in the `default` namespace.
🎉 Completion!
Congratulations, you've reached the end of the workshop! You should now have a pretty good understanding of the core concepts of Kubernetes, and have deployed a simple but complete application into a real cluster.