Trying Out Kubernetes’ Gateway API (Beta) — Using Contour with Kind

A Simple Hands-On With the Successor of Ingress

Alan Wang
9 min read · Aug 2, 2022
Photo by Denys Nevozhai on Unsplash

Earlier in July 2022, Kubernetes announced that the Gateway API, managed by SIG Network, had reached Beta.

There are already some articles and videos on this topic, but as a technical writer who’s been learning and writing about Kubernetes for the past few months, I want to try it out myself. More specifically — I want to deploy this shiny new thing in a local Kubernetes cluster and watch it work.

It turned out to be a bit difficult, since most implementations have pretty sparse documentation, either about the deployment itself or about what has actually been implemented. Eventually I found that Contour’s Gateway works best, so there you go (not sponsored, by the way). I’m sure other implementations will eventually reach more or less the same level, including the popular Nginx and Istio that my company is currently using.

This article also serves as a semi-note to myself, because what better way is there to not forget something than to write it all down?

Test Environment

I’m using Kind with the Docker engine on Windows, but I imagine it will also work with minikube or Docker Desktop (with Kubernetes enabled) on any platform.

Download the Kind binary and use it to create a cluster:

kind create cluster

Pods and Service We’ll Use in This Example

The article assumes that you have basic knowledge of containers and pods.

This manifest creates a deployment (three echo-server pods listening on port 3000) and one service (ClusterIP on port 8080):

# deploy.yaml
#
#
# Deployment
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: echo
  replicas: 3
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: ealen/echo-server:latest
        env:
        - name: PORT
          value: "3000"
        ports:
        - containerPort: 3000
---
#
# Service
#
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: echo
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 3000

The service is an internal Kubernetes service, which uses the label app: echo to select all pods carrying that label. Since pods may fail and have to be recreated by Kubernetes (the deployment asks for a replica set of 3 pods at all times), the service keeps linking up with the pods even when their IPs change.

All resources are placed in the default namespace. Namespaces are virtual scopes that separate resources belonging to different teams or functionalities.

Deploy them (or attach these definitions to other manifests):

kubectl apply -f deploy.yaml
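To see the label mapping in action, you can inspect the service’s endpoints after deploying. Something like the following should list one pod IP per replica (the IPs shown here are illustrative and will differ in your cluster):

> kubectl get endpoints echo-service
NAME           ENDPOINTS                                         AGE
echo-service   10.244.0.5:3000,10.244.0.6:3000,10.244.0.7:3000   30s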

Ingress

Photo by zaya odeesho on Unsplash

Before using the Gateway API, we’ll first take a look at a simple example using the Contour Ingress Controller as a comparison.

Ingress (literally: the action or fact of going in or entering; the capacity or right of entrance) has been around for a while in Kubernetes. It serves as a reverse proxy and load-balancing server that routes external requests to services (which in turn handle internal routing), so that we can access pods under different URL paths.

In order to use Ingress, we actually need an implementation, which is called an Ingress Controller:

  • The controller runs as a set of pods, usually in a specific namespace.
  • The controller creates a load balancer service and watches Ingress resources for routing rules.

An Ingress controller is a specialized load balancer for Kubernetes (and other containerized) environments. Kubernetes is the de facto standard for managing containerized applications. For many enterprises, moving production workloads into Kubernetes brings additional challenges and complexities around application traffic management. An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.

— Nginx: What Is an Ingress Controller?

Ingress Manifest

To use the Contour Ingress Controller, we have to tell the Ingress to use the contour ingress class via the annotation kubernetes.io/ingress.class:

# ingress.yaml
#
#
# Contour Ingress
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: contour
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo-service
            port:
              number: 8080

The only Ingress rule routes /echo (or /echo/something, etc.) to the echo-service we defined earlier.

Deploy Contour Ingress Controller

Contour provides a quick start manifest so that we can deploy the Ingress controller resources in one go:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

Wait until the two envoy pods are up and running (then press Ctrl+C to stop watching):

kubectl get pods -n projectcontour -o wide --watch

I’d advise waiting a while longer, since sometimes my envoy pods seem to get stuck in an unhealthy state forever…
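If you’d rather block until the pods are ready instead of eyeballing the watch output, kubectl wait can do it. A minimal sketch, assuming the envoy pods carry the app: envoy label (which the quickstart manifest applies, as far as I can tell):

kubectl wait --namespace projectcontour \
  --for=condition=Ready pod --selector=app=envoy --timeout=120s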

Now deploy the Ingress manifest:

kubectl apply -f ingress.yaml

If you check the projectcontour namespace, you’ll see that a load balancer service called envoy is up (Contour’s Ingress is Envoy-based):

> kubectl get svc -n projectcontour
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
contour   ClusterIP      10.96.120.212   <none>        8001/TCP...
envoy     LoadBalancer   10.96.249.51    <pending>     80:32763...

Now we can expose envoy to our localhost using port forwarding (in a new terminal):

> kubectl port-forward -n projectcontour service/envoy 80:80
Forwarding from 127.0.0.1:80 -> 8080
Forwarding from [::1]:80 -> 8080

And now you can reach the echo pods via http://localhost/echo:

> curl http://localhost/echo
{
  "host":{
    "hostname":"localhost",
    "ip":"::ffff:10.244.0.11",
    "ips":[]
  },
  "http":{
    "method":"GET",
    "baseUrl":"",
    "originalUrl":"/echo",
    "protocol":"http"
  },
  "request":{
    "params":{
      "0":"/echo"
    },
    "query":{},
    "cookies":{},
    "body":{},
    "headers":{
      "host":"localhost",
      "user-agent":"curl/7.83.1",
      "accept":"*/*",
      "x-forwarded-for":"10.244.0.11",
      "x-forwarded-proto":"http",
      "x-envoy-internal":"true",
      "x-request-id":"4177cd6c-451b-4ccb-a4df-af7a6767918e",
      "x-envoy-expected-rq-timeout-ms":"15000",
      "x-request-start":"t=1659429551.741"
    }
  },
  "environment":{
    "PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "HOSTNAME":"echo-deployment-7f78b74664-9fn8w",
    "NODE_VERSION":"16.16.0",
    "YARN_VERSION":"1.22.19",
    "PORT":"3000",
    "KUBERNETES_PORT_443_TCP":"tcp://10.96.0.1:443",
    "KUBERNETES_PORT_443_TCP_ADDR":"10.96.0.1",
    "ECHO_SERVICE_SERVICE_HOST":"10.96.162.58",
    "ECHO_SERVICE_PORT_8080_TCP_ADDR":"10.96.162.58",
    "ECHO_SERVICE_PORT_8080_TCP_PORT":"8080",
    "KUBERNETES_PORT":"tcp://10.96.0.1:443",
    "KUBERNETES_PORT_443_TCP_PROTO":"tcp",
    "ECHO_SERVICE_SERVICE_PORT":"8080",
    "ECHO_SERVICE_PORT":"tcp://10.96.162.58:8080",
    "ECHO_SERVICE_PORT_8080_TCP":"tcp://10.96.162.58:8080",
    "ECHO_SERVICE_PORT_8080_TCP_PROTO":"tcp",
    "KUBERNETES_SERVICE_HOST":"10.96.0.1",
    "KUBERNETES_SERVICE_PORT":"443",
    "KUBERNETES_SERVICE_PORT_HTTPS":"443",
    "KUBERNETES_PORT_443_TCP_PORT":"443",
    "ECHO_SERVICE_SERVICE_PORT_HTTP":"8080",
    "HOME":"/root"
  }
}

I’ve formatted the output here, because otherwise it’s an unreadable mess.

Next we’ll see how things change if we switch to the Gateway API to do the same.

Gateway API

Photo by Denys Nevozhai on Unsplash

The problem with Ingress is said to be that it often needs configuration via annotations or additional CRDs (custom resource definitions). The Gateway API is the latest effort to unify the standards and evolve Ingress further.

The Ingress resource is one of the many Kubernetes success stories…However, five years after the creation of Ingress, there are signs of fragmentation into different but strikingly similar CRDs and overloaded annotations. The same portability that made Ingress pervasive also limited its future.

When you put it all together, you have a single load balancing infrastructure that can be safely shared by multiple teams. The Gateway API is not only a more expressive API for advanced routing, but is also a role-oriented API, designed for multi-tenant infrastructure.

— Kubernetes: Evolving Kubernetes networking with the Gateway API

Originally conceived as a successor to the well known Ingress API, the benefits of Gateway API include (but are not limited to) explicit support for many commonly used networking protocols (e.g. HTTP, TLS, TCP, UDP) as well as tightly integrated support for Transport Layer Security (TLS). The Gateway resource in particular enables implementations to manage the lifecycle of network gateways as a Kubernetes API.

— Kubernetes: Kubernetes Gateway API Graduates to Beta

In other words, the Gateway API is more capable — it can route requests across namespaces and supports different types of protocols (layer-7 HTTP routes; layer-4 TCP and UDP routes; and TLS routes, which sit somewhere between layers 4 and 7). The Gateway can be shared by all, while teams manage their own routing rules and traffic splitting. Everyone can happily mind their own business.
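As a taste of the layer-4 side, here is a minimal TCPRoute sketch. This is purely illustrative and not used in this article: TCPRoute is still v1alpha2 and (as of v0.5.0) ships in the experimental channel rather than the standard install we’ll use below, and the listener name tcp is a made-up assumption:

# tcp-route.yaml (hypothetical example, not deployed here)
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: echo-tcp-route
  namespace: default
spec:
  parentRefs:
  - name: echo-gateway
    sectionName: tcp   # assumes the Gateway defines a TCP listener named "tcp"
  rules:
  - backendRefs:
    - name: echo-service
      port: 8080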

The Gateway API has three key resources:

  • GatewayClass — the equivalent of an Ingress class; it defines a template for Gateways and can carry implementation-specific configuration.
  • Gateway — defines an actual Gateway instance based on a GatewayClass.
  • HTTPRoute — defines a set of routing rules. An HTTPRoute has to be mapped to a Gateway on one end and to backend services on the other.

I won’t go into much detail here; you can read more in the official documentation.

The Implementations

At the time of writing, there are already a dozen implementations, most of them still in Alpha. It turns out that many of them implement the Gateway API on top of, or based on, their existing Ingress gateways (well, it starts to get confusing which “gateway” is which).

Originally I tried Kong’s Gateway Operator, which installs the Kong Ingress Controller as well. The Gateway itself works but does not recognize HTTPRoute (it only works with a Kong Ingress! That may change later, though). The Istio Gateway requires more memory and is not easy to install locally. And what about the Nginx Gateway? You have to build an image in Linux first, which is a bit difficult to do on Windows.

Anyway, it feels like the Gateway API is not exactly Ingress 2.0 in itself; implementation-wise it is still mostly built on top of Ingress Controllers. In fact, you’ll see a load balancer service just like with an Ingress Controller.

Let’s get on with it.

Delete the cluster and create a new one:

kind delete cluster
kind create cluster

Install Gateway API

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.5.0/standard-install.yaml

Since the API is not yet part of core Kubernetes, we need to install it first and foremost.

This installs the standard Gateway API resources (GatewayClass, Gateway, and HTTPRoute), which support both v1beta1 and v1alpha2.
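You can verify that the CRDs are in place by listing the resources in the gateway.networking.k8s.io API group; the output should look roughly like this:

> kubectl api-resources --api-group=gateway.networking.k8s.io
NAME             SHORTNAMES   APIVERSION                          NAMESPACED   KIND
gatewayclasses   gc           gateway.networking.k8s.io/v1beta1   false        GatewayClass
gateways         gtw          gateway.networking.k8s.io/v1beta1   true         Gateway
httproutes                    gateway.networking.k8s.io/v1beta1   true         HTTPRoute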

Install Contour Gateway

Again, Contour provides a convenient provisioner manifest for us to install everything we’ll need (although there’s a small catch, which we’ll mention in a sec):

kubectl apply -f https://projectcontour.io/quickstart/contour-gateway-provisioner.yaml

The resources will be created under the namespace projectcontour.

Wait until the contour-gateway-provisioner pod is up and running:

kubectl get pods -n projectcontour -o wide --watch

Gateway, HTTPRoute and Everything Else

Now we can define our Gateway and HTTPRoute. The GatewayClass here is named contour, but it can be called anything, as long as the controllerName is correct.

The Contour docs put these resources in the projectcontour namespace, but I find default works too.

# gateway.yaml
#
#
# GatewayClass
#
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: contour
  namespace: default
spec:
  controllerName: projectcontour.io/gateway-controller
---
#
# Gateway
#
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: echo-gateway
  namespace: default
spec:
  gatewayClassName: contour
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
      namespaces:
        from: All
---
#
# HTTPRoute
#
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo-route
  namespace: default
spec:
  parentRefs:
  - name: echo-gateway
  rules:
  - backendRefs:
    - name: echo-service
      port: 8080
    matches:
    - path:
        type: PathPrefix
        value: /echo

I put these in the same file for convenience. In practice, HTTPRoutes may be deployed separately.

There is a lot of customization you can do in each resource, for example adding HTTP redirects and rewrites, but we won’t cover that here (those features may not yet be fully implemented and/or may require additional CRDs anyway).
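Just to give an idea, an HTTP-to-HTTPS redirect in the v1beta1 spec looks roughly like the rule below. This is a sketch only; I haven’t tested it with Contour, and support varies by implementation:

# Hypothetical HTTPRoute rule using the standard RequestRedirect filter
rules:
- matches:
  - path:
      type: PathPrefix
      value: /echo
  filters:
  - type: RequestRedirect
    requestRedirect:
      scheme: https
      statusCode: 301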

The HTTPRoute associates itself with the Gateway via parentRefs and with the service via backendRefs. What it does is almost exactly the same as our Ingress example — the routing rule is easy to recognize if you compare the two — except that now we can create rules without touching the Gateway server.

The HTTPRoute has to be in the same namespace as the services it references (but it can be in a different namespace than the Gateway). At least, this is how the Contour Gateway behaves, and it conforms to the API design. The Gateway can decide which namespaces are allowed to attach routes; in this case we simply accept all namespaces, and the kind has to be HTTPRoute.
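If you wanted to lock this down instead, the listener could accept routes only from the Gateway’s own namespace. A minimal sketch (untested):

# Only accept HTTPRoutes from the Gateway's own namespace
listeners:
- name: http
  protocol: HTTP
  port: 80
  allowedRoutes:
    kinds:
    - kind: HTTPRoute
    namespaces:
      from: Same   # valid values are All, Same, and Selector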

Now deploy the resources:

kubectl apply -f deploy.yaml
kubectl apply -f gateway.yaml
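To check on the Gateway’s progress, you can inspect the resource itself; the conditions in its status should tell you whether it has been accepted and is ready (the exact printed columns depend on the CRD version):

kubectl get gateway echo-gateway
kubectl describe gateway echo-gateway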

When the Gateway is ready, run

kubectl get svc

and you should see a LoadBalancer service named envoy-echo-gateway, which is the Contour Gateway. (It lives in the default namespace because that’s where the Gateway is.) Now expose it to localhost, just like we did before:

kubectl port-forward service/envoy-echo-gateway 80:80

You can now once again access the echo pods via http://localhost/echo.
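Since the match type is PathPrefix, subpaths get routed as well; for example, a request like the one below should come back with "originalUrl":"/echo/hello" in the echoed JSON:

> curl http://localhost/echo/hello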

Photo by Paul Rysz on Unsplash
