The gardener-extension-shoot-traefik deploys the Traefik ingress controller to Gardener shoot clusters as a replacement for the nginx-ingress-controller, which is out of maintenance.
- Traefik Ingress Controller: Deploys Traefik v3.x as the ingress controller in shoot clusters
- Admission Webhook: Validates that the Traefik extension is only enabled for shoots with purpose "evaluation". Deployed as a separate admission controller using the same binary with the `webhook` subcommand.
- ManagedResource: Uses Gardener's ManagedResource mechanism for deployment and lifecycle management
- Configurable: Supports custom Traefik image, replicas, and ingress class configuration
- Go 1.26.x or later
- GNU Make
- Docker for local development
- Gardener Local Setup for local development
- Shoot clusters with purpose "evaluation"
You can enable the extension for a Gardener Shoot cluster by updating the `.spec.extensions` section of your shoot manifest.
Important: The Traefik extension can only be enabled for shoots with purpose: evaluation.
This is enforced by an admission webhook.
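For illustration, a shoot with any other purpose is rejected by the webhook at admission time. A minimal (hypothetical) example of a manifest that would fail validation:

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-prod-shoot          # hypothetical name
  namespace: garden-my-project
spec:
  purpose: production          # not "evaluation": the webhook denies this shoot
  extensions:
    - type: shoot-traefik
```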
```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot
  namespace: garden-my-project
spec:
  # Purpose MUST be "evaluation" for Traefik extension
  purpose: evaluation
  extensions:
    - type: shoot-traefik
      providerConfig:
        apiVersion: traefik.extensions.gardener.cloud/v1alpha1
        kind: TraefikConfig
        spec:
          # Optional: Number of replicas (default: 2)
          replicas: 2
          # Optional: Log level (default: INFO)
          # Valid values: DEBUG, INFO, WARN, ERROR, FATAL, PANIC
          logLevel: INFO
          # Optional: Ingress provider type (default: KubernetesIngress)
          # Valid values:
          # - KubernetesIngress: Standard Kubernetes Ingress provider (ingress class: "traefik")
          # - KubernetesIngressNGINX: NGINX-compatible provider with NGINX annotation support (ingress class: "nginx")
          ingressProvider: KubernetesIngress
          # Optional: Enable the Traefik API and dashboard (default: false)
          # WARNING: Not recommended for production — see Dashboard section below.
          dashboard: false
  # ... rest of your shoot configuration
```

| Field | Type | Default | Description |
|---|---|---|---|
| `spec.replicas` | int32 | `2` | Number of Traefik replicas |
| `spec.logLevel` | string | `INFO` | Traefik log level: `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL`, `PANIC` |
| `spec.ingressProvider` | string | `KubernetesIngress` | Kubernetes Ingress provider type: `KubernetesIngress` or `KubernetesIngressNGINX` |
| `spec.dashboard` | bool | `false` | Enable the Traefik API and dashboard (not recommended for production) |
The extension supports two Kubernetes Ingress provider types:
The standard Kubernetes Ingress provider that implements the core Kubernetes Ingress specification.
```yaml
spec:
  ingressProvider: KubernetesIngress
```

The NGINX-compatible provider makes it easier to migrate from NGINX Ingress Controller to Traefik with minimal configuration changes. Note that only a subset of NGINX annotations is supported — see the Traefik NGINX Annotations Support page for details.

```yaml
spec:
  ingressProvider: KubernetesIngressNGINX
```

Note: When using `KubernetesIngressNGINX`, the ingress class is automatically set to "nginx" and the IngressClass resource uses `controller: k8s.io/ingress-nginx` for compatibility with existing Ingress resources. Traefik handles these Ingresses using its NGINX-compatible provider.
When to use KubernetesIngressNGINX:
- You're migrating from NGINX Ingress Controller
- Your existing Ingress resources use NGINX-specific annotations
- You want to maintain compatibility with NGINX annotations during the transition
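As an illustration, an existing Ingress like the following (hypothetical names) can keep working unchanged under the NGINX-compatible provider, assuming its annotations fall within Traefik's supported subset:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # hypothetical
  annotations:
    # An NGINX annotation; whether it is honored depends on Traefik's supported subset
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx      # matches the class set by KubernetesIngressNGINX
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```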
For more information, see:
- Traefik Kubernetes Ingress Documentation
- Traefik NGINX Annotations Support
- NGINX to Traefik Migration Guide
Warning: Enabling the API and the dashboard in production is not recommended, because it will expose all configuration elements, including sensitive data, for which access should be reserved to administrators.
The Traefik dashboard is disabled by default. To enable it, set spec.dashboard: true in the provider config:
```yaml
spec:
  dashboard: true
```

Once the shoot is reconciled, you can access the dashboard by port-forwarding to the Traefik deployment on the shoot cluster:

```shell
kubectl -n kube-system port-forward deployment/traefik 9000:9000
```

Then open http://localhost:9000/dashboard/ in your browser (the trailing `/` is required).
The extension includes an admission controller that validates Shoot resources to ensure
the Traefik extension can only be enabled for shoots with purpose: evaluation.
The admission controller is deployed as a separate component using the same binary
(extension-traefik webhook) and has its own Helm charts under
charts/gardener-extension-admission-shoot-traefik/. Following the Gardener extension
convention, it consists of two sub-charts:
- `charts/runtime/` — Deployed in the runtime cluster. Contains the Deployment, Service, RBAC, VPA, and PodDisruptionBudget resources for the webhook server.
- `charts/application/` — Deployed in the virtual garden cluster. Contains the ClusterRole, ClusterRoleBinding, and ServiceAccount needed for the webhook to access Shoot resources.
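The chart layout described above, as a directory sketch (derived from the description; only the top-level structure is shown):

```text
charts/gardener-extension-admission-shoot-traefik/
├── runtime/       # Deployment, Service, RBAC, VPA, PodDisruptionBudget (runtime cluster)
└── application/   # ClusterRole, ClusterRoleBinding, ServiceAccount (virtual garden cluster)
```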
When deploying via gardener-operator, the admission controller is automatically
deployed alongside the extension. The Extension resource (from group
operator.gardener.cloud/v1alpha1) specifies both the extension and the admission
deployment:
```yaml
apiVersion: operator.gardener.cloud/v1alpha1
kind: Extension
metadata:
  name: gardener-extension-shoot-traefik
spec:
  deployment:
    admission:
      runtimeCluster:
        helm:
          ociRepository:
            ref: <registry>/admission-shoot-traefik-runtime:<version>
      virtualCluster:
        helm:
          ociRepository:
            ref: <registry>/admission-shoot-traefik-application:<version>
    extension:
      helm:
        ociRepository:
          ref: <registry>/gardener-extension-shoot-traefik:<version>
  resources:
    - kind: Extension
      type: shoot-traefik
```

See `examples/operator-extension/` for a complete example.
In order to build a binary of the extension, you can use the following command.
```shell
make build
```

The resulting binary can be found in `bin/extension-traefik`.
In order to build a Docker image of the extension, you can use the following command.
```shell
make docker-build
```

For local development of the gardener-extension-shoot-traefik it is recommended that you set up a development Gardener environment.
Please refer to the next sections for more information about deploying and testing the extension in a Gardener development environment.
The extension can also be deployed via the Gardener Operator.
In order to start a local development environment with the Gardener Operator, please refer to the following documentation.
In summary, these are the steps you need to follow in order to start a local development environment with the Gardener Operator. However, please make sure that you read the documents above for additional details.
```shell
make kind-multi-zone-up operator-up operator-seed-up
```

Before you continue with the next steps, make sure that you configure your `KUBECONFIG` to point to the kubeconfig file of the cluster which runs the Gardener Operator.
Two kubeconfig files will be created for you after the dev environment has been set up.
| Path | Description |
|---|---|
| `/path/to/gardener/example/gardener-local/kind/multi-zone/kubeconfig` | Cluster in which gardener-operator runs (a.k.a. runtime cluster) |
| `/path/to/gardener/dev-setup/kubeconfigs/virtual-garden/kubeconfig` | The virtual garden cluster |
Throughout this document we will refer to the kubeconfigs for runtime and
virtual clusters as $KUBECONFIG_RUNTIME and $KUBECONFIG_VIRTUAL
respectively.
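For convenience, you can export both variables once, using the paths from the table above (adjust `/path/to/gardener` to your local checkout):

```shell
# Kubeconfig of the cluster running gardener-operator (runtime cluster)
export KUBECONFIG_RUNTIME=/path/to/gardener/example/gardener-local/kind/multi-zone/kubeconfig
# Kubeconfig of the virtual garden cluster
export KUBECONFIG_VIRTUAL=/path/to/gardener/dev-setup/kubeconfigs/virtual-garden/kubeconfig
```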
Before deploying the extension we need to target the runtime cluster, since
this is where the extension resources for gardener-operator reside.
```shell
export KUBECONFIG=$KUBECONFIG_RUNTIME
```

In order to deploy the extension, execute the following command.

```shell
make deploy-operator
```

The `deploy-operator` target takes care of the following.
- Builds a Docker image of the extension
- Loads the image into the `kind` cluster nodes
- Packages the Helm charts (extension + admission) and pushes them to the local registry
- Deploys the `Extension` (from group `operator.gardener.cloud/v1alpha1`) to the runtime cluster, which includes the admission controller configuration
Verify that we have successfully created the `Extension` (from group `operator.gardener.cloud/v1alpha1`) resource.

```shell
$ kubectl --kubeconfig $KUBECONFIG_RUNTIME get extop gardener-extension-shoot-traefik
NAME                               INSTALLED   REQUIRED RUNTIME   REQUIRED VIRTUAL   AGE
gardener-extension-shoot-traefik   True        False              False              85s
```

Verify that the respective ControllerRegistration and ControllerDeployment resources have been created by the gardener-operator in the virtual garden cluster.
```shell
> kubectl --kubeconfig $KUBECONFIG_VIRTUAL get controllerregistrations,controllerdeployments gardener-extension-shoot-traefik
NAME                                                                          RESOURCES           AGE
controllerregistration.core.gardener.cloud/gardener-extension-shoot-traefik   Extension/traefik   3m50s

NAME                                                                        AGE
controllerdeployment.core.gardener.cloud/gardener-extension-shoot-traefik   3m50s
```

Now we can create an example shoot with our extension enabled. The `examples/shoot.yaml` file provides a ready-to-use shoot manifest, which we will use.

```shell
kubectl --kubeconfig $KUBECONFIG_VIRTUAL apply -f examples/shoot.yaml
```

Once we create the shoot cluster, gardenlet will start deploying our gardener-extension-shoot-traefik, since it is required by our shoot.
Verify that the extension has been successfully installed by checking the
corresponding ControllerInstallation resource for our extension.
```shell
$ kubectl --kubeconfig $KUBECONFIG_VIRTUAL get controllerinstallations.core.gardener.cloud
NAME                                     REGISTRATION                       SEED    VALID   INSTALLED   HEALTHY   PROGRESSING   AGE
gardener-extension-shoot-traefik-ng4r8   gardener-extension-shoot-traefik   local   True    True        True      False         2m9s
```

After your shoot cluster has been successfully created and reconciled, verify that the extension is healthy.
```shell
$ kubectl --kubeconfig $KUBECONFIG_RUNTIME --namespace shoot--local--local get extensions
NAME      TYPE      STATUS      AGE
traefik   traefik   Succeeded   2m37s
```

In order to trigger reconciliation of the extension you can annotate the extension resource.
```shell
kubectl --kubeconfig $KUBECONFIG_RUNTIME --namespace shoot--local--local annotate extensions traefik gardener.cloud/operation=reconcile
```

gardener-extension-shoot-traefik is hosted on GitHub.
Please contribute by reporting issues, suggesting features or by sending patches using pull requests.
This project is Open Source and licensed under Apache License 2.0.