Optimizing Traffic Routing with PreferClose in Kubernetes 1.30+

Introduction

Before Kubernetes 1.30, a Kubernetes Service provided only a flat list of endpoints for routing traffic; there was no topology-aware routing within that group, and traffic was distributed randomly across the available endpoints. This default behavior worked well for most workloads but introduced inefficiencies when running clusters across multiple availability zones (AZs).

kube-proxy's iptables-based load balancing spreads traffic across all endpoints regardless of zone, which can result in costly and unnecessary cross-zone data transfers. This inefficiency increases latency and networking costs in cloud environments.

Kubernetes 1.30+: Introducing trafficDistribution: PreferClose

Kubernetes 1.30 introduced the trafficDistribution: PreferClose feature, which routes traffic preferentially to the closest available endpoints. This significantly reduces cross-zone network overhead and improves performance by keeping traffic near the client whenever nearby endpoints exist.

How PreferClose Works

With trafficDistribution: PreferClose, Kubernetes prefers routing traffic to the topologically closest endpoints, using zone hints published in the Service's EndpointSlices. In order of preference, proximity means:

  • Same node (if an endpoint is available)
  • Same availability zone (AZ)
  • Same region (if no closer endpoint exists)

By favoring proximity-based routing, workloads experience lower latency and optimized network efficiency.
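
To see the mechanism at work, you can inspect the EndpointSlices behind the Service. This is a minimal sketch assuming a Service named my-service whose Pods are spread across zones (the address and zone names below are illustrative); with PreferClose active, the EndpointSlice controller adds per-zone hints that kube-proxy uses to filter endpoints:

# List the EndpointSlices backing the Service, including their zone hints
kubectl get endpointslices -l kubernetes.io/service-name=my-service -o yaml

# Abbreviated example output: each endpoint carries its zone and a hint
# telling kube-proxy which zone should consume it
endpoints:
- addresses:
  - 10.0.1.15
  zone: eu-west-1a
  hints:
    forZones:
    - name: eu-west-1a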

Enabling PreferClose in Kubernetes 1.30+

In Kubernetes 1.30 the feature is alpha, so the ServiceTrafficDistribution feature gate must be enabled manually on both the API server and kube-proxy:

# API server
--feature-gates="ServiceTrafficDistribution=true"

# kube-proxy
--feature-gates="ServiceTrafficDistribution=true"

Starting with Kubernetes 1.31, the feature gate graduated to beta and is enabled by default, so no manual configuration is required.
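
If you want to experiment on a local test cluster, the gate can be set at cluster creation time. Below is a minimal sketch using kind (the cluster layout is an assumption for illustration):

# kind-cluster.yaml: enable the feature gate cluster-wide
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  ServiceTrafficDistribution: true
nodes:
- role: control-plane
- role: worker
- role: worker

# Create the cluster
kind create cluster --config kind-cluster.yaml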

Configuring a Kubernetes Service with PreferClose

To use PreferClose, update your Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.kubernetes.io/topology-mode: auto # Enable Topology Aware Routing
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  trafficDistribution: PreferClose
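
After applying the manifest, you can confirm the field was accepted by the API server. A quick verification sketch, assuming the manifest above is saved as my-service.yaml:

kubectl apply -f my-service.yaml

# Print the configured traffic distribution policy
kubectl get service my-service -o jsonpath='{.spec.trafficDistribution}{"\n"}'
# Expected output: PreferClose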

Key Components Explained

  • The service.kubernetes.io/topology-mode: auto annotation enables Topology Aware Routing.
  • The trafficDistribution: PreferClose field tells kube-proxy to prefer the nearest available endpoints.

Benefits of PreferClose

  • Reduced Cross-Zone Costs: Avoid unnecessary data transfer charges between AZs.
  • Lower Latency: Traffic stays within the nearest node or AZ.
  • Improved Network Efficiency: Keeps traffic on shorter, in-zone network paths, reducing cross-zone bottlenecks.

Considerations & Limitations

  • Availability of Endpoints: If no endpoints exist in the preferred zone, Kubernetes falls back to endpoints in other zones; spreading replicas across zones (see the sketch after this list) helps keep a local endpoint available.
  • Load Balancer Behavior: Some cloud providers might have additional routing constraints beyond Kubernetes.
  • Works Best with Multi-AZ Deployments: Ideal for large-scale clusters spanning multiple zones or regions.
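
One way to keep an endpoint available in every zone is to spread the backing Pods with topology spread constraints. A minimal sketch for a Deployment matching the my-app selector used above (replica count, image, and port are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread Pods evenly across zones so every zone has a local endpoint
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: nginx:1.27 # illustrative image
        ports:
        - containerPort: 8080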

Final Thoughts

With Kubernetes 1.30+, the trafficDistribution: PreferClose setting gives Services a simple, declarative way to keep traffic close to its source. By reducing cross-zone latency and data-transfer costs, clusters run more efficiently while maintaining solid performance for workloads.

If you’re running Kubernetes in a multi-AZ setup, upgrading to Kubernetes 1.30+ and enabling PreferClose can significantly enhance your networking strategy.