AWS Multi-ENI Controller for Telco CNF Deployments

As someone who has been working extensively with telco workloads on Kubernetes, I’ve experienced firsthand the challenges of deploying Cloud Native Functions (CNFs) that require multiple network interfaces. Traditional Kubernetes networking, with its single interface per pod, simply doesn’t cut it for telecommunications applications that need complex networking topologies.

That’s why I’m excited to share the AWS Multi-ENI Controller – a purpose-built Kubernetes controller I’ve been working on that’s designed specifically for Multus CNI deployments on AWS. This controller bridges the gap between AWS networking capabilities and Kubernetes multi-network requirements, making it possible to deploy telco CNFs with the sophisticated networking they demand. 🚀

Why Multiple Network Interfaces Matter for Telco CNFs

If you’ve worked with telecommunications workloads, you know they have unique networking requirements that differ significantly from typical cloud-native applications:

- Separation of traffic planes: control, user, and management traffic typically must travel over distinct, isolated networks
- High throughput and low latency on the user plane, often requiring SR-IOV or DPDK
- Strict security and compliance boundaries enforced per network, not just per cluster
- Protocols and topologies that assume dedicated interfaces rather than a shared pod network

Traditional Kubernetes networking with a single interface per pod simply cannot meet these requirements. This is where Multus CNI comes in, enabling multiple network interfaces per pod – but it requires those interfaces to exist on the nodes first.

Multus CNI Multi-Network Architecture

A worker node with multiple network interfaces (ENIs) interacting with Multus CNI.

The Challenge: Dynamic ENI Management

While Multus CNI solves the multi-interface problem, it creates a new challenge: how do you dynamically provision and manage the underlying network interfaces on AWS EC2 instances?

Traditionally, this required:

- Pre-provisioning ENIs in CloudFormation or Terraform templates
- Custom scripts to attach and detach interfaces as nodes scaled
- Manual cleanup of orphaned ENIs when nodes were terminated
- Hand-coordinating device index assignments across the fleet

The AWS Multi-ENI Controller eliminates this complexity by providing a declarative, Kubernetes-native approach to ENI management. No more wrestling with infrastructure templates or manual provisioning! 🎯

Architecture Overview

The AWS Multi-ENI Controller consists of two main components working in harmony:

AWS Multi-ENI Controller Architecture Diagram

High-level architecture showing the NodeENI Controller managing ENI lifecycle and the ENI Manager DaemonSet configuring network interfaces on worker nodes.

1. NodeENI Controller

Watches NodeENI custom resources and reconciles them against AWS: it creates ENIs in the specified subnets, attaches them to matching nodes at the requested device index, and cleans them up when the resource or node goes away.

2. ENI Manager DaemonSet

Runs on every selected worker node and handles host-side configuration: bringing new interfaces up, setting the MTU, and binding interfaces to DPDK drivers such as vfio-pci when DPDK is enabled.
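
Once the controller is installed (covered in the walkthrough below), both components show up as ordinary workloads; pod names will vary in your cluster:

# The controller runs as a Deployment, the ENI Manager as a DaemonSet
kubectl get pods -n eni-controller-system -o wide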

Three Primary Use Cases

One of the things I love about the AWS Multi-ENI Controller is its flexibility. It supports three distinct use cases, each tailored to different telco CNF requirements:

Case 1: Regular ENI Attachment (Basic Multi-Network)

Perfect for CNFs that need network isolation without high-performance requirements. This is great for control plane functions or management interfaces.

apiVersion: networking.k8s.aws/v1alpha1
kind: NodeENI
metadata:
  name: control-plane-eni
spec:
  nodeSelector:
    ng: multi-eni
  subnetID: subnet-0f59b4f14737be9ad
  securityGroupIDs:
  - sg-05da196f3314d4af8
  deviceIndex: 2
  mtu: 9001
  description: "Control plane network for CNF"
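
Applying and checking this is standard kubectl fare (the file name here is illustrative):

kubectl apply -f control-plane-eni.yaml
kubectl get nodeeni control-plane-eni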

Case 2: SR-IOV Configuration (Performance + Flexibility)

Ideal for CNFs that need better performance than regular interfaces but still require kernel-space networking. This strikes a nice balance between performance and compatibility.

apiVersion: networking.k8s.aws/v1alpha1
kind: NodeENI
metadata:
  name: user-plane-sriov-eni
spec:
  nodeSelector:
    ng: multi-eni
  subnetID: subnet-0f59b4f14737be9ad
  securityGroupIDs:
  - sg-05da196f3314d4af8
  deviceIndex: 3
  enableDPDK: false  # Use kernel driver
  dpdkPCIAddress: "0000:00:07.0"  # PCI address for SR-IOV config
  dpdkResourceName: "intel.com/sriov_kernel"
  mtu: 9001

Case 3: Full DPDK Integration (Maximum Performance)

For high-throughput CNFs like UPF that require userspace packet processing. This is where things get really exciting for performance-critical workloads! 🚀

apiVersion: networking.k8s.aws/v1alpha1
kind: NodeENI
metadata:
  name: dpdk-user-plane-eni
spec:
  nodeSelector:
    ng: multi-eni
  subnetID: subnet-0f59b4f14737be9ad
  securityGroupIDs:
  - sg-05da196f3314d4af8
  deviceIndex: 4
  enableDPDK: true
  dpdkDriver: "vfio-pci"
  dpdkPCIAddress: "0000:00:08.0"
  dpdkResourceName: "intel.com/intel_sriov_netdevice"
  mtu: 9000

Real-World Implementation Example

Now, let me show you how this all comes together in practice! Let’s walk through deploying a 5G UPF (User Plane Function) that requires both control plane and high-performance user plane interfaces. This is a real scenario I’ve worked with extensively.

Step 1: Deploy the AWS Multi-ENI Controller

First, let’s get the controller up and running. I’ve made this super easy with a Helm chart:

# Install using Helm
helm install aws-multi-eni \
  oci://ghcr.io/johnlam90/charts/aws-multi-eni-controller \
  --version 1.3.5 \
  --namespace eni-controller-system \
  --create-namespace
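
Before moving on, it’s worth a quick sanity check that the install succeeded. The CRD name below is inferred from the API group and plural used throughout this post:

# Confirm the controller Deployment and ENI Manager DaemonSet are up
kubectl get deploy,daemonset -n eni-controller-system

# Confirm the NodeENI CRD is registered
kubectl get crd nodeenis.networking.k8s.aws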

Step 2: Label Your Nodes

Next, we need to tell the controller which nodes should receive ENIs:

# Label nodes that should receive ENIs
kubectl label nodes ip-10-0-1-100.ec2.internal ng=multi-eni
kubectl label nodes ip-10-0-1-101.ec2.internal ng=multi-eni
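
A quick check that the label matches the nodeSelector we’ll use in the NodeENI resources:

# These nodes should match the ng=multi-eni selector
kubectl get nodes -l ng=multi-eni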

Step 3: Create NodeENI Resources

Here’s where the magic happens. We’ll create two different ENIs for our UPF:

# Control plane network
apiVersion: networking.k8s.aws/v1alpha1
kind: NodeENI
metadata:
  name: upf-control-plane
spec:
  nodeSelector:
    ng: multi-eni
  subnetID: subnet-0a1b2c3d4e5f67890  # Control plane subnet
  securityGroupIDs:
  - sg-0123456789abcdef0
  deviceIndex: 2
  mtu: 1500
  description: "UPF Control Plane Network"
---
# User plane network with DPDK
apiVersion: networking.k8s.aws/v1alpha1
kind: NodeENI
metadata:
  name: upf-user-plane-dpdk
spec:
  nodeSelector:
    ng: multi-eni
  subnetID: subnet-9a8b7c6d5e4f3c2b1  # User plane subnet
  securityGroupIDs:
  - sg-fedcba9876543210f
  deviceIndex: 3
  enableDPDK: true
  dpdkDriver: "vfio-pci"
  dpdkResourceName: "intel.com/upf_dataplane"
  mtu: 9000
  description: "UPF User Plane DPDK Network"
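
Save both manifests and apply them (the file name is illustrative); the -w flag lets you watch the attachments move to Attached:

kubectl apply -f upf-nodeenis.yaml
kubectl get nodeenis -w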

Step 4: Create NetworkAttachmentDefinitions

# Control plane network attachment
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: upf-control-net
  namespace: telco-cnf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "ipvlan",
    "master": "eth2",
    "mode": "l2",
    "ipam": {
      "type": "host-local",
      "subnet": "10.1.0.0/24",
      "rangeStart": "10.1.0.100",
      "rangeEnd": "10.1.0.200"
    }
  }'
---
# User plane DPDK network attachment
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: upf-dataplane-net
  namespace: telco-cnf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "0000:00:08.0",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.100.0/24",
      "rangeStart": "192.168.100.10",
      "rangeEnd": "192.168.100.50"
    }
  }'
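
Apply these into the CNF namespace (again, the file name is illustrative); Multus exposes them as net-attach-def resources you can list:

# Create the namespace idempotently, then apply the attachments
kubectl create namespace telco-cnf --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f upf-network-attachments.yaml

# Verify Multus registered both attachments
kubectl get network-attachment-definitions -n telco-cnf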

Step 5: Deploy the UPF CNF

apiVersion: apps/v1
kind: Deployment
metadata:
  name: upf-cnf
  namespace: telco-cnf
spec:
  replicas: 2
  selector:
    matchLabels:
      app: upf-cnf
  template:
    metadata:
      labels:
        app: upf-cnf
      annotations:
        k8s.v1.cni.cncf.io/networks: |
          [
            {
              "name": "upf-control-net",
              "interface": "control0"
            },
            {
              "name": "upf-dataplane-net",
              "interface": "dataplane0"
            }
          ]
    spec:
      containers:
      - name: upf
        image: telco/upf-cnf:latest
        resources:
          limits:
            intel.com/upf_dataplane: 1  # Request DPDK resource
            memory: "4Gi"
            cpu: "2"
          requests:
            memory: "2Gi"
            cpu: "1"
        securityContext:
          privileged: true  # Required for DPDK
        env:
        - name: CONTROL_INTERFACE
          value: "control0"
        - name: DATAPLANE_INTERFACE
          value: "dataplane0"

Verification and Monitoring

Now let’s make sure everything is working as expected. Here are the key verification steps I always run:

Check NodeENI Status

kubectl get nodeenis -o wide

Expected output:

NAME                   NODE                           ENI-ID              SUBNET-ID              STATUS
upf-control-plane      ip-10-0-1-100.ec2.internal   eni-0123456789abc    subnet-0a1b2c3d4e5f    Attached
upf-user-plane-dpdk    ip-10-0-1-100.ec2.internal   eni-0987654321def    subnet-9a8b7c6d5e4f    Attached

Verify Interface Configuration

# Check interfaces on the node
kubectl exec -it eni-manager-xxxxx -n eni-controller-system -- ip link show

# Check DPDK binding status
kubectl exec -it eni-manager-xxxxx -n eni-controller-system -- /opt/dpdk/dpdk-devbind.py --status

Monitor Pod Network Interfaces

# Check pod interfaces
kubectl exec -it upf-cnf-xxxxx -n telco-cnf -- ip addr show

Expected output showing multiple interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0@if123: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500  # Primary interface
3: control0@if124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500  # Control plane
4: dataplane0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000  # DPDK interface
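
One more check I find useful: confirm the DPDK resource requested in the pod spec is actually advertised in the node’s allocatable resources. This assumes an SR-IOV device plugin is exposing the resource name configured in the NodeENI:

# Should print the number of available intel.com/upf_dataplane devices
kubectl get node ip-10-0-1-100.ec2.internal \
  -o jsonpath="{.status.allocatable['intel\.com/upf_dataplane']}"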

Benefits for Telco Applications

Let me share why I think this controller is a game-changer for telco workloads:

1. Simplified Operations

ENIs become declarative Kubernetes resources: no more CloudFormation templates, custom scripts, or manual attachment workflows.

2. Enhanced Security

Each ENI carries its own security groups, so control, user, and management traffic can be isolated with distinct policies.

3. Performance Optimization

Built-in SR-IOV and DPDK support enables kernel-bypass packet processing for throughput-critical functions like the UPF.

4. Scalability

New nodes that match the nodeSelector automatically receive their ENIs, so scaling a node group scales the networking with it.

5. Cloud-Native Integration

Everything is managed through CRDs and Helm, fitting naturally into GitOps pipelines and existing Kubernetes tooling.

Best Practices for Production Deployments

Based on my experience deploying this in production environments, here are some key best practices I’ve learned:

1. Resource Planning

Account for per-instance ENI limits when assigning device indexes, and tune the controller’s reconcile concurrency to match the number of nodes and NodeENI resources it manages:

# Configure controller concurrency for scale
controller:
  maxConcurrentReconciles: 10
  maxConcurrentENICleanup: 5
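
Since these are Helm values, rolling them out is just an upgrade of the existing release (a sketch assuming the block above lives in values.yaml):

helm upgrade aws-multi-eni \
  oci://ghcr.io/johnlam90/charts/aws-multi-eni-controller \
  --version 1.3.5 \
  --namespace eni-controller-system \
  -f values.yaml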

2. Network Design

Plan device index assignments and subnet layout up front: keep control plane, user plane, and management traffic in separate subnets, and leave headroom within your instance type’s ENI limits.

3. Security Configuration

Use dedicated security groups per ENI rather than sharing one across traffic types, and scope the controller’s IAM permissions to only the ENI operations it needs.

4. Monitoring and Observability

# Watch controller events for attachment errors
kubectl get events -n eni-controller-system

# Check ENI attachment status
kubectl describe nodeeni upf-control-plane

Conclusion

Working on the AWS Multi-ENI Controller has been an incredibly rewarding journey. It represents what I believe is a significant leap forward in enabling telco CNF deployments on Kubernetes. By seamlessly integrating AWS networking capabilities with Multus CNI, it provides telco operators with the tools they need to deploy complex, multi-network applications in a truly cloud-native way.

Whether you’re deploying 5G core functions, edge computing workloads, or any application requiring multiple network interfaces, the AWS Multi-ENI Controller simplifies the complexity while maintaining the performance and security requirements that telco applications demand.

The future of telecommunications is cloud-native, and with tools like this, that future is available today. I’m excited to see how the community adopts and extends this work! 🚀

If you’re working with telco workloads on Kubernetes and struggling with multi-network requirements, I encourage you to give the AWS Multi-ENI Controller a try. It might just solve some of those networking headaches you’ve been dealing with.

Feel free to check out the AWS Multi-ENI Controller GitHub repository to learn more and explore the source code. And if you have any questions or feedback, don’t hesitate to reach out to the community or connect with me on LinkedIn.

Happy deploying, and may your telco CNFs always have the network interfaces they need! 🌟

Cheers, John Lam