Simulating a corporate proxy environment using Kubernetes Network Policies

Practical example on how to replicate an environment with a corporate proxy with the help of Kubernetes Network Policies.


Imagine one of your customers reporting that your software does not work correctly once deployed in their specific environment. After a few minutes (or hours) of debugging, you figure out that this customer has a corporate proxy in place: all external traffic inside this environment needs to pass through this proxy, otherwise access to the public Internet is simply not allowed.

Proxies are quite common in corporate environments and are used to provide (among other things):

  • Enhanced security, through access control, content filtering and so on
  • Monitoring to ensure audit logging as well as regulatory compliance with company policies
  • Bandwidth optimization through caching and network traffic management

For easier testing, I wanted a straightforward way to replicate this type of environment locally, where network traffic to the public Internet is forbidden unless it passes through a specified proxy. Applications are typically configured to send their traffic through the proxy via the conventional HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables, as shown below.
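For illustration, a shell configuration for these variables might look like the following (the proxy host and port here are placeholders):

export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,.example.internal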

I initially thought about either setting up a virtual machine in some isolated virtual network, or configuring an isolated network on my router with strict firewall rules, but I wanted a much easier way to achieve this (and to potentially revert it if needed). I ultimately figured that this could be achieved right in Kubernetes with the help of Network Policies.

After a very brief introduction to Network Policies, this blog post will walk you through a practical example of how to simulate an environment with a corporate proxy in Kubernetes.

Network Policies

Network Policies are a Kubernetes construct that allows you to specify rules applied to ingress or egress network traffic in a given namespace.

As you may know, one of the fundamental design decisions in the Kubernetes networking model is that all pods can communicate with all other pods in the cluster, and this needs to be guaranteed by the underlying network plugin implementing the Container Network Interface (CNI). Having every pod able to communicate with any other pod may not be ideal in certain environments, but with Network Policies, we can achieve better isolation when needed. Also bear in mind that a Network Policy is scoped to the namespace in which it is created.
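To give a first taste of the syntax, here is a minimal sketch of a "default deny all" policy, assuming a hypothetical namespace named example:

# The empty podSelector matches every pod in the namespace.
# Both policy types are declared but no ingress/egress rules
# are listed, so all traffic in and out of the pods is denied.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
  namespace: example
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress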

And this is what we are trying to mimic here: prevent our application pods from communicating with other pods and with the outside world. The only traffic allowed will be from our application pods to the proxy, plus all egress traffic from the proxy itself.

Prerequisites

To use Network Policies, the network plugin installed in the cluster needs to support them. A number of CNI network plugins do, like Calico, Cilium, or Kube-router, to name a few. So first, make sure your network plugin supports Network Policies.

In my local testing environment, I'm just using k3d, which comes with a controller that enforces Network Policies out of the box. So let's create a cluster using the following command:

k3d cluster create

Once the cluster is created, we can go ahead with creating a namespace for our application:

kubectl create namespace my-ns

Deploying a Squid Proxy

We'll now deploy a Squid-based proxy in its own separate namespace:

kubectl create namespace proxy

Now, let's create a Deployment and a Service for this proxy (to expose it within the cluster). The Deployment can be as simple as:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: squid-deployment
  namespace: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "squid"
  template:
    metadata:
      labels:
        app: squid
    spec:
      containers:
      - name: squid
        image: docker.io/ubuntu/squid:latest
        ports:
        - containerPort: 3128
          name: squid
          protocol: TCP

And the Service, so that pods from other namespaces can connect to the proxy using its DNS name, which will be squid-service.proxy.svc.cluster.local:

apiVersion: v1
kind: Service
metadata:
  name: squid-service
  namespace: proxy
  labels:
    app: squid
spec:
  ports:
  - port: 3128
  selector:
    app: squid

Deploying the Network Policies

As depicted in the previous sections, we want:

1. our application pods to be able to communicate with each other within the same namespace;

2. our application pods to be able to communicate outside their namespace only with the proxy pod running in the proxy namespace;

3. the proxy pod to have no restrictions whatsoever.

Achieving the last point can be done by just not creating any Network Policy in the proxy namespace, which means that this namespace is open for both ingress and egress traffic.
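We can double-check that the proxy namespace is indeed free of any Network Policy:

$ kubectl -n proxy get networkpolicies
No resources found in proxy namespace.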

NOTE: If users are accessing your application via an Ingress Controller, you may need an additional Network Policy allowing ingress traffic from the Ingress Controller pod(s) to your application pods. This Network Policy would need to be created inside the application namespace. For simplicity, we'll not be using an Ingress Controller here, but this is left as an exercise for the reader.

Network Policy to allow external communication to only the proxy pod

We can achieve this with the Network Policy below:

# Deny all egress traffic in this namespace, except DNS traffic
# and traffic to the proxy. Applications can still reach the
# public Internet by using the conventional proxy settings.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-egress-with-exceptions
  namespace: my-ns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution (we need this allowed,
  # otherwise we won't be able to resolve the
  # DNS name of the Squid proxy service)
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
      - port: 53
        protocol: UDP
      - port: 53
        protocol: TCP
  # allow traffic to Squid proxy
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: proxy
    ports:
    - port: 3128
      protocol: TCP

Let's break this down a little bit.

This Network Policy has a single Egress policyType, which means it restricts outgoing traffic from the pods in this namespace: only the destinations specified in the egress field are allowed.

The first rule allows DNS traffic from this namespace to the cluster DNS pod (running in the kube-system namespace and carrying the "k8s-app: kube-dns" label). This is required to resolve the DNS name of the Squid proxy Service (squid-service.proxy.svc.cluster.local), which our application will use to communicate with the proxy.

The second rule allows egress traffic to the proxy namespace (identified by its namespace selector label) on port 3128, which is the port exposed by the Squid proxy Service.
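Assuming the policy above is saved in a file (hypothetically named deny-egress.yaml), we can apply it and inspect the rules that were registered:

kubectl apply -f deny-egress.yaml
kubectl -n my-ns describe networkpolicy default-deny-egress-with-exceptions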

Network Policy to allow communication in the same namespace

In order for our application pods to be able to communicate with each other in the same namespace, let's create a Network Policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: my-ns
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}

Deploying the application pods

Now that we have our network policies in place, let's create our application pods and see how they behave with the restrictions above. For the purpose of this article, we will be creating a Pod directly, but in production, you will want to use other constructs like a Deployment or a StatefulSet.

In the example below, we will be creating a Pod that sends an HTTP request to httpbin.org. Because of the Network Policies we created earlier, this Pod should not be able to communicate with the public Internet; it should only be allowed to communicate with the Kubernetes DNS server and our proxy server.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-ns
spec:
  restartPolicy: Never
  initContainers:
    - name: sleep
      image: alpine:latest
      command: [ 'sleep' ]
      args: [ '5s' ]
  containers:
    - name: my-container
      image: curlimages/curl:latest
      args:
        - '--fail'
        - '-X'
        - 'GET'
        - 'https://httpbin.org/headers'
        - '-H'
        - 'Accept: application/json'

If you are wondering why we have an init container that sleeps for 5 seconds: the synchronization of the rules that are part of a Network Policy is not always immediate, and pod startup is not blocked even if the rules have not been synchronized yet. As a reminder, Kubernetes is overall built around the concept of eventual consistency, which means in our case that the Network Policy rules will eventually be applied. We are simply waiting a little bit for this to happen. You might need to increase the init container wait time.
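As a hypothetical alternative to a fixed sleep, the init container could actively poll the proxy until it becomes reachable, which is a good hint that the egress rules have been synchronized:

  initContainers:
    - name: wait-for-proxy
      image: curlimages/curl:latest
      command: [ 'sh', '-c' ]
      # Loop until the proxy port answers with an HTTP response
      args:
        - until curl -s -o /dev/null http://squid-service.proxy.svc.cluster.local:3128; do sleep 1; done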

After creating the Pod above, we should now see a Failed to connect to httpbin.org error message in the Pod logs, which confirms that it is being denied access to the public Internet.

$ kubectl -n my-ns logs my-pod 

Defaulted container "my-container" out of: my-container, sleep (init)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to httpbin.org port 443 after 14 ms: Could not connect to server

As depicted previously, pods in our namespace can only send egress traffic to our Squid proxy on its port (3128). To make use of it, let's set the conventional *_PROXY environment variables. Refer to this article from the folks at GitLab to understand the details and subtleties of these conventional environment variables.

Here is the diff with the previous manifest:

diff --git a/my-pod.yaml b/my-pod.yaml
index e7b96fc..93eb8a6 100644
--- a/my-pod.yaml
+++ b/my-pod.yaml
@@ -20,4 +20,11 @@ spec:
         - 'https://httpbin.org/headers'
         - '-H'
         - 'Accept: application/json'
+      env:
+        - name: HTTP_PROXY
+          value: 'http://squid-service.proxy.svc.cluster.local:3128'
+        - name: HTTPS_PROXY
+          value: 'http://squid-service.proxy.svc.cluster.local:3128'
+        - name: NO_PROXY
+          value: 'localhost'
 

Now if we create this Pod with the environment variables above, it should complete successfully, as confirmed in its logs:

$ kubectl -n my-ns logs my-pod

Defaulted container "my-container" out of: my-container, sleep (init)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
{
  "headers": {
    "Accept": "application/json", 
    "Host": "httpbin.org", 
    "User-Agent": "curl/8.10.1", 
    "X-Amzn-Trace-Id": "Root=1-66f084cb-3de3d3066257883b6418111e"
  }
100   186  100   186    0     0    473      0 --:--:-- --:--:-- --:--:--   474
}

This indicates that the Pod was successfully able to communicate with the public Internet, because it now sets the necessary proxy environment variables and sends its traffic via our proxy Service. We can confirm this by checking the proxy logs as well:

$ kubectl -n proxy logs deployments/squid-deployment

[...]
1727038667.251    396 10.42.0.14 TCP_TUNNEL/200 5993 CONNECT httpbin.org:443 - HIER_DIRECT/54.84.32.120 -

Wrapping Up

In this article, we saw how we can leverage Kubernetes Network Policies to simulate an environment with a corporate proxy. This allowed us to easily replicate such an environment in order to quickly test the behavior of an application. I hope you found this useful. As usual, feel free to share your thoughts in the comments.