The Sidecar Pattern


Introduction

If you’ve ever found yourself performing the same common tasks for every service you deploy, such as capturing metrics, warming up a cache, or adding middleware functionality, you could likely have benefited from the Sidecar pattern.

What is a Sidecar Pattern?

A Sidecar container is a helper process that runs alongside your main service or application and performs a supporting, common function, such as:

  • TLS cert rotation
  • Log forwarding
  • Proxying requests
  • Collecting metrics or analytics

These functionalities aren’t specific to your application, and will likely be identical whichever service they are paired with. The Sidecar pattern is a great way to reuse this functionality without cluttering your domain-specific logic with boilerplate code.

Common Sidecar Use Cases

Example: Log forwarding agent

What it does: Collects logs from the main container and forwards them to a collector, such as Fluent Bit.

Why Sidecar: Keeps logging logic separate and easily replaceable; avoids bloating your application code.


Example: Secrets Fetcher

What it does: Fetches and refreshes secrets or API tokens, writing them to a shared volume or exposing them via a local endpoint.

Why Sidecar: Avoids embedding cloud SDKs and secrets fetching logic into your main application code.


Example: TLS Certificate Renewer

What it does: Handles automated TLS certificate requests and renewals, updating files or notifying the app.

Why Sidecar: Centralises cert handling, especially in clusters where TLS termination isn’t offloaded to a proxy.


Example: Rate Limiter

What it does: Enforces API rate limits or quotas by intercepting requests before they reach the application.

Why Sidecar: Keeps your application stateless, and avoids duplicating rate limit logic across services.
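
To give a flavour of how small such a sidecar can be, here’s a minimal rate-limiting reverse proxy sketch in Go. The upstream address, listen port, and limit values are illustrative assumptions; a production limiter would want configurable limits and per-client buckets:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Assumption: the main app listens on localhost:8080 in the same Pod.
	upstream, err := url.Parse("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// Crude token bucket: refill one token every 100ms, burst of 10.
	tokens := make(chan struct{}, 10)
	go func() {
		for range time.Tick(100 * time.Millisecond) {
			select {
			case tokens <- struct{}{}:
			default: // bucket already full
			}
		}
	}()

	log.Fatal(http.ListenAndServe(":8081", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case <-tokens: // token available: forward to the app
			proxy.ServeHTTP(w, r)
		default: // bucket empty: reject
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
		}
	})))
}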


Example: Prometheus sidecar or custom metrics translator

What it does: Scrapes, transforms, or forwards metrics to Prometheus or another backend.

Why Sidecar: Keeps metric formatting and export logic outside of the core app.


Real-World Example

Fetching Secrets

A trivial example is the one mentioned above: fetching secrets or tokens from a known location, and writing them to a shared volume that the application has access to.

Obviously, this code is simplified, but it should be enough to convey the idea.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

const (
	secretsURL  = "https://my-secrets.platform:8080/secrets"
	outputPath  = "/shared/secrets.json"
	refreshRate = 10 * time.Minute
)

type Secret struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

func fetchSecrets(token string) ([]Secret, error) {
	req, err := http.NewRequest("GET", secretsURL, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, fmt.Errorf("request error: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %d", resp.StatusCode)
	}

	var secrets []Secret
	err = json.NewDecoder(resp.Body).Decode(&secrets)
	return secrets, err
}

// writeSecretsToFile persists the fetched secrets as indented JSON on the
// shared volume, where the main container can read them.
func writeSecretsToFile(secrets []Secret, path string) error {
	data, err := json.MarshalIndent(secrets, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	token := os.Getenv("AUTH_TOKEN")
	if token == "" {
		fmt.Println("AUTH_TOKEN not set")
		os.Exit(1)
	}

	for {
		fmt.Println("Fetching secrets...")
		secrets, err := fetchSecrets(token)
		if err != nil {
			fmt.Printf("Error fetching secrets: %v\n", err)
		} else {
			err = writeSecretsToFile(secrets, outputPath)
			if err != nil {
				fmt.Printf("Error writing secrets to file: %v\n", err)
			} else {
				fmt.Println("Secrets written successfully.")
			}
		}

		time.Sleep(refreshRate)
	}
}

This example code periodically calls another service within the cluster, fetches some information, and writes the results to a shared volume. This part is pretty straightforward, so now let’s look at how you would wire this together in your Kubernetes environment.

We have an auth token, set as an environment variable, which authenticates us with the endpoint. We then decode the secrets and write them to a JSON file with a predictable structure, which we can then reference in our application.
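
On the application side, consuming those secrets is just a matter of reading and decoding the file. A minimal sketch, assuming the app sees the shared volume at /secrets as in the Pod spec below:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Secret struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

// loadSecrets reads the file the sidecar maintains and indexes it by name.
func loadSecrets(path string) (map[string]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var secrets []Secret
	if err := json.Unmarshal(data, &secrets); err != nil {
		return nil, err
	}
	byName := make(map[string]string, len(secrets))
	for _, s := range secrets {
		byName[s.Name] = s.Value
	}
	return byName, nil
}

func main() {
	secrets, err := loadSecrets("/secrets/secrets.json")
	if err != nil {
		fmt.Println("could not load secrets:", err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d secrets\n", len(secrets))
}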

As you probably know, you can define as many containers as you like within a Pod. A Pod is meant to be a logical group of related containers, so that you can scale and manage them as a unit.

Here’s an example spec:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: main-app
      image: your-main-app
      volumeMounts:
        - mountPath: /secrets
          name: shared-secrets
    - name: secret-sidecar
      image: your-sidecar-image
      env:
        - name: AUTH_TOKEN
          valueFrom:
            secretKeyRef:
              name: secret-token
              key: token
      volumeMounts:
        - mountPath: /shared
          name: shared-secrets
  volumes:
    - name: shared-secrets
      emptyDir: {}

As you can see, we create a Pod with our main application as one container and our sidecar as another. We also define a volume mounted at the /shared path, which both containers (or any container defined in this spec) have access to. The volume is defined as an empty directory.

The sidecar writes secrets.json to /shared, which maps to /secrets in your main app. The main container can then read /secrets/secrets.json at runtime.

Under volumes, our volume has the option emptyDir: {}. This is a temporary, shared volume created when a Pod is assigned to a node. It’s empty at the start and shared among all containers within that Pod.

  • Lives as long as the pod does.
  • Deleted when the pod is removed.
  • Backed by the node’s disk or memory (depending on config).

sizeLimit

Purpose: Set a max size for the volume.

Default: Unlimited (constrained only by the node’s resources).

Example:

  emptyDir:
    sizeLimit: 100Mi

medium: "Memory"

Purpose: Use RAM instead of disk (like a tmpfs).

Why: Faster access; useful for sensitive data you don’t want persisted to disk.

Trade-off: Limited by the node’s available RAM.

Example:

  emptyDir:
    medium: "Memory"

When to Use emptyDir

✅ Temporary data exchange between containers in a Pod
✅ Storing secrets/configs fetched at runtime
✅ Shared scratch space (e.g. temp build files, caches)
✅ Faster-than-disk communication using medium: "Memory"

When Not to Use It

❌ You need data to persist after Pod restart
❌ You need to share data across multiple Pods
❌ You’re working with large files and have memory pressure (if using RAM)


Deployment Considerations

Startup Order and Readiness

Problem: Your main app might start before the sidecar has fetched secrets.

Solution:

  • Use a readiness probe on the main app that checks for the existence and validity of the secrets file.
  • Or, add a small init container that waits for the file to appear (see the sketch after the probe example below).

readinessProbe:
  exec:
    command: ["cat", "/secrets/secrets.json"]
  initialDelaySeconds: 5
  periodSeconds: 10
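
Note that a plain init container can’t wait on a classic sidecar: init containers run to completion before any regular containers start, so the file would never appear. On Kubernetes 1.28+ you can instead declare the sidecar itself as a native sidecar (an init container with restartPolicy: Always) and place an ordinary init container after it to block the main app. A sketch, where the busybox image and the poll interval are assumptions:

spec:
  initContainers:
    # Native sidecar: restartPolicy: Always keeps it running alongside
    # the app rather than blocking until it exits.
    - name: secret-sidecar
      image: your-sidecar-image
      restartPolicy: Always
      volumeMounts:
        - mountPath: /shared
          name: shared-secrets
    # Ordinary init container: blocks the main app until the file exists.
    - name: wait-for-secrets
      image: busybox:1.36
      command: ["sh", "-c", "until [ -f /shared/secrets.json ]; do sleep 2; done"]
      volumeMounts:
        - mountPath: /shared
          name: shared-secrets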

Crash and Lifecycle Independence

Problem: If the sidecar crashes, the Pod might restart, taking the main app down with it, even though the main app is fine.

Solution:

  • Add a failure-handling mechanism in the sidecar (e.g. retries, backoff, never panicking), as sketched below.
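
For example, the fetch loop from the sidecar above could retry with exponential backoff before giving up. A minimal sketch to drop into that program (the attempt count and base delay are arbitrary choices):

// fetchSecretsWithRetry wraps fetchSecrets with simple exponential backoff.
func fetchSecretsWithRetry(token string, attempts int) ([]Secret, error) {
	var lastErr error
	delay := time.Second
	for i := 0; i < attempts; i++ {
		secrets, err := fetchSecrets(token)
		if err == nil {
			return secrets, nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2 // back off: 1s, 2s, 4s, ...
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}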

Conclusion

So, we’ve covered the Sidecar pattern and its various use cases in some detail, and walked through a trivial example. Sidecar containers don’t need to be big or elaborate: you can break them down into small utility containers with specific purposes, which you can then mix and match to suit your services’ needs.

For example:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: main-app
      image: your-main-app
      volumeMounts:
        - mountPath: /secrets
          name: shared-secrets
        - mountPath: /metrics        # app writes metrics files here
          name: shared-metrics
        - mountPath: /var/log/app    # app writes logs here for the shipper
          name: shared-logs

    # Metrics proxy
    - name: metrics-proxy
      image: your-metrics-sidecar:latest
      ports:
        - containerPort: 9100  # for Prometheus scraping
      volumeMounts:
        - name: shared-metrics
          mountPath: /metrics
      env:
        - name: TARGET_METRICS_PATH
          value: "/metrics/app"

    # Log shipper sidecar (e.g., Fluent Bit, custom Go logger)
    - name: log-shipper
      image: your-log-shipper:latest
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app

    # Secrets refresher
    - name: secret-sidecar
      image: your-sidecar-image
      env:
        - name: AUTH_TOKEN
          valueFrom:
            secretKeyRef:
              name: secret-token
              key: token
      volumeMounts:
        - mountPath: /shared
          name: shared-secrets
  volumes:
    - name: shared-secrets
      emptyDir: {}
    - name: shared-metrics
      emptyDir: {}
    - name: shared-logs
      emptyDir: {}

A note on Go: Go is widely used in cloud engineering; in fact, virtually all of Kubernetes itself is written in Go. It’s also a very common choice for sidecars, proxies, and similar components, because Go compiles down to a static binary. A proxy or sidecar can be condensed into a Docker image of roughly 10 MB, which makes it practical to deploy many of them across your cluster. Imagine if your sidecar were written in Node and you had to run a 200 MB image alongside every single one of your services; adding an extra 200 MB to each deployment would be highly impractical.
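
For illustration, a typical multi-stage build for the secrets sidecar above might look something like this (the image tags are assumptions; gcr.io/distroless/static ships the CA certificates the HTTPS call needs):

# Build stage: compile a fully static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /secret-sidecar .

# Final stage: just the binary on a minimal base image.
FROM gcr.io/distroless/static
COPY --from=build /secret-sidecar /secret-sidecar
ENTRYPOINT ["/secret-sidecar"]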

The joy of Go!

Hopefully you can see how this is a useful approach to have in your arsenal, and you can start building your own suite of utility Sidecars!