---
name: network-policy
description: >-
  Manage Cilium network policies: profile selection, access labels, Hubble
  debugging, platform namespace CNPs, and emergency escape hatch procedures.
  Use when: (1) Deploying a new application and setting network profile,
  (2) Debugging blocked traffic with Hubble, (3) Adding shared resource
  access, (4) Creating platform namespace CNPs, (5) Using the escape hatch
  for emergencies, (6) Verifying network policy enforcement. Triggers:
  "network policy", "hubble", "dropped traffic", "cilium", "blocked traffic",
  "network profile", "access label", "escape hatch", "cnp", "ccnp"
---
# Network Policy Management
## Architecture Quick Reference
All cluster traffic is implicitly denied via Cilium baseline CCNPs. Two layers control access:

- Baselines (cluster-wide CCNPs): DNS egress, health probes, Prometheus scrape, opt-in kube-API
- Profiles (per-namespace via label): ingress/egress rules matched by `network-policy.homelab/profile=<value>`

Platform namespaces (`kube-system`, `monitoring`, `database`, etc.) use hand-crafted CNPs; never apply profiles to them.
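For intuition, a profile can be thought of as a cluster-wide policy keyed off the namespace label. The sketch below is illustrative only; the policy name and rule bodies are assumptions, not the repository's actual CCNPs:

```yaml
# Illustrative only: a CCNP that attaches gateway ingress + HTTPS egress to
# every namespace labeled network-policy.homelab/profile=standard.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: profile-standard
spec:
  endpointSelector:
    matchLabels:
      # Cilium exposes namespace labels to selectors under this prefix.
      io.cilium.k8s.namespace.labels.network-policy.homelab/profile: standard
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: istio-gateway
  egress:
    - toEntities: [world]
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```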
## Workflow: Deploy App with Network Policy
### Step 1: Choose a Profile
| Profile | Ingress | Egress | Use Case |
|---|---|---|---|
| `isolated` | None | DNS only | Batch jobs, workers |
| `internal` | Internal gateway | DNS only | Internal dashboards |
| `internal-egress` | Internal gateway | DNS + HTTPS | Internal apps calling external APIs |
| `standard` | Both gateways | DNS + HTTPS | Public-facing web apps |
Decision tree:

- Does the app need to be reached from the internet? -> `standard`
- Internal-only but needs to call external APIs? -> `internal-egress`
- Internal-only, no external calls? -> `internal`
- No ingress needed at all? -> `isolated`
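To see which profiles are already in use across the cluster, kubectl can print the label as a column (read-only; nothing here is assumed beyond the label key itself):

```bash
# List every namespace with its profile label shown as a column.
kubectl get namespaces -L network-policy.homelab/profile
```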
### Step 2: Apply Profile Label to Namespace
In the namespace YAML (committed to git, not `kubectl apply`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    network-policy.homelab/profile: standard
```
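Once the change has synced, it is worth reading the label back from the live cluster to confirm it landed (plain kubectl, read-only):

```bash
# Read back the profile label; dots in the label key are escaped for jsonpath.
kubectl get namespace my-app \
  -o jsonpath='{.metadata.labels.network-policy\.homelab/profile}{"\n"}'
```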
### Step 3: Add Shared Resource Access Labels

If the app needs database, cache, or S3 access, add access labels to the namespace:
```yaml
labels:
  network-policy.homelab/profile: standard
  access.network-policy.homelab/postgres: "true"   # PostgreSQL (port 5432)
  access.network-policy.homelab/dragonfly: "true"  # Dragonfly/Redis (port 6379)
  access.network-policy.homelab/garage-s3: "true"  # Garage S3 (port 3900)
  access.network-policy.homelab/kube-api: "true"   # Kubernetes API (port 6443)
```
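Beyond Hubble, a throwaway pod can smoke-test a shared resource directly. The image, service name, and `database` namespace below are assumptions; substitute the real endpoints:

```bash
# One-off pod that checks TCP reachability to PostgreSQL (assumed service name).
kubectl run netcheck --rm -it --restart=Never -n my-app \
  --image=busybox:1.36 -- \
  nc -zv -w 3 postgres.database.svc.cluster.local 5432
```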
### Step 4: Verify Connectivity

After deployment, check for dropped traffic:
```bash
hubble observe --verdict DROPPED --namespace my-app --since 5m
```

If drops appear, see the Debugging section below.
## Workflow: Debug Blocked Traffic

### Step 1: Identify Drops
```bash
# All drops in a namespace
hubble observe --verdict DROPPED --namespace my-app --since 5m

# With source/destination details
hubble observe --verdict DROPPED --namespace my-app --since 5m -o json | \
  jq '{src: .source.namespace + "/" + .source.pod_name, dst: .destination.namespace + "/" + .destination.pod_name, port: (.l4.TCP.destination_port // .l4.UDP.destination_port)}'
```
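When a namespace has many drops, aggregating the same JSON stream shows which destination dominates (a sketch using the field names from the command above):

```bash
# Count drops per destination namespace and port to spot the main offender.
hubble observe --verdict DROPPED --namespace my-app --since 5m -o json | \
  jq -r '"\(.destination.namespace // "world"):\(.l4.TCP.destination_port // .l4.UDP.destination_port // "?")"' | \
  sort | uniq -c | sort -rn
```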
### Step 2: Classify the Drop

| Drop Pattern | Likely Cause | Fix |
|---|---|---|
| Egress to `kube-dns` port 53 | Missing DNS baseline | Should not happen; check that the baseline CCNP exists |
| Egress to port 5432 | Missing postgres access label | Add `access.network-policy.homelab/postgres: "true"` |
| Egress to port 6379 | Missing dragonfly access label | Add `access.network-policy.homelab/dragonfly: "true"` |
| Egress to internet on port 443 | Profile doesn't allow HTTPS egress | Switch to `internal-egress` or `standard` |
| Ingress from `istio-gateway` | Profile doesn't allow gateway ingress | Switch to `internal` or `standard` |
| Ingress from `monitoring` | Missing Prometheus scrape baseline | Should not happen; check the baseline CCNP |
### Step 3: Verify Specific Flows
```bash
# DNS resolution
hubble observe --namespace my-app --protocol UDP --port 53 --since 5m

# Database connectivity
hubble observe --namespace my-app --to-namespace database --port 5432 --since 5m

# Internet egress
hubble observe --namespace my-app --to-identity world --port 443 --since 5m

# Gateway ingress
hubble observe --from-namespace istio-gateway --to-namespace my-app --since 5m

# Prometheus scraping
hubble observe --from-namespace monitoring --to-namespace my-app --since 5m
```

### Step 4: Check Policy Status
```bash
# List all policies affecting a namespace
kubectl get cnp -n my-app
kubectl get ccnp | grep -E 'baseline|profile'

# Check which profile is active
kubectl get namespace my-app --show-labels | grep network-policy
```
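If labels and policies both look correct but traffic still drops, the Cilium agent can report per-endpoint enforcement state directly. This sketch assumes a stock Cilium DaemonSet in `kube-system`; on recent Cilium releases the in-pod binary is named `cilium-dbg`:

```bash
# Per-endpoint policy enforcement status (ingress/egress) as Cilium sees it.
kubectl -n kube-system exec ds/cilium -- cilium endpoint list
```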
## Workflow: Emergency Escape Hatch

Use only when network policies block legitimate traffic and you need immediate relief.
### Step 1: Disable Enforcement
```bash
kubectl label namespace <ns> network-policy.homelab/enforcement=disabled
```

This triggers alerts:

- `NetworkPolicyEnforcementDisabled` (warning) after 5 minutes
- `NetworkPolicyEnforcementDisabledLong` (critical) after 24 hours
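For reference, the first alert could be expressed roughly as below. This assumes kube-state-metrics exports the namespace label through `kube_namespace_labels`, which requires an explicit allowlist; all names here are illustrative, not the repository's actual rule:

```yaml
# Illustrative PrometheusRule; assumes kube-state-metrics runs with
# --metric-labels-allowlist=namespaces=[network-policy.homelab/enforcement]
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: network-policy-escape-hatch
spec:
  groups:
    - name: network-policy
      rules:
        - alert: NetworkPolicyEnforcementDisabled
          # kube-state-metrics sanitizes the label key to this metric label.
          expr: kube_namespace_labels{label_network_policy_homelab_enforcement="disabled"} == 1
          for: 5m
          labels:
            severity: warning
```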
### Step 2: Verify Traffic Flows
```bash
hubble observe --namespace <ns> --since 1m
```

### Step 3: Investigate Root Cause
Use the debugging workflow above to identify what policy is missing or misconfigured.
### Step 4: Fix the Policy (via GitOps)
Apply the fix through a PR; never `kubectl apply` directly.

### Step 5: Re-enable Enforcement
```bash
kubectl label namespace <ns> network-policy.homelab/enforcement-
```
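Immediately afterwards, confirm the label is gone and that legitimate traffic is not being dropped again:

```bash
# The enforcement label should no longer appear on the namespace.
kubectl get namespace <ns> --show-labels
# And no fresh drops should show up for legitimate traffic.
hubble observe --verdict DROPPED --namespace <ns> --since 2m
```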
See `docs/runbooks/network-policy-escape-hatch.md` for the full procedure.

## Workflow: Add Platform Namespace CNP
Platform namespaces need hand-crafted CNPs (not profiles). Create them in `kubernetes/platform/config/network-policy/platform/`.

### Required Rules
Every platform CNP must include:

- DNS egress to `kube-system/kube-dns` (port 53 UDP/TCP)
- Prometheus scrape ingress from the `monitoring` namespace
- Health probe ingress from the `health` entity and `169.254.0.0/16`
- HBONE rules if the namespace participates in the Istio mesh (port 15008 to/from `istio-system/ztunnel`)
- Service-specific rules for the namespace's actual traffic patterns
### Template
```yaml
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: <namespace>-default
  namespace: <namespace>
spec:
  description: "<Namespace purpose>: describe allowed traffic"
  endpointSelector: {}
  ingress:
    # Health probes
    - fromEntities: [health]
    - fromCIDR: ["169.254.0.0/16"]
    # Prometheus scraping
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: monitoring
            app.kubernetes.io/name: prometheus
      toPorts:
        - ports:
            - port: "<metrics-port>"
              protocol: TCP
    # HBONE (if mesh participant)
    - fromEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: istio-system
            app: ztunnel
      toPorts:
        - ports:
            - port: "15008"
              protocol: TCP
  egress:
    # DNS
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
            - port: "53"
              protocol: TCP
    # HBONE (if mesh participant)
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: istio-system
            app: ztunnel
      toPorts:
        - ports:
            - port: "15008"
              protocol: TCP
```

After creating the policy, add the file to `kubernetes/platform/config/network-policy/platform/kustomization.yaml`.
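A minimal sketch of that kustomization (the sibling resource names are placeholders, not the repository's actual files):

```yaml
# kubernetes/platform/config/network-policy/platform/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kube-system.yaml   # existing platform CNPs (placeholder names)
  - monitoring.yaml
  - <namespace>.yaml   # the new CNP created above
```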
## Anti-Patterns

- NEVER create explicit `default-deny` policies; the baselines already provide implicit deny
- NEVER use profiles for platform namespaces; they need custom CNPs
- NEVER hardcode IP addresses; use endpoint selectors and entities
- NEVER allow port `any`; always specify explicit port lists
- NEVER disable enforcement without following the escape hatch runbook
- NEVER apply network policy changes via `kubectl` on integration/live; always go through GitOps
- Dev cluster exception: direct `kubectl apply` of network policies is permitted on dev for testing
## Cross-References
- network-policy/CLAUDE.md — Full architecture and directory structure
- docs/runbooks/network-policy-escape-hatch.md — Emergency bypass procedure
- docs/runbooks/network-policy-verification.md — Hubble verification commands