Honeypot and Deception Technology in Kubernetes: Canary Tokens, Fake Credentials, and Honeypod Pods
Problem
Detection-based security controls assume the attacker leaves a known signature: a known CVE, a known tool name, a known network pattern. Sophisticated adversaries operate below signature-based detection. They use living-off-the-land binaries, encrypted C2 channels, and slow enumeration that blends with legitimate traffic.
Deception technology flips the dynamic: instead of detecting bad behavior among good behavior, it creates artifacts that have only one legitimate state — untouched. Any access, read, or connection to a honeypot asset is anomalous by definition. The signal-to-noise ratio is near-perfect.
In Kubernetes environments, deception covers these attack surfaces most effectively:
- Credential enumeration. An attacker with pod execution reads all Secrets in a namespace. If one of those Secrets contains a canary token — a fake API key that alerts on use — the first attempt to use it fires a high-confidence alert.
- Service discovery and lateral movement. An attacker performs DNS enumeration or scans the cluster network. A honeypot Service that appears in DNS but serves no legitimate traffic detects the scan.
- Admin interface probing. The attacker looks for exposed management APIs (etcd, Kubernetes dashboard, Prometheus). A fake management interface that logs all connections catches the probe.
- Secrets Manager enumeration. An attacker with Vault access reads all paths. A canary path that alerts on read catches the enumeration.
The specific gaps in Kubernetes environments without deception:
- A namespace Secret dump yields only real credentials; nothing in the haul betrays the theft when the attacker later uses what they stole.
- Lateral movement via internal DNS leaves no logs if the destination service doesn’t exist.
- Pod enumeration via the API doesn’t trigger alerts in most default configurations.
- Vault audit logs record enumeration, but turning those records into alerts requires active monitoring and correlation.
Target systems: Kubernetes 1.29+, canarytokens.org or self-hosted canary infrastructure, Alertmanager 0.27+, Falco 0.38+ (for pod-level canary monitoring).
Threat Model
- Adversary 1 — Credential harvester: An attacker with namespace-level pod exec reads all Kubernetes Secrets, attempting to find API keys, DB passwords, or cloud credentials. They test the canary API key against the target API.
- Adversary 2 — Service mesh explorer: An attacker with pod execution performs DNS lookups and HTTP probes against all in-cluster DNS names, attempting to find unauthenticated internal services.
- Adversary 3 — Vault enumerator: An attacker with a stolen Vault token performs vault list and vault read across all accessible paths, attempting to find high-value secrets.
- Adversary 4 — etcd direct reader: An attacker with etcd access reads keys, looking for Kubernetes Secrets. A canary key in etcd fires an alert on read.
- Access level: Adversary 1 has namespace-level Kubernetes RBAC. Adversary 2 has pod network access. Adversary 3 has a Vault token with list permissions. Adversary 4 has etcd client credentials.
- Objective: Discover credentials, pivot to other systems, exfiltrate data.
- Blast radius: Deception does not prevent intrusion — it detects it. The value is in reducing dwell time: an attacker who touches a honeypot triggers an alert within seconds, while traditional SIEM correlation might take hours.
Configuration
Step 1: Canary API Keys in Kubernetes Secrets
Place fake credentials in Secrets that look valuable. Use canarytokens.org (or a self-hosted equivalent) to generate tokens that phone home when used.
# Generate a canary AWS key pair via canarytokens.org or your internal canary service.
# The request and response fields below are illustrative; check your canary service's API.
# Both halves of the pair must come from the canary service: pairing a canary key ID with
# a random secret fails AWS authentication and may never reach the alerting path.
CANARY_RESPONSE=$(curl -s -X POST https://canarytokens.org/generate \
  -d "type=aws_keys&memo=k8s-prod-honeypot")
CANARY_KEY_ID=$(echo "$CANARY_RESPONSE" | jq -r '.aws_access_key_id')
CANARY_SECRET=$(echo "$CANARY_RESPONSE" | jq -r '.aws_secret_access_key')
# Store the pair as a Kubernetes Secret that looks like a real AWS key.
kubectl create secret generic aws-prod-credentials \
  --namespace production \
  --from-literal=AWS_ACCESS_KEY_ID="$CANARY_KEY_ID" \
  --from-literal=AWS_SECRET_ACCESS_KEY="$CANARY_SECRET"
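Before relying on the canary, trigger it once deliberately and confirm the alert path works end to end. A minimal check, assuming the AWS CLI is available; which API calls fire the alert depends on the canary service, but a single STS call is usually enough:
# Use the canary credentials once on purpose; the attempt should produce an alert
# from the canary service within seconds. The command itself may fail or succeed
# depending on the service; either outcome is fine for the test.
AWS_ACCESS_KEY_ID="$CANARY_KEY_ID" \
AWS_SECRET_ACCESS_KEY="$CANARY_SECRET" \
aws sts get-caller-identity || true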
Label it to look real:
apiVersion: v1
kind: Secret
metadata:
  name: aws-prod-credentials
  namespace: production
  labels:
    app: payments-service
    environment: production
  annotations:
    # Tracking annotations for the security team's inventory. Note that anyone who can
    # read the Secret object can also read its annotations, so keep the canary inventory
    # out-of-band if attackers dumping full objects is in scope.
    security.internal/canary: "true"
    security.internal/token-id: "<canary-id>"
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <base64-encoded-fake-key>
  AWS_SECRET_ACCESS_KEY: <base64-encoded-fake-secret>
Place these canary Secrets alongside real Secrets. An attacker who performs a namespace Secret dump will harvest the canary alongside real credentials. Their first attempt to use the fake AWS key fires an alert — revealing the compromise.
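The security team needs a live inventory of which Secrets are canaries without asking developers. A small sketch that lists them by the tracking annotation shown above (requires cluster-wide read access to Secrets):
# List every canary Secret in the cluster by its tracking annotation.
kubectl get secrets --all-namespaces -o json \
  | jq -r '.items[]
      | select(.metadata.annotations["security.internal/canary"] == "true")
      | "\(.metadata.namespace)/\(.metadata.name) token-id=\(.metadata.annotations["security.internal/token-id"])"'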
Repeat for:
- Stripe API keys (canary key that alerts on API call)
- GitHub tokens (canary fine-grained PAT that alerts on use)
- Database passwords (canary credentials for a monitored honeypot DB that logs all connection attempts; see the sketch after this list)
- OpenAI API keys (canary key that alerts on API call)
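For the database case, the canary credentials should point at a honeypot database that accepts and logs every connection attempt. A sketch; the Secret name, the DB_HOST value, and the honeypot Postgres behind it are illustrative, and the hostname is deliberately neutral so it does not give the deception away:
# Canary database credentials. DB_HOST points at a honeypot Postgres instance that
# logs every connection attempt; the password is fake but plausible.
kubectl create secret generic payments-db-credentials \
  --namespace production \
  --from-literal=DB_HOST="postgres-master.data.svc.cluster.local" \
  --from-literal=DB_USER="payments_admin" \
  --from-literal=DB_PASSWORD="$(openssl rand -hex 16)"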
Step 2: Honeypot Services for Lateral Movement Detection
Deploy Kubernetes Services that appear legitimate but serve no real traffic. Any connection to them is anomalous.
# honeypot-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-admin-panel
  namespace: production
  labels:
    app: admin
spec:
  selector:
    app: honeypot-admin  # Only the dedicated honeypot pod carries this label; no legitimate workload does.
  ports:
    - port: 8080
      targetPort: 8080
# honeypot-pod.yaml: the pod behind the decoy Service; it accepts connections and logs them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: honeypot-admin
  # Same namespace as the decoy Service: a Service selector only matches pods in its own
  # namespace. To centralize honeypot pods in a dedicated namespace instead, use a
  # selectorless Service with a manually managed EndpointSlice.
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: honeypot-admin
  template:
    metadata:
      labels:
        app: honeypot-admin
    spec:
      containers:
        - name: honeypot
          image: nginx:alpine
          ports:
            - containerPort: 8080
          env:
            - name: HONEYPOT_NAME
              value: "internal-admin-panel"
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
      volumes:
        - name: nginx-config
          configMap:
            name: honeypot-nginx-config
# honeypot-nginx-config
log_format honeypot_json escape=json
  '{"time":"$time_iso8601","remote_addr":"$remote_addr",'
  '"request":"$request","status":"$status","user_agent":"$http_user_agent"}';
server {
    listen 8080;
    access_log /dev/stdout honeypot_json;
    location / {
        # Return a convincing but empty admin panel.
        add_header Content-Type application/json;
        return 200 '{"status":"ok","version":"3.2.1","admin":true}';
    }
}
Every HTTP request to internal-admin-panel.production.svc.cluster.local:8080 is logged. No legitimate application makes this request. All log lines are alerts.
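One way to turn those access-log lines into pages without waiting for a log pipeline is a small watcher that posts straight to the Alertmanager API. A minimal sketch; the address alertmanager.monitoring.svc:9093 is an assumption, adjust it for your cluster:
# Stream honeypot access logs and raise a HoneypotAccess alert for each request.
# The raw log line stays in kubectl logs / the cluster log pipeline.
kubectl logs -n production deploy/honeypot-admin -f | while read -r line; do
  curl -s -X POST http://alertmanager.monitoring.svc:9093/api/v2/alerts \
    -H 'Content-Type: application/json' \
    -d '[{"labels":{"alertname":"HoneypotAccess","honeypot_name":"internal-admin-panel","namespace":"production"},
          "annotations":{"description":"HTTP request to honeypot admin panel"}}]'
done
In practice this runs as a sidecar, or the honeypot exports a honeypot_access_total counter instead (Step 8); either way the alert shape matches the Alertmanager route in Step 6.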
Step 3: DNS Canary for Service Enumeration Detection
Create DNS entries that alert on resolution. Using ExternalDNS or a custom CoreDNS plugin:
# CoreDNS custom zone with canary entries.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        # Canary zone: any query for *.honeypot.internal resolves to the honeypot
        # pod IP. Query logging for the zone is configured below.
        file /etc/coredns/honeypot.db honeypot.internal {
            reload 30s
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  # The CoreDNS Deployment's config volume must also project this key; the default
  # volume projects only the Corefile item.
  honeypot.db: |
    $ORIGIN honeypot.internal.
    @    3600 IN SOA ns1.honeypot.internal. admin.honeypot.internal. 2026042901 7200 3600 86400 300
    @    3600 IN NS  ns1.honeypot.internal.
    ns1  3600 IN A   10.96.0.100
    ; Any resolution of *.honeypot.internal returns the honeypot pod IP.
    *    3600 IN A   10.96.50.100
To capture query logs for the canary zone specifically, give it its own server block with the log plugin (replacing the file stanza inside .:53 above):
honeypot.internal:53 {
    # Logs every query to this zone.
    log
    file /etc/coredns/honeypot.db
    errors
}
Name DNS canaries to look attractive: etcd.honeypot.internal, vault-admin.honeypot.internal, postgres-master.honeypot.internal. Note that the zone name itself is visible to whoever resolves it: honeypot.internal is used here for readability, but in production pick a neutral zone (for example, ops.internal) so the zone name does not give the deception away.
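To verify the canary zone end to end, resolve one of the names from a throwaway pod and confirm CoreDNS logged the query (the k8s-app=kube-dns label matches the default CoreDNS deployment):
# Resolve a canary name from inside the cluster...
kubectl run dns-canary-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup etcd.honeypot.internal
# ...then check that the query shows up in CoreDNS logs for the canary zone.
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=200 | grep honeypot.internal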
Step 4: Vault Canary Paths
In Vault, create canary paths that audit-log reads:
# Create a canary secret that looks valuable.
vault kv put secret/prod/database/master-password \
password="$(openssl rand -hex 16)" # Not the real password.
# Create an audit log filter that specifically alerts on reads of this path.
Vault audit log filter (in your SIEM):
event.type = "request" AND
event.auth.path = "auth/kubernetes/*" AND
event.request.path = "secret/data/prod/database/master-password"
→ ALERT: Canary secret read - possible credential enumeration
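If the audit log is not yet flowing into a SIEM, a file audit device plus a jq filter gives the same signal. A minimal sketch, assuming the Vault server writes its audit log to /vault/logs/audit.log:
# Enable a file audit device (skip if auditing is already enabled).
vault audit enable file file_path=/vault/logs/audit.log
# Surface any read of the canary path. Each audit line is a JSON object;
# request.path is the API path of the operation.
tail -F /vault/logs/audit.log \
  | jq -c 'select(.type == "request"
                  and .request.path == "secret/data/prod/database/master-password"
                  and .request.operation == "read")'
Matching lines can be forwarded to the same Alertmanager endpoint used for the honeypot Service in Step 2.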
Vault also supports response-wrapping canary tokens: a wrapped token that alerts when unwrapped.
# Create a response-wrapped canary token. The wrap TTL is the canary's lifetime:
# 1h is shown here, but a honeypot token needs a much longer TTL (or scheduled
# regeneration) so it does not go stale before an attacker finds it.
CANARY_TOKEN=$(vault token create \
  -wrap-ttl=1h \
  -policy=read-only \
  -format=json | jq -r .wrap_info.token)
# Store CANARY_TOKEN in a Secret. Any unwrap attempt produces a Vault audit event.
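Unwrap detection works the same way: the attempt appears in the audit log as a request to sys/wrapping/unwrap. To attribute it to this specific canary, record wrap_info.accessor at creation time and compare it against the audit entry; the simpler filter below pages on any unwrap, which is rare enough to act on if your workloads do not otherwise use response wrapping:
# Any unwrap request is suspicious when response wrapping is not otherwise used.
tail -F /vault/logs/audit.log \
  | jq -c 'select(.type == "request" and .request.path == "sys/wrapping/unwrap")'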
Step 5: Falco Rules for Pod-Level Honeypot Monitoring
Use Falco to detect access to canary files, ports, or processes inside pods:
# falco-honeypot-rules.yaml
- rule: Honeypot File Accessed
  desc: A file designated as a canary was read or written.
  condition: >
    (open_read or open_write) and
    fd.name in (/etc/honeypot-credentials.json, /root/.aws/credentials_canary)
  output: >
    Honeypot file accessed (file=%fd.name user=%user.name container=%container.name
    image=%container.image.repository pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: CRITICAL
  tags: [honeypot, deception, T1083]
- rule: Connection to Honeypot Service
  desc: A pod made a network connection to a known honeypot service.
  # 10.96.50.100 is the honeypot pod IP; keep it in sync with the deployment, or
  # match on a DNS name or label instead (see Failure Modes).
  condition: >
    outbound and
    fd.sip in ("10.96.50.100") and
    container.id != host
  output: >
    Connection to honeypot service (src_pod=%k8s.pod.name src_ns=%k8s.ns.name
    dst_ip=%fd.sip dst_port=%fd.sport)
  priority: CRITICAL
  tags: [honeypot, lateral-movement, T1046]
- rule: Canary Secret Mounted and Read
  desc: A pod read a file from a volume mounted from a canary Secret.
  condition: >
    open_read and
    fd.name startswith "/var/run/secrets/canary/" and
    container.id != host
  output: >
    Canary secret read (file=%fd.name pod=%k8s.pod.name ns=%k8s.ns.name
    user=%user.name image=%container.image.repository)
  priority: CRITICAL
  tags: [honeypot, credential-access, T1552]
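Exercise the rules before trusting them. A quick end-to-end check, assuming Falco runs as a DaemonSet in the falco namespace, the canary file has already been planted in (or mounted into) the target pod, and payments-service is just an example workload:
# Touch the canary file from a pod...
kubectl exec -n production deploy/payments-service -- \
  cat /etc/honeypot-credentials.json > /dev/null || true
# ...then confirm Falco emitted the CRITICAL event.
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=200 \
  | grep "Honeypot file accessed"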
Step 6: Alertmanager Routing for Honeypot Alerts
Route honeypot alerts to immediate-response channels, bypassing normal alert fatigue suppression:
# alertmanager.yml
route:
  routes:
    - match:
        alertname: HoneypotAccess
      receiver: security-pagerduty
      continue: false      # Stop here; do not fall through to the default route.
      group_by: ["..."]    # Special value: every distinct label set is its own group, so nothing is batched.
      group_wait: 0s       # Page immediately.
      repeat_interval: 5m
receivers:
  - name: security-pagerduty
    pagerduty_configs:
      - service_key: <security-oncall-key>
        severity: critical
        description: 'HONEYPOT TRIGGERED: {{ .CommonAnnotations.description }}'
        details:
          source: '{{ .CommonLabels.source_pod }}'
          namespace: '{{ .CommonLabels.namespace }}'
          honeypot: '{{ .CommonLabels.honeypot_name }}'
          timestamp: '{{ (index .Alerts 0).StartsAt }}'
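Routing should be tested independently of the honeypots themselves. A synthetic HoneypotAccess alert pushed with amtool (the Alertmanager URL below is an assumption) confirms the page actually reaches the security on-call:
# Fire a synthetic honeypot alert and confirm PagerDuty pages the security on-call.
amtool alert add \
  --alertmanager.url=http://alertmanager.monitoring.svc:9093 \
  --annotation=description="synthetic honeypot routing test" \
  alertname=HoneypotAccess \
  namespace=production \
  source_pod=routing-test \
  honeypot_name=synthetic-test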
Step 7: Deception Placement Strategy
Placement is the key variable. Honeypots that look too obvious are ignored; honeypots mixed in with real infrastructure detect real attackers.
Rules for placement:
| Placement principle | Implementation |
|---|---|
| Mix canary Secrets with real Secrets in every namespace | 2-3 canary Secrets per namespace alongside real application Secrets (see the sketch after this table) |
| Place honeypot services in every production namespace | One honeypot Service per namespace; different plausible names per namespace |
| Name canaries attractively | “admin”, “master”, “root”, “prod”, “backup”, “legacy” in names |
| Make canary Secrets look real | Same label structure as real Secrets; plausible data format |
| Don’t advertise honeypots internally | Security team only; if developers know which Secrets are canaries, they might accidentally trigger them |
| Rotate canary tokens regularly | Monthly; regenerate canary API keys so old captures don’t produce alerts indefinitely |
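The first two rows lend themselves to automation. A sketch that seeds one canary Secret into every namespace labelled environment=production; the namespace label and Secret name are illustrative, and the key pair from Step 1 is reused here for brevity, although in practice each namespace should get its own canary token so an alert identifies where it was stolen:
# Seed a canary Secret into every production namespace.
for ns in $(kubectl get namespaces -l environment=production \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl create secret generic backup-service-credentials \
    --namespace "$ns" \
    --from-literal=AWS_ACCESS_KEY_ID="$CANARY_KEY_ID" \
    --from-literal=AWS_SECRET_ACCESS_KEY="$CANARY_SECRET"
  kubectl annotate secret backup-service-credentials \
    --namespace "$ns" security.internal/canary=true
done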
Step 8: Telemetry
honeypot_access_total{type, namespace, pod} counter
honeypot_canary_token_triggered_total{token_id} counter
honeypot_dns_query_total{query_name, source_pod} counter
honeypot_vault_path_read_total{path, token_entity} counter
honeypot_ssh_connection_total{source_ip} counter
Every metric is alert-worthy. A counter incrementing by 1 is a P1 incident.
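These counters only page if something turns an increment into the HoneypotAccess alert that Step 6 routes. A sketch using a PrometheusRule; it assumes prometheus-operator is installed, rules are picked up from the monitoring namespace, and Prometheus scrapes whatever components expose these metrics:
# Any increase in the honeypot access counter becomes a HoneypotAccess alert.
cat <<'EOF' | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: honeypot-alerts
  namespace: monitoring
spec:
  groups:
    - name: honeypot
      rules:
        - alert: HoneypotAccess
          expr: sum by (namespace, pod, type) (increase(honeypot_access_total[5m])) > 0
          labels:
            severity: critical
            honeypot_name: '{{ $labels.type }}'
            source_pod: '{{ $labels.pod }}'
          annotations:
            description: 'Honeypot {{ $labels.type }} accessed from pod {{ $labels.pod }} in namespace {{ $labels.namespace }}'
EOF
The remaining honeypot_* counters get one rule each in the same group.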
Expected Behaviour
| Signal | Without deception | With deception |
|---|---|---|
| Credential enumeration detected | Only if attacker uses known tool | Immediately on first use of canary credential |
| Lateral movement to non-existent service | No alert (DNS NXDOMAIN is silent) | DNS query logged; connection to honeypot Service logged |
| Vault enumeration | Audit log requires active monitoring | Canary path read fires immediate alert |
| File-based canary access in pod | Not detected | Falco rule triggers CRITICAL alert in seconds |
| Alert confidence | Variable; many false positives | Near-certain; any honeypot access is anomalous |
Trade-offs
| Aspect | Benefit | Cost | Mitigation |
|---|---|---|---|
| Canary credentials among real credentials | High-confidence alerts on credential theft | Developer accidentally reads canary Secret | Automate canary Secret labeling; never mount canary Secrets in legitimate application Pods. |
| Honeypot Services in namespace | Lateral movement detection | Adds noise to Service listings (minor) | Use descriptive-but-not-too-obvious names; document which are canaries in security runbooks. |
| Falco honeypot rules | Sub-second pod-level detection | CPU overhead of Falco syscall monitoring | Falco overhead is ~2-4% CPU; acceptable for production. |
| Canary tokens phoning home | Immediate external notification | Requires outbound HTTPS from cluster | Use canarytokens.org with explicit egress allowlisting; or self-hosted canary service inside cluster. |
| Alert immediacy (no grouping) | Fastest possible response | Single alert per trigger (not batched) | Honeypot alerts should be batched only if same-attacker same-honeypot within 60s; otherwise immediate. |
Failure Modes
| Failure | Symptom | Detection | Recovery |
|---|---|---|---|
| Legitimate pod reads canary Secret by accident | False positive alert | Alert fires; investigation reveals legitimate pod | Identify why the pod had access; remove the RBAC binding; update Secret placement. |
| Canary token expires before use | Honeypot fires no alert when triggered | Silent failure; attacker uses credential undetected | Rotate canary tokens monthly; test them after rotation. |
| Honeypot Service IP changes after pod restart | Falco IP-match rule doesn’t fire | Network rule silently misses connections | Use DNS-based matching in Falco or a Service label match instead of IP. |
| Attacker aware of deception | Skips anything labelled “canary”, “honeypot”, “test” | No alert fires | Use neutral names; never label canaries as canaries to external observers. |
| Falco not running on a node | Node-level honeypot events missed | Falco DaemonSet health check | Ensure Falco DaemonSet has PodDisruptionBudget and health monitoring. |
| Alert routing failure | Honeypot fires but on-call not paged | Monitor alertmanager_notifications_failed_total | Test alert routing monthly; have a secondary channel (Slack + PagerDuty). |