5. Runtime Security with Falco
Time to Complete
Planned time: ~25 minutes
Falco is a cloud-native runtime security tool that detects abnormal behavior and potential security threats in real time. Unlike admission controllers that enforce policies at deployment time, Falco monitors system calls and Kubernetes audit logs to detect suspicious activity in running workloads. In this lab, you’ll install Falco, trigger security alerts, and learn how to create custom detection rules.
What You’ll Learn
- How Falco integrates with Kubernetes for runtime security monitoring
- How to install and configure Falco using Helm
- How to detect interactive shells spawned in containers
- How to detect network reconnaissance and suspicious tooling
- How to interpret Falco alerts and understand their security implications
- How to create custom Falco rules for your environment
- The difference between detection (Falco) and prevention (Kyverno)
Trainer Instructions
Tested versions:
- Falco Helm chart: 7.2.1
- Falco: 0.42.1
- Kubernetes: 1.32.x
- alpine image: 3.21
- netshoot image: v0.13
- nginx image: 1.27.3
Ensure participants have:
- kubectl access to a Kubernetes cluster with Linux nodes
- helm CLI installed
- Cluster-admin permissions
Note: Falco requires access to the host kernel. In kind clusters, use the modern_ebpf driver. Some alerts (like “Terminal shell in container”) require truly interactive sessions with PTY allocation.
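A quick host-side sanity check for the modern_ebpf driver — a minimal sketch, no cluster required. The driver relies on kernel BTF (CO-RE) support, which is exposed at the standard path checked below on kernels that ship with it (roughly 5.8 and newer):

```shell
# modern_ebpf needs a kernel built with BTF (CO-RE) support.
# The kernel exposes its BTF blob at this path when support is compiled in:
if [ -e /sys/kernel/btf/vmlinux ]; then
  btf_status="present"
else
  btf_status="missing"
fi
echo "Kernel BTF: $btf_status (modern_ebpf requires it)"
```

If BTF is missing, fall back to the kernel module or legacy eBPF driver via the same driver.kind Helm value.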
No external integrations (Slack, Teams, Webhooks) are used in this lab.
Info
We are in the AKS cluster: kx c<x>-s1
1. Install Falco
Before we can detect runtime threats, we need to install Falco into our cluster. Falco runs as a DaemonSet to ensure it monitors all nodes.
Info
Falco uses eBPF (extended Berkeley Packet Filter) or a kernel module to observe system calls at the kernel level. This gives it visibility into all process execution, file access, and network activity - regardless of how containers are configured.
Task
- Create a namespace for Falco
- Install Falco 7.2.1 using Helm with the modern eBPF driver
- Verify that the Falco pods are running
Hint
For the eBPF driver, look for the Helm value driver.kind
Solution
kubectl create namespace falco
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco -n falco \
--version 7.2.1 \
--set driver.kind=modern_ebpf \
--set tty=true \
--wait
kubectl get pods -n falco
Questions
- Which Kubernetes resource type does Falco mainly use to run on all nodes?
- Why does Falco need access to the host system?
Answers
- Falco runs primarily as a DaemonSet so it can observe activity on every node in the cluster.
- Falco needs host access to observe system calls (syscalls) and runtime behavior at the kernel level. This is what enables it to detect activity inside containers, regardless of container configuration.
2. Detect an Interactive Shell in a Container
Interactive shells inside containers are often associated with debugging but can also indicate an intrusion. In production environments, containers should be immutable and non-interactive.
Info
The rule “Terminal shell in container” detects when a shell process is spawned with an attached terminal (PTY). This commonly occurs when using kubectl exec -it and is a strong indicator of human interaction with a container.
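The PTY distinction the rule keys on can be seen locally — a minimal sketch, no cluster needed. The rule fires when the shell has a controlling terminal (proc.tty != 0), which kubectl exec -it allocates and a plain kubectl exec does not. The tty utility shows the same distinction: it names a device only when stdin is a terminal.

```shell
# When stdin is a pipe rather than a terminal, `tty` reports "not a tty".
# This mirrors why a non-interactive `kubectl exec` (no -t) typically
# does not trigger the "Terminal shell in container" rule.
piped=$(echo | tty || true)
echo "$piped"
```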
Task
- Create a namespace for testing
- Deploy a simple container that stays running
- Open an interactive shell inside the container
- Observe Falco logs for the alert
Test pod (~/exercise/kubernetes/falco/pod-alpine.yaml):
# Simple pod for testing shell detection
apiVersion: v1
kind: Pod
metadata:
name: alpine
labels:
app: alpine
spec:
containers:
- name: alpine
image: alpine:3.21
command: ["sleep", "3600"]
Solution
Create the test namespace and deploy the pod:
kubectl create namespace falco-lab
kubectl apply -n falco-lab -f ~/exercise/kubernetes/falco/pod-alpine.yaml
kubectl get pods -n falco-lab -w
Open an interactive shell (this triggers the alert):
kubectl exec -it -n falco-lab alpine -- sh
In a second terminal, follow the Falco logs:
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
You should see an alert like:
Notice A shell was spawned in a container with an attached terminal
(container_id=... container_name=alpine image=alpine:3.21 ...)
Exit the shell when done:
exit
Questions
- What kind of alert do you see?
- Why is spawning a shell considered suspicious in production environments?
Answers
- You should see a Notice level alert: “A shell was spawned in a container with an attached terminal” or similar, depending on the Falco version.
- In production:
- Containers should be immutable - no manual changes
- Interactive access may indicate unauthorized access or debugging that should go through proper channels
- Legitimate debugging should use centralized logging/tracing, not shell access
- An attacker who gains shell access can explore the container, access secrets, and attempt lateral movement
3. Detect Suspicious Network Tools
Attackers often use networking tools to perform reconnaissance after gaining access to a container. Running tools like nmap, or using curl or wget against unusual destinations, can indicate malicious activity.
Task
- Deploy a container with networking tools
- Execute reconnaissance-style commands, such as a port scan:
  nmap -sT -p 80,443 <ip of kube-apiserver>
- Observe Falco alerts in a separate terminal window, as before
Test pod with network tools (~/exercise/kubernetes/falco/pod-netshoot.yaml):
# Network debugging pod with tools that trigger Falco alerts
apiVersion: v1
kind: Pod
metadata:
name: netshoot
labels:
app: netshoot
spec:
containers:
- name: netshoot
image: nicolaka/netshoot:v0.13
command: ["sleep", "3600"]
Solution
Deploy the netshoot pod:
kubectl apply -n falco-lab -f ~/exercise/kubernetes/falco/pod-netshoot.yaml
Wait for the pod to be running:
kubectl get pods -n falco-lab -w
Exec into the pod and run network reconnaissance:
kubectl exec -it -n falco-lab netshoot -- sh
Inside the container, run:
# Resolve the Kubernetes API server hostname (see the server URL in ~/.kube/config)
nslookup <candidate>.hcp.westeurope.azmk8s.io
# Scan the Kubernetes API server
nmap -sT -p 80,443 <ip of kube-apiserver>
# Try to access the metadata service (cloud environments)
curl -s --connect-timeout 2 http://169.254.169.254/latest/meta-data/ || echo "Not in cloud"
In another terminal, observe Falco logs:
kubectl logs -n falco -l app.kubernetes.io/name=falco -f | grep -i "netshoot\|nmap\|network\|socket\|connect"
Expected alerts:
- “Packet socket was created in a container” (nmap creating raw sockets)
- “Unexpected connection to K8s API Server from container” (connection attempts)
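When triaging, it helps to grep out the key=value fields Falco appends to each alert. A quick sketch against a sample line — the line itself is illustrative, not captured from your cluster, and the exact fields vary by rule and Falco version:

```shell
# Illustrative alert line in Falco's default text output format
alert='12:34:56.789: Notice Packet socket was created in a container (container_name=netshoot proc_name=nmap)'

# Pull out the fields you typically triage on first
echo "$alert" | grep -o 'container_name=[^ )]*'
echo "$alert" | grep -o 'proc_name=[^ )]*'
```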
Questions
- Which processes triggered alerts?
- In which scenarios could this activity be legitimate?
- In which scenarios would it be suspicious?
Answers
- nmap triggers alerts for creating packet sockets and unexpected connections
- Legitimate scenarios:
- Network debugging by authorized personnel during an incident
- Network policy testing in a test environment
- Service mesh troubleshooting
- Suspicious scenarios:
- Production containers shouldn’t need network scanning tools
- Scanning for other services could indicate lateral movement attempts
- Connections to cloud metadata services could be credential theft attempts
4. Detect Sensitive File Access
Attackers often try to read sensitive files like /etc/passwd, /etc/shadow, or credential files to gather information or escalate privileges.
Task
- Use a running container (or deploy one with kubectl apply -n falco-lab -f ~/exercise/kubernetes/falco/pod-nginx.yaml) to access sensitive files - a simple cat does the job
- Observe Falco alerts for the file access
Solution
Exec into the nginx container:
kubectl exec -n falco-lab nginx -- cat /etc/passwd
kubectl exec -n falco-lab nginx -- cat /etc/shadow 2>/dev/null || echo "Permission denied (expected)"
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=20 | grep -i "read\|sensitive\|passwd\|shadow"
Tip
Consider what files are sensitive in your environment: database credentials, API keys, TLS certificates, cloud credentials. Custom rules can monitor access to these specific paths.
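Building on that tip, a custom rule scoped to an application-specific credential path might look like the following sketch. The rule name and the path are hypothetical examples, not part of the lab files — adjust them to whatever is sensitive in your environment:

```yaml
# Hypothetical rule: alert when anything reads our app's credential files
- rule: Read App Credentials in Container
  desc: Detects reads of application credential files inside containers
  condition: >
    open_read and container and
    fd.name startswith /var/run/secrets/myapp/    # hypothetical path
  output: >
    App credential file read in container
    (file=%fd.name container_name=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, filesystem, credential_access]
```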
5. Understand Custom Falco Rules
Built-in rules are generic. In real environments, teams define custom rules based on their specific risk profile and applications.
Task
Review the custom rule structure and understand how to detect package installation inside containers:
Custom rules file (~/exercise/kubernetes/falco/custom-rules.yaml):
# Custom Falco rules for the lab
# These rules demonstrate how to extend Falco's detection capabilities
# Rule: Detect package manager usage in containers
- rule: Package Manager Executed in Container
desc: Detects package manager (apt, apk, yum, dnf) execution inside a container
condition: >
spawned_process and container and
proc.name in (apt, apt-get, apk, yum, dnf, pip, pip3, npm) and
not proc.pname in (sh, bash, dash)
output: >
Package manager executed in container
(container_id=%container.id container_name=%container.name
image=%container.image.repository command=%proc.cmdline
user=%user.name)
priority: WARNING
tags: [container, software_mgmt, mitre_execution]
# Rule: Detect writing to /etc directory
- rule: Write Below Etc in Container
desc: Detects attempts to write to /etc directory inside a container
condition: >
(evt.type in (open, openat, openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0)
and container
and fd.name startswith /etc
output: >
File below /etc opened for writing in container
(file=%fd.name container_id=%container.id container_name=%container.name
image=%container.image.repository command=%proc.cmdline
user=%user.name)
priority: WARNING
tags: [container, filesystem, mitre_persistence]
# Rule: Detect reading sensitive files
- rule: Read Sensitive File in Container
desc: Detects reading of sensitive files like /etc/shadow or /etc/passwd inside containers
condition: >
open_read and container and
(fd.name = /etc/shadow or fd.name = /etc/passwd)
output: >
Sensitive file read in container
(file=%fd.name container_id=%container.id container_name=%container.name
image=%container.image.repository command=%proc.cmdline
user=%user.name)
priority: WARNING
tags: [container, filesystem, mitre_credential_access]
Rule Components
- rule: Name of the rule
- desc: Human-readable description
- condition: The Falco condition language expression that triggers the rule
- output: The message format when the rule fires
- priority: Severity level (EMERGENCY, ALERT, CRITICAL, ERROR, WARNING, NOTICE, INFO, DEBUG)
- tags: Categories for organizing and filtering rules
Key Condition Elements
| Element | Description |
|---|---|
| spawned_process | A new process was created |
| container | Event occurred inside a container |
| proc.name | Process name (e.g., apt, sh, nmap) |
| proc.cmdline | Full command line |
| fd.name | File descriptor name (file path) |
| evt.type | System call type (open, read, write, etc.) |
For a full list take a look at the Supported Fields in the documentation.
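Falco also supports lists and macros, so conditions stay readable as they grow. A sketch reusing the package-manager rule above — names prefixed with lab_ are illustrative, not built-ins:

```yaml
# A list groups values; a macro names a reusable condition fragment
- list: lab_package_managers
  items: [apt, apt-get, apk, yum, dnf, pip, pip3, npm]

- macro: lab_spawned_in_container
  condition: spawned_process and container

# The earlier rule, rewritten against the list and macro
- rule: Package Manager Executed in Container
  desc: Detects package manager execution inside a container
  condition: lab_spawned_in_container and proc.name in (lab_package_managers)
  output: Package manager executed (command=%proc.cmdline container=%container.name)
  priority: WARNING
  tags: [container, software_mgmt]
```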
Questions
- Which processes would you look for to detect package installation?
- How could you reduce false positives?
Answers
- Processes to detect: apt, apt-get, apk, yum, dnf, pip, npm, gem
- Reduce false positives by:
  - Excluding known debug namespaces (kube-system, monitoring)
  - Excluding init containers (legitimate for some setups)
  - Matching only production namespaces
  - Adding further conditions (e.g., requiring specific arguments like install)
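Put together, a tightened variant of the rule might exclude namespaces where package installs are expected — a sketch; adjust the namespace list to your environment:

```yaml
# Same detection, scoped away from known debug namespaces.
# k8s.ns.name is a standard Falco field populated from container metadata.
- rule: Package Manager Executed in Container
  desc: Package manager execution, excluding known debug namespaces
  condition: >
    spawned_process and container and
    proc.name in (apt, apt-get, apk, yum, dnf) and
    not k8s.ns.name in (kube-system, monitoring)
  output: >
    Package manager executed in container
    (ns=%k8s.ns.name container=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, software_mgmt]
```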
6. Bonus: Deploy Custom Rules
Bonus Exercise
This section is optional and provides an additional challenge.
Task
- Deploy Falco with custom rules using Helm values
- Trigger the custom rule by running a package manager in a container
- Verify the custom alert appears
Helm values file (~/exercise/kubernetes/falco/values-custom-rules.yaml):
# Helm values for Falco with custom rules
# Used with: helm upgrade falco falcosecurity/falco -n falco -f values-custom-rules.yaml
tty: true
driver:
kind: modern_ebpf
customRules:
custom-rules.yaml: |
# Rule: Detect package manager usage in containers
- rule: Package Manager Executed in Container
desc: Detects package manager (apt, apk, yum, dnf) execution inside a container
condition: >
spawned_process and container and
proc.name in (apt, apt-get, apk, yum, dnf, pip, pip3, npm)
output: >
Package manager executed in container
(container_id=%container.id container_name=%container.name
image=%container.image.repository command=%proc.cmdline
user=%user.name)
priority: WARNING
tags: [container, software_mgmt, mitre_execution]
# Rule: Detect writing to /etc directory
- rule: Write Below Etc in Container
desc: Detects attempts to write to /etc directory inside a container
condition: >
(evt.type in (open, openat, openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0)
and container
and fd.name startswith /etc
output: >
File below /etc opened for writing in container
(file=%fd.name container_id=%container.id container_name=%container.name
image=%container.image.repository command=%proc.cmdline
user=%user.name)
priority: WARNING
tags: [container, filesystem, mitre_persistence]
Solution
Upgrade Falco with custom rules:
helm upgrade falco falcosecurity/falco -n falco \
--version 7.2.1 \
-f ~/exercise/kubernetes/falco/values-custom-rules.yaml \
--wait
kubectl rollout status daemonset/falco -n falco
kubectl exec -n falco-lab alpine -- apk update
kubectl logs -n falco -l app.kubernetes.io/name=falco --tail=20 | grep -i "package\|apk"
7. Bonus: Falco with Kubernetes Audit Logs
Bonus Exercise
This section is optional and provides an additional challenge.
Falco can also consume Kubernetes audit logs to detect suspicious API server activity, such as:
- Unauthorized access attempts
- Secrets being read
- Exec into pods
- ConfigMap modifications
Conceptual Overview
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ API Server │────▶│ Audit Logs │────▶│ Falco │
│ (K8s actions) │ │ (webhook/file) │ │ (detection) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
The k8saudit plugin embeds a webserver inside Falco that receives audit events via a webhook from the API server.
Task
- Create a kind cluster with audit webhook enabled
- Deploy a Falco instance dedicated to Kubernetes audit log processing
- Trigger audit events and observe the alerts
Hint
The Falco Helm chart ships a ready-made values file for the k8saudit plugin. See the Kubernetes Audit Events docs for details.
Solution
Step 1 - Create a kind cluster with audit webhook:
The API server needs an audit policy and a webhook config that sends events to Falco via NodePort.
Audit webhook config (~/exercise/kubernetes/falco/audit-webhook.yaml):
apiVersion: v1
kind: Config
clusters:
- name: falco
cluster:
server: http://localhost:30007/k8s-audit
contexts:
- name: default-context
context:
cluster: falco
current-context: default-context
Kind cluster config (~/exercise/kubernetes/falco/kind-config-k8saudit.yaml):
# Kind cluster configuration with Kubernetes audit webhook for Falco k8saudit plugin
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraMounts:
- hostPath: ./audit-policy.yaml
containerPath: /etc/kubernetes/audit/audit-policy.yaml
readOnly: true
- hostPath: ./audit-webhook.yaml
containerPath: /etc/kubernetes/audit/audit-webhook.yaml
readOnly: true
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
apiServer:
extraArgs:
audit-policy-file: /etc/kubernetes/audit/audit-policy.yaml
audit-webhook-config-file: /etc/kubernetes/audit/audit-webhook.yaml
audit-webhook-batch-max-wait: "5s"
extraVolumes:
- name: audit-policy
hostPath: /etc/kubernetes/audit/audit-policy.yaml
mountPath: /etc/kubernetes/audit/audit-policy.yaml
readOnly: true
pathType: File
- name: audit-webhook
hostPath: /etc/kubernetes/audit/audit-webhook.yaml
mountPath: /etc/kubernetes/audit/audit-webhook.yaml
readOnly: true
pathType: File
Create the cluster (the audit policy is reused from the audit-logs lab):
kind create cluster --name falco-audit --config ~/exercise/kubernetes/falco/kind-config-k8saudit.yaml
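If the audit-policy.yaml from the audit-logs lab isn't at hand, a minimal illustrative policy that captures the secret and exec activity used below could look like this — a sketch, not the exact file from that lab:

```yaml
# Minimal illustrative audit policy: full request/response bodies for
# secrets, request metadata for everything else (including pod exec)
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
  - level: Metadata
```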
Step 2 - Deploy Falco with the k8saudit plugin:
This runs as a separate Deployment (not a DaemonSet) because it only needs to receive webhook events, not observe syscalls. The key is using falcoctl.config.artifact.install.refs (not falcoctl.artifact.install.refs) to install the k8saudit plugin and rules.
Helm values (~/exercise/kubernetes/falco/values-k8saudit.yaml):
# Falco Helm values for k8saudit plugin (Kubernetes audit log processing)
# This disables syscall monitoring and runs Falco as a Deployment to receive
# audit events via webhook from the API server.
driver:
enabled: false
collectors:
enabled: false
tty: true
controller:
kind: deployment
deployment:
replicas: 1
falcoctl:
artifact:
install:
enabled: true
follow:
enabled: true
config:
artifact:
install:
refs: [k8saudit-rules:0.16, k8saudit:0.16]
follow:
refs: [k8saudit-rules:0.16]
falco:
plugins:
- name: k8saudit
library_path: libk8saudit.so
open_params: "http://:9765/k8s-audit"
- name: json
library_path: libjson.so
load_plugins:
- k8saudit
- json
rules_files:
- /etc/falco/k8s_audit_rules.yaml
services:
- name: k8saudit-webhook
type: NodePort
ports:
- port: 9765
nodePort: 30007
protocol: TCP
kubectl create namespace falco
helm install falco-k8saudit falcosecurity/falco -n falco \
--version 7.2.1 \
-f ~/exercise/kubernetes/falco/values-k8saudit.yaml \
--wait --timeout 3m
Verify the pod is running:
kubectl get pods -n falco
Step 3 - Trigger audit events and check alerts:
# Create a test namespace and pod
kubectl create namespace falco-lab
kubectl run test-pod --image=alpine:3.21 -n falco-lab --command -- sleep 3600
kubectl wait --for=condition=Ready pod/test-pod -n falco-lab --timeout=60s
# Create and read a secret (triggers "K8s Secret Created" and "K8s Secret Get")
kubectl create secret generic my-secret --from-literal=password=test -n falco-lab
kubectl get secret my-secret -n falco-lab -o yaml
# Exec into a pod (triggers "Attach/Exec to pod")
kubectl exec -n falco-lab test-pod -- ls /
# Wait for the audit webhook batch to flush and check the alerts
sleep 10
kubectl logs -n falco -l app.kubernetes.io/instance=falco-k8saudit -c falco --tail=20
Expected alerts:
Informational K8s Secret Created (user=kubernetes-admin secret=my-secret ns=falco-lab ...)
Error K8s Secret Get Successfully (user=kubernetes-admin secret=my-secret ns=falco-lab ...)
Notice Attach/Exec to pod (user=kubernetes-admin pod=test-pod ns=falco-lab action=exec command=ls)
Example audit-related rules (shipped with the k8saudit plugin):
- K8s Secret Get/List - Detect when secrets are accessed
- K8s Pod Exec - Detect exec commands (complements syscall detection)
- K8s ConfigMap Modified - Detect configuration changes
Tip
Combining syscall monitoring (what happens inside containers) with Kubernetes audit logs (what happens at the API level) provides comprehensive visibility into cluster activity. See the Falco k8saudit plugin docs and the plugin README for more details.
8. Clean Up
Remove the resources created during this lab:
kubectl delete namespace falco-lab
kind delete cluster --name falco-audit
Optional: Uninstall Falco
If you want to completely remove Falco:
helm uninstall falco -n falco
kubectl delete namespace falco
Recap
You have:
- Installed Falco 0.42.1 using Helm with the modern eBPF driver
- Detected interactive shell access in containers
- Detected network reconnaissance with tools like nmap
- Understood how Falco monitors file access
- Learned the structure of custom Falco rules
- (Bonus) Deployed custom rules to detect package manager usage
- (Bonus) Understood how Falco can integrate with Kubernetes audit logs
Wrap-Up Questions
Discussion
- Which alert provided the strongest security signal?
- Which behaviors would you want to block completely (use admission controllers)?
- Which behaviors would you only monitor (use Falco)?
- How does Falco fit into a defense-in-depth strategy?
Discussion Points
- Strongest signals: Interactive shell access and network scanning tools are high-confidence indicators of suspicious activity in production workloads.
- Block vs Monitor:
- Block (Kyverno): Privileged containers, host namespaces, dangerous capabilities, missing security context
- Monitor (Falco): Shell access, unusual network activity, file access patterns, process execution
- Defense in depth:
- RBAC: Controls who can do what
- Kyverno: Controls what can be deployed
- Falco: Detects what is happening at runtime
- Together they provide prevention, enforcement, and detection layers
Further Reading
- Falco Documentation
- Falco GitHub Repository
- Falco Rules Reference
- Falco Helm Chart
- eBPF Explained
- MITRE ATT&CK Framework (referenced in Falco rule tags)
- Kubernetes Audit Logs
End of Lab