It helps to first understand Configure a Pod to Use a ConfigMap.
Real World Example: Configuring Redis using a
ConfigMap
Follow the steps below to configure a Redis cache using data stored in a ConfigMap.
First create a ConfigMap with an empty configuration block:
cat <<EOF >./example-redis-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: ""
EOF
Apply the ConfigMap created above, along with a Redis pod manifest:
kubectl apply -f example-redis-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml
Examine the contents of the Redis pod manifest and note the following:
A volume named config is created by spec.volumes[1]
The key and path under spec.volumes[1].configMap.items[0] exposes the redis-config key
from the example-redis-config ConfigMap as a file named redis.conf on the config
volume.
The config volume is then mounted at /redis-master by
spec.containers[0].volumeMounts[1] .
This has the net effect of exposing the data in data.redis-config from the example-redis-config
ConfigMap above as /redis-master/redis.conf inside the Pod.
pods/config/redis-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
      - redis-server
      - "/redis-master/redis.conf"
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: example-redis-config
        items:
        - key: redis-config
          path: redis.conf
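Although not part of the official steps, a quick way to confirm that mapping (assuming the redis Pod is already running) is to read the projected file directly. Because the redis-config key is still empty at this point, the file exists but has no content:
kubectl exec redis -- cat /redis-master/redis.conf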
Examine the created objects:
kubectl get pod/redis configmap/example-redis-config
You should see the following output:
NAME READY STATUS RESTARTS AGE
pod/redis 1/1 Running 0 8s
NAME DATA AGE
configmap/example-redis-config 1 14s
Recall that we left the redis-config key in the example-redis-config ConfigMap blank:
kubectl describe configmap/example-redis-config
You should see an empty redis-config key:
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:
kubectl exec -it redis -- redis-cli
Check maxmemory :
127.0.0.1:6379> CONFIG GET maxmemory
It should show the default value of 0:
1) "maxmemory"
2) "0"
Similarly, check maxmemory-policy :
127.0.0.1:6379> CONFIG GET maxmemory-policy
Which should also yield its default value of noeviction :
1) "maxmemory-policy"
2) "noeviction"
Now let's add some configuration values to the example-redis-config ConfigMap:
pods/config/example-redis-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
Apply the updated ConfigMap:
kubectl apply -f example-redis-config.yaml
Confirm that the ConfigMap was updated:
kubectl describe configmap/example-redis-config
You should see the configuration values we just added:
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was
applied:
kubectl exec -it redis -- redis-cli
Check maxmemory :
127.0.0.1:6379> CONFIG GET maxmemory
It remains at the default value of 0:
1) "maxmemory"
2) "0"
Similarly, maxmemory-policy remains at the noeviction default setting:
127.0.0.1:6379> CONFIG GET maxmemory-policy
Returns:
1) "maxmemory-policy"
2) "noeviction"
The configuration values have not changed because the Pod needs to be restarted to pick up
updated values from associated ConfigMaps. Let's delete and recreate the Pod:
kubectl delete pod redis
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml
Now re-check the configuration values one last time:
kubectl exec -it redis -- redis-cli
Check maxmemory :
127.0.0.1:6379> CONFIG GET maxmemory
It should now return the updated value of 2097152:
1) "maxmemory"
2) "2097152"
Similarly, maxmemory-policy has also been updated:
127.0.0.1:6379> CONFIG GET maxmemory-policy
It now reflects the desired value of allkeys-lru :
1) "maxmemory-policy"
2) "allkeys-lru"
Clean up your work by deleting the created resources:
kubectl delete pod/redis configmap/example-redis-config
What's next
Learn more about ConfigMaps .
Security
Security is an important concern for most organizations and people who run Kubernetes
clusters. You can find a basic security checklist elsewhere in the Kubernetes documentation.
To learn how to deploy and manage security aspects of Kubernetes, you can follow the tutorials
in this section.
Apply Pod Security Standards at the Cluster Level
Apply Pod Security Standards at the Namespace Level
Restrict a Container's Access to Resources with AppArmor
Restrict a Container's Syscalls with seccomp
Apply Pod Security Standards at the
Cluster Level
Note
This tutorial applies only for new clusters.
Pod Security is an admission controller that carries out checks against the Kubernetes Pod
Security Standards when new pods are created. The feature reached general availability (GA) in v1.25. This tutorial
shows you how to enforce the baseline Pod Security Standard at the cluster level, which applies
a standard configuration to all namespaces in a cluster.
To apply Pod Security Standards to specific namespaces, refer to Apply Pod Security Standards
at the namespace level .
If you are running a version of Kubernetes other than v1.29, check the documentation for that
version.
Before you begin
Install the following on your workstation:
kind
kubectl
This tutorial demonstrates what you can configure for a Kubernetes cluster that you fully
control. If you are learning how to configure Pod Security Admission for a managed cluster
where you are not able to configure the control plane, read Apply Pod Security Standards at the
namespace level.
Choose the right Pod Security Standard to apply
Pod Security Admission lets you apply built-in Pod Security Standards with the following
modes: enforce , audit , and warn .
To gather information that helps you to choose the Pod Security Standards that are most
appropriate for your configuration, do the following:
Create a cluster with no Pod Security Standards applied:
kind create cluster --name psa-wo-cluster-pss
The output is similar to:
Creating cluster "psa-wo-cluster-pss" ...
Ensuring node image (kindest/node:v1.29.2)
Preparing nodes
Writing configuration
Starting control-plane
Installing CNI
Installing StorageClass
Set kubectl context to "kind-psa-wo-cluster-pss"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-wo-cluster-pss
Thanks for using kind!
Set the kubectl context to the new cluster:
kubectl cluster-info --context kind-psa-wo-cluster-pss
The output is similar to this:
Kubernetes control plane is running at https://127.0.0.1:61350
CoreDNS is running at https://127.0.0.1:61350/api/v1/namespaces/kube-system/services/
kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Get a list of namespaces in the cluster:
kubectl get ns
The output is similar to this:
NAME STATUS AGE
default Active 9m30s
kube-node-lease Active 9m32s
kube-public Active 9m32s
kube-system Active 9m32s
local-path-storage Active 9m26s
Use --dry-run=server to understand what happens when different Pod Security Standards
are applied:
Privileged
kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=privileged
The output is similar to:
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
namespace/kube-system labeled
namespace/local-path-storage labeled
Baseline
kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=baseline
The output is similar to:
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
Warning: existing pods in namespace "kube-system" violate the new PodSecurity
enforce level "baseline:latest"
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host
namespaces, hostPath volumes
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath
volumes
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged
namespace/kube-system labeled
namespace/local-path-storage labeled
Restricted
kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=restricted
The output is similar to:
namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
Warning: existing pods in namespace "kube-system" violate the new PodSecurity
enforce level "restricted:latest"
Warning: coredns-7bb9c7b568-hsptc (and 1 other pod): unrestricted capabilities,
runAsNonRoot != true, seccompProfile
Warning: etcd-psa-wo-cluster-pss-control-plane (and 3 other pods): host
namespaces, hostPath volumes, allowPrivilegeEscalation != false, unrestricted
capabilities, restricted volume types, runAsNonRoot != true
Warning: kindnet-vzj42: non-default capabilities, host namespaces, hostPath
volumes, allowPrivilegeEscalation != false, unrestricted capabilities, restricted
volume types, runAsNonRoot != true, seccompProfile
Warning: kube-proxy-m6hwf: host namespaces, hostPath volumes, privileged,
allowPrivilegeEscalation != false, unrestricted capabilities, restricted volume types,
runAsNonRoot != true, seccompProfile
namespace/kube-system labeled
Warning: existing pods in namespace "local-path-storage" violate the new
PodSecurity enforce level "restricted:latest"
Warning: local-path-provisioner-d6d9f7ffc-lw9lh: allowPrivilegeEscalation != false,
unrestricted capabilities, runAsNonRoot != true, seccompProfile
namespace/local-path-storage labeled
From the previous output, you'll notice that applying the privileged Pod Security Standard
shows no warnings for any namespaces. However, the baseline and restricted standards both produce
warnings, specifically in the kube-system namespace.
Set modes, versions and standards
In this section, you apply the following Pod Security Standards to the latest version:
baseline standard in enforce mode.
restricted standard in warn and audit mode.
The baseline Pod Security Standard provides a convenient middle ground that allows keeping
the exemption list short and prevents known privilege escalations.
Additionally, to prevent pods from failing in kube-system , you'll exempt the namespace from
having Pod Security Standards applied.
When you implement Pod Security Admission in your own environment, consider the
following:
Based on the risk posture applied to a cluster, a stricter Pod Security Standard like
restricted might be a better choice.
Exempting the kube-system namespace allows pods to run as privileged in this
namespace. For real world use, the Kubernetes project strongly recommends that you
apply strict RBAC policies that limit access to kube-system , following the principle of
least privilege. A minimal sketch of such a restrictive RBAC policy follows.
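As one hedged illustration of that advice (the Role name, resource list, and the ci-bot ServiceAccount below are hypothetical and not part of this tutorial), you might grant a ServiceAccount read-only access to kube-system instead of binding any broad role in that namespace:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-system-read-only    # hypothetical Role name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-read-only-binding     # hypothetical binding name
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: ci-bot                   # hypothetical ServiceAccount
  namespace: default
roleRef:
  kind: Role
  name: kube-system-read-only
  apiGroup: rbac.authorization.k8s.io
EOF
To implement the preceding standards, do the following: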
Create a configuration file that can be consumed by the Pod Security Admission
Controller to implement these Pod Security Standards:
mkdir -p /tmp/pss
cat <<EOF > /tmp/pss/cluster-level-pss.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
configuration:
apiVersion: pod-security.admission.config.k8s.io/v1
kind: PodSecurityConfiguration
defaults:
enforce: "baseline"
enforce-version: "latest"
audit: "restricted"
audit-version: "latest"
warn: "restricted"
warn-version: "latest"
exemptions:
usernames: []
runtimeClasses: []
namespaces: [kube-system]
EOF
Note: pod-security.admission.config.k8s.io/v1 configuration requires v1.25+. For v1.23
and v1.24, use v1beta1 . For v1.22, use v1alpha1 .
Configure the API server to consume this file during cluster creation:
cat <<EOF > /tmp/pss/cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
apiServer:
extraArgs:
admission-control-config-file: /etc/config/cluster-level-pss.yaml
extraVolumes:
- name: accf
hostPath: /etc/config
mountPath: /etc/config
readOnly: false
pathType: "DirectoryOrCreate"
extraMounts:
- hostPath: /tmp/pss
containerPath: /etc/config
# optional: if set, the mount is read-only.
# default false
readOnly: false
# optional: if set, the mount needs SELinux relabeling.
# default false
selinuxRelabel: false
# optional: set propagation mode (None, HostToContainer or Bidirectional)
# see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
# default None
propagation: None
EOF
Note: If you use Docker Desktop with kind on macOS, you can add /tmp as a Shared
Directory under the menu item Preferences > Resources > File Sharing.
Create a cluster that uses Pod Security Admission to apply these Pod Security Standards:
kind create cluster --name psa-with-cluster-pss --config /tmp/pss/cluster-config.yaml
The output is similar to this:
Creating cluster "psa-with-cluster-pss" ...
Ensuring node image (kindest/node:v1.29.2)
Preparing nodes
Writing configuration
Starting control-plane
Installing CNI
Installing StorageClass
Set kubectl context to "kind-psa-with-cluster-pss"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-with-cluster-pss
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/
#community
Point kubectl to the cluster:
kubectl cluster-info --context kind-psa-with-cluster-pss
The output is similar to this:
Kubernetes control plane is running at https://127.0.0.1:63855
CoreDNS is running at https://127.0.0.1:63855/api/v1/namespaces/kube-system/services/
kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Create a Pod in the default namespace:
security/example-baseline-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
kubectl apply -f https://k8s.io/examples/security/example-baseline-pod.yaml
The pod is started normally, but the output includes a warning:
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false
(container "nginx" must set securityContext.allowPrivilegeEscalatio | 8,320 |
n=false), unrestricted
capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]),
runAsNonRoot != true (pod or container "nginx" must set
securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set
securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
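The warning lists exactly which fields a restricted-compliant Pod must set. As a hedged sketch (not one of this tutorial's steps; the Pod name is hypothetical, and the stock nginx image would additionally need to run as a non-root user, for example via an unprivileged image variant), a manifest that would pass the restricted checks looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-restricted              # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    securityContext:
      allowPrivilegeEscalation: false   # addresses "allowPrivilegeEscalation != false"
      runAsNonRoot: true                # addresses "runAsNonRoot != true"
      capabilities:
        drop: ["ALL"]                   # addresses "unrestricted capabilities"
      seccompProfile:
        type: RuntimeDefault            # addresses "seccompProfile"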
Clean up
Now delete the clusters that you created above by running the following commands:
kind delete cluster --name psa-with-cluster-pss
kind delete cluster --name psa-wo-cluster-pss
What's next
Run a shell script to perform all the preceding steps at once:
Create a Pod Security Standards based cluster level Configuration
Create a file to let API server consume this configuration
Create a cluster that creates an API server with this configuration
Set kubectl context to this new cluster
Create a minimal pod yaml file
Apply this file to create a Pod in the new cluster
Pod Security Admission
Pod Security Standards
Apply Pod Security Standards at the namespace level
Apply Pod Security Standards at the
Namespace Level
Note
This tutorial applies only for new clusters.
Pod Security Admission is an admission controller that applies Pod Security Standards when
pods are created. The feature reached general availability (GA) in v1.25. In this tutorial, you will enforce the baseline Pod
Security Standard, one namespace at a time.
You can also apply Pod Security Standards to multiple namespaces at once at the cluster level.
For instructions, refer to Apply Pod Security Standards at the cluster level.
Before you begin
Install the following on your workstation:
kind
kubectl
Create cluster
Create a kind cluster as follows:
kind create cluster --name psa-ns-level
The output is similar to this:
Creating cluster "psa-ns-level" ...
Ensuring node image (kindest/node:v1.29.2)
Preparing nodes
Writing configuration
Starting control-plane
Installing CNI
Installing StorageClass
Set kubectl context to "kind-psa-ns-level"
You can now use your cluster with:
kubectl cluster-info --context kind-psa-ns-level
Not sure what to do next? Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Set the kubectl context to the new cluster:
kubectl cluster-info --context kind-psa-ns-level
The output is similar to this:
Kubernetes control plane is running at https://127.0.0.1:50996
CoreDNS is running at https://127.0.0.1:50996/api/v1/namespaces/kube-system/services/
kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Create a namespace
Create a new namespace called example :
kubectl create ns example
The output is similar to this:
namespace/example created
Enable Pod Security Standards checking for that
namespace
Enable Pod Security Standards on this namespace using labels supported by built-in Pod
Security Admission. In this step you will configure a check to warn on Pods that don't
meet the latest version of the baseline pod security standard.
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/warn-version=latest
You can configure multiple pod security standard checks on any namespace, using labels.
The following command enforces the baseline Pod Security Standard, but warns and
audits for the restricted Pod Security Standard as per the latest version (the default value):
kubectl label --overwrite ns example \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=latest \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=latest \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=latest
Verify the Pod Security Standard enforcement
Create a baseline Pod in the example namespace:
kubectl apply -n example -f https://k8s.io/examples/security/example-baseline-pod.yaml
The Pod does start OK; the output includes a warning. For example:
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false
(container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted
capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]),
runAsNonRoot != true (pod or container "nginx" must set
securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set
securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/nginx created
Create a baseline Pod in the default namespace:
kubectl apply -n default -f https://k8s.io/examples/security/example-baseline-pod.yaml
Output is similar to this:
pod/nginx created
The Pod Security Standards enforcement and warning settings were applied only to the
example namespace. You could create the same Pod in the default namespace with no warnings.
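To confirm which Pod Security labels each namespace carries (and why only example produced a warning), you can compare their labels directly:
kubectl get ns example default --show-labels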
Clean up
Now delete the cluster which you created above by running the following command:
kind delete cluster --name psa-ns-level
What's next
Run a shell script to perform all the preceding steps at once:
Create kind cluster
Create new namespace
Apply the baseline Pod Security Standard in enforce mode while also applying the
restricted Pod Security Standard in warn and audit mode.
Create a new pod with the following pod security standards applied
Pod Security Admission
Pod Security Standards
Apply Pod Security Standards at the cluster level
Restrict a Container's Access to Resources
with AppArmor
FEATURE STATE: Kubernetes v1.4 [beta]
AppArmor is a Linux kernel security module that supplements the standard Linux user and
group based permissions to confine programs to a limited set of resources. AppArmor can be
configured for any application to reduce its potential attack surface and provide greater in-
depth defense. It is configured through profiles tuned to allow the access needed by a specific
program or container, such as Linux capabilities, network access, file permissions, etc. Each
profile can be run in either enforcing mode, which blocks access to disallowed resources, or
complain mode, which only reports violations.
AppArmor can help you to run a more secure deployment by restricting what containers are
allowed to do, and/or provide better auditing through system logs. However, it is important to
keep in mind that AppArmor is not a silver bullet and can only do so much to protect against
exploits in your application code. It is important to provide good, restrictive profiles, and
harden your applications and cluster from other angles as well.
Objectives
See an example of how to load a profile on a node
Learn how to enforce the profile on a Pod
Learn how to check that the profile is loaded
See what happens when a profile is violated
See what happens when a profile cannot be loaded
Before you begin
Make sure:
Kubernetes version is at least v1.4 -- Kubernetes support for AppArmor was added in
v1.4. Kubernetes components older than v1.4 are not aware of the new AppArmor
annotations, and will silently ignore any AppArmor settings that are provided. To
ensure that your Pods are receiving the expected protections, it is important to verify the
Kubelet version of your nodes:
kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
gke-test-default-pool-239f5d02-gyn2: v1.4.0
gke-test-default-pool-239f5d02-x1kf: v1.4.0
gke-test-default-pool-239f5d02-xwux: v1.4.0
AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor
profile, the AppArmor kernel module must be installed and enabled. Several distributions
enable the module by default, such as Ubuntu and SUSE, and many others provide
optional support. To check whether the module is enabled, check the
/sys/module/apparmor/parameters/enabled file:
cat /sys/module/apparmor/parameters/enabled
Y
If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with
AppArmor options if the kernel module is not enabled.
Note: Ubuntu carries many AppArmor patches that have not been merged into the upstream
Linux kernel, including patches that add additional hooks and features. Kubernetes has only
been tested with the upstream version, and does not promise support for other features.
Container runtime supports AppArmor -- Currently all common Kubernetes-supported
container runtimes should support AppArmor, like Docker , CRI-O or containerd . Please
refer to the corresponding runtime documentation and verify that the cluster fulfills the
requirements to use AppArmor.
Profile is loaded -- AppArmor is applied to a Pod by specifying an AppArmor profile that
each container should be run with. If any of the specified profiles is not already loaded in
the kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are
loaded on a node by checking the /sys/kernel/security/apparmor/profiles file. For
loaded on a node by checking the /sys/kernel/security/apparmor/profiles file. For
example:
ssh gke-test-default-pool-239f5d02-gyn2 "sudo cat /sys/kernel/security/apparmor/profiles | sort"
apparmor-test-deny-write (enforce)
apparmor-test-audit-write (enforce)
docker-default (enforce)
k8s-nginx (enforce)
For more details on loading profiles on nodes, see Setting up nodes with profiles .
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a
Pod with AppArmor options if any of the prerequisites are not met. You can also verify
AppArmor support on nodes by checking the node ready condition message (though this is
likely to be removed in a later release):
kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
Securing a Pod
Note: AppArmor is currently in beta, so options are specified as annotations. Once support
graduates to general availability, the annotations will be replaced with first-class fields.
AppArmor profiles are specified per-container . To specify the AppArmor profile to run a Pod
container with, add an annotation to the Pod's metadata:
container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
Where <container_name> is the name of the container to apply the profile to, and <profile_ref>
specifies the profile to apply. The profile_ref can be one of:
runtime/default to apply the runtime's default profile
localhost/<profile_name> to apply the profile loaded on the host with the name
<profile_name>
unconfined to indicate that no profiles will be loaded
See the API Reference for the full details on the annotation and profile name formats.
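As an illustration only (the container names app, sidecar, and debug and the profile name my-profile are placeholders, not part of this tutorial), the three reference forms look like this in a Pod's metadata:
metadata:
  annotations:
    # Use the container runtime's default profile
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
    # Use a profile named "my-profile" that is already loaded on the node
    container.apparmor.security.beta.kubernetes.io/sidecar: localhost/my-profile
    # Run this container without AppArmor confinement
    container.apparmor.security.beta.kubernetes.io/debug: unconfined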
Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been
met, and then forwarding the profile selection to the container runtime for enforcement. If the
prerequisites have not been met, the Pod will be rejected, and will not run.
To verify that the profile was applied, you can look for the AppArmor security option listed in
the container created event:
kubectl get events | grep Created
22s 22s 1 hello-apparmor Pod spec.containers{hello} Normal Created
{kubelet e2e-test-stclair-node-pool-31nt} Created container with docker id 269a53b202d3;
Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
You can also verify directly that the container's root process is running with the correct profile
by checking its proc attr:
kubectl exec <pod_name> -- cat /proc/1/attr/current
k8s-apparmor-example-deny-write (enforce)
Example
This example assumes you have already set up a cluster with AppArmor support.
First, we need to load the profile we want to use onto our nodes. This profile denies all file
writes:
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
#include <abstractions/base>
file,
# Deny all file writes.
deny /** w,
}
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes. For this example we'll use SSH to install the profiles, but other approaches are discussed
in Setting up nodes with profiles .
NODES=(
    # The SSH-accessible domain names of your nodes
    gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
    gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
    gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s )
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
#include <abstractions/base>
file,
# Deny all file writes.
deny /** w,
}
EOF'
done
Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
pods/security/hello-apparmor.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
    # Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
kubectl create -f ./hello-apparmor.yaml
If we look at the pod events, we can see that the Pod container was created with the AppArmor
profile "k8s-apparmor-example-deny-write":
kubectl get events | grep hello-apparmor
14s 14s 1 hello-apparmor Pod Normal Scheduled {default-
scheduler } Successfully assigned hello-apparmor to gke-test-default-
pool-239f5d02-gyn2
14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling
{kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled
{kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Created
{kubelet gke-test-default-pool-239f5d02-gyn2} Created container with docker id 06b6cd1c0989;
Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started
{kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989
We can verify that the container is actually running with that profile by checking its proc attr:
kubectl exec hello-apparmor -- cat /proc/1/attr/current
k8s-apparmor-example-deny-write (enforce)
Finally, we can see what happens if we try to violate the profile by writing to a file:
kubectl exec hello-apparmor -- touch /tmp/test
touch: /tmp/test: Permission denied
error: error executing remote command: command terminated with non-zero exit code: Error
executing in Docker Container: 1
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-2
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-allow-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
pod/hello-apparmor-2 created
kubectl describe pod hello-apparmor-2
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/
Start Time: Tue, 30 Aug 2016 17:58:56 -0700
Labels: <none>
Annotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-
example-allow-write
Status: Pending
Reason: AppArmor
Message: Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is
not loaded
IP:
Controllers: <none>
Containers:
hello:
Container ID:
Image: busybox
Image ID:
Port:
Command:
sh
-c
echo 'Hello AppArmor!' && sleep 1h
State: Waiting
Reason: Blocked
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-dnz7v (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-dnz7v:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-dnz7v
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason
Message
--------- -------- ----- ---- ------------- -------- ------ -------
23s 23s 1 {default-scheduler } Normal Scheduled
Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
23s 23s 1 {kubelet e2e-test-stclair-node-pool-t1f5} Warning
AppArmor Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not
loaded
Note the pod status is Pending, with a helpful error message: Pod Cannot enforce AppArmor:
profile "k8s-apparmor-example-allow-write" is not loaded . An event was also recorded with the
same message.
Administration
Setting up nodes with profiles
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles
onto nodes. There are lots of ways to set up the profiles though, such as:
Through a DaemonSet that runs a Pod on each node to ensure the correct profiles are
loaded. An example implementation can be found here.
At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.)
or image.
By copying the profiles to each node and loading them through SSH, as demonstrated in
the Example .
The scheduler is not aware of which profiles are loaded onto which node, so the full set of
profiles must be loaded onto every node. An alternative approach is to add a node label for each
profile (or class of profiles) on the node, and use a node selector to ensure the Pod is run on a
node with the required profile.
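A minimal sketch of that label-based approach (the label key apparmor-profiles/deny-write and the Pod name are illustrative assumptions, not an official convention) could look like this:
# Label each node where the profile has been loaded, for example:
#   kubectl label node <node-name> apparmor-profiles/deny-write=loaded
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-pinned        # hypothetical name
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  nodeSelector:
    apparmor-profiles/deny-write: "loaded"
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]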
Disabling AppArmor
If you do not want AppArmor to be available on your cluster, it can be disabled by a command-
line flag:
--feature-gates=AppArmor=false
When disabled, any Pod that includes an AppArmor profile will fail validation with a
"Forbidden" error.
Note: Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile.
The option to disable the AppArmor feature will be removed when AppArmor graduates to
general availability (GA).
Authoring Profiles
Getting AppArmor profiles specified correctly can be a tricky business. Fortunately there are
some tools to help with that:
some tools to help with that:
aa-genprof and aa-logprof generate profile rules by monitoring an application's activity
and logs, and admitting the actions it takes. Further instructions are provided by the
AppArmor documentation .
bane is an AppArmor profile generator for Docker that uses a simplified profile language.
To debug problems with AppArmor, you can check the system logs to see what, specifically,
was denied. AppArmor logs verbose messages to dmesg , and errors can usually be found in the
system logs or through journalctl . More information is provided in AppArmor failures .
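For example, on a node where you suspect a denial, a search along these lines (exact log locations vary by distribution) usually surfaces the relevant records:
# Look for AppArmor denial messages in the kernel ring buffer and the journal
sudo dmesg | grep -i 'apparmor.*denied'
sudo journalctl -k | grep -i 'apparmor.*denied'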
API Reference
Pod Annotation
Specifying the profile a container will run with:
key: container.apparmor.security.beta.kubernetes.io/<container_name> Where
<container_name> matches the name of a container in the Pod. A separate profile can be
specified for each container in the Pod.
value: a profile reference, described below
Profile Reference
runtime/default: Refers to the default runtime profile.
Equivalent to not specifying a profile, except it still requires AppArmor to be enabled.
In practice, many container runtimes use the same OCI default profile, defined
here: https://github.com/containers/common/blob/main/pkg/apparmor/
apparmor_linux_template.go
localhost/<profile_name>: Refers to a profile loaded on the node (localhost) by name.
The possible profile names are detailed in the core policy reference .
unconfined: This effectively disables AppArmor on the container.
Any other profile reference format is invalid.
What's next
Additional resources:
Quick guide to the AppArmor profile language
AppArmor core policy reference
Restrict a Container's Syscalls with
seccomp
FEATURE STATE: Kubernetes v1.19 [stable]
Seccomp stands for secure computing mode and has been a feature of the Linux kernel since
version 2.6.12. It can be used to sandbox the privileges of a process, restricting the calls it is able
to make from userspace into the kernel. Kubernetes lets you automatically apply seccomp
profiles loaded onto a node to your Pods and containers.
Identifying the privileges required for your workloads can be difficult. In this tutorial, you will
go through how to load seccomp profiles into a local Kubernetes cluster, how to apply them to a
Pod, and how you can begin to craft profiles that give only the necessary privileges to your
container processes.
Objectives
Learn how to load seccomp profiles on a node
Learn how to apply a seccomp profile to a container
Observe auditing of syscalls made by a container process
Observe behavior when a missing profile is specified
Observe a violation of a seccomp profile
Learn how to create fine-grained seccomp profiles
Learn how to apply a container runtime default seccomp profile
Before you begin
In order to complete all steps in this tutorial, you must install kind and kubectl .
The commands used in the tutorial assume that you are using Docker as your container
runtime. (The cluster that kind creates may use a different container runtime internally). You
could also use Podman but in that case, you would have to follow specific instructions in order
to complete the tasks successfully.
This tutorial shows some examples that are still beta (since v1.25) and others that use only
generally available seccomp functionality. You should make sure that your cluster is configured
correctly for the version you are using.
The tutorial also uses the curl tool for downloading examples to your computer. You can adapt
the steps to use a different tool if you prefer.
Note: It is not possible to apply a seccomp profile to a container running with privileged: true
set in the container's securityContext . Privileged containers always run as Unconfined .
Download example seccomp profiles
The contents of these profiles will be explored later on, but for now go ahead and download
them into a directory named profiles/ so that they can be loaded into the cluster.
audit.json
violation.json
fine-grained.json
pods/security/seccomp/profiles/audit.json
{
"defaultAction" : "SCMP_ACT_LOG"
}
pods/security/seccomp/profiles/violation.json
{
"defaultAction" : "SCMP_ACT_ERRNO"
}
pods/security/seccomp/profiles/fine-grained.json
{
"defaultAction" : "SCMP_ACT_ERRNO" ,
"architectures" : [
"SCMP_ARCH_X86_64" ,
"SCMP_ARCH_X86" ,
"SCMP_ARCH_X32"
],
"syscalls" : [
{
"names" : [
"accept4" ,
"epoll_wait" ,
"pselect6" ,
"futex" ,
"madvise" ,
"epoll_ctl" ,
"getsockname" ,
"setsockopt" ,
"vfork" ,
"mmap" ,•
•
| 8,351 |
"read" ,
"write" ,
"close" ,
"arch_prctl" ,
"sched_getaffinity" ,
"munmap" ,
"brk" ,
"rt_sigaction" ,
"rt_sigprocmask" ,
"sigaltstack" ,
"gettid" ,
"clone" ,
"bind" ,
"socket" ,
"openat" ,
"readlinkat" ,
"exit_group" ,
"epoll_create1" ,
"listen" ,
"rt_sigreturn" ,
"sched_yield" ,
"clock_gettime" ,
"connect" ,
"dup2" ,
"epoll_pwait" ,
"execve" ,
"exit" ,
"fcntl" ,
"getpid" ,
"getuid" ,
"ioctl" ,
"mprotect" ,
"nanosleep" ,
"open" ,
"poll" ,
"recvfrom" ,
"sendto" ,
"set_tid_address" ,
"setitimer" ,
"writev"
],
"action" : "SCMP_ACT_ALLOW"
}
]
}
Run these commands:
mkdir ./profiles
curl -L -o profiles/audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json
curl -L -o profiles/violation.json https://k8s.io/examples/pods/security/seccomp/profiles/violation.json
curl -L -o profiles/fine-grained.json https://k8s.io/examples/pods/security/seccomp/profiles/fine-grained.json
ls profiles
You should see three profiles listed at the end of the final step:
audit.json fine-grained.json violation.json
Create a local Kubernetes cluster with kind
For simplicity, kind can be used to create a single node cluster with the seccomp profiles loaded.
Kind runs Kubernetes in Docker, so each node of the cluster is a container. This allows for files
to be mounted in the filesystem of each container similar to loading files onto a node.
pods/security/seccomp/kind.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: "./profiles"
    containerPath: "/var/lib/kubelet/seccomp/profiles"
Download that example kind configuration, and save it to a file named kind.yaml :
curl -L -O https://k8s.io/examples/pods/security/seccomp/kind.yaml
You can set a specific Kubernetes version by setting the node's container image. See Nodes
within the kind documentation about configuration for more details on this. This tutorial
assumes you are using Kubernetes v1.29.
As a beta feature, you can configure Kubernetes to use the profile that the container runtime
prefers by default, rather than falling back to Unconfined . If you want to try that, see enable the
use of RuntimeDefault as the default seccomp profile for all workloads before you continue.
Once you have a kind configuration in place, create the kind cluster with that configuration:
kind create cluster --config=kind.yaml
After the new Kubernetes cluster is ready, identify the Docker container running as the single
node cluster:
docker ps
You should see output indicating that a container is running with name kind-control-plane . The
output is similar to:
CONTAINER ID   IMAGE                  COMMAND                    CREATED          STATUS          PORTS                       NAMES
6a96207fed4b   kindest/node:v1.18.2   "/usr/local/bin/entr..."   27 seconds ago   Up 24 seconds   127.0.0.1:42223->6443/tcp   kind-control-plane
If observing the filesystem of that container, you should see that the profiles/ directory has been
successfully loaded into the default seccomp path of the kubelet. Use docker exec to run a
command in the Pod:
# Change 6a96207fed4b to the container ID you saw from "docker ps"
docker exec -it 6a96207fed4b ls /var/lib/kubelet/seccomp/profiles
audit.json fine-grained.json violation.json
You have verified that these seccomp profiles are available to the kubelet running within kind.
Create a Pod that uses the container runtime default
seccomp profile
Most container runtimes provide a sane set of default syscalls that are allowed or not. You can
adopt these defaults for your workload by setting the seccomp type in the security context of a
pod or container to RuntimeDefault .
Note: If you have the seccompDefault configuration enabled, then Pods use the RuntimeDefault
seccomp profile whenever no other seccomp profile is specified. Otherwise, the default is
Unconfined .
Here's a manifest for a Pod that requests the RuntimeDefault seccomp profile for all its
containers:
pods/security/seccomp/ga/default-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-pod
  labels:
    app: default-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=just made some more syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
Create that Pod:
kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/default-pod.yaml
kubectl get pod default-pod
The Pod should be showing as having started successfully:
NAME READY STATUS RESTARTS AGE
default-pod 1/1 Running 0 20s
Delete the Pod before moving to the next section:
kubectl delete pod default-pod --wait --now
Create a Pod with a seccomp profile for syscall auditing
To start off, apply the audit.json profile, which will log all syscalls of the process, to a new Pod.
Here's a manifest for that Pod:
pods/security/seccomp/ga/audit-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: audit-pod
  labels:
    app: audit-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=just made some syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
Note: Older versions of Kubernetes allowed you to configure seccomp behavior using
annotations. Kubernetes 1.29 only supports using fields within .spec.securityContext to
configure seccomp, and this tutorial explains that approach.
Create the Pod in the cluster:
kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/audit-pod.yaml
This profile does not restrict any syscalls, so the Pod should start successfully.
kubectl get pod audit-pod
NAME READY STATUS RESTARTS AGE
audit-pod 1/1 Running 0 30s
In order to be able to interact with this endpoint exposed by this container, create a NodePort
Service that allows access to the endpoint from inside the kind control plane container.
kubectl expose pod audit-pod --type NodePort --port 5678
Check what port the Service has been assigned on the node.
kubectl get service audit-pod
The output is similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
audit-pod NodePort 10.111.36.142 <none> 5678:32373/TCP 72s
Now you can use curl to access that endpoint from inside the kind control plane container, at
the port exposed by this Service. Use docker exec to run the curl command within the container
belonging to that control plane container:
# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from "docker ps"
docker exec -it 6a96207fed4b curl localhost:32373
just made some syscalls!
You can see that the process is running, but what syscalls did it actually make? Because this
Pod is running in a local cluster, you should be able to see those in /var/log/syslog on your local
system. Open up a new terminal window and tail the output for calls from http-echo :
# The log path on your computer might be different from "/var/log/syslog"
tail -f /var/log/syslog | grep 'http-echo'
You should already see some logs of syscalls made by http-echo , and if you run curl again inside
the control plane container you will see more output written to the log.
For example:
Jul 6 15:37:40 my-machine kernel: [369128.669452] audit: type=1326
audit(1594067860.484:14536): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=51 compat=0 ip=0x46fe1f
code=0x7ffc0000
Jul 6 15:37:40 my-machine kernel: [369128.669453] audit: type=1326
audit(1594067860.484:14537): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=54 compat=0 ip=0x46fdba
code=0x7ffc0000
Jul 6 15:37:40 my-machine kernel: [369128.669455] audit: type=1326
audit(1594067860.484:14538): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=202 compat=0 | 8,362 |
ip=0x455e53
code=0x7ffc0000
Jul 6 15:37:40 my-machine kernel: [369128.669456] audit: type=1326
audit(1594067860.484:14539): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=288 compat=0 ip=0x46fdba
code=0x7ffc0000
Jul 6 15:37:40 my-machine kernel: [369128.669517] audit: type=1326
audit(1594067860.484:14540): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=0 compat=0 ip=0x46fd44
code=0x7ffc0000
Jul 6 15:37:40 my-machine kernel: [369128.669519] audit: type=1326
audit(1594067860.484:14541): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b | 8,363 |
code=0x7ffc0000
Jul 6 15:38:40 my-machine kernel: [369188.671648] audit: type=1326
audit(1594067920.488:14559): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=270 compat=0 ip=0x4559b1
code=0x7ffc0000
Jul 6 15:38:40 my-machine kernel: [369188.671726] audit: type=1326
audit(1594067920.488:14560): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=29064
comm="http-echo" exe="/http-echo" sig=0 arch=c000003e syscall=202 compat=0 ip=0x455e53
code=0x7ffc0000
You can begin to understand the syscalls required by the http-echo process by looking at the
syscall= entry on each line. While these are unlikely to encompass all syscalls it uses, it can
serve as a basis for a seccomp profile for this container.
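As a small convenience (not part of the tutorial itself), you can pull the distinct syscall numbers out of those audit lines; translating the numbers into names is left to a tool such as ausyscall, if you have one installed:
# List the unique syscall numbers recorded for http-echo so far
grep 'comm="http-echo"' /var/log/syslog | grep -o 'syscall=[0-9]*' | sort -u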
Delete the Service and the Pod before moving to the next section:
kubectl delete service audit-pod --wait
kubectl delete pod audit-pod --wait --now
Create a Pod with a seccomp profile that causes violation
For demonstration, apply a profile to the Pod that does not allow for any syscalls.
The manifest for this demonstration is:
pods/security/seccomp/ga/violation-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: violation-pod
  labels:
    app: violation-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/violation.json
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=just made some syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
Attempt to create the Pod in the cluster:
kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/violation-pod.yaml
The Pod creates, but there is an issue. If you check the status of the Pod, you should see that it
failed to start:
kubectl get pod violation-pod
NAME READY STATUS RESTARTS AGE
violation-pod 0/1 CrashLoopBackOff 1 6s
As seen in the previous example, the http-echo process requires quite a few syscalls. Here
seccomp has been instructed to error on any syscall by setting "defaultAction":
"SCMP_ACT_ERRNO" . This is extremely secure, but removes the ability to do anything
meaningful. What you really want is to give workloads only the privileges they need.
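If you want to see that failure from the Kubernetes side, the Pod's events and last container state record it; for example:
kubectl describe pod violation-pod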
Delete the Pod before moving to the next section:
kubectl delete pod violation-pod --wait --now
Create a Pod with a seccomp profile that only allows
necessary syscalls
If you take a look at the fine-grained.json profile, you will notice some of the syscalls seen in
syslog of the first example where the profile set "defaultAction": "SCMP_ACT_LOG" . Now the
profile is setting "defaultAction": "SCMP_ACT_ERRNO" , but explicitly allowing a set of syscalls
in the "action": "SCMP_ACT_ALLOW" block. Ideally, the co | 8,366 |
ntainer will run successfully and
you will see no messages sent to syslog .
The manifest for this example is:
pods/security/seccomp/ga/fine-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: fine-pod
  labels:
    app: fine-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/fine-grained.json
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=just made some syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
Create the Pod in your cluster:
kubectl apply -f https://k8s.io/examples/pods/security/seccomp/ga/fine-pod.yaml
kubectl get pod fine-pod
The Pod should be showing as having started successfully:
NAME READY STATUS RESTARTS AGE
fine-pod 1/1 Running 0 30s
Open up a new terminal window and use tail to monitor for log entries that mention calls from
http-echo :
# The log path on your computer might be different from "/var/log/syslog"
tail -f /var/log/syslog | grep 'http-echo'
Next, expose the Pod with a NodePort Service:
kubectl expose pod fine-pod --type NodePort --port 5678
Check what port the Service has been assigned on the node:
kubectl get service fine-pod
The output is similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fine-pod NodePort 10.111.36.142 <none> 5678:32373/TCP 72s
Use curl to access that endpoint from inside the kind control plane container:
# Change 6a96207fed4b to the control plane container ID and 32373 to the port number you saw from "docker ps"
docker exec -it 6a96207fed4b curl localhost:32373
just made some syscalls!
You should see no output in the syslog. This is because the profile allowed all necessary syscalls
and specified that an error should occur if one outside of the list is invoked. This is an ideal
situation from a security perspective, but required some effort in analyzing the program. It
would be nice if there was a simple way to get closer to this security without requiring as much
effort.
Delete the Service and the Pod before moving to the next section:
kubectl delete service fine-pod --wait
kubectl delete pod fine-pod --wait --now
Enable the use of RuntimeDefault as the default seccomp
profile for all workloads
FEATURE STATE: Kubernetes v1.27 [stable]
To use seccomp profile defaulting, you must run the kubelet with the --seccomp-default
command line flag enabled for each node where you want to use it.
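If your nodes are managed through a kubelet configuration file rather than command-line flags, the equivalent setting can be expressed there; a minimal fragment (merge it into whatever configuration file your nodes already use) might look like:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
seccompDefault: true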
If enabled, the kubelet will use the RuntimeDefault seccomp profile by default, which is defined
by the container runtime, instead of using the Unconfined (seccomp disabled) mode. The default
profiles aim to provide a strong set of security defaults while preserving the functionality of the
workload. It is possible that the default profiles differ between container runtimes and their
release versions, for example when comparing those from CRI-O and containerd.
Note: Enabling the feature will neither change the Kubernetes securityContext.seccompProfile
API field nor add the deprecated annotations of the workload. This provides users the
possibility to rollback anytime without actually changing the workload configuration. Tools like
crictl inspect can be used to verify which seccomp profile is being used by a container.
Some workloads may require a lower amount of syscall restrictions than others. This means
that they can fail during runtime even with the RuntimeDefault profile. To mitigate such a
failure, you can:
Run the workload explicitly as Unconfined (a minimal sketch follows this list).
Disable the SeccompDefault feature for the nodes. Also making sure that workloads get
scheduled on nodes where the feature is disabled.
Create a custom seccomp profile for the workload.
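For that first option, the explicit opt-out is just a seccompProfile of type Unconfined in the workload's security context; a minimal sketch with a hypothetical Pod name:
apiVersion: v1
kind: Pod
metadata:
  name: unconfined-pod             # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: Unconfined             # explicitly opt this Pod out of seccomp defaulting
  containers:
  - name: test-container
    image: hashicorp/http-echo:1.0
    args:
    - "-text=running unconfined"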
If you were introducing this feature into a production-like cluster, the Kubernetes project
recommends that you enable this feature gate on a subset of your nodes and then test workload
execution before rolling the change out cluster-wide.
You can find more detailed information about a possible upgrade and downgrade strategy in the
related Kubernetes Enhancement Proposal (KEP): Enable seccomp by default .
Kubernetes 1.29 lets you configure the seccomp profile that applies when the spec for a Pod
doesn't define a specific seccomp profile. However, you still need to enable this defaulting for
each node where you would like to use it.
If you are running a Kubernetes 1.29 cluster and want to enable the feature, either run the
kubelet with the --seccomp-default command line flag, or enable it through the kubelet
configuration file . To enable the feature gate in kind, ensure that kind provides the minimum
required Kubernetes version and enables the SeccompDefault feature in the kind configuration :
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        seccomp-default: "true"
- role: worker
  image: kindest/node:v1.28.0@sha256:9f3ff58f19dcf1a0611d11e8ac989fdb30a28f40f236f59f0bea31fb956ccf5c
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        seccomp-default: "true"
If the cluster is ready, then running a pod:
kubectl run --rm -it --restart=Never --image=alpine alpine -- sh
Should now have the default seccomp profile attached. This can be verified by using docker
exec to run crictl inspect for the container on the kind worker:
docker exec -it kind-worker bash -c \
'crictl inspect $(crictl ps --name=alpine -q) | jq .info.runtimeSpec.linux.seccomp'
{
"defaultAction" : "SCMP_ACT_ERRNO" ,
"architectures" : ["SCMP_ARCH_X86_64" , "SCMP_ARCH_X86" , "SCMP_ARCH_X32" ],
"syscalls" : [
{
"names" : ["..."]
}
]
}
What's next
You can learn more about Linux seccomp:
A seccomp Overview
Seccomp Security Profiles for Docker
Stateless Applications
Exposing an External IP Address to Access an Application in a Cluster
Example: Deploying PHP Guestbook application with Redis
Exposing an External IP Address to Access
an Application in a Cluster
This page shows how to create a Kubernetes Service object that exposes an external IP address.
Before you begin
Install kubectl .
Use a cloud provider like Google Kubernetes Engine or Amazon Web Services to create a
Kubernetes cluster. This tutorial creates an external load balancer , which requires a cloud
provider.
Configure kubectl to communicate with your Kubernetes API server. For instructions, see
the documentation for your cloud provider.
Objectives
Run five instances of a Hello World application.
Create a Service object that exposes an external IP address.
Use the Service object to access the running application.
Creating a service for an application running in five pods
Run a Hello World application in your cluster:
service/load-balancer-example.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: load-balancer-example
  name: hello-world
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: load-balancer-example
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: hello-world
        ports:
        - containerPort: 8080
kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
The preceding command creates a Deployment and an associated ReplicaSet . The
ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.3.245.137 104.198.205.71 8080/TCP 54s
Note: A Service of type=LoadBalancer is backed by an external cloud provider's load balancer, which is not covered in this example; refer to this page for details.
Note: If the external IP address is shown as <pending>, wait for a minute and enter the
same command again.
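Rather than re-running the command by hand, you can watch the Service until the external IP is assigned:
kubectl get service my-service --watch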
Display detailed information about the Service:
kubectl describe services my-service
The output is similar to:
Name: my-service
Namespace: default
Labels: app.kubernetes.io/name=load-balancer-example
Annotations: <none>
Selector: app.kubernetes.io/name=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
Port: <unset> 8080/TCP
NodePort: <unset> 32377/TCP
Endpoints:                10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity: None
Events: <none>
Make a note of the external IP address ( LoadBalancer Ingress ) exposed by your service. In
this example, the external IP address is 104.198.205.71. Also note the value of Port and
NodePort . In this example, the Port is 8080 and the NodePort is 32377.
In the preceding output, you can see that the service has several endpoints:
10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal addresses of the pods
that are running the Hello World application. To verify these are pod addresses, enter this
command:
kubectl get pods --output=wide
The output is similar to:
NAME ... IP NODE
hello-world-2895499144-1jaz9 ... 10.0.1.6 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh ... 10.0.1.8 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1 ... 10.0.0.6 gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13 ... 10.0.1.7 gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf ... 10.0.2.5 gke-cluster-1-default-pool-e0b8d269-cpuc
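As an additional cross-check, you can list the Endpoints object for the Service; it shares the Service's name and carries the same Pod addresses:
kubectl get endpoints my-service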
Use the external IP address (LoadBalancer Ingress) to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address ( LoadBalancer Ingress ) of your Service,
and <port> is the value of Port in your Service description. If you are using minikube,
typing minikube service my-service will automatically open the Hello World application
in a browser.
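For example, using the sample values shown earlier (your address will differ):
curl http://104.198.205.71:8080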
The response to a successful request is a hello message:
Hello Kubernetes!
Cleaning up
To delete the Service, enter this command:
kubectl delete services my-service
To delete the Deployment, the ReplicaSet, and the Pods that are running the Hello World
application, enter this command:
kubectl delete deployment hello-world
What's next
Learn more about connecting applications with services .
Example: Deploying PHP Guestbook
application with Redis
This tutorial shows you how to build and deploy a simple (not production ready), multi-tier web
application using Kubernetes and Docker. This example consists of the following components:
A single-instance Redis to store guestbook entries
Multiple web frontend instances
Objectives
Start up a Redis leader.
Start up two Redis followers.
Start up the guestbook frontend.
Expose and view the Frontend Service.
Clean up.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured
to communicate with your cluster. It is recommended to run this tutorial on a cluster with at
least two nodes that are not acting as control plane hosts. If you do not already have a cluster,
you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Killercoda
Play with Kubernetes
Your Kubernetes server must be at or later than version v1.14. To check the version, enter
kubectl version .
Start up the Redis Database
The guestbook application uses Redis to store its data.
Creating the Redis Deployment
The manifest file, included below, specifies a Deployment controller that runs a single replica
Redis Pod.
application/guestbook/redis-leader-deployment.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : apps/v1
kind: Deployment
metadata :
name : redis-leader
labels :
app: redis
role: leader
tier: backend
spec:
replicas : 1
selector :
matchLabels :
app: redis
template :
metadata :
labels :
app: redis
role: leader
tier: backend
spec:
containers :
- name : leader
image : "docker.io/redis:6.0.5"
resources :
requests :
cpu: 100m
memory : 100Mi
ports :
- containerPort : 6379
Launch a terminal window in the directory where you downloaded the manifest files.
Apply the Redis Deployment from the redis-leader-deployment.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
Query the list of Pods to verify that the Redis Pod is running:
kubectl get pods
The response should be similar to this:
NAME READY STATUS RESTARTS AGE
redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s
Run the following command to view the logs from the Redis leader Pod:
kubectl logs -f deployment/redis-leader
Creating the Redis leader Service
The guestbook application needs to communicate with the Redis leader to write its data. You need to
apply a Service to proxy the traffic to the Redis Pod. A Service defines a policy to access the
Pods.
application/guestbook/redis-leader-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : v1
kind: Service
metadata :
name : redis-leader
labels :
app: redis
role: leader
tier: backend
spec:
ports :
- port: 6379
targetPort : 6379
selector :
app: redis
role: leader
tier: backend
Apply the Redis Service from the following redis-leader-service.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
Query the list of Services to verify that the Redis Service is running:
kubectl get service
The response should be similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 16s
Note: This manifest file creates a Service named redis-leader with a set of labels that match the
labels previously defined, so the Service routes network traffic to the Redis Pod.
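As an optional sanity check (not part of the original tutorial flow), you can ping Redis through the Service's DNS name from a throwaway Pod; the redis:6.0.5 image matches the one used by the leader Deployment, and a healthy leader replies with PONG:
kubectl run redis-ping --rm -it --restart=Never --image=redis:6.0.5 -- redis-cli -h redis-leader ping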
Set up Redis followers
Although the Redis leader is a single Pod, you can make it highly available and meet traffic
demands by adding a few Redis followers, or replicas.
application/guestbook/redis-follower-deployment.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : apps/v1
kind: Deployment
metadata :
name : redis-follower
labels :
app: redis
role: follower
tier: backend
spec:
replicas : 2
selector :
matchLabels :
app: redis
template :
metadata :
labels :
app: redis
role: follower
tier: backend
spec:
containers :
- name : follower
image : us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2
resources :
requests :
cpu: 100m
memory : 100Mi
ports :
- containerPort : 6379
Apply the Redis Deployment from the following redis-follower-deployment.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
Verify that the two Redis follower replicas are running by querying the list of Pods:
kubectl get pods
The response should be similar to this:
NAME READY STATUS RESTARTS AGE
redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s
redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s
redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m
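Optionally, inspect the logs of one of the followers to confirm that it has connected to the leader; the exact wording depends on the Redis version, but you should see replication messages such as a completed sync with the master:
kubectl logs deployment/redis-follower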
Creating the Redis follower service
The guestbook application needs to communicate with the Redis followers to read data. To
make the Redis followers discoverable, you must set up another Service .
application/guestbook/redis-follower-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : v1
kind: Service
metadata :
name : redis-follower
labels :
app: redis
role: follower
tier: backend
spec:
ports :
# the port that this service should serve on
- port: 6379
selector :
app: redis
role: follower
tier: backend
Apply the Redis Service from the following redis-follower-service.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
Query the list of Services to verify that the Redis Service is running:
kubectl get service
The response should be similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h
redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s
redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s
Note: This manifest file creates a Service named redis-follower with a set of labels that match
the labels previously defined, so the Service routes network traffic to the Redis follower Pods.
Set up and Expose the Guestbook Frontend
Now that you have the Redis storage of your guestbook up and running, start the guestbook
web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.
The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis
follower or leader Services, depending on whether the request is a read or a write. The frontend
exposes a JSON interface, and serves a jQuery-Ajax-based UX.
Creating the Guestbook Frontend Deployment
application/guestbook/frontend-deployment.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : apps/v1
kind: Deployment
metadata :
name : frontend
spec:
replicas : 3
selector :
matchLabels :
app: guestbook
tier: frontend
template :
metadata :
labels :
app: guestbook
tier: frontend
spec:
containers :
- name : php-redis
image : us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5
env:
- name : GET_HOSTS_FROM
value : "dns"
resources :
requests :
cpu: 100m
memory : 100Mi
ports :
- containerPort : 80
Apply the frontend Deployment from the frontend-deployment.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
Query the list of Pods to verify that the three frontend replicas are running:
kubectl get pods -l app=guestbook -l tier=frontend
The response should be similar to this:
NAME READY STATUS RESTARTS AGE
frontend-85595f5bf9-5tqhb 1/1 Running 0 47s
frontend-85595f5bf9-qbzwm 1/1 Running 0 47s
frontend-85595f5bf9-zchwc 1/1 Running 0 47s
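Because the frontend is configured with GET_HOSTS_FROM=dns, it looks up the Redis Services by their DNS names. As an optional sketch (assuming the frontend image includes getent, as most Debian-based images do), you can confirm that those names resolve from inside a frontend Pod:
kubectl exec deploy/frontend -- getent hosts redis-leader redis-follower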
Creating the Frontend Service
The Redis Services you applied are only accessible within the Kubernetes cluster because the
default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods
the Service is pointing to. This IP address is accessible only within the cluster.
If you want guests to be able to access your guestbook, you must configure the frontend Service
to be externally visible, so a client can request the Service from outside the Kubernetes cluster.
However, a Kubernetes user can use kubectl port-forward to access the Service even though it
uses a ClusterIP.
Note: Some cloud providers, like Google Compute Engine or Google Kubernetes Engine,
support external load balancers. If your cloud provider supports load balancers and you want to
use one, uncomment type: LoadBalancer.
application/guestbook/frontend-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion : v1
kind: Service
metadata :
name : frontend
labels :
app: guestbook
tier: frontend
spec:
# if your cluster supports it, uncomment the following to automatically create
# an external load-balanced IP for the frontend service.
# type: LoadBalancer
ports :
# the port that this service should serve on
- port: 80
selector :
app: guestbook
tier: frontend
Apply the frontend Service from the frontend-service.yaml file:
kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
Query the list of Services to verify that the frontend Service is running:
kubectl get services
The response should be similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend         ClusterIP   10.97.28.230    <none>        80/TCP     19s
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP    3d19h
redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s
redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m
Viewing the Frontend Service via kubectl port-forward
Run the following command to forward port 8080 on your local machine to port 80 on the
service.
kubectl port-forward svc/frontend 8080:80
The response should be similar to this:
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Load the page http://localhost:8080 in your browser to view your guestbook.
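While the port-forward is running, you can also fetch the page from the command line:
curl http://localhost:8080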
Viewing the Frontend Service via LoadBalancer
If you deployed the frontend-service.yaml manifest with type: LoadBalancer, you need to find
the IP address to view your guestbook.
Run the following command to get the IP address for the frontend Service.
kubectl get service frontend
The response should be similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend   LoadBalancer   10.51.242.136   109.197.92.229   80:32372/TCP   1m
Copy the external IP address, and load the page in your browser to view your guestbook.
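For example, with the sample output above you could also fetch the page from the command line (your external IP will differ):
curl http://109.197.92.229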
Note: Try adding some guestbook entries by typing in a message, and clicking Submit. The
message you typed appears in the frontend. This message indicates that data is successfully
added to Redis through the Services you created earlier.
Scale the Web Frontend
You can scale the frontend up or down as needed because your web servers are managed by a
Deployment and exposed through a Service.
Run the following command to scale up the number of frontend Pods:
kubectl scale deployment frontend --replicas=5
Query the list of Pods to verify the number of frontend Pods running:
kubectl get pods
The response should look similar to this:
NAME READY STATUS RESTARTS AGE
frontend-85595f5bf9-5df5m        1/1     Running   0          83s
frontend-85595f5bf9-7zmg5 1/1 Running 0 83s
frontend-85595f5bf9-cpskg 1/1 Running 0 15m
frontend-85595f5bf9-l2l54 1/1 Running 0 14m
frontend-85595f5bf9-l9c8z 1/1 Running 0 14m
redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m
redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m
redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m
Run the following command to scale down the number of frontend Pods:
kubectl scale deployment frontend --replicas=2
Query the list of Pods to verify the number of frontend Pods running:
kubectl get pods
The response should look similar to this:
NAME READY STATUS RESTARTS AGE
frontend-85595f5bf9-cpskg 1/1 Running 0 16m
frontend-85595f5bf9-l9c8z 1/1 Running 0 15m
redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m
redis-follower-dddfbdcc9-qrt5k   1/1     Running   0          98m
redis-leader-fb76b4755-xjr2n 1/1 Running 0 109m
Cleaning up
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete
multiple resources with one command.
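If you want to see which resources a label selector matches before deleting them, you can list them first:
kubectl get deployment,service -l app=redis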
Run the following commands to delete all Pods, Deployments, and Services.
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment frontend
kubectl delete service frontend
The response should look similar to this:
deployment.apps "redis-follower" deleted
deployment.apps "redis-leader" deleted
deployment.apps "frontend" deleted
service "frontend" deleted
Query the list of Pods to verify that no Pods are running:
kubectl get pods
The response should look similar to this:
No resources found in default namespace.