drain
Drain node "foo", even if there are pods not managed by a replication controller,
replica set, job, daemon set or stateful set on it
kubectl drain foo --force
As above, but abort if there are pods not managed by a replication controller,
replica set, job, daemon set or stateful set, and use a grace period of 15 minutes
kubectl drain foo --grace-period=900
Drain node in preparation for maintenance.
The given node will be marked unschedulable to prevent new pods from arriving. 'drain' evicts
the pods if the API server supports https://kubernetes.io/docs/concepts/workloads/pods/
disruptions/ . Otherwise, it will use normal DELETE to delete the pods. The 'drain' evicts or
deletes all pods except mirror pods (which cannot be deleted through the API server). If there
are daemon set-managed pods, drain will not proceed without --ignore-daemonsets, and
regardless it will not delete any daemon set-managed pods, because those pods would be
immediately replaced by the daemon set controller, which ignores unschedulable markings. If
there are any pods that are neither mirror pods nor managed by a replication controller, replica
set, daemon set, stateful set, or job, then drain will not delete any pods unless you use --force.
--force will also allow deletion to proceed if the managing resource of one or more pods is missing.
'drain' waits for graceful termination. You should not operate on the machine until the
command completes.
When you are ready to put the node back into service, use kubectl uncordon, which will make
the node schedulable again.
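For instance, a common maintenance sequence (a sketch; the node name foo and the grace period are placeholders) is:
kubectl drain foo --ignore-daemonsets --delete-emptydir-data --grace-period=60
# ... perform maintenance on the node ...
kubectl uncordon foo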
https://kubernetes.io/images/docs/kubectl_drain.svg
Usage
$ kubectl drain NODE
Flags
Name                           Shorthand  Default  Usage
chunk-size                                500      Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future.
delete-emptydir-data                      false    Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).
delete-local-data                         false    Continue even if there are pods using emptyDir (local data that will be deleted when the node is drained).
disable-eviction                          false    Force drain to use delete, even if eviction is supported. This will bypass checking PodDisruptionBudgets, use with caution.
dry-run                                   none     Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
force                                     false    Continue even if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet.
grace-period                              -1       Period of time in seconds given to each pod to terminate gracefully. If negative, the default value specified in the pod will be used.
ignore-daemonsets                         false    Ignore DaemonSet-managed pods.
ignore-errors                             false    Ignore errors occurred between drain nodes in group.
pod-selector                                       Label selector to filter pods on the node
selector                       l                   Selector (label query) to filter on
skip-wait-for-delete-timeout              0        If pod DeletionTimestamp older than N seconds, skip waiting for the pod. Seconds must be greater than 0 to skip.
timeout                                   0s       The length of time to wait before giving up, zero means infinite
taint
Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified
kubectl taint nodes foo dedicated=special-user:NoSchedule
Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
kubectl taint nodes foo dedicated:NoSchedule-
Remove from node 'foo' all the taints with key 'dedicated'
kubectl taint nodes foo dedicated-
Add a taint with key 'dedicated' on nodes having label myLabel=X
kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule
Add to node 'foo' a taint with key 'bar' and no value
kubectl taint nodes foo bar:NoSchedule
Update the taints on one or more nodes.
A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
The key must begin with a letter or number, and may contain letters, numbers, hyphens,
dots, and underscores, up to 253 characters.
Optionally, the key can begin with a DNS subdomain prefix and a single '/', like
example.com/my-app.
The value is optional. If given, it must begin with a letter or number, and may contain
letters, numbers, hyphens, dots, and underscores, up to 63 characters.
The effect must be NoSchedule, PreferNoSchedule or NoExecute.
Currently, taints can only be applied to nodes.
Usage
$ kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ...
KEY_N=VAL_N:TAINT_EFFECT_N
Flags
Name                          Shorthand  Default        Usage
all                                      false          Select all nodes in the cluster
allow-missing-template-keys              true           If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
dry-run                                  none           Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
field-manager                            kubectl-taint  Name of the manager used to track field ownership.
output                        o                         Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
overwrite                                false          If true, allow taints to be overwritten, otherwise reject taint updates that overwrite existing taints.
selector                      l                         Selector (label query) to filter on, supports '=', '==', and '!='. (e.g. -l key1=value1,key2=value2)
show-managed-fields                      false          If true, keep the managedFields when printing objects in JSON or YAML format.
template                                                Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
validate                                 true           If true, use a schema to validate the input before sending it
uncordon
Mark node "foo" as schedulable
kubectl uncordon foo
Mark node as schedulable.
Usage
$ kubectl uncordon NODE
Flags
Name      Shorthand  Default  Usage
dry-run              none     Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
selector  l                   Selector (label query) to filter on
KUBECTL SETTINGS AND USAGE
alpha
These commands correspond to alpha features that are not enabled in Kubernetes clusters by
default.
Usage
$ kubectl alpha
api-resources
Print the supported API resources
kubectl api-resources
Print the supported API resources with more information
kubectl api-resources -o wide
Print the supported API resources sorted by a column
kubectl api-resources --sort-by=name
Print the supported namespaced resources
kubectl api-resources --namespaced=true
Print the supported non-namespaced resources
kubectl api-resources --namespaced=false
Print the supported API resources with a specific APIGroup
kubectl api-resources --api-group=extensions
Print the supported API resources on the server.
Usage
$ kubectl api-resources
Flags
Name        Shorthand  Default  Usage
api-group                       Limit to resources in the specified API group.
cached                 false    Use the cached list of resources if available.
namespaced             true     If false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.
no-headers             false    When using the default or custom-column output format, don't print headers (default print headers).
output      o                   Output format. One of: wide|name.
sort-by                         If non-empty, sort list of resources using specified field. The field can be either 'name' or 'kind'.
verbs                  []       Limit to resources that support the specified verbs.
completion
Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS
brew install bash-completion
or, if running Bash 4.1+
brew install bash-completion@2
If kubectl is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need to add the completion to your completion directory
kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl
Installing bash completion on Linux ## If bash-completion is not installed on Linux,
install the 'bash-completion' package ## via your distribution's package manager. ##
Load the kubectl completion code for bash into the current shell
source <(kubectl completion bash)
Write bash completion code to a file and source it from .bash_profile
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
Load the kubectl completion code for zsh[1] into the current shell
source <(kubectl completion zsh)
Set the kubectl completion code for zsh[1] to autoload on startup
kubectl completion zsh > "${fpath[1]}/_kubectl"
Output shell completion code for the specified shell (bash or zsh). The shell code must be
evaluated to provide interactive completion of kubectl commands. This can be done by sourcing
it from the .bash_profile.
Detailed instructions on how to do this are available here:
for macOS: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#enable-shell-
autocompletion
for linux: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-
autocompletion
for windows: https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#enable-shell-
autocompletion
Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2.
Usage
$ kubectl completion SHELL
config
Modify kubeconfig files using subcommands like "kubectl config set current-context my-
context"
The loading order follows these rules:
1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
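As an illustration of this merging behaviour (the file name dev-config is a placeholder), you can combine two kubeconfig files for a single shell session:
# ':' is the path delimiter on Linux and macOS (';' on Windows)
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/dev-config
kubectl config view --minify # show the effective configuration for the current context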
Usage
$ kubectl config SUBCOMMAND
current-context
Display the current-context
kubectl config current-context
Display the current-context.
Usage
$ kubectl config current-context
delete-cluster
Delete the minikube cluster
kubectl config delete-cluster minikube
Delete the specified cluster from the kubeconfig.
Usage
$ kubectl config delete-cluster NAME
delete-context
Delete the context for the minikube cluster
kubectl config delete-context minikube
Delete the specified context from the kubeconfig.
Usage
$ kubectl config delete-context NAME
delete-user
Delete the minikube user
kubectl config delete-user minikube
Delete the specified user from the kubeconfig.
Usage
$ kubectl config delete-user NAME
get-clusters
List the clusters that kubectl knows about
kubectl config get-clusters
Display clusters defined in the kubeconfig.
Usage
$ kubectl config get-clusters
get-contexts
List all the contexts in your kubeconfig file
kubectl config get-contexts
Describe one context in your kubeconfig file
kubectl config get-contexts my-context
Display one or many contexts from the kubeconfig file.
Usage
$ kubectl config get-contexts [(-o|--output=)name)]
Flags
Name        Shorthand  Default  Usage
no-headers             false    When using the default or custom-column output format, don't print headers (default print headers).
output      o                   Output format. One of: name
get-users
List the users that kubectl knows about
kubectl config get-users
Display users defined in the kubeconfig.
Usage
$ kubectl config get-users
rename-context
Rename the context 'old-name' to 'new-name' in your kubeconfig file
kubectl config rename-context old-name new-name
Renames a context from the kubeconfig file.
CONTEXT_NAME is the context name that you want to change.
NEW_NAME is the new name you want to set.
Note: If the context being renamed is the 'current-context', this field will also be updated.
Usage
$ kubectl config rename-context CONTEXT_NAME NEW_NAME
set
Set the server field on the my-cluster cluster to https://1.2.3.4
kubectl config set clusters.my-cluster.server https://1.2.3.4
Set the certificate-authority-data field on the my-cluster cluster
kubectl config set clusters.my-cluster.certificate-authority-data $(echo "cert_data_here" | base64
-i -)
Set the cluster field in the my-context context to my-cluster
kubectl config set contexts.my-context.cluster my-cluster
Set the client-key-data field in the cluster-admin user using --set-raw-bytes option
kubectl config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true
Set an individual value in a kubeconfig file.
PROPERTY_NAME is a dot delimited name where each token represents either an attribute
name or a map key. Map keys may not contain dots.
PROPERTY_VALUE is the new value you want to set. Binary fields such as 'certificate-
authority-data' expect a base64 encoded string unless the --set-raw-bytes flag is used.
Specifying an attribute name that already exists will merge new fields on top of existing values.
Usage
$ kubectl config set PROPERTY_NAME PROPERTY_VALUE
Flags
Name           Shorthand  Default  Usage
set-raw-bytes             false    When writing a []byte PROPERTY_VALUE, write the given string directly without base64 decoding.
set-cluster
Set only the server field on the e2e cluster entry without touching other values
kubectl config set-cluster e2e --server=https://1.2.3.4
Embed certificate authority data for the e2e cluster entry
kubectl config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt
Disable cert checking for the dev cluster entry
kubectl config set-cluster e2e --insecure-skip-tls-verify=true
Set custom TLS server name to use for validation for the e2e cluster entry
kubectl config set-cluster e2e --tls-server-name=my-cluster-name
Set a cluster entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values for those
fields.
Usage
$ kubectl config set-cluster NAME [--server=server] [--certificate-authority=path/to/certificate/authority] [--insecure-skip-tls-verify=true] [--tls-server-name=example.com]
Flags
Name Shorthand Default Usage
embed-certs false embed-certs for the cluster entry in kubeconfig
set-context
Set the user field on the gce context entry without touching other values
kubectl config set-context gce --user=cluster-admin
Set a context entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values for those
fields.
Usage
$ kubectl config set-context [NAME | --current] [--cluster=cluster_nickname] [--
user=user_nickname] [--namespace=namespace]
Flags
Name Shorthand Default Usage
current false Modify the current context
set-credentials
Set only the "client-key" field on the "cluster-admin" # entry, without touching
other values
kubectl config set-credentials cluster-admin --client-key=~/.kube/admin.key
Set basic auth for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
Embed client certificate data in the "cluster-admin" entry
kubectl config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true
Enable the Google Compute Platform auth provider for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --auth-provider=gcp
Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional args
kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar
Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret-
Enable new exec auth plugin for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1
Define new exec auth plugin args for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2
Create or update exec auth plugin environment variables for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2
Remove exec auth plugin environment variables for the "cluster-admin" entry
kubectl config set-credentials cluster-admin --exec-env=var-to-remove-
Set a user entry in kubeconfig.
Specifying a name that already exists will merge new fields on top of existing values.
Client-certificate flags: --client-certificate=certfile --client-key=keyfile
Bearer token flags: --token=bearer_token
Basic auth flags: --username=basic_user --password=basic_password
Bearer token and basic auth are mutually exclusive.
Usage
$ kubectl config set-credentials NAME [--client-certificate=path/to/certfile] [--client-key=path/to/keyfile] [--token=bearer_token] [--username=basic_user] [--password=basic_password] [--auth-provider=provider_name] [--auth-provider-arg=key=value] [--exec-command=exec_command] [--exec-api-version=exec_api_version] [--exec-arg=arg] [--exec-env=key=value]
Flags
Name               Shorthand  Default  Usage
auth-provider                          Auth provider for the user entry in kubeconfig
auth-provider-arg             []       'key=value' arguments for the auth provider
embed-certs                   false    Embed client cert/key for the user entry in kubeconfig
exec-api-version                       API version of the exec credential plugin for the user entry in kubeconfig
exec-arg                      []       New arguments for the exec credential plugin command for the user entry in kubeconfig
exec-command                           Command for the exec credential plugin for the user entry in kubeconfig
exec-env                      []       'key=value' environment values for the exec credential plugin
unset
Unset the current-context
kubectl config unset current-context
Unset namespace in foo context
kubectl config unset contexts.foo.namespace
Unset an individual value in a kubeconfig file.
PROPERTY_NAME is a dot delimited name where each token represents either an attribute
name or a map key. Map keys may not contain dots.
Usage
$ kubectl config unset PROPERTY_NAME
use-context
Use the context for the minikube cluster
kubectl config use-context minikube
Set the current-context in a kubeconfig file.
Usage
$ kubectl config use-context CONTEXT_NAME
view
Show merged kubeconfig settings
kubectl config view
Show merged kubeconfig settings and raw certificate data
kubectl config view --raw
Get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
Display merged kubeconfig settings or a specified kubeconfig file.
You can use --output jsonpath={...} to extract specific values using a jsonpath expression.
Usage
$ kubectl config view
Flags
Name                         Shorthand  Default  Usage
allow-missing-template-keys             true     If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
flatten                                 false    Flatten the resulting kubeconfig file into self-contained output (useful for creating portable kubeconfig files)
merge                                   true     Merge the full hierarchy of kubeconfig files
minify                                  false    Remove all information not used by current-context from the output
output                       o          yaml     Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file.
raw                                     false    Display raw byte data
show-managed-fields                     false    If true, keep the managedFields when printing objects in JSON or YAML format.
template                                         Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
explain
Get the documentation of the resource and its fields
kubectl explain pods
Get the documentation of a specific field of a resource
kubectl explain pods.spec.containers
List the fields for supported resources.
This command describes the fields associated with each supported API resource. Fields are
identified via a simple JSONPath identifier:
<type>.<fieldName>[.<fieldName>]
Add the --recursive flag to display all of the fields at once without descriptions. Information
about each field is retrieved from the server in OpenAPI format.
Use "kubectl api-resources" for a complete list of supported resources.
Usage
$ kubectl explain RESOURCE
Flags
Name         Shorthand  Default  Usage
api-version                      Get different explanations for particular API version (API group/version)
recursive               false    Print the fields of fields (Currently only 1 level deep)
options
Print flags inherited by all commands
kubectl options
Print the list of flags inherited by all commands
Usage
$ kubectl options
plugin
Provides utilities for interacting with plugins.
Plugins provide extended functionality that is not part of the major command-line distribution.
Please refer to the documentation and examples for more information about how to write your
own plugins.
The easiest way to discover and install plugins is via the kubernetes sub-project krew. To install
krew, visit https://krew.sigs.k8s.io/docs/user-guide/setup/install/
Usage
$ kubectl plugin [flags]
list
List all available plugin files on a user's PATH.
Available plugin files are those that are: executable, located anywhere on the user's PATH, and named with the prefix "kubectl-".
Usage
$ kubectl plugin list
Flags
Name       Shorthand  Default  Usage
name-only             false    If true, display only the binary name of each plugin, rather than its full path
version
Print the client and server versions for the current context
kubectl version
Print the client and server version information for the current context.
Usage
$ kubectl version
Flags
Name Shorthand Default Usage
client false If true, shows client version only (no server required).
output o One of 'yaml' or 'json'.
short false If true, print just the version number.
This section lists the different ways to set up and run Kubernetes. When you install Kubernetes,
choose an installation type based on: ease of maintenance, security, control, available resources,
and expertise required to operate and manage a cluster.
You can download Kubernetes to deploy a Kubernetes cluster on a local machine, into the
cloud, or for your own datacenter.
Several Kubernetes components such as kube-apiserver or kube-proxy can also be deployed as
container images within the cluster.
It is recommended to run Kubernetes components as container images wherever that is
possible, and to have Kubernetes manage those components. Components that run containers -
notably, the kubelet - can't be included in this category.
If you don't want to manage a Kubernetes cluster yourself, you could pick a managed service,
including certified platforms . There are also other standardized and custom solutions across a
wide range of cloud and bare metal environments.
Learning environment
If you're learning Kubernetes, use the tools supported by the Kubernetes community, or tools in
the ecosystem to set up a Kubernetes cluster on a local machine. See Install tools .
Production environment
When evaluating a solution for a production environment , consider which aspects of operating
a Kubernetes cluster (or abstractions ) you want to manage yourself and which you prefer to
hand off to a provider.
For a cluster you're managing yourself, the officially supported tool for deploying Kubernetes is
kubeadm .
What's next
Download Kubernetes
Download and install tools including kubectl
Select a container runtime for your new cluster
Learn about best practices for cluster setup
Kubernetes is designed for its control plane to run on Linux. Within your cluster you can run
applications on Linux or other operating systems, including Windows.
Learn to set up clusters with Windows nodes
Install Tools
Set up Kubernetes tools on your computer.
kubectl
The Kubernetes command-line tool, kubectl , allows you to run commands against Kubernetes
clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and
view logs. For more information including a complete list of kubectl operations, see the kubectl
reference documentation .
kubectl is installable on a variety of Linux platforms, macOS and Windows. Find your preferred
operating system below.
Install kubectl on Linux
Install kubectl on macOS
Install kubectl on Windows
kind
kind lets you run Kubernetes on your local computer. This tool requires that you have either
Docker or Podman installed.
The kind Quick Start page shows you what you need to do to get up and running with kind.
View kind Quick Start Guide
minikube
Like kind, minikube is a tool that lets you run Kubernetes locally. minikube runs an all-in-one
or a multi-node local Kubernetes cluster on your personal computer (including Windows,
macOS and Linux PCs) so that you can try out Kubernetes, or for daily development work.
You can follow the official Get Started! guide if your focus is on getting the tool installed.
View minikube Get Started! Guide
Once you have minikube working, you can use it to run a sample application .
kubeadm
You can use the kubeadm tool to create and manage Kubernetes clusters. It performs the actions
necessary to get a minimum viable, secure cluster up and running in a user friendly way.
Installing kubeadm shows you how to install kubeadm. Once installed, you can use it to create a
cluster .
View kubeadm Install Guide
Production environment
Create a production-quality Kubernetes cluster
A production-quality Kubernetes cluster requires planning and preparation. If your Kubernetes
cluster is to run critical workloads, it must be configured to be resilient. This page explains
steps you can take to set up a production-ready cluster, or to promote an existing cluster for
production use. If you're already familiar with production setup and want the links, skip to
What's next .
Production considerations
Typically, a production Kubernetes cluster environment has more requirements than a personal
learning, development, or test environment. A production environment may require
secure access by many users, consistent availability, and the resources to adapt to changing
demands.
As you decide where you want your production Kubernetes environment to live (on premises
or in a cloud) and the amount of management you want to take on or hand to others, consider
how your requirements for a Kubernetes cluster are influenced by the following issues:
Availability: A single-machine Kubernetes learning environment has a single point of
failure. Creating a highly available cluster means considering:
Separating the control plane from the worker nodes.
Replicating the control plane components on multiple nodes.
Load balancing traffic to the cluster’s API server .
Having enough worker nodes available, or able to quickly become available, as
changing workloads warrant it.
Scale : If you expect your production Kubernetes environment to receive a stable amount
of demand, you might be able to set up for the capacity you need and be done. However,
if you expect demand to grow over time or change dramatically based on things like
season or special events, you need to plan how to scale to relieve increased pressure from
more requests to the control plane and worker nodes or scale down to reduce unused
resources.
Security and access management : You have full admin privileges on your own Kubernetes
learning cluster. But shared clusters with important workloads, and more than one or two
users, require a more refined approach to who and what can access cluster resources. You
can use role-based access control ( RBAC ) and other security mechanisms to make sure
that users and workloads can get access to the resources they need, while keeping
workloads, and the cluster itself, secure. You can set limits on the resources that users and
workloads can access by managing policies and container resources .
Before building a Kubernetes production environment on your own, consider handing off some
or all of this job to Turnkey Cloud Solutions providers or other Kubernetes Partners . Options
include:
Serverless : Just run workloads on third-party equipment without managing a cluster at all.
You will be charged for things like CPU usage, memory, and disk requests.
Managed control plane : Let the provider manage the scale and availability of the cluster's
control plane, as well as handle patches and upgrades.
Managed worker nodes: Configure pools of nodes to meet your needs, then the provider
makes sure those nodes are available and ready to implement upgrades when needed.
Integration : There are providers that integrate Kubernetes with other services you may
need, such as storage, container registries, authentication methods, and development
tools.
Whether you build a production Kubernetes cluster yourself or work with partners, review the
following sections to evaluate your needs as they relate to your cluster’s control plane , worker
nodes , user access , and workload resources .
Production cluster setup
In a production-quality Kubernetes cluster, the control plane manages the cluster from services
that can be spread across multiple computers in different ways. Each worker node, however,
represents a single entity that is configured to run Kubernetes pods.
Production control plane
The simplest Kubernetes cluster has the entire control plane and worker node services running
on the same machine. You can grow that environment by adding worker nodes, as reflected in
the diagram illustrated in Kubernetes Components. If the cluster is meant to be available for a
short period of time, or can be discarded if something goes seriously wrong, this might meet
your needs.
If you need a more permanent, highly available cluster, however, you should consider ways of
extending the control plane. By design, control plane services running on a single
machine are not highly available. If keeping the cluster up and running and ensuring that it can
be repaired if something goes wrong is important, consider these steps:
Choose deployment tools : You can deploy a control plane using tools such as kubeadm,
kops, and kubespray. See Installing Kubernetes with deployment tools to learn tips for
production-quality deployments using each of those deployment methods. Different
Container Runtimes are available to use with your deployments.
Manage certificates : Secure communications between control plane services are
implemented using certificates. Certificates are automatically generated during
deployment or you can generate them using your own certificate authority. See PKI
certificates and requirements for details.
Configure load balancer for apiserver : Configure a load balancer to distribute external API
requests to the apiserver service instances running on different nodes. See Create an
External Load Balancer for details.
Separate and backup etcd service : The etcd services can either run on the same machines
as other control plane services or run on separate machines, for extra security and
availability. Because etcd stores cluster configuration data, backing up the etcd database
should be done regularly to ensure that you can repair that database if needed. See the
etcd FAQ for details on configuring and using etcd. See Operating etcd clusters for
Kubernetes and Set up a High Availability etcd cluster with kubeadm for details.
Create multiple control plane systems : For high availability, the control plane should not be
limited to a single machine. If the control plane services are run by an init service (such
as systemd), each service should run on at least three machines. However, running
control plane services as pods in Kubernetes ensures that the replicated number of
services that you request will always be available. The scheduler should be fault tolerant,
but not highly available. Some deployment tools set up Raft consensus algorithm to do
leader election of Kubernetes services. If the primary goes away, another service elects
itself and takes over.
Span multiple zones : If keeping your cluster available at all times is critical, consider
creating a cluster that runs across multiple data centers, referred to as zones in cloud
environments. Groups of zones are referred to as regions. By spreading a cluster across
multiple zones in the same region, it can improve the chances that your cluster will
continue to function even if one zone becomes unavailable. See Running in multiple
zones for details.
Manage on-going features : If you plan to keep your cluster over time, there are tasks you
need to do to maintain its health and security. For example, if you installed with
kubeadm, there are instructions to help you with Certificate Management and Upgrading
kubeadm clusters . See Administer a Cluster for a longer list of Kubernetes administrative
tasks.
To learn about available options when you run control plane services, see kube-apiserver , kube-
controller-manager, and kube-scheduler component pages. For highly available control plane
examples, see Options for Highly Available topology , Creating Highly Available clusters with
kubeadm , and Operating etcd clusters for Kubernetes . See Backing up an etcd cluster for
information on making an etcd backup plan.
Production worker nodes
Production-quality workloads need to be resilient and anything they rely on needs to be
resilient (such as CoreDNS). Whether you manage your own control plane or have a cloud
provider do it for you, you still need to consider how you want to manage your worker nodes
(also referred to simply as nodes ).
Configure nodes : Nodes can be physical or virtual machines. If you want to create and
manage your own nodes, you can install a supported operating system, then add and run
the appropriate Node services . Consider:
The demands of your workloads when you set up nodes by having appropriate
memory, CPU, and disk speed and storage capacity available.
Whether generic computer systems will do or you have workloads that need GPU
processors, Windows nodes, or VM isolation.
Validate nodes : See Valid node setup for information on how to ensure that a node meets
the requirements to join a Kubernetes cluster.
Add nodes to the cluster : If you are managing your own cluster you can add nodes by
setting up your own machines and either adding them manually or having them register
themselves to the cluster’s apiserver. See the Nodes section for information on how to set
up Kubernetes to add nodes in these ways.
Scale nodes : Have a plan for expanding the capacity your cluster will eventually need. See
Considerations for large clusters to help determine how many nodes you need, based on
the number of pods and containers you need to run. If you are managing nodes yourself,
this can mean purchasing and installing your own physical equipment.
Autoscale nodes : Most cloud providers support Cluster Autoscaler to replace unhealthy
nodes or grow and shrink the number of nodes as demand requires. See the Frequently
Asked Questions for how the autoscaler works and Deployment for how it is
implemented by different cloud providers. For on-premises, there are some virtualization
platforms that can be scripted to spin up new nodes based on demand.
Set up node health checks : For important workloads, you want to make sure that the nodes
and pods running on those nodes are healthy. Using the Node Problem Detector daemon,
you can ensure your nodes are healthy.
Production user management
In production, you may be moving from a model where you or a small group of people are
accessing the cluster to where there may potentially be dozens or hundreds of people. In a
learning environment or platform prototype, you might have a single administrative account
for everything you do. In production, you will want more accounts with different levels of
access to different namespaces.
Taking on a production-quality cluster means deciding how you want to selectively allow
access by other users. In particular, you need to select strategies for validating the identities of
those who try to access your cluster (authentication) and deciding if they have permissions to
do what they are asking (authorization):
Authentication : The apiserver can authenticate users using client certificates, bearer
tokens, an authenticating proxy, or HTTP basic auth. You can choose which
authentication methods you want to use. Using plugins, the apiserver can leverage your
organization's existing authentication methods, such as LDAP or Kerberos. See
Authentication for a description of these different methods of authenticating Kubernetes
users.
Authorization : When you set out to authorize your regular users, you will probably
choose between RBAC and ABAC authorization. See Authorization Overview to review
different modes for authorizing user accounts (as well as service account access to your
cluster):
Role-based access control (RBAC): Lets you assign access to your cluster by allowing
specific sets of permissions to authenticated users. Permissions can be assigned for
a specific namespace (Role) or across the entire cluster (ClusterRole). Then using
RoleBindings and ClusterRoleBindings, those permissions can be attached to
particular users. (A minimal example follows this list.)
Attribute-based access control (ABAC): Lets you create policies based on resource
attributes in the cluster and will allow or deny access based on those attributes.
Each line of a policy file identifies versioning properties (apiVersion and kind) and
a map of spec properties to match the subject (user or group), resource property,
non-resource property (/version or /apis), and readonly. See Examples for details.
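For example, a minimal RBAC sketch (the user jane, the namespace dev, and the role names are hypothetical) that grants read-only access to pods in a single namespace:
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=dev
kubectl create rolebinding jane-reads-pods --role=pod-reader --user=jane --namespace=dev
kubectl auth can-i list pods --as=jane --namespace=dev # should answer "yes"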
As someone setting up authentication and authorization on your production Kubernetes cluster,
here are some things to consider:
Set the authorization mode: When the Kubernetes API server (kube-apiserver) starts, the
supported authorization modes must be set using the --authorization-mode flag. For
example, that flag in the kube-apiserver.yaml file (in /etc/kubernetes/manifests) could be
set to Node,RBAC. This would allow Node and RBAC authorization for authenticated
requests. (A quick check is sketched after this list.)
Create user certificates and role bindings (RBAC) : If you are using RBAC authorization,
users can create a CertificateSigningRequest (CSR) that can be signed by the cluster CA.
Then you can bind Roles and ClusterRoles to each user. See Certificate Signing Requests
for details.
Create policies that combine attributes (ABAC): If you are using ABAC authorization, you
can assign combinations of attributes to form policies to authorize selected users or
groups to access particular resources (such as a pod), namespace, or apiGroup. For more
information, see Examples .
Consider Admission Controllers : Additional forms of authorization for requests that can
come in through the API server include Webhook Token Authentication. Webhooks and
other special authorization types need to be enabled by adding Admission Controllers to
the API server.
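For a kubeadm-built control plane, one quick way to confirm the authorization mode mentioned above (an illustrative check, assuming the default manifest path) is:
# expect a line such as --authorization-mode=Node,RBAC
sudo grep -- '--authorization-mode' /etc/kubernetes/manifests/kube-apiserver.yaml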
Set limits on workload resources
Demands from production workloads can cause pressure both inside and outside of the
Kubernetes control plane. Consider these items when setting up for the needs of your cluster's
workloads:
Set namespace limits: Set per-namespace quotas on things like memory and CPU. See
Manage Memory, CPU, and API Resources for details. You can also set Hierarchical
Namespaces for inheriting limits. (A minimal ResourceQuota sketch appears after this list.)
Prepare for DNS demand : If you expect workloads to massively scale up, your DNS service
must be ready to scale up as well. See Autoscale the DNS service in a Cluster .
Create additional service accounts : User accounts determine what users can do on a
cluster, while a service account defines pod access within a particular namespace. By
default, a pod takes on the default service account from its namespace. See Managing
Service Accounts for information on creating a new service account. For example, you
might want to:
Add secrets that a pod could use to pull images from a particular container registry.
See Configure Service Accounts for Pods for an example.
Assign RBAC permissions to a service account. See ServiceAccount permissions for
details.
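As a small sketch of such a per-namespace limit (the namespace team-a and the quota values are placeholders), a ResourceQuota can be applied like this:
cat <<EOF | kubectl apply --namespace=team-a -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF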
What's next
Decide if you want to build your own production Kubernetes or obtain one from available
Turnkey Cloud Solutions or Kubernetes Partners .
If you choose to build your own cluster, plan how you want to handle certificates and set
up high availability for features such as etcd and the API server .
Choose from kubeadm , kops or Kubespray deployment methods.
Configure user management by determining your Authentication and Authorization
methods.
Prepare for application workloads by setting up resource limits , DNS autoscaling and
service accounts .
Container Runtimes
Note: Dockershim has been removed from the Kubernetes project as of release 1.24. Read the
Dockershim Removal FAQ for further details.
You need to install a container runtime into each node in the cluster so that Pods can run there.
This page outlines what is involved and describes related tasks for setting up nodes.
Kubernetes 1.29 requires that you use a runtime that conforms with the Container Runtime
Interface (CRI).
See CRI version support for more information.
This page provides an outline of how to use several common container runtimes with
Kubernetes.
containerd
CRI-O
Docker Engine
Mirantis Container Runtime
Note:
Kubernetes releases before v1.24 included a direct integration with Docker Engine, using a
component named dockershim . That special direct integration is no longer part of Kubernetes
(this removal was announced as part of the v1.20 release). You can read Check whether
Dockershim removal affects you to understand how this removal might affect you. To learn
about migrating from using dockershim, see Migrating from dockershim .
If you are running a version of Kubernetes other than v1.29, check the documentation for that
version.
Install and configure prerequisites
The following steps apply common settings for Kubernetes nodes on Linux.
You can skip a particular setting if you're certain you don't need it.
For more information, see Network Plugin Requirements or the documentation for your specific
container runtime.
Forwarding IPv4 and letting iptables see bridged traffic
Execute the below mentioned instructions:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
Verify that the br_netfilter, overlay modules are loaded by running the following commands:
lsmod | grep br_netfilter
lsmod | grep overlay
Verify that the net.bridge.bridge-nf-call-iptables , net.bridge.bridge-nf-call-ip6tables , and
net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following
command:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
cgroup drivers
On Linux, control groups are used to constrain resources that are allocated to processes.
Both the kubelet and the underlying container runtime need to interface with control groups to
enforce resource management for pods and containers and set resources such as cpu/memory
requests and limits. To interface with control groups, the kubelet and the container runtime
need to use a cgroup driver . It's critical that the kubelet and the container runtime use the same
cgroup driver and are configured the same.
There are two cgroup drivers available:
cgroupfs
systemd
cgroupfs driver
The cgroupfs driver is the default cgroup driver in the kubelet. When the cgroupfs driver is
used, the kubelet and the container runtime directly interface with the cgroup filesystem to
configure cgroups.
The cgroupfs driver is not recommended when systemd is the init system because systemd
expects a single cgroup manager on the system. Additionally, if you use cgroup v2 , use the
systemd cgroup driver instead of cgroupfs .
systemd cgroup driver
When systemd is chosen as the init system for a Linux distribution, the init process generates
and consumes a root control group ( cgroup ) and acts as a cgroup manager.
systemd has a tight integration with cgroups and allocates a cgroup per systemd unit. As a
result, if you use systemd as the init system with the cgroupfs driver, the system gets two
different cgroup managers.
Two cgroup managers result in two views of the available and in-use resources in the system.
In some cases, nodes that are configured to use cgroupfs for the kubelet and container runtime,
but use systemd for the rest of the processes, become unstable under resource pressure.
The approach to mitigate this instability is to use systemd as the cgroup driver for the kubelet
and the container runtime when systemd is the selected init system.
To set systemd as the cgroup driver, edit the KubeletConfiguration option of cgroupDriver and
set it to systemd. For example:
apiVersion : kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
...
cgroupDriver : systemd
Note: Starting with v1.22 and later, when creating a cluster with kubeadm, if the user does not
set the cgroupDriver field under KubeletConfiguration , kubeadm defaults it to systemd .
In Kubernetes v1.28, with the KubeletCgroupDriverFromCRI feature gate enabled and a
container runtime that supports the RuntimeConfig CRI RPC, the kubelet automatically detects
the appropriate cgroup driver from the runtime, and ignores the cgroupDriver setting within
the kubelet configuration.
If you configure systemd as the cgroup driver for the kubelet, you must also configure systemd
as the cgroup driver for the container runtime. Refer to the documentation for your container
runtime for instructions. For example:
containerd
CRI-O
Caution:
Changing the cgroup driver of a Node that has joined a cluster is a sensitive operation. If the
kubelet has created Pods using the semantics of one cgroup driver, changing the container
runtime to another cgroup driver can cause errors when trying to re-create the Pod sandbox for
such existing Pods. Restarting the kubelet may not solve such errors.
If you have automation that makes it feasible, replace the node with another using the updated
configuration, or reinstall it using automation.
Migrating to the systemd driver in kubeadm managed clusters
If you wish to migrate to the systemd cgroup driver in existing kubeadm managed clusters,
follow configuring a cgroup driver .
CRI version support
Your container runtime must support at least v1alpha2 of the container runtime interface.
Kubernetes starting v1.26 only works with v1 of the CRI API. Earlier versions default to v1
version, however if a container runtime does not support the v1 API, the kubelet falls back to
using the (deprecated) v1alpha2 API instead.
Container runtimes
Note: This section links to third party projects that provide functionality required by
Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are
listed alphabetically. To add a project to this list, read the content guide before submitting a
change. More information.
containerd
This section outlines the necessary steps to use containerd as CRI runtime.
To install containerd on your system, follow the instructions on getting started with containerd .
Return to this step once you've created a valid config.toml configuration file.
Linux: You can find this file under the path /etc/containerd/config.toml.
Windows: You can find this file under the path C:\Program Files\containerd\config.toml.
On Linux the default CRI socket for containerd is /run/containerd/containerd.sock . On
Windows the default CRI endpoint is npipe://./pipe/containerd-containerd .
Configuring the systemd cgroup driver
To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
The systemd cgroup driver is recommended if you use cgroup v2 .
Note:
If you installed containerd from a package (for example, RPM or .deb), you may find that the
CRI integration plugin is disabled by default.
You need CRI support enabled to use containerd with Kubernetes. Make sure that cri is not
included in the disabled_plugins list within /etc/containerd/config.toml ; if you made changes to
that file, also restart containerd .
If you experience container crash loops after the initial cluster installation or after installing a
CNI, the containerd configuration provided with the package might contain incompatible
configuration parameters. Consider resetting the containerd configuration with containerd
config default > /etc/containerd/config.toml as specified in getting-started.md and then set the
configuration parameters specified above accordingly.
If you apply this change, make sure to restart containerd:
sudo systemctl restart containerd
When using kubeadm, manually configure the cgroup driver for kubelet .
In Kubernetes v1.28, you can enable automatic detection of the cgroup driver as an alpha
feature. See systemd cgroup driver for more details.
Overriding the sandbox (pause) image
In your containerd config you can overwrite the sandbox image by setting the following config:
[plugins. "io.containerd.grpc.v1.cri" ]
sandbox_image = "registry.k8s.io/pause:3.2"
You might need to restart containerd as well once you've updated the config file: systemctl
restart containerd .
Please note that it is a best practice for the kubelet to declare the matching pod-infra-container-image. If not configured, the kubelet may attempt to garbage collect the pause image. There is
ongoing work in containerd to pin the pause image and not require this setting on kubelet any
longer.
CRI-O
This section contains the necessary steps to install CRI-O as a container runtime.
To install CRI-O, follow CRI-O Install Instructions .
cgroup driver
CRI-O uses the systemd cgroup driver per default, which is likely to work fine for you. To
switch to the cgroupfs cgroup driver, either edit /etc/crio/crio.conf or place a drop-in
configuration in /etc/crio/crio.conf.d/02-cgroup-manager.conf, for example:
[crio.runtime]
conmon_cgroup = "pod"
cgroup_manager = "cgroupfs"
You should also note the changed conmon_cgroup , which has to be set to the value pod when
using CRI-O with cgroupfs . It is generally necessary to keep the cgroup driver configuration of
the kubelet (usually done via kubeadm) and CRI-O in sync.
In Kubernetes v1.28, you can enable automatic detection of the cgroup driver as an alpha
feature. See systemd cgroup driver for more details.
For CRI-O, the CRI socket is /var/run/crio/crio.sock by default.
Overriding the sandbox (pause) image
In your CRI-O config you can set the following config value:
[crio.image]
pause_image="registry.k8s.io/pause:3.6"
This config option supports live configuration reload to apply this change: systemctl reload crio
or by sending SIGHUP to the crio process.
Docker Engine
Note: These instructions assume that you are using the cri-dockerd adapter to integrate Docker
Engine with Kubernetes.
1. On each of your nodes, install Docker for your Linux distribution as per Install Docker Engine.
2. Install cri-dockerd, following the instructions in that source code repository.
For cri-dockerd , the CRI socket is /run/cri-dockerd.sock by default.
Mirantis Container Runtime
Mirantis Container Runtime (MCR) is a commercially available container runtime that was
formerly known as Docker Enterprise Edition.
You can use Mirantis Container Runtime with Kubernetes using the open source cri-dockerd
component, included with MCR.
To learn more about how to install Mirantis Container Runtime, visit MCR Deployment Guide .
Check the systemd unit named cri-docker.socket to find out the path to the CRI socket.
Overriding the sandbox (pause) image
The cri-dockerd adapter accepts a command line argument for specifying which container
image to use as the Pod infrastructure container (“pause image”). The command line argument
to use is --pod-infra-container-image .
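For example (the image tag shown is illustrative), the flag is typically added to the cri-dockerd invocation in the ExecStart line of its systemd unit:
cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9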
What's next
As well as a container runtime, your cluster will need a working network plugin .
Installing Kubernetes with deployment
tools
There are many methods and tools for setting up your own production Kubernetes cluster. For
example:
kubeadm
kops : An automated cluster provisioning tool. For tutorials, best practices, configuration
options and information on reaching out to the community, please check the kOps
website for details.
kubespray : A composition of Ansible playbooks, inventory , provisioning tools, and
domain knowledge for generic OS/Kubernetes clusters configuration management tasks.
You can reach out to the community on Slack channel #kubespray.
Bootstrapping clusters with kubeadm
Installing kubeadm
Troubleshooting kubeadm
Creating a cluster with kubeadm
Customizing components with the kubeadm API
Options for Highly Available Topology
Creating Highly Available Clusters with kubeadm
Set up a High Availability etcd Cluster with kubeadm
Configuring each kubelet in your cluster using kubeadm
Dual-stack support with kubeadm
Installing kubeadm
This page shows how to install the kubeadm toolbox. For information on how to create a cluster
with kubeadm once you have performed this installation process, see the Creating a cluster
with kubeadm page.
This installation guide is for Kubernetes v1.29. If you want to use a different Kubernetes
version, please refer to the following pages instead:
Installing kubeadm (Kubernetes v1.28)
Installing kubeadm (Kubernetes v1.27)
Installing kubeadm (Kubernetes v1.26)
Installing kubeadm (Kubernetes v1.25)
Before you begin
A compatible Linux host. The Kubernetes project provides generic instructions for Linux
distributions based on Debian and Red Hat, and those distributions without a package
manager.
2 GB or more of RAM per machine (any less will leave little room for your apps).
2 CPUs or more.
Full network connectivity between all machines in the cluster (public or private network
is fine).
Unique hostname, MAC address, and product_uuid for every node. See here for more
details.
Certain ports are open on your machines. See here for more details.
Swap configuration. The default behavior of a kubelet was to fail to start if swap memory
was detected on a node. Swap has been supported since v1.22. And since v1.28, Swap is
supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but
disabled by default.
You MUST disable swap if the kubelet is not properly configured to use swap. For
example, sudo swapoff -a will disable swapping temporarily. To make this change
persistent across reboots, make sure swap is disabled in config files like /etc/fstab ,
systemd.swap , depending how it was configured on your system.
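A common sketch for both steps (assuming swap is configured through /etc/fstab) is:
sudo swapoff -a # turn swap off for the current boot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab # comment out swap entries so the change survives reboots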
Note: The kubeadm installation is done via binaries that use dynamic linking and assumes that
your target system provides glibc . This is a reasonable assumption on many Linux distributions
(including Debian, Ubuntu, Fedora, CentOS, etc.) but it is not always the case with custom and
lightweight distributions which don't include glibc by default, such as Alpine Linux. The
expectation is that the distribution either includes glibc or a compatibility layer that provides
the expected symbols.
Verify the MAC address and product_uuid are unique for
every node
You can get the MAC address of the network interfaces using the command ip link or
ifconfig -a
The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/
product_uuid
It is very likely that hardware devices will have unique addresses, although some virtual
machines may have identical values. Kubernetes uses these values to uniquely identify the
nodes in the cluster. If these values are not unique to each node, the installation process may
fail.
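As a quick illustration (not part of the official steps), you can print both values on each node and compare them manually across your machines:

ip link show | awk '/link\/ether/ {print $2}'    # MAC addresses of all interfaces
sudo cat /sys/class/dmi/id/product_uuid          # product_uuid of this machine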
Check network adapters
If you have more than one network adapter, and your Kubernetes components are not
reachable on the default route, we recommend you add IP route(s) so Kubernetes cluster
addresses go via the appropriate adapter.
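A hedged sketch of adding such a route; eth1 as the adapter that reaches your cluster and 10.96.0.0/12 as the Service CIDR are both assumptions, so substitute your own values:

sudo ip route add 10.96.0.0/12 dev eth1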
Check required ports
These required ports need to be open in order for Kubernetes components to communicate with
each other. You can use tools like netcat to check if a port is open. For example:
nc 127.0.0.1 6443 -v
The pod network plugin you use may also require certain ports to be open. Since this differs with each pod network plugin, please see the documentation for the plugins about which port(s) they need.
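As an illustrative check, assuming a netcat build that supports -z, the loop below probes the documented control-plane defaults; the exact port list may differ for your topology:

for port in 6443 2379 2380 10250 10257 10259; do
  nc -zv 127.0.0.1 "$port"
done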
Installing a container runtime
To run containers in Pods, Kubernetes uses a container runtime .
By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your
chosen container runtime.
| 8,069 |
If you don't specify a runtime, kubeadm automatically tries to detect an installed container
runtime by scanning through a list of known endpoints.
If multiple or no container runtimes are detected, kubeadm will throw an error and will request that you specify which one you want to use.
See container runtimes for more information.
Note: Docker Engine does not implement the CRI which is a requirement for a container
runtime to work with Kubernetes. For that reason, an additional service cri-dockerd has to be
installed. cri-dockerd is a project based on the legacy built-in Docker Engine support that was
removed from the kubelet in version 1.24.
The tables below include the known endpoints for supported operating systems:
Linux
Windows
Linux container runtimes
Runtime Path to Unix domain socket
containerd unix:///var/run/containerd/containerd.sock
CRI-O unix:///var/run/crio/crio.sock
Docker Engine (using cri-dockerd) unix:///var/run/cri-dockerd.sock
Windows container runtimes
Runtime Path to Windows named pipe
containerd npipe:////./pipe/containerd-containerd
Docker Engine (using cri-dockerd) npipe:////./pipe/cri-dockerd
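If automatic detection picks the wrong runtime, or you simply want to be explicit, you can point kubeadm at one of the sockets from the tables above with the --cri-socket flag. A hedged example, assuming containerd on Linux:

sudo kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock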
Installing kubeadm, kubelet and kubectl
You will install these packages on all of your machines:
• kubeadm: the command to bootstrap the cluster.
• kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
• kubectl: the command line utility to talk to your cluster.
kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they
match the version of the Kubernetes control plane you want kubeadm to install for you. If you
do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behaviour.
However, one minor version skew between the kubelet and the control plane is supported, but
the kubelet version may never exceed the API server version. For example, the kubelet running
1.7.0 should be fully compatible with a 1.8.0 API server, | 8,071 |
but not vice versa.
For information about installing kubectl , see Install and set up kubectl .
Warning: These instructions exclude all Kubernetes packages from any system upgrades. This
is because kubeadm and Kubernetes require special attention to upgrade.
| 8,072 |
For more information on version skews, see:
• Kubernetes version and version-skew policy
• Kubeadm-specific version skew policy
Note: The legacy package repositories ( apt.kubernetes.io and yum.kubernetes.io ) have been
deprecated and frozen starting from September 13, 2023 . Using the new package repositories
hosted at pkgs.k8s.io is strongly recommended and required in order to install
Kubernetes versions released after September 13, 2023. The deprecated legacy repositories,
and their contents, might be removed at any time in the future and without a further notice
period. The new package repositories provide downloads for Kubernetes versions starting with
v1.24.0.
Note: There's a dedicated package repository for each Kubernetes minor version. If you want to
install a minor version other than 1.29, please see the installation guide for your desired minor
version.
Debian-based distributions
Red Hat-based distributions
Without a package manager
These instructions are for Kubernetes 1.29.
1. Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
2. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can disregard the version in the URL:
# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command; read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Note: In releases older than Debian 12 and Ubuntu 22.04, folder /etc/apt/keyrings does not exist
by default, and it should be created before the curl command.
3. Add the appropriate Kubernetes apt repository. Please note that this repository has packages only for Kubernetes 1.29; for other Kubernetes minor versions, you need to change the Kubernetes minor version in the URL to match your desired minor version (you should also check that you are reading the documentation for the version of Kubernetes that you plan to install).
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
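As an optional sanity check (not part of the upstream steps), confirm that the installed versions match what you expect:

kubeadm version -o short
kubectl version --client
kubelet --version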
These instructions are for Kubernetes 1.29.
1. Set SELinux to permissive mode:
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Caution:
Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively
disables it. This is required to allow containers to access the host filesystem; for example,
some cluster network plugins require that. You have to do this until SELinux support is
improved in the kubelet.
You can leave SELinux enabled if you know how to configure it but it may require
settings that are not supported by kubeadm.
2. Add the Kubernetes yum repository. The exclude parameter in the repository definition ensures that the packages related to Kubernetes are not upgraded upon running yum update, as there's a special procedure that must be followed for upgrading Kubernetes. Please note that this repository has packages only for Kubernetes 1.29; for other Kubernetes minor versions, you need to change the Kubernetes minor version in the URL to match your desired minor version (you should also check that you are reading the documentation for the version of Kubernetes that you plan to install).
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
3. Install kubelet, kubeadm and kubectl, and enable kubelet so that it starts automatically at boot:
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
1. Install CNI plugins (required for most pod networks):
CNI_PLUGINS_VERSION="v1.3.0"
ARCH="amd64"
DEST="/opt/cni/bin"
sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
2. Define the directory to download command files:
Note: The DOWNLOAD_DIR variable must be set to a writable directory. If you are running
Flatcar Container Linux, set DOWNLOAD_DIR="/opt/bin" .
DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
3. Install crictl (required for kubeadm / the kubelet Container Runtime Interface (CRI)):
CRICTL_VERSION="v1.28.0"
ARCH="amd64"
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | sudo tar -C $DOWNLOAD_DIR -xz
4. Install kubeadm, kubelet, kubectl and add a kubelet systemd service:
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
ARCH="amd64"
cd $DOWNLOAD_DIR
sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet}
sudo chmod +x {kubeadm,kubelet}
RELEASE_VERSION="v0.16.2"
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service
sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Note: Please refer to the note in the Before you begin section for Linux distributions that do
not include glibc by default.
5. Install kubectl by following the instructions on the Install Tools page.
6. Enable and start kubelet:
systemctl enable --now kubelet
Note: The Flatcar Container Linux distribution mounts the /usr directory as a read-only
filesystem. Before bootstrapping your cluster, you need to take additional steps to configure a
writable directory. See the Kubeadm Troubleshooting guide to learn how to set up a writable
directory.
The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.
Configuring a cgroup driver
Both the container runtime and the kubelet have a property called "cgroup driver", which is important for the management of cgroups on Linux machines.
Warning:
Matching the container runtime and kubelet cgroup drivers is required; otherwise the kubelet process will fail.
See Configuring a cgroup driver for more details.
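As an illustrative sketch (not a complete kubeadm configuration), the kubelet's cgroup driver can be set to systemd through a KubeletConfiguration block, assuming your container runtime is also configured to use the systemd driver:

# kubeadm-config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

You would then pass this file to kubeadm init --config kubeadm-config.yaml.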
Troubleshooting
If you are running into difficulties with kubeadm, please consult our troubleshooting docs .
What's next
Using kubeadm to Create a Cluster
Troubleshooting kubeadm
As with any program, you might run into an error installing or running kubeadm. This page lists some common failure scenarios and provides steps that can help you understand and fix the problem.
If your problem is not listed below, please follow these steps:
• If you think your problem is a bug with kubeadm:
  ◦ Go to github.com/kubernetes/kubeadm and search for existing issues.
  ◦ If no issue exists, please open one and follow the issue template.
• If you are unsure about how kubeadm works, you can ask on Slack in #kubeadm, or open a question on StackOverflow. Please include relevant tags like #kubernetes and #kubeadm so folks can help you.
Not possible to join a v1.18 Node to a v1.17 cluster due to
missing RBAC
In v1.18 kubeadm added prevention for joining a Node in the cluster if a Node with the same
name already exists. This required adding RBAC for the bootstrap-token user to be able to GET
a Node object.
However, this causes an issue where kubeadm join from v1.18 cannot join a cluster created by kubeadm v1.17.
To work around the issue, you have two options:
Execute kubeadm init phase bootstrap-token on a control-plane node using kubeadm v1.18.
Note that this enables the rest of the bootstrap-token permissions as well.
or
Apply the following RBAC manually using kubectl apply -f ...:
| 8,083 |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeadm:get-nodes
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeadm:get-nodes
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeadm:get-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:kubeadm:default-node-token
ebtables or some similar executable not found during
installation
If you see the following warnings while running kubeadm init
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
Then you may be missing ebtables, ethtool or a similar executable on your node. You can install them with the following commands:
• For Ubuntu/Debian users, run apt install ebtables ethtool.
• For CentOS/Fedora users, run yum install ebtables ethtool.
kubeadm blocks waiting for control plane during
installation
If you notice that kubeadm init hangs after printing out the following line:
[apiclient] Created API client, waiting for the control plane to become ready
This may be caused by a number of problems. The most common are:
• network connection problems. Check that your machine has full network connectivity before continuing.
| 8,085 |
• the cgroup driver of the container runtime differs from that of the kubelet. To understand how to configure it properly, see Configuring a cgroup driver.
• control plane containers are crashlooping or hanging. You can check this by running docker ps and investigating each container by running docker logs. For other container runtimes, see Debugging Kubernetes nodes with crictl. A brief sketch of such checks follows below.
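For example, a hedged debugging sketch assuming a containerd-based setup (the socket path and container ID are placeholders you would substitute for your environment):

journalctl -xeu kubelet    # the kubelet logs often show why the control plane is not coming up
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs <container-id>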
kubeadm blocks when removing managed containers
The following could happen if the container runtime halts and does not remove any
Kubernetes-managed containers:
sudo kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
(block)
A possible solution is to restart the container runtime and then re-run kubeadm reset . You can
also use crictl to debug the state of the container runtime. See Debugging Kubernetes nodes
with crictl .
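A minimal recovery sketch, assuming containerd is the container runtime (substitute the unit name for CRI-O or another runtime):

sudo systemctl restart containerd   # restart the runtime so it can clean up containers again
sudo kubeadm reset                  # re-run the reset once the runtime is healthy
sudo crictl ps -a                   # inspect any containers that are still around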
Pods in RunContainerError, CrashLoopBackOff or Error state
Right after kubeadm init there should not be any pods in these states.
• If there are pods in one of these states right after kubeadm init, please open an issue in the kubeadm repo. coredns (or kube-dns) should be in the Pending state until you have deployed the network add-on.
• If you see Pods in the RunContainerError, CrashLoopBackOff or Error state after deploying the network add-on and nothing happens to coredns (or kube-dns), it's very likely that the Pod Network add-on that you installed is somehow broken. You might have to grant it more RBAC privileges or use a newer version. Please file an issue in the Pod Network providers' issue tracker and get the issue triaged there.
coredns is stuck in the Pending state
This is expected and part of the design. kubeadm is network provider-agnostic, so the admin
should install the pod network add-on of choice. You have to install a Pod Network before
CoreDNS may be deployed fully. Hence the Pending state before | 8,087 |
the network is set up.
HostPort services do not work
The HostPort and HostIP functionality is available depending on your Pod Network provider.
Please contact the author of the Pod Network add-on to find out whether HostPort and HostIP
functionality are available.
Calico, Canal, and Flannel CNI providers are verified to support HostPort.
| 8,088 |
For more information, see the CNI portmap documentation .
If your network provider does not support the portmap CNI plugin, you may need to use the
NodePort feature of services or use HostNetwork=true .
Pods are not accessible via their Service IP
• Many network add-ons do not yet enable hairpin mode, which allows pods to access themselves via their Service IP. This is an issue related to CNI. Please contact the network add-on provider to get the latest status of their support for hairpin mode.
• If you are using VirtualBox (directly or via Vagrant), you will need to ensure that hostname -i returns a routable IP address. By default, the first interface is connected to a non-routable host-only network. A workaround is to modify /etc/hosts; see this Vagrantfile for an example.
TLS certificate errors
The following error indicates a possible certificate mismatch.
# kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
• Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a
certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The
base64 --decode command can be used to decode the certificate and openssl x509 -text -
noout can be used for viewing the certificate information.
• Unset the KUBECONFIG environment variable using:
unset KUBECONFIG
Or set it to the default KUBECONFIG location:
export KUBECONFIG=/etc/kubernetes/admin.conf
• Another workaround is to overwrite the existing kubeconfig for the "admin" user:
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
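To inspect the client certificate embedded in a kubeconfig, a hedged one-liner; it assumes the certificate is stored inline as client-certificate-data rather than referenced by file path:

grep 'client-certificate-data' "$HOME/.kube/config" | awk '{print $2}' | base64 --decode | openssl x509 -text -noout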
Kubelet client certificate rotation fails
By default, kubeadm configures a kubelet with automatic rotation of client certificates by using
the /var/lib/kubelet/pki/kubelet-client-current.pem symlink specified in /etc/kubernetes/kubelet.conf. If this rotation process fails, you might see errors such as x509: certificate has expired or is not yet valid in kube-apiserver logs. To fix the issue, follow these steps:
1. Back up and delete /etc/kubernetes/kubelet.conf and /var/lib/kubelet/pki/kubelet-client* from the failed node.
2. From a working control plane node in the cluster that has /etc/kubernetes/pki/ca.key execute kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf. $NODE must be set to the name of the existing failed node in the cluster. Modify the resulting kubelet.conf manually to adjust the cluster name and server endpoint, or pass kubeconfig user --config (see Generating kubeconfig files for additional users). If your cluster does not have the ca.key you must sign the embedded certificates in the kubelet.conf externally.
3. Copy this resulting kubelet.conf to /etc/kubernetes/kubelet.conf on the failed node.
4. Restart the kubelet (systemctl restart kubelet) on the failed node and wait for /var/lib/kubelet/pki/kubelet-client-current.pem to be recreated.
5. Manually edit the kubelet.conf to point to the rotated kubelet client certificates, by replacing client-certificate-data and client-key-data with:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
6. Restart the kubelet.
7. Make sure the node becomes Ready.
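A condensed sketch of steps 1-4, assuming $NODE holds the failed node's name; run each command on the node indicated in the comments:

# On the failed node: back up and remove the stale kubeconfig and client certificates
sudo cp /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.bak
sudo rm /var/lib/kubelet/pki/kubelet-client*
# On a control plane node that has /etc/kubernetes/pki/ca.key: generate a new kubelet.conf
sudo kubeadm kubeconfig user --org system:nodes --client-name "system:node:${NODE}" > kubelet.conf
# Copy the generated kubelet.conf to /etc/kubernetes/kubelet.conf on the failed node, then there:
sudo systemctl restart kubelet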
Default NIC When using flannel as the pod network in
Vagrant
The following error might indicate that something was wrong in the pod network:
Error from server (NotFound): the server could not find the requested resource
• If you're using flannel as the pod network inside Vagrant, then you will have to specify the default interface name for flannel.
• Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address 10.0.2.15, is for external traffic that gets NATed.
• This may lead to problems with flannel, which defaults to the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this, pass the --iface eth1 flag to flannel so that the second interface is chosen, as sketched below.
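An illustrative fragment of the kube-flannel DaemonSet container spec with the extra flag added; the exact field layout depends on the flannel release you deploy:

containers:
- name: kube-flannel
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1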
Non-public IP used for containers
In some situations kubectl logs and kubectl run commands may return with the following errors in an otherwise functional cluster:
| 8,094 |
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/
mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
• This may be due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider.
• DigitalOcean assigns a public IP to eth0 as well as a private one to be used internally as an anchor for their floating IP feature, yet kubelet will pick the latter as the node's InternalIP instead of the public one.
• Use ip addr show to check for this scenario instead of ifconfig because ifconfig will not display the offending alias IP address. Alternatively, an API endpoint specific to DigitalOcean allows you to query for the anchor IP from the droplet:
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
• The workaround is to tell kubelet which IP to use using --node-ip. When using DigitalOcean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the optional private network. The kubeletExtraArgs section of the kubeadm NodeRegistrationOptions structure can be used for this, as sketched below.
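An illustrative kubeadm configuration fragment, assuming the v1beta3 kubeadm API; the IP address is an example value you would replace with the address the kubelet should advertise:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "203.0.113.10"   # example address; use the node's desired IP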
Then restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
coredns pods have CrashLoopBackOff or Error state
If you have nodes that are running SELinux with an older version of Docker, you might
experience a scenario where the coredns pods are not starting. To solve that, you can try one of
the following options:
• Upgrade to a newer version of Docker.
• Disable SELinux.
• Modify the coredns deployment to set allowPrivilegeEscalation to true:
kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
Another cause for CoreDNS to have CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a loop. A number of workarounds are available to avoid Kubernetes trying to restart the CoreDNS Pod every time CoreDNS detects the loop and exits.
Warning: Disabling SELinux or setting allowPrivilegeEscalation to true can compromise the
security of your cluster.
etcd pods restart continually
If you encounter the following error:
| 8,097 |
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting
container process caused "process_linux.go:110: decoding init error from pipe caused \"read
parent: connection reset by peer\""
This issue appears if you run CentOS 7 with Docker 1.13.1.84. This version of Docker can
prevent the kubelet from executing into the etcd container.
To work around the issue, choose one of these options:
• Roll back to an earlier version of Docker, such as 1.13.1-75:
yum downgrade docker-1.13.1-75.git8633870.el7.centos.x86_64 docker-client-1.13.1-75.git8633870.el7.centos.x86_64 docker-common-1.13.1-75.git8633870.el7.centos.x86_64
• Install one of the more recent recommended versions, such as 18.06:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64
Not possible to pass a comma separated list of values to
arguments inside a --component-extra-args flag
kubeadm init flags such as --component-extra-args allow you to pass custom arguments to a control-plane component like the kube-apiserver. However, this mechanism is limited due to the underlying type used for parsing the values (mapStringString).
If you decide to pass an argument that supports multiple, comma-separated values such as --
apiserver-extra-args "enable-admission-plugins=LimitRanger,NamespaceExists" this flag will
fail with flag: malformed pair, expect string=string . This happens because the list of arguments
for --apiserver-extra-args expects key=value pairs and, in this case, NamespaceExists is considered a key that is missing a value.
Alternatively, you can try separating the key=value pairs like so: --apiserver-extra-args "enable-
admission-plugins=LimitRanger,enable-admission-plugins=NamespaceExists" but this will
result in the key enable-admission-plugins only having the value of NamespaceExists .
A known workaround is to use the kubeadm configuration file, as sketched below.
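An illustrative fragment of such a configuration file, assuming the v1beta3 kubeadm API; because this is plain YAML, the comma-separated value is passed through unmodified:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    enable-admission-plugins: "LimitRanger,NamespaceExists"

You would then pass this file to kubeadm init --config <file>.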
kube-proxy scheduled before | 8,099 |