supported by kubectl. The default "patchtype" is "strategic". "extension" must be either "json"
or "yaml". "suffix" is an optional string that can be used to determine which patches are applied
first alpha-numerically.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Add a new local etcd member
Synopsis
Add a new local etcd member
kubeadm join phase control-plane-join etcd [flags]
Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will
advertise it is listening on. If not set, the default network interface will be used.
--config string
Path to a kubeadm configuration file.
--control-plane
Create a new control plane instance on this node
--dry-run
Don't apply any changes; just output what would be done.
-h, --help
help for etcd
--node-name string
Specify the node name.
--patches string
Path to a directory that contains files named "target[suffix][+patchtype].extension". For
example, "kube-apiserver0+merge.yaml" or just "etcd.json". "target" can be one of "kube-
apiserver", "kube-controller-manager", "kube-scheduler", "etcd", "kubeletconfiguration".
"patchtype" can be one of "strategic", "merge" or "json" and they match the patch formats
supported by kubectl. The default "patchtype" is "strategic". "extension" must be either "json"
or "yaml". "suffix" is an optional string that can be used to determine which patches are applied
first alpha-numerically.
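As a hedged illustration of the naming convention above (the directory path and file names are examples, not defaults), a patches directory might look like this and be passed through --patches:

# Example contents of a patches directory; patches are applied in the
# alpha-numeric order of the optional "suffix" component
ls /etc/kubernetes/patches
etcd0+merge.yaml  etcd1+merge.yaml  kube-apiserver+strategic.json

kubeadm join phase control-plane-join etcd --patches /etc/kubernetes/patches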
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config
ConfigMap (DEPRECATED)
Synopsis
Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config
ConfigMap (DEPRECATED)
kubeadm join phase control-plane-join update-status [flags]
Options
--apiserver-advertise-address string
If the node should host a new control plane instance, the IP address the API Server will
advertise it is listening on. If not set, the default network interface will be used.
--config string
Path to a kubeadm configuration file.
--control-plane
Create a new control plane instance on this node
-h, --help
help for update-status
--node-name string
Specify the node name.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Mark a node as a control-plane
Synopsis
Mark a node as a control-plane
kubeadm join phase control-plane-join mark-control-plane [flags]
Options
--config string
Path to a kubeadm configuration file.
--control-plane
Create a new control plane instance on this node
--dry-run
Don't apply any changes; just output what would be done.
-h, --help
help for mark-control-plane
--node-name string
Specify the node name.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
What's next
kubeadm init to bootstrap a Kubernetes control-plane node
kubeadm join to connect a node to the cluster
kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join
kubeadm alpha to try experimental functionality
kubeadm kubeconfig
kubeadm kubeconfig provides utilities for managing kubeconfig files.
For examples of how to use kubeadm kubeconfig user, see Generating kubeconfig files for
additional users.
kubeadm kubeconfig
overview
Kubeconfig file utilities
Synopsis
Kubeconfig file utilities.
Options
-h, --help
help for kubeconfig
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
kubeadm kubeconfig user
This command can be used to output a kubeconfig file for an additional user.
user
Output a kubeconfig file for an additional user
Synopsis
Output a kubeconfig file for an additional user.
kubeadm kubeconfig user [flags]
Examples
# Output a kubeconfig file for an additional user named foo
kubeadm kubeconfig user --client-name=foo
# Output a kubeconfig file for an additional user named foo using a kubeadm config file bar
kubeadm kubeconfig user --client-name=foo --config=bar
Options
--client-name string
The name of the user. It will be used as the CN if client certificates are created
--config string
Path to a kubeadm configuration file.
-h, --help
help for user
--org strings
The organizations of the client certificate. It will be used as the O if client certificates are
created
--token string
The token that should be used as the authentication mechanism for this kubeconfig, instead of
client certificates
--validity-period duration Default: 8760h0m0s
The validity period of the client certificate. It is an offset from the current time.
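The flags above can be combined. A hedged example (the user name, organization, validity period, and output file are illustrative only):

# Generate a kubeconfig for user "reviewer" in the "developers" organization,
# with a client certificate valid for 30 days, and save it to a file
kubeadm kubeconfig user --client-name=reviewer --org=developers --validity-period=720h > reviewer.conf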
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
kubeadm reset phase
kubeadm reset phase enables you to invoke atomic steps of the node reset process. Hence, you
can let kubeadm do some of the work and you can fill in the gaps if you wish to apply
customization.
kubeadm reset phase is consistent with the kubeadm reset workflow, and behind the scenes both
use the same code.
kubeadm reset phase
phase
Use this command to invoke a single phase of the reset workflow
Synopsis
Use this command to invoke a single phase of the reset workflow
Options
-h, --help
help for phase
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
kubeadm reset phase preflight
Using this phase you can execute preflight checks on a node that is being reset.
preflight
Run reset pre-flight checks
Synopsis
Run pre-flight checks for kubeadm reset.
kubeadm reset phase preflight [flags]
Options
--dry-run
Don't apply any changes; just output what would be done.
-f, --force
Reset the node without prompting for confirmation.
-h, --help
help for preflight
--ignore-preflight-errors strings
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'.
Value 'all' ignores errors from all checks.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
kubeadm reset phase remove-etcd-member
Using this phase you can remove this control-plane node's etcd member from the etcd cluster.
remove-etcd-member
Remove a local etcd member.
Synopsis
Remove a local etcd member for a control plane node.
kubeadm reset phase remove-etcd-member [flags]
Options
--dry-run
Don't apply any changes; just output what would be done.
-h, --help
help for remove-etcd-member
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard
locations can be searched for an existing kubeconfig file.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
kubeadm reset phase cleanup-node
Using this phase you can perform cleanup on this node.
cleanup-node
Run cleanup node.
Synopsis
Run cleanup node.
kubeadm reset phase cleanup-node [flags]
Options
--cert-dir string Default: "/etc/kubernetes/pki"
The path to the directory where the certificates are stored. If specified, clean this directory.
--cleanup-tmp-dir
Cleanup the "/etc/kubernetes/tmp" directory
--cri-socket string
Path to the CRI socket to connect to. If empty, kubeadm will try to auto-detect this value; use this
option only if you have more than one CRI installed or if you have a non-standard CRI socket.
--dry-run
Don't apply any changes; just output what would be done.
-h, --help
help for cleanup-node
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
What's next
kubeadm init to bootstrap a Kubernetes control-plane node
kubeadm join to connect a node to the cluster
kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join
kubeadm alpha to try experimental functionality
kubeadm upgrade phase
In v1.15.0, kubeadm introduced preliminary support for kubeadm upgrade node phases. Phases
for other kubeadm upgrade sub-commands, such as apply, could be added in following
releases.
kubeadm upgrade node phase
Using this phase you can choose to execute the separate steps of the upgrade of secondary
control-plane or worker nodes. Please note that kubeadm upgrade apply still has to be called on
a primary control-plane node.
phase
preflight
control-plane
kubelet-config
Use this command to invoke a single phase of the node workflow
Synopsis
Use this command to invoke a single phase of the node workflow
Options
-h, --help
help for phase
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Run upgrade node pre-flight checks
Synopsis
Run pre-flight checks for kubeadm upgrade node.
kubeadm upgrade node phase preflight [flags]
Options
-h, --help
help for preflight
--ignore-preflight-errors strings
A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'.
Value 'all' ignores errors from all checks.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Upgrade the control plane instance deployed on this node, if any
Synopsis
Upgrade the control plane instance deployed on this node, if any
kubeadm upgrade node phase control-plane [flags]
Options
--certificate-renewal Default: true
Perform the renewal of certificates used by components changed during upgrades.
--dry-run
Do not change any state, just output the actions that would be performed.
--etcd-upgrade Default: true
Perform the upgrade of etcd.
-h, --help
help for control-plane
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard
locations can be searched for an existing kubeconfig file.
--patches string
Path to a directory that contains files named "target[suffix][+patchtype].extension". For
example, "kube-apiserver0+merge.yaml" or just "etcd.json". "target" can be one of "kube-
apiserver", "kube-controller-manager", "kube-scheduler", "etcd", "kubeletconfiguration".
"patchtype" can be one of "strategic", "merge" or "json" and they match the patch formats
supported by kubectl. The default "patchtype" is "strategic". "extension" must be either "json"
or "yaml". "suffix" is an optional string that can be used to determine which patches are applied
first alpha-numerically.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
Upgrade the kubelet configuration for this node
Synopsis
Download the kubelet configuration from the kubelet-config ConfigMap stored in the cluster
kubeadm upgrade node phase kubelet-config [flags]
Options
--dry-run
Do not change any state, just output the actions that would be performed.
-h, --help
help for kubelet-config
--kubeconfig string Default: "/etc/kubernetes/admin.conf"
The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard
locations can be searched for an existing kubeconfig file.
--patches string
Path to a directory that contains files named "target[suffix][+patchtype].extension". For
example, "kube-apiserver0+merge.yaml" or just "etcd.json". "target" can be one of "kube-
apiserver", "kube-controller-manager", "kube-scheduler", "etcd", "kubeletconfiguration".
"patchtype" can be one of "strategic", "merge" or "json" and they match the patch formats
supported by kubectl. The default "patchtype" is "strategic". "extension" must be either "json"
or "yaml". "suffix" is an optional string that can be used to determine which patches are applied
first alpha-numerically.
Options inherited from parent commands
--rootfs string
[EXPERIMENTAL] The path to the 'real' host root filesystem.
What's next
kubeadm init to bootstrap a Kubernetes control-plane node
kubeadm join to connect a node to the cluster
kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join
kubeadm upgrade to upgrade a kubeadm node
kubeadm alpha to try experimental functionality
Implementation details
FEATURE STATE: Kubernetes v1.10 [stable]
kubeadm init and kubeadm join together provide a nice user experience for creating a best-
practice but bare Kubernetes cluster from scratch. However, it might not be obvious how
kubeadm does that.
This document provides additional details on what happens under the hood, with the aim of
sharing knowledge on Kubernetes cluster best practices.
Core design principles
The cluster that kubeadm init and kubeadm join set up should be:
Secure : It should adopt the latest best practices, like:
enforcing RBAC
using the Node Authorizer
using secure communication between the control plane components
using secure communication between the API server and the kubelets
locking down the kubelet API
locking down access to the API for system components like the kube-proxy and
CoreDNS
locking down what a Bootstrap Token can access
User-friendly : The user should not have to run anything more than a couple of
commands:
kubeadm init
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f <network-of-choice.yaml>
kubeadm join --token <token> <endpoint>:<port>
Extendable :
It should not favor any particular network provider. Configuring the cluster
network is out-of-scope
It should provide the possibility to use a config file for customizing various
parameters
Constants and well-known values and paths
In order to reduce complexity and to simplify development of higher level tools that build on
top of kubeadm, it uses a limited set of constant values for well-known paths and file names.
The Kubernetes directory /etc/kubernetes is a constant in the application, since it is clearly the
given path in a majority of cases, and the most intuitive location; other constant paths and file
names are:
/etc/kubernetes/manifests as the path where kubelet should look for static Pod manifests.
Names of static Pod manifests are:
etcd.yaml
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml
/etc/kubernetes/ as the path where kubeconfig files with identities for control plane
components are stored. Names of kubeconfig files are:
kubelet.conf (bootstrap-kubelet.conf during TLS bootstrap)
controller-manager.conf
scheduler.conf
admin.conf for the cluster admin and kubeadm itself
super-admin.conf for the cluster super-admin that can bypass RBAC
Names of certificates and key files :
ca.crt , ca.key for the Kubernetes certificate authority
apiserver.crt , apiserver.key for the API server certificate
apiserver-kubelet-client.crt , apiserver-kubelet-client.key for the client certificate
used by the API server to connect to the kubelets securely
sa.pub , sa.key for the key used by the controller manager when signing
ServiceAccount
front-proxy-ca.crt , front-proxy-ca.key for the front proxy certificate authority
front-proxy-client.crt , front-proxy-client.key for the front proxy client
kubeadm init workflow internal design
The kubeadm init internal workflow consists of a sequence of atomic work tasks to perform, as
described in kubeadm init .
The kubeadm init phase command allows users to invoke each task individually, and ultimately
offers a reusable and composable API/toolbox that can be used by other Kubernetes bootstrap
tools, by any IT automation tool or by an advanced user for creating custom clusters.
Preflight checks
Kubeadm executes a set of preflight checks before starting the init, with the aim of verifying
preconditions and avoiding common cluster startup problems. The user can skip specific preflight
checks or all of them with the --ignore-preflight-errors option.
[warning] If the Kubernetes version to use (specified with the --kubernetes-version flag)
is at least one minor version higher than the kubeadm CLI version.
Kubernetes system requirements:
if running on linux:
[error] if Kernel is older than the minimum required version
[error] if required cgroups subsystems aren't set up
[error] if the CRI endpoint does not answer
[error] if user is not root
[error] if the machine hostname is not a valid DNS subdomain
[warning] if the host name cannot be reached via network lookup
[error] if kubelet version is lower than the minimum kubelet version supported by
kubeadm (current minor -1)
[error] if kubelet version is at least one minor higher than the required controlplane
version (unsupported version skew)
[warning] if kubelet service does not exist or if it is disabled
[warning] if firewalld is active
[error] if API server bindPort or ports 10250/10251/10252 are used
[Error] if /etc/kubernetes/manifest folder already exists and it is not empty
[Error] if /proc/sys/net/bridge/bridge-nf-call-iptables file does not exist/does not contain
1
[Error] if advertise address is ipv6 and /proc/sys/net/bridge/bridge-nf-call-ip6tables does
not exist/does not contain 1.
[Error] if swap is on
[Error] if conntrack , ip, iptables , mount , nsenter commands are not present in the
command path
[warning] if ebtables , ethtool , socat , tc, touch , crictl commands are not present in the
command path
[warning] if extra arg flags for API server, controller manager, scheduler contains some
invalid options
[warning] if connection to https://API.AdvertiseAddress:API.BindPort goes through
proxy
[warning] if connection to services subnet goes through proxy (only first address
checked)
[warning] if connection to Pods subnet goes through proxy (only first address checked)
If external etcd is provided:
[Error] if etcd version is older than the minimum required version
[Error] if etcd certificates or keys are specified, but not provided
If external etcd is NOT provided (and thus local etcd will be installed):
[Error] if port 2379 is used
[Error] if Etcd.DataDir folder already exists and it is not empty
If authorization mode is ABAC:
[Error] if abac_policy.json does not exist
If authorization mode is WebHook
[Error] if webhook_authz.conf does not exist
Please note that:
Preflight checks can be invoked individually with the kubeadm init phase preflight
command
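For example (a hedged illustration; the check names come from the --ignore-preflight-errors description earlier on this page), the preflight phase can be run on its own while downgrading selected checks to warnings:

kubeadm init phase preflight --ignore-preflight-errors=IsPrivilegedUser,Swap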
Generate the necessary certificates
Kubeadm generates certificate and private key pairs for different purposes:
A self signed certificate authority for the Kubernetes cluster saved into ca.crt file and
ca.key private key file
A serving certificate for the API server, generated using ca.crt as the CA, and saved into
apiserver.crt file with its private key apiserver.key . This certificate should contain
following alternative names:
The Kubernetes service's internal clusterIP (the first address in the services CIDR,
e.g. 10.96.0.1 if service subnet is 10.96.0.0/12 )
Kubernetes DNS names, e.g. kubernetes.default.svc.cluster.local if --service-dns-
domain flag value is cluster.local , plus default DNS names kubernetes.default.svc ,
kubernetes.default , kubernetes
The node-name
The --apiserver-advertise-address
Additional alternative names specified by the user
A client certificate for the API server to connect to the kubelets securely, generated using
ca.crt as the CA and saved into apiserver-kubelet-client.crt file with its private key
apiserver-kubelet-client.key . This certificate should be in the system:masters organization
A private key for signing ServiceAccount Tokens saved into sa.key file along with its
public key sa.pub
A certificate authority for the front proxy saved into front-proxy-ca.crt file with its key
front-proxy-ca.key
A client cert for the front proxy client, generated using front-proxy-ca.crt as the CA and
saved into front-proxy-client.crt file with its private key front-proxy-client.key
Certificates are stored by default in /etc/kubernetes/pki , but this directory is configurable using
the --cert-dir flag.
Please note that:
If a given certificate and private key pair both exist, and its content is evaluated
compliant with the above specs, the existing files will be used and the generation phase
for the given certificate skipped. This means the user can, for example, copy an existing
CA to /etc/kubernetes/pki/ca.{crt,key} , and then kubeadm will use those files for signing
the rest of the certs. See also using custom certificates
Only for the CA, it is possible to provide the ca.crt file but not the ca.key file; if all other
certificates and kubeconfig files are already in place, kubeadm recognizes this condition
and activates the ExternalCA mode, which also implies the csrsigner controller in controller-
manager won't be started
If kubeadm is running in external CA mode , all the certificates must be provided by the
user, because kubeadm cannot generate them by itself
If kubeadm is executed in --dry-run mode, certificate files are written to a
temporary folder
Certificate generation can be invoked individually with the kubeadm init phase certs all
command
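For instance (hedged; /etc/kubernetes/pki is the default directory mentioned above and can be changed), the certificate phase alone can be run as:

kubeadm init phase certs all --cert-dir=/etc/kubernetes/pki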
Generate kubeconfig files for control plane components
Kubeadm generates kubeconfig files with identities for control plane components:
A kubeconfig file for the kubelet to use during TLS bootstrap - /etc/kubernetes/bootstrap-
kubelet.conf. Inside this file there is a bootstrap-token or embedded client certificates for
authenticating this node with the cluster.
This client cert should:
Be in the system:nodes organization, as required by the Node Authorization
module
Have the Common Name (CN) system:node:<hostname-lowercased>
A kubeconfig file for controller-manager, /etc/kubernetes/controller-manager.conf ; inside
this file is embedded a client certificate with controller-manager identity. This client cert
should have the CN system:kube-controller-manager , as defined by default RBAC core
components roles
A kubeconfig file for scheduler, /etc/kubernetes/scheduler.conf ; inside this file is
embedded a client certificate with scheduler identity. This client cert should have the CN
system:kube-scheduler , as defined by default RBAC core components roles
Additionally, a kubeconfig file for kubeadm as an administrative entity is generated and stored
in /etc/kubernetes/admin.conf . This file includes a certificate with Subject: O =
kubeadm:cluster-admins, CN = kubernetes-admin . kubeadm:cluster-admins is a group managed
by kubeadm. It is bound to the cluster-admin ClusterRole during kubeadm init , by using the
super-admin.conf file, which does not require RBAC. This admin.conf file must remain on
control plane nodes and not be shared with additional users.
During kubeadm init another kubeconfig file is generated and stored in /etc/kubernetes/super-
admin.conf . This file includes a certificate with Subject: O = system:masters, CN = kubernetes-
super-admin . system:masters is a super user group that bypasses RBAC and makes super-
admin.conf useful in case of an emergency where a cluster is locked due to RBAC
misconfiguration. The super-admin.conf file can be stored in a safe location and not shared with
additional users.
See RBAC user facing role bindings for additional information about RBAC and built-in ClusterRoles
and groups.
Please note that:
ca.crt certificate is embedded in all the kubeconfig files.
If a given kubeconfig file exists, and its content is evaluated compliant with the above
specs, the existing file will be used and the generation phase for the given kubeconfig
skipped
If kubeadm is running in ExternalCA mode , all the required kubeconfig files must be provided
by the user as well, because kubeadm cannot generate any of them by itself
If kubeadm is executed in --dry-run mode, kubeconfig files are written to a
temporary folder
Kubeconfig files generation can be invoked individually with the kubeadm init phase
kubeconfig all command
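For example (hedged; the configuration file name is illustrative), all kubeconfig files can be regenerated in one step from a kubeadm configuration file:

kubeadm init phase kubeconfig all --config=kubeadm-config.yaml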
Generate static Pod manifests for control plane components
Kubeadm writes static Pod manifest files for control plane components to /etc/kubernetes/
manifests . The kubelet watches this directory for Pods to create on startup.
Static Pod manifests share a set of common properties:
All static Pods are deployed in the kube-system namespace
All static Pods get tier:control-plane and component:{component-name} labels
All static Pods use the system-node-critical priority class
hostNetwork: true is set on all static Pods to allow control plane startup before a network
is configured; as a consequence:
The address that the controller-manager and the scheduler use to refer to the API
server is 127.0.0.1
If using a local etcd server, etcd-servers address will be set to 127.0.0.1:2379
Leader election is enabled for both the controller-manager and the scheduler
Controller-manager and the scheduler will reference kubeconfig files with their
respective, unique identities
All static Pods get any extra flags specified by the user as described in passing custom
arguments to control plane components
All static Pods get any extra Volumes specified by the user (Host path)
Please note that:
All images will be pulled from registry.k8s.io by default. See using custom images for
customizing the image repository
If kubeadm is executed in --dry-run mode, static Pod files are written to a
temporary folder
Static Pod manifest generation for control plane components can be invoked individually
with the kubeadm init phase control-plane all command
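As a hedged illustration (paths and file names follow the constants listed earlier on this page), the control plane manifests can be generated on their own and then inspected:

kubeadm init phase control-plane all
ls /etc/kubernetes/manifests
kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml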
API server
The static Pod manifest for the API server is affected by the following parameters provided by the
user:
The apiserver-advertise-address and apiserver-bind-port to bind to; if not provided, those
values default to the IP address of the default network interface on the machine and port
6443
The service-cluster-ip-range to use for services
If an external etcd server is specified, the etcd-servers address and related TLS settings
(etcd-cafile , etcd-certfile , etcd-keyfile ); if an external etcd server is not provided, a local
etcd will be used (via host network)
If a cloud provider is specified, the corresponding --cloud-provider is configured, together
with the --cloud-config path if such file exists (this is experimental, alpha and will be
removed in a future version)
Other API server flags that are set unconditionally are:
--insecure-port=0 to avoid insecure connections to the api server
--enable-bootstrap-token-auth=true to enable the BootstrapTokenAuthenticator
authentication module. See TLS Bootstrapping for more details
--allow-privileged to true (required e.g. by kube proxy)
--requestheader-client-ca-file to front-proxy-ca.crt
--enable-admission-plugins to:
NamespaceLifecycle e.g. to avoid deletion of system reserved namespaces
LimitRanger and ResourceQuota to enforce limits on namespaces
ServiceAccount to enforce service account automation
PersistentVolumeLabel attaches region or zone labels to PersistentVolumes as
defined by the cloud provider (This admission controller is deprecated and will be
removed in a future version. It is not deployed by kubeadm by default with v1.9
onwards when not explicitly opting into using gce or aws as cloud providers)
DefaultStorageClass to enforce default storage class on PersistentVolumeClaim
objects
DefaultTolerationSeconds
NodeRestriction to limit what a kubelet can modify (e.g. only pods on this node)
--kubelet-preferred-address-types to InternalIP,ExternalIP,Hostname; this makes kubectl
logs and other API server-kubelet communication work in environments where the
hostnames of the nodes aren't resolvable
Flags for using certificates generated in previous steps:
--client-ca-file to ca.crt
--tls-cert-file to apiserver.crt
--tls-private-key-file to apiserver.key
--kubelet-client-certificate to apiserver-kubelet-client.crt
--kubelet-client-key to apiserver-kubelet-client.key
--service-account-key-file to sa.pub
--requestheader-client-ca-file to front-proxy-ca.crt
--proxy-client-cert-file to front-proxy-client.crt
--proxy-client-key-file to front-proxy-client.key
Other flags for securing the front proxy ( API Aggregation ) communications:
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-allowed-names=front-proxy-client
Controller manager
The static Pod manifest for the controller manager is affected by the following parameters provided
by the user:
If kubeadm is invoked specifying a --pod-network-cidr , the subnet manager feature
required for some CNI network plugins is enabled by setting:
--allocate-node-cidrs=true
--cluster-cidr and --node-cidr-mask-size flags according to the given CIDR
If a cloud provider is specified, the corresponding --cloud-provider is specified, together
with the --cloud-config path if such configuration file exists (this is experimental, alpha
and will be removed in a future version)
Other flags that are set unconditionally are:
--controllers enabling all the default controllers plus BootstrapSigner and TokenCleaner
controllers for TLS bootstrap. See TLS Bootstrapping for more details
--use-service-account-credentials to true
Flags for using certificates generated in previous steps:
--root-ca-file to ca.crt
--cluster-signing-cert-file to ca.crt , if External CA mode is disabled, otherwise to ""
--cluster-signing-key-file to ca.key , if External CA mode is disabled, otherwise to ""
--service-account-private-key-file to sa.key
Scheduler
The static Pod manifest for the scheduler is not affected by parameters provided by the user.
Generate static Pod manifest for local etcd
If you specified an external etcd, this step will be skipped; otherwise kubeadm generates a static
Pod manifest file for creating a local etcd instance running in a Pod with the following attributes:
listen on localhost:2379 and use HostNetwork=true
make a hostPath mount out from the dataDir to the host's filesystem
Any extra flags specified by the user
Please note that:
The etcd container image will be pulled from registry.gcr.io by default. See using custom
images for customizing the image repository.
If you run kubeadm in --dry-run mode, the etcd static Pod manifest is written into a
temporary folder.
You can directly invoke static Pod manifest generation for local etcd, using the kubeadm
init phase etcd local command.
Wait for the control plane to come up
kubeadm waits (up to 4m0s) until localhost:6443/healthz (kube-apiserver liveness) returns ok.
However, in order to detect deadlock conditions, kubeadm fails fast if localhost:10255/healthz
(kubelet liveness) or localhost:10255/healthz/syncloop (kubelet readiness) don't return ok
within 40s and 60s respectively.
kubeadm relies on the kubelet to pull the control plane images and run them properly as static
Pods. After the control plane is up, kubeadm completes the tasks described in following
paragraphs.
Save the kubeadm ClusterConfiguration in a ConfigMap for later
reference
kubeadm saves the configuration passed to kubeadm init in a ConfigMap named kubeadm-config
in the kube-system namespace.
This will ensure that kubeadm actions executed in the future (e.g. kubeadm upgrade ) will be able to
determine the actual/current cluster state and make new decisions based on that data.
Please note that:
Before saving the ClusterConfiguration, sensitive information like the token is stripped
from the configuration
Upload of control plane node configuration can be invoked individually with the
command kubeadm init phase upload-config .
Mark the node as control-plane
As soon as the control plane is available, kubeadm executes the following actions:
Labels the node as control-plane with node-role.kubernetes.io/control-plane=""
Taints the node with node-role.kubernetes.io/control-plane:NoSchedule
Please note that:
Depending on the kubeadm version, the node may also be tainted with node-role.kubernetes.io/master:NoSchedule in
addition to node-role.kubernetes.io/control-plane:NoSchedule ; the node-role.kubernetes.io/master
taint is deprecated and will be removed in kubeadm version 1.25
The mark-control-plane phase can be invoked individually with the command kubeadm init
phase mark-control-plane
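A hedged way to verify the result on a node (the node name is a placeholder):

kubectl get node <node-name> --show-labels | grep node-role.kubernetes.io/control-plane
kubectl describe node <node-name> | grep Taints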
Configure TLS-Bootstrapping for node joining
Kubeadm uses Authenticating with Bootstrap Tokens for joining new nodes to an existing
cluster; for more details see also the design proposal.
kubeadm init ensures that everything is properly configured for this process, and this includes the
following steps as well as setting API server and controller flags as already described in
previous paragraphs.
Please note that:
TLS bootstrapping for nodes can be configured with the command kubeadm init phase
bootstrap-token , executing all the configuration steps described in following paragraphs;
alternatively, each step can be invoked individually
Create a bootstrap token
kubeadm init creates a first bootstrap token, either generated automatically or provided by the
user with the --token flag; as documented in the bootstrap token specification, the token should be
saved as a Secret with name bootstrap-token-<token-id> in the kube-system namespace.
Please note that:
The default token created by kubeadm init will be used to validate temporary users during
the TLS bootstrap process; those users will be members of the
system:bootstrappers:kubeadm:default-node-token group
The token has a limited validity, 24 hours by default (the interval may be changed with the
--token-ttl flag)
Additional tokens can be created with the kubeadm token command, which also provides
other useful functions for token management, as shown below.
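A few hedged examples of token management (the TTL value is illustrative):

kubeadm token create --ttl 24h
kubeadm token list
kubeadm token delete <token-id>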
Allow joining nodes to call CSR API
Kubeadm ensures that users in system:bootstrappers:kubeadm:default-node-token group are
able to access the certificate signing API.
This is implemented by creating a ClusterRoleBinding named kubeadm:kubelet-bootstrap
between the group above and the default RBAC role system:node-bootstrapper .
Set up auto approval for new bootstrap tokens
Kubeadm ensures that the Bootstrap Token will get its CSR request automatically approved by
the csrapprover controller.
This is implemented by creating ClusterRoleBinding named kubeadm:node-autoapprove-
bootstrap between the system:bootstrappers:kubeadm:default-node-token group and the default
role system:certificates.k8s.io:certificatesigningrequests:nodeclient .
The role system:certificates.k8s.io:certificatesigningrequests:nodeclient should be created as
well, granting POST permission to /apis/certificates.k8s.io/certificatesigningrequests/nodeclient .
Set up nodes certificate rotation with auto approval
Kubeadm ensures that certificate rotation is enabled for nodes, and that new certificate requests
for nodes will get their CSR requests automatically approved by the csrapprover controller.
This is implemented by creating ClusterRoleBinding named kubeadm:node-autoapprove-
certificate-rotation between the system:nodes group and the default role
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient .
Create the public cluster-info ConfigMap
This phase creates the cluster-info ConfigMap in the kube-public namespace.
Additionally it creates a Role and a RoleBinding granting access to the ConfigMap for
unauthenticated users (i.e. users in RBAC group system:unauthenticated ).
Please note that:
The access to the cluster-info ConfigMap is not rate-limited. This may or may not be a
problem if you expose your cluster's API server to the internet; the worst-case scenario here
is a DoS attack where an attacker uses all the in-flight requests the kube-apiserver can
handle to serve the cluster-info ConfigMap.
Install addons
Kubeadm installs the internal DNS server and the kube-proxy addon components via the API
server.
Please note that:
This phase can be invoked individually with the command kubeadm init phase addon all .
proxy
A ServiceAccount for kube-proxy is created in the kube-system namespace; then kube-proxy is
deployed as a DaemonSet:
The credentials ( ca.crt and token ) to the control plane come from the ServiceAccount
The location (URL) of the API server comes from a ConfigMap
The kube-proxy ServiceAccount is bound to the privileges in the system:node-proxier
ClusterRole
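A hedged check that the addon is in place (object names follow the description above):

kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get serviceaccount kube-proxy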
DNS
The CoreDNS service is named kube-dns . This is done to prevent any interruption in
service when the user is switching the cluster DNS from kube-dns to CoreDNS with the --
config method described here.
A ServiceAccount for CoreDNS is created in the kube-system namespace.
The coredns ServiceAccount is bound to the privileges in the system:coredns ClusterRole
In Kubernetes version 1.21, support for using kube-dns with kubeadm was removed. You can
use CoreDNS with kubeadm even when the related Service is named kube-dns .
kubeadm join phases internal design
Similarly to kubeadm init , the kubeadm join internal workflow also consists of a sequence of
atomic work tasks to perform.
This is split into discovery (having the Node trust the Kubernetes Master) and TLS bootstrap
(having the Kubernetes Master trust the Node).
See Authenticating with Bootstrap Tokens or the corresponding design proposal .
Preflight checks
kubeadm executes a set of preflight checks before starting the join, with the aim of verifying
preconditions and avoiding common cluster startup problems.
Please note that:
kubeadm join preflight checks are basically a subset of the kubeadm init preflight checks
Starting from 1.24, kubeadm uses crictl to communicate to all known CRI endpoints.
Starting from 1.9, kubeadm provides support for joining nodes running on Windows; in
that case, Linux-specific controls are skipped.
In any case, the user can skip specific preflight checks (or eventually all preflight checks)
with the --ignore-preflight-errors option.
Discovery cluster-info
There are 2 main schemes for discovery. The first is to use a shared token along with the IP
address of the API server. The second is to provide a file (that is a subset of the standard
kubeconfig file).
Shared token discovery
If kubeadm join is invoked with --discovery-token , token discovery is used; in this case the
node basically retrieves the cluster CA certificates from the cluster-info ConfigMap in the kube-
public namespace.
In order to prevent "man in the middle" attacks, several steps are taken:
First, the CA certificate is retrieved via insecure connection (this is possible because
kubeadm init granted access to cluster-info users for system:unauthenticated )
Then the CA certificate goes through the following validation steps:
Basic validation: using the token ID against a JWT signature
Pub key validation: using the provided --discovery-token-ca-cert-hash . This value is
available in the output of kubeadm init or can be calculated using standard tools
(the hash is calculated over the bytes of the Subject Public Key Info (SPKI) object as
in RFC7469). The --discovery-token-ca-cert-hash flag may be repeated multiple
times to allow more than one public key.
As an additional validation, the CA certificate is retrieved via secure connection and
then compared with the CA retrieved initially
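As a hedged illustration of "standard tools" for computing the CA public key hash (this assumes an RSA CA key and the default certificate location used elsewhere on this page):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | sha256sum | cut -d' ' -f1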
Please note that:
Pub key validation can be skipped by passing the --discovery-token-unsafe-skip-ca-verification
flag; this weakens the kubeadm security model since others can potentially impersonate
the Kubernetes Master.
File/https discovery
If kubeadm join is invoked with --discovery-file , file discovery is used; this file can be a local file
or downloaded via an HTTPS URL; in case of HTTPS, the host installed CA bundle is used to
verify the connection.
With file discovery, the cluster CA certificate is provided in the file itself; in fact, the
discovery file is a kubeconfig file with only server and certificate-authority-data attributes set,
as described in the kubeadm join reference doc; when the connection with the cluster is
established, kubeadm tries to access the cluster-info ConfigMap, and if available, uses it.
TLS Bootstrap
Once the cluster info is known, the file bootstrap-kubelet.conf is written, thus allowing
the kubelet to do TLS bootstrapping.
The TLS bootstrap mechanism uses the shared token to temporarily authenticate with the
Kubernetes API server to submit a certificate signing request (CSR) for a locally created key
pair.
The request is then automatically approved and the operation completes saving ca.crt file and
kubelet.conf file to be used by kubelet for joining the cluster, while bootstrap-kubelet.conf is
deleted.
Please note that:
The temporary authentication is validated against the token saved during the kubeadm
init process (or with additional tokens created with kubeadm token )
The temporary authentication resolves to a user that is a member of the
system:bootstrappers:kubeadm:default-node-token group, which was granted access to the
CSR API during the kubeadm init process
The automatic CSR approval is managed by the csrapprover controller, according to the
configuration done during the kubeadm init process
Command line tool (kubectl)
Kubernetes provides a command line tool for communicating with a Kubernetes cluster's
control plane, using the Kubernetes API.
This tool is named kubectl .
For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can
specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting
the --kubeconfig flag.
This overview covers kubectl syntax, describes the command operations, and provides common
examples. For details about each command, including all the supported flags and subcommands,
see the kubectl reference documentation.
For installation instructions, see Installing kubectl ; for a quick guide, see the cheat sheet . If
you're used to using the docker command-line tool, kubectl for Docker Users explains some
equivalent commands for Kubernetes.
Syntax
Use the following syntax to run kubectl commands from your terminal window:
kubectl [command ] [TYPE ] [NAME ] [flags ]
where command , TYPE , NAME , and flags are:
command : Specifies the operation that you want to perform on one or more resources, for
example create , get, describe , delete .
TYPE : Specifies the resource type . Resource types are case-insensitive and you can
specify the singular, plural, or abbreviated forms. For example, the following commands
produce the same output:
kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
NAME : Specifies the name of the resource. Names are case-sensitive. If the name is
omitted, details for all resources are displayed, for example kubectl get pods .
When performing an operation on multiple resources, you can specify each resource by
type and name or specify one or more files:
To specify resources by type and name:
To group resources if they are all the same type: TYPE1 name1 name2
name<#> .
Example: kubectl get pod example-pod1 example-pod2
To specify multiple resource types individually: TYPE1/name1 TYPE1/name2
TYPE2/name3 TYPE<#>/name<#> .
Example: kubectl get pod/example-pod1 replicationcontroller/example-rc1
To specify resources with one or more files: -f file1 -f file2 -f file<#>
Use YAML rather than JSON since YAML tends to be more user-friendly,
especially for configuration files.
Example: kubectl get -f ./pod.yaml
flags : Specifies optional flags. For example, you can use the -s or --server flags to specify
the address and port of the Kubernetes API server.
Caution: Flags that you specify from the command line override default values and any
corresponding environment variables.
If you need help, run kubectl help from the terminal window.
In-cluster authentication and namespace overrides
By default kubectl will first determine if it is running within a pod, and thus in a cluster. It
starts by checking for the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT
environment variables and the existence of a service account token file at /var/run/secrets/
kubernetes.io/serviceaccount/token . If all three are found, in-cluster authentication is assumed.
To maintain backwards compatibility, if the POD_NAMESPACE environment variable is set
during in-cluster authentication it will override the default namespace from the service account
token. Any manifests or tools relying on namespace defaulting will be affected by this.
POD_NAMESPACE environment variable
If the POD_NAMESPACE environment variable is set, CLI operations on namespaced resources
will default to the variable value. For example, if the variable is set to seattle , kubectl get pods
would return pods in the seattle namespace. This is because pods are a namespaced resource,
and no namespace was provided in the command. Review the output of kubectl api-resources to
determine if a resource is namespaced.
Explicit use of --namespace <value> overrides this behavior.
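A hedged illustration of this behaviour when kubectl is using in-cluster authentication (namespace names are examples):

POD_NAMESPACE=seattle kubectl get pods                      # defaults to the seattle namespace
POD_NAMESPACE=seattle kubectl get pods --namespace=default  # the explicit flag wins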
How kubectl handles ServiceAccount tokens
If:
there is a Kubernetes service account token file mounted at /var/run/secrets/kubernetes.io/
serviceaccount/token , and
the KUBERNETES_SERVICE_HOST environment variable is set, and
the KUBERNETES_SERVICE_PORT environment variable is set, and
you don't explicitly specify a namespace on the kubectl command line
then kubectl assumes it is running in your cluster. The kubectl tool looks up the namespace of
that ServiceAccount (this is the same as the namespace of the Pod) and acts against that
namespace. This is different from what happens outside of a cluster; when kubectl runs outside
a cluster and you don't specify a namespace, the kubectl command acts against the namespace
set for the current context in your client configuration. To change the default namespace for
your kubectl you can use the following command:
kubectl config set-context --current --namespace =<namespace-name>
Operations
The following table includes short descriptions and the general syntax for all of the kubectl
operations:
Operation Syntax Description
alpha kubectl alpha SUBCOMMAND [flags]List the available commands
that correspond to alpha
features, which are not enabled in Kubernetes clusters by
default.
annotatekubectl annotate (-f FILENAME | TYPE NAME |
TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
[--overwrite] [--all] [--resource-version=version]
[flags]Add or update the annotations
of one or more resources.
api-
resourceskubectl api-resources [flags]List the API resources that are
available.
api-
versionskubectl api-versions [flags]List the API versions that are
available.
apply kubectl apply -f FILENAME [flags]Apply a configuration change to
a resource from a file or stdin.
attachkubectl attach POD -c CONTAINER [-i] [-t]
[flags]Attach to a running container
either to view the output stream
or interact with the container
(stdin).
auth kubectl auth [flags] [options] Inspect authorization.
autoscalekubectl autoscale (-f FILENAME | TYPE NAME |
TYPE/NAME) [--min=MINPODS] --
max=MAXPODS [--cpu-percent=CPU] [flags]Automatically scale the set of
pods that are managed by a
replication controller.
certificate kubectl certificate SUBCOMMAND [options] Modify certificate resources.
cluster-
infokubectl cluster-info [flags]Display endpoint information
about the master and services in
the cluster.
completion kubectl completion SHELL [options]Output shell completion code
for the specified shell (bash or
zsh).
config kubectl config SUBCOMMAND [flags]Modifies kubeconfig files. See
the individual subcommands for
details.
convert kubectl convert -f FILENAME [options]Convert config files between
different API versions. Both
YAML and JSON formats are
accepted. Note - requires
kubectl-convert plugin to be
installed.
cordon kubectl cordon NODE [options] Mark node as unschedulable.
cpkubectl cp <file-spec-src> <file-spec-dest>
[options]Copy files and directories to and
from containers.
create kubectl create -f FILENAME [flags]Create one or more resources
from a file or stdin.
deletekubectl delete (-f FILENAME | TYPE [NAME | /
NAME | -l label | --all]) [flags]Delete resources either from a
file, stdin, or specifying label
selectors, names, resource
selectors, or resources.
describekubectl describe (-f FILENAME | TYPE
[NAME_PREFIX | /NAME | -l label]) [flags]Display the detailed state of one
or more resources.
diff kubectl diff -f FILENAME [flags]Diff file or stdin against live
configuration.
drain kubectl drain NODE [options]Drain node in preparation for
maintenance.
editkubectl edit (-f FILENAME | TYPE NAME | TYPE/
NAME) [flags]Edit and update the definition of
one or more resources on the
server by using the default
editor.
events kubectl events List events
execkubectl exec POD [-c CONTAINER] [-i] [-t]
[flags] [-- COMMAND [args...]]Execute a command against a
container in a pod.
explain kubectl explain TYPE [--recursive=false] [flags]Get documentation of various
resources. For instance pods,
nodes, services, etc.
exposekubectl expose (-f FILENAME | TYPE NAME |
TYPE/NAME) [--port=port] [--protocol=TCP|
UDP] [--target-port=number-or-name] [--
name=name] [--external-ip=external-ip-of-
service] [--type=type] [flags]Expose a replication controller,
service, or pod as a new
Kubernetes service.
getkubectl get (-f FILENAME | TYPE [NAME | /
NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o
| --output]=OUTPUT_FORMAT] [flags]List one or more resources.
kustomize kubectl kustomize <dir> [flags] [options]List a set of API resources
generated from instructions in a
kustomization.yaml file. The
argument must be the path to
the directory containing the file,
or a git repository URL with a
path suffix specifying same with
respect to the repository root.
labelkubectl label (-f FILENAME | TYPE NAME |
TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
[--overwrite] [--all] [--resource-version=version]
[flags]Add or update the labels of one
or more resources.
logskubectl logs POD [-c CONTAINER] [--follow]
[flags]Print the logs for a container in
a pod.
options kubectl optionsList of global command-line
options, which apply to all
commands.
patchkubectl patch (-f FILENAME | TYPE NAME |
TYPE/NAME) --patch PATCH [flags]Update one or more fields of a
resource by using the strategic
merge patch process.
plugin kubectl plugin [flags] [options]Provides utilities for interacting
with plugins.
port-
forwardkubectl port-forward POD
[LOCAL_PORT:]REMOTE_PORT [...
[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]Forward one or more local ports
to a pod.
proxykubectl proxy [--port=PORT] [--www=static-dir]
[--www-prefix=prefix] [--api-prefix=prefix]
[flags]Run a proxy to the Kubernetes
API server.
replace kubectl replace -f FILENAME Replace a resource from a file or
stdin.
rollout kubectl rollout SUBCOMMAND [options]Manage the rollout of a
resource. Valid resource types
include: deployments,
daemonsets and statefulsets.
runkubectl run NAME --image=image [--
env="key=value"] [--port=port] [--dry-run=server|
client|none] [--overrides=inline-json] [flags]Run a specified image on the
cluster.
scalekubectl scale (-f FILENAME | TYPE NAME |
TYPE/NAME) --replicas=COUNT [--resource-
version=version] [--current-replicas=count]
[flags]Update the size of the specified
replication controller.
set kubectl set SUBCOMMAND [options] Configure application resources.
taintkubectl taint NODE NAME
KEY_1=VAL_1:TAINT_EFFECT_1 ...
KEY_N=VAL_N:TAINT_EFFECT_N [options]Update the taints on one or
more nodes.
top kubectl top (POD | NODE) [flags] [options]Display Resource (CPU/
Memory/Storage) usage of pod
or node.
uncordon kubectl uncordon NODE [options] Mark node as schedulable.
version kubectl version [--client] [flags]Display the Kubernetes version
running on the client and server.
waitkubectl wait ([-f FILENAME] | resource.group/
resource.name | resource.group [(-l label | --all)])
[--for=delete|--for condition=available] [options]Experimental: Wait for a specific
condition on one or many
resources.
To learn more about command operations, see the kubectl reference documentation.
Resource types
The following table includes a list of all the supported resource types and their abbreviated
aliases.
(This output can be retrieved from kubectl api-resources , and was accurate as of Kubernetes
1.25.0)
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1beta2 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta2 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
Output options
Use the following sections for information about how you can format or sort the output of
certain commands. For details about which commands support the various output options, see
the kubectl reference documentation.
Formatting output
The default output format for all kubectl commands is the human readable plain-text format. To
output details to your terminal window in a specific format, you can add either the -o or --
output flags to a supported kubectl command.
Syntax
kubectl [command ] [TYPE ] [NAME ] -o <output_format>
Depending on the kubectl operation, the following output formats are supported:
Output format Description
-o custom-columns=<spec> Print a table using a comma separated list of custom columns .
-o custom-columns-file=<filename> Print a table using the custom columns template in the <filename> file.
-o json Output a JSON formatted API object.
-o jsonpath=<template> Print the fields defined in a jsonpath expression.
-o jsonpath-file=<filename>Print the fields defined by the jsonpath expression in the
<filename> file.
-o name Print only the resource name and nothing else.
-o wideOutput in the plain-text format with any additional information.
For pods, the node name is included.
-o yaml Output a YAML formatted API object.
Example
In this example, the following command outputs the details for a single pod as a YAML
formatted object:
kubectl get pod web-pod-13je7 -o yaml
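The other formats in the table above work the same way; as a quick sketch, the following commands reuse the same example pod name to print only the resource name, or a single field selected with an illustrative jsonpath expression (.status.phase is a standard pod status field):
# Print only the resource name
kubectl get pod web-pod-13je7 -o name
# Print a single field using a jsonpath expression
kubectl get pod web-pod-13je7 -o jsonpath='{.status.phase}'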
Remember: See the kubectl reference documentation for details about which output format is
supported by each command.
Custom columns
To define custom columns and output only the details that you want into a table, you can use
the custom-columns option. You can choose to define the custom columns inline or use a
template file: -o custom-columns=<spec> or -o custom-columns-file=<filename> .
Examples
Inline:
kubectl get pods <pod-name> -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
Template file:
kubectl get pods <pod-name> -o custom-columns-file=template.txt
where the template.txt file contains:
NAME RSRC
metadata.name metadata.resourceVersion
The result of running either command is similar to:
NAME RSRC
submit-queue 610995
Server-side columns
kubectl supports receiving specific column information from the server about objects. This
means that for any given resource, the server will return columns and rows relevant to that
resource, for the client to print. This allows for consistent human-readable output across clients
used against the same cluster, by | 6,072 |
having the server encapsulate the details of printing.
This feature is enabled by default. To disable it, add the --server-print=false flag to the kubectl
get command.
Examples
To print information about the status of a pod, use a command like the following:
kubectl get pods <pod-name> --server-print=false
The output is similar to:
NAME AGE
pod-name 1m
Sorting list objects
To output objects to a sorted list in your terminal window, you can add the --sort-by flag to a
supported kubectl command. Sort your objects by specifying any numeric or string field with
the --sort-by flag. To specify a field, use a jsonpath expression.
Syntax
kubectl [command] [TYPE] [NAME] --sort-by=<jsonpath_exp>
Example
To print a list of pods sorted by name, you run:
kubectl get pods --sort-by=.metadata.name
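The --sort-by flag accepts numeric fields as well; for example, the following command (also used in the quick reference later in this page) orders pods by the restart count of their first container:
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'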
Examples: Common operations
Use the following set of examples to help you familiarize yourself with running the commonly
used kubectl operations:
kubectl apply - Apply or Update a resource from a file or stdin.
# Create a service using the definition in example-service.yaml.
kubectl apply -f example-service.yaml
# Create a replication controller using the definition in example-controller.yaml.
kubectl apply -f example-controller.yaml
# Create the objects that are defined in any .yaml, .yml, or .json file within the <directory> directory.
kubectl apply -f <directory>
kubectl get - List one or more resources.
# List all pods in plain-text output format.
kubectl get pods
# List all pods in plain-text output format and include additional information (such as node name).
kubectl get pods -o wide
# List the replication controller with the specified name in plain-text output format.
# Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
kubectl get replicationcontroller <rc-name>
# List all replication controllers and services together in plain-text output format.
kubectl get rc,services
# List all daemon sets in plain-text output format.
kubectl get ds
# List all pods running on node server01
kubectl get pods --field-selector=spec.nodeName=server01
kubectl describe - Display detailed state of one or more resources, including the uninitialized
ones by default.
# Display the details of the node with name <node-name>.
kubectl describe nodes <node-name>
# Display the details of the pod with name <pod-name>.
kubectl describe pods/<pod-name>
# Display the details of all the pods that are managed by the replication controller named <rc-name>.
# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
kubectl describe pods <rc-name>
# Describe all pods
kubectl describe pods
Note: The kubectl get command is usually used for retrieving one or more resources of the
same resource type. It features a rich set of flags that allows you to customize the output format
using the -o or --output flag, for example. You can specify the -w or --watch flag to start
watching updates to a particular object. The kubectl describe command is more focused on
describing the many related aspects of a specified resource. It may invoke several API calls to
the API server to build a view for the user. | 6,076 |
For example, the kubectl describe node command
retrieves not only the information about the node, but also a summary of the pods running on
it, the events generated for the node, and so on.
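As a minimal illustration of the difference (the resource names are placeholders), the first command below watches a single resource for changes instead of polling it, while the second builds an aggregated, multi-call view of a node:
# Watch a deployment for updates
kubectl get deployment <deployment-name> --watch
# Retrieve node details plus a summary of its pods and recent events
kubectl describe node <node-name>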
kubectl delete - Delete resources either from a file, stdin, or specifying label selectors, names,
resource selectors, or resources.
# Delete a pod using the type and name specified in the pod.yaml file.
kubectl delete -f pod.yaml
# Delete all the pods and services that have the label '<label-key>=<label-value>'.
kubectl delete pods,services -l <label-key>=<label-value>
# Delete all pods, including uninitialized ones.
kubectl delete pods --all
kubectl exec - Execute a command against a container in a pod.
# Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
kubectl exec <pod-name> -- date
# Get output from running 'date' in container <container-name> of pod <pod-name>.
kubectl exec <pod-name> -c <container-name> -- date
# Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
kubectl exec -ti <pod-name> -- /bin/bash
kubectl logs - Print the logs for a container in a pod.
# Return a snapshot of the logs from pod <pod-name>.
kubectl logs <pod-name>
# Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
kubectl logs -f <pod-name>
kubectl diff - View a diff of the proposed updates to a cluster.
# Diff resources included in "pod.json".
kubectl diff -f pod.json
# Diff file read from stdin.
cat service.yaml | kubectl diff -f -
Examples: Creating and using plugins
Use the following set of examples to help you familiarize yourself with writing and using
kubectl plugins:
# create a simple plugin in any language and name the resulting executable file
# so that it begins with the prefix "kubectl-"
cat ./kubectl-hello
#!/bin/sh
# this plugin prints the words "hello world"
echo "hello world"
With a plugin written, let's make it executable:
chmod a+x ./kubectl-hello
# and move it to a location in our PATH
sudo mv ./kubectl-hello /usr/local/bin
sudo chown root:root /usr/local/bin
# You have now created and "installed" a kubectl plugin.
# You can begin using this plugin by invoking it from kubectl as if it were a regular command
kubectl hello
hello world
# You can "uninstall" a plugin, by removing it from the folder in your
# $PATH where you placed it
sudo rm /usr/local/bin/kubectl-hello
To view all of the plugins that are available to kubectl, use the kubectl plugin list
subcommand:
kubectl plugin list
The output is similar to:
The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-hello
/usr/local/bin/kubectl-foo
/usr/local/bin/kubectl-bar
kubectl plugin list also warns you about plugins that are not executable, or that are shadowed
by other plugins; for example:
sudo chmod -x /usr/local/bin/kubectl-foo # remove execute permission
kubectl plugin list
The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-hello
/usr/local/bin/kubectl-foo
- warning: /usr/local/bin/kubectl-foo identified as a plugin, but it is not executable
/usr/local/bin/kubectl-bar
error: one plugin warning was found
You can think of plugins as a means to build more complex functionality on top of the existing
kubectl commands:
cat ./kubectl-whoami
The next few examples assume that you already made kubectl-whoami have the following
contents:
#!/bin/bash
# this plugin makes use of the `kubectl config` command in order to output
# information about the current user, based on the currently selected context
kubectl config view --template='{{ range .contexts }}{{ if eq .name "'$(kubectl config current-context)'" }}Current user: {{ printf "%s\n" .context.user }}{{ end }}{{ end }}'
Running the above command gives you an output containing the user for the current context in
your KUBECONFIG file:
# make the file executable
sudo chmod +x ./kubectl-whoami
# and move it into your PATH
sudo mv ./kubectl-whoami /usr/local/bin
kubectl whoami
Current user: plugins-user
What's next
Read the kubectl reference documentation:
the kubectl command reference
the command line arguments reference
Learn about kubectl usage conventions
Read about JSONPath support in kubectl
Read about how to extend kubectl with plugins
To find out more about plugins, take a look at the example CLI plugin .
kubectl Quick Reference
This page contains a list of commonly used kubectl commands and flags.
Note: These instructions are for Kubernetes v1.29. To check the version, use the kubectl version
command.
Kubectl autocomplete
BASH
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc
# add autocomplete permanently to your bash shell.
You can also use a shorthand alias for kubectl that also works with completion:
alias k=kubectl
complete -o default -F __start_kubectl k
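Once the alias and its completion are registered, the short form can be used anywhere you would type kubectl; the namespace below is only an illustration:
k get pods -n kube-system # same as: kubectl get pods -n kube-system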
ZSH
source <(kubectl completion zsh) # set up autocomplete in zsh into the current shell
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell
FISH
Requires kubectl version 1.23 or above.
echo 'kubectl completion fish | source' >> ~/.config/fish/config.fish # add kubectl autocompletion permanently to your fish shell
A note on --all-namespaces
Appending --all-namespaces happens frequently enough that you should be aware of the
shorthand for --all-namespaces :
kubectl -A
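For example, to list pods across every namespace with the short flag:
kubectl get pods -A # same as: kubectl get pods --all-namespaces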
Kubectl context and configuration
Set which Kubernetes cluster kubectl communicates with and modifies configuration
information. See Authenticating Across Clusters with kubeconfig documentation for detailed
config file information.
| 6,084 |
kubectl config view # Show Merged kubeconfig settings.
# use multiple kubeconfig files at the same time and view merged config
KUBECONFIG=~/.kube/config:~/.kube/kubconfig2
kubectl config view
# get the password for the e2e user
kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
kubectl config view -o jsonpath='{.users[].name}'    # display the first user
kubectl config view -o jsonpath='{.users[*].name}'   # get a list of users
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
kubectl config set-cluster my-cluster-name # set a cluster entry in the kubeconfig
# configure the URL to a proxy server to use for requests made by this client in the kubeconfig
kubectl config set-cluster my-cluster-name --proxy-url=my-proxy-url
# add a new user to your kubeconfig that supports basic auth
kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
# permanently save the namespace for all subsequent kubectl commands in that context.
kubectl config set-context --current --namespace=ggckad-s2
# set a context utilizing a specific username and namespace.
kubectl config set-context gce --user=cluster-admin --namespace=foo \
&& kubectl config use-context gce
kubectl config unset users.foo # delete user foo
# short alias to set/show context/namespace (only works for bash and bash-compatible shells,
# current context to be set before using kn to set namespace)
alias kx='f() { [ "$1" ] && kubectl config use-context $1 || kubectl config current-context ; } ; f'
alias kn='f() { [ "$1" ] && kubectl config set-context --current --namespace $1 || kubectl config view --minify | grep namespace | cut -d" " -f6 ; } ; f'
Kubectl apply
apply manages applications through files | 6,086 |
defining Kubernetes resources. It creates and updates
resources in a cluster through running kubectl apply . This is the recommended way of
managing Kubernetes applications on production. See Kubectl Book .
Creating objects
Kubernetes manifests can be defined in YAML or JSON. The file extensions .yaml, .yml, and .json can be used.
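The examples below use YAML files; a JSON manifest is applied in exactly the same way (the filename here is illustrative):
kubectl apply -f ./my-manifest.json # create resource(s) from a JSON manifest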
kubectl apply -f ./my-manifest.yaml # create resource(s)
kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files
kubectl apply -f ./dir # create resource(s) in all manifest files in dir
kubectl apply -f https://example.com/manifest.yaml # create resource(s) from url (Note: this is an example domain and does not contain a valid manifest)
kubectl create deployment nginx --image=nginx      # start a single instance of nginx
# create a Job which prints "Hello World"
kubectl create job hello --image=busybox:1.28 -- echo "Hello World"
# create a CronJob that prints "Hello World" every minute
kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- echo "Hello World"
kubectl explain pods # get the documentation for pod manifests
# Create multiple YAML objects from stdin
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep-less
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    args:
    - sleep
    - "1000"
EOF
# Create a secret with several keys
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: $(echo -n "s33msi4" | base64 -w0)
  username: $(echo -n "jane" | base64 -w0)
EOF
Viewing and finding resources
# Get commands with basic output
kubectl get services # List all services in the namespace
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide # List all pods in the current namespace, with more
details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods # List all pods in the namespace
kubectl get pod my-pod -o yaml # Get a pod's YAML
# Describe commands with verbose output
kubectl describe nodes my-node
kubectl describe pods my-pod
# List Services Sorted by Name
kubectl get services --sort-by=.metadata.name
# List pods Sorted by Restart Count
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
# List PersistentVolumes sorted by capacity
kubectl get pv --sort-by=.spec.capacity.storage
# Get the version label of all pods with label app=cassandra
kubectl get pods --selector=app=cassandra -o \
  jsonpath='{.items[*].metadata.labels.version}'
# Retrieve the value of a key with dots, e.g. 'ca.crt'
kubectl get configmap myconfig \
  -o jsonpath='{.data.ca\.crt}'
# Retrieve a base64 encoded value with dashes instead of underscores.
kubectl get secret my-secret --template='{{index .data "key-name-with-dashes"}}'
# Get all worker nodes (use a selector to exclude results that have a label
# named 'node-role.kubernetes.io/control-plane')
kubectl get node --selector='!node-role.kubernetes.io/control-plane'
# Get all running pods in the namespace
kubectl get pods --field-selector=status.phase=Running
# Get ExternalIPs of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at
https://jqlang.github.io/ | 6,091 |
jq/
sel=${$(kubectl get rc my-rc --output =json | jq -j '.spec.selector | to_entries | .[] | "\(.key)= | 6,092 |
(.value),"' )%?}
echo $(kubectl get pods --selector =$sel --output =jsonpath ={.items..metadata.name })
# Show labels for all pods (or any other Kubernetes object that supports labelling)
kubectl get pods --show-labels
# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
# Check which nodes are ready with custom-columns
kubectl get node -o custom-columns='NODE_NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].status'
# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
# List all Secrets currently in use by a pod
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' |
grep -v null | sort | uniq
# List all containerIDs of initContainer of all pods | 6,093 |
# Helpful when cleaning up stopped containers, while avoiding removal of initContainers.
kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
# List Events sorted by timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
# List all warning events
kubectl events --types=Warning
# Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml
# Produce a period-delimited tree of all keys returned for nodes
# Helpful when locating a key within a complex nested JSON structure
kubectl get nodes -o json | jq -c 'paths|join(".")'
# Produce a period-delimited tree of all keys returned for pods, etc
kubectl get pods -o json | jq -c 'paths|join(".")'
# Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.
# Helpful when running any supported command across all pods, not just `env`
for pod in $(kubectl get po --output=jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod -- env; done
# Get a deployment's status subresource
kubectl get deployment nginx-deployment --subresource=status
Updating resources
kubectl set image deployment/frontend www=image:v2          # Rolling update "www" containers of "frontend" deployment, updating the image
kubectl rollout history deployment/frontend                 # Check the history of deployments including the revision
kubectl rollout undo deployment/frontend                    # Rollback to the previous deployment
kubectl rollout undo deployment/frontend --to-revision=2    # Rollback to a specific revision
kubectl rollout status -w deployment/frontend               # Watch rolling update status of "frontend" deployment until completion
kubectl rollout restart deployment/frontend                 # Rolling restart of the "frontend" deployment
cat pod.json | kubectl replace -f -                         # Replace a pod based on the JSON passed into stdin
# Force replace, delete and then re-create the resource. Will cause a service outage.
kubectl replace --force -f ./pod.json
# Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000
kubectl expose rc nginx --port=80 --target-port=8000
# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
kubectl label pods my-pod new-label=awesome                  # Add a Label
kubectl label pods my-pod new-label-                         # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite    # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq   # Add an annotation
kubectl annotate pods my-pod icon-url-                       # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10            # Auto scale a deployment "foo"
Patching resources
# Partially update a node
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
# Update a container's image using a json patch with positional arrays
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
# Disable a deployment livenessProbe using a json patch with positional arrays
kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
# Add a new element to a positional array
kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
# Update a deployment's replica count by patching its scale subresource
kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
Editing resources
Edit any API resource in your preferred editor.
kubectl edit svc/docker-registry # Edit the service named docker-registry
KUBE_EDITOR="nano" kubectl edit svc/docker-registry   # Use an alternative editor
Scaling resources
kubectl scale --replicas=3 rs/foo                                 # Scale a replicaset named 'foo' to 3
kubectl scale --replicas=3 -f foo.yaml                            # Scale a resource specified in "foo.yaml" to 3
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # If the deployment named mysql's current size is 2, scale mysql to 3
kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Scale