prompt | response
---|---|
<p>I'm trying to install <strong>Datadog</strong> agent for a <strong>Kubernetes</strong> cluster using <strong>Helm</strong>.</p>
<p>This is the <strong>helm command</strong> I'm using for it:</p>
<pre><code>helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install datadog datadog/datadog \
--namespace monitoring \
--create-namespace \
--atomic \
--set datadog.apiKey=<MY-DATADOG-API-KEY> \
--set targetSystem=linux \
--values values.yaml
</code></pre>
<p><strong>Values file</strong>:</p>
<pre><code>datadog:
  kubelet:
    host:
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    hostCAPath: /etc/kubernetes/certs/kubeletserver.crt
    tlsVerify: false # Required as of Agent 7.35. See Notes.
</code></pre>
<p>However, when I run it, the <strong>liveness probe fails with error 500</strong>, which shows the error below:</p>
<blockquote>
<p>CLUSTER | ERROR | (pkg/forwarder/transaction/transaction.go:344 in internalProcess) | API Key invalid, dropping transaction for <a href="https://orchestrator.datadoghq.com/api/v1/orchestrator" rel="nofollow noreferrer">https://orchestrator.datadoghq.com/api/v1/orchestrator</a>.</p>
</blockquote>
| <p><strong>Here's how I solved it</strong>:</p>
<p>The issue had to do with the <strong>Datadog Destination Site</strong>. The Destination site for my metrics, traces, and logs is supposed to be <code>datadoghq.eu</code>. This is set using the variable <code>DD_SITE</code>, and it defaults to <code>datadoghq.com</code> if it is not set.</p>
<p>To check what your <strong>Datadog Destination Site</strong> is, just look at the URL of your Datadog dashboard:</p>
<ul>
<li>For US it will be - <a href="https://app.datadoghq.com/" rel="noreferrer">https://app.datadoghq.com/</a></li>
<li>For EU it will be - <a href="https://app.datadoghq.eu/" rel="noreferrer">https://app.datadoghq.eu/</a></li>
</ul>
<p>To set this in your Helm chart, simply do either of the following:</p>
<pre><code>helm repo add datadog https://helm.datadoghq.com
helm repo update
helm upgrade --install datadog datadog/datadog \
--namespace monitoring \
--create-namespace \
--atomic \
--set datadog.apiKey=<MY-DATADOG-API-KEY> \
--set targetSystem=linux \
--set datadog.site=datadoghq.eu \
--values values.yaml
</code></pre>
<p>OR set it in your values file:</p>
<pre><code>datadog:
  site: datadoghq.eu
  kubelet:
    host:
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    hostCAPath: /etc/kubernetes/certs/kubeletserver.crt
    tlsVerify: false # Required as of Agent 7.35. See Notes.
</code></pre>
<p><strong>References</strong>:</p>
<ol>
<li><p><a href="https://github.com/DataDog/datadog-agent/issues/1697" rel="noreferrer">Datadog Agent Forwarder fails liveness probe when new spot instance joins cluster, causing multiple restarts #1697</a></p>
</li>
<li><p><a href="https://github.com/DataDog/datadog-agent/issues/9180" rel="noreferrer">DD_SITE Set to us3.datadoghq.com, but process-agent and security-agent Still Try to Connect to non us3 endpoints #9180</a></p>
</li>
</ol>
|
<p>I have installed Rancher Desktop. It is working perfectly except for the inability for nerdctl and k3s to download docker images from hub.docker.com from behind my corporate firewall.</p>
<p>Question 1: After downloading Rancher Desktop, how do I set my corporate proxy credentials such that Kubernetes (with Rancher Desktop) pulls images from hub.docker.com.</p>
<p>Question 2: After downloading Rancher Desktop, how do I set my corporate proxy credentials such that the following command works from behind my corporate firewall.</p>
<pre><code>% nerdctl run --name jerrod-mysql-test -e MYSQL_ROOT_PASSWORD=password -p 7700:3306 mysql:8.0
INFO[0000] trying next host error="failed to do request: Head \"https://registry-1.docker.io/v2/library/mysql/manifests/8.0\": dial tcp: lookup registry-1.docker.io on xxx.xxx.x.x:53: no such host" host=registry-1.docker.io
FATA[0000] failed to resolve reference "docker.io/library/mysql:8.0": failed to do request: Head "https://registry-1.docker.io/v2/library/mysql/manifests/8.0": dial tcp: lookup registry-1.docker.io on xxx.xxx.x.x:53: no such host
</code></pre>
| <p>I am using a Mac. On the Mac, add the proxy details to the <code>/etc/conf.d/docker</code> file inside the Rancher Desktop VM (entered via <code>rdctl shell</code>, as shown below):</p>
<pre><code>#rdctl shell
#sudo su -
#vi /etc/conf.d/docker
NO_PROXY="localhost,127.0.0.1"
HTTPS_PROXY="http://HOST:PORT"
HTTP_PROXY="http://HOST:PORT"
export HTTP_PROXY
export HTTPS_PROXY
export NO_PROXY
</code></pre>
|
<p>I have the following code:</p>
<pre><code> kubectl get job <JOB-NAME> -o json | jq 'del(.spec.selector)' | jq 'del(.spec.template.metadata.labels)' | kubectl replace --force -f -
</code></pre>
<p>That replaces an existing Kubernetes job with itself. Is it possible to do this with the Python Kubernetes API?</p>
<p>Here's what I am trying to do now:</p>
<pre><code>import kubernetes as kubernetes
from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
configuration = kubernetes.client.Configuration()
name = "job-1"
namespace = "default"
v1 = client.CoreV1Api()
# Grab the exact name of the pod based on the job
metabase_create_readonly_user_name = ''
pods = v1.list_namespaced_pod(namespace, label_selector='job-name={}'.format(name), watch=False)
for pod in pods.items:
    job_name = pod.metadata.name
# THIS DOES NOT WORK!
v1.replace_namespaced_pod(job_name, namespace)
</code></pre>
<p>However, <code>replace_namespaced_pod</code> needs a <code>body</code> passed in as a parameter, so I'm not sure what to give it there. Is there a way to grab the exact body of the existing job and pass it in here?</p>
| <p>The kubectl replace command has a <strong>--force</strong> option which actually does not use the replace (i.e., PUT) API endpoint. It <strong>forcibly deletes</strong> (DELETE) and then <strong>recreates</strong> (POST) the resource using the provided spec.</p>
<p>According to the <a href="https://github.com/kubernetes-client/python/tree/master/kubernetes" rel="nofollow noreferrer">kubernetes python client docs</a>:</p>
<p>Under the class <code>BatchV1Api</code>(<a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md" rel="nofollow noreferrer">Api</a>), there are 3 methods:</p>
<p><code>read_namespaced_job</code>, <code>delete_namespaced_job</code> and <code>create_namespaced_job</code>.</p>
<p>So the following code snippet will suffice:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
batch_v1 = client.BatchV1Api()
...
v1_job = batch_v1.read_namespaced_job(name, namespace)
# some processing to v1_job
v1_job.metadata.resource_version = None
v1_job.metadata.uid = None
# as in your example
v1_job.spec.selector = None
v1_job.spec.template.metadata.labels = None
...
# forcibly deletes
batch_v1.delete_namespaced_job(name, namespace)
# recreates
batch_v1.create_namespaced_job(namespace, v1_job)
</code></pre>
|
<p>I have a azure function app in powershell, dedicated to kubernetes functions.</p>
<p>There are multiple functions but on each of them, the first thing we do is something like this.</p>
<pre><code> az login --service-principal -u $usr -p $pwd --tenant $tenant
$config = $HOME + "/.kube/$tenant";
az account set --subscription $subscription
az aks get-credentials --overwrite-existing --name $cluster --resource-group $rg --admin --file $config
kubectl config set-context --current --namespace=$tenant
</code></pre>
<p>This sets the contexts we will be working with on our next kubectl commands.</p>
<p>We do it, because we have multiple clusters on multiple regions, so everytime we get a call on the function app, we need to connect to the given cluster to work on it.</p>
<p>The problem with this is that when we get 2 requests at exactly the same time for 2 different clusters, the earliest call gets its context overridden and ends up executing kubectl commands on the incorrect cluster.
Is there a way to isolate each call, or another solution to this?</p>
| <p>You should be able to just use kubectl's global <code>--context</code> parameter (and <code>--kubeconfig</code> if needed), example:</p>
<pre class="lang-bash prettyprint-override"><code>az aks get-credentials -n $cluster -g $rg
kubectl get pod -n $tenant --context $cluster
</code></pre>
<p>In case you have two different clusters that are named the same, you can use <code>az aks get-credentials</code> own <code>--context</code> command param, and just pick a discriminator for your concurrent script executions:</p>
<pre class="lang-bash prettyprint-override"><code>$id = "$(new-guid)"
$contextName = "$cluster-$id"
az aks get-credentials -n $cluster -g $rg --context $contextName
kubectl get pod -n $tenant --context $contextName
</code></pre>
<p>Note that adding <code>--admin</code> command param when fetching credentials will also append <code>-admin</code> to the names in the kubeconfig; thus forcing you to add "-admin" to the <code>$contextName</code>:</p>
<pre class="lang-bash prettyprint-override"><code>$contextName = "$cluster-$id-admin"
</code></pre>
<p>There is a limitation to this, in that <code>az aks get-credentials</code> doesn't allow us to override the parameter <code>name</code> for the generated <code>user</code> object (at least not to my knowledge, <a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials" rel="nofollow noreferrer">nor is it mentioned in the docs</a>). Its format is "static" and will always be <code>clusterUser_{resourceGroupName}_{clusterName}</code> or <code>clusterAdmin_{resourceGroupName}_{clusterName}</code> if using <code>--admin</code>.</p>
<p>This is really only a concern if you have 2 different clusters that have the exact same name and resource group name (I'd question why this would ever be the case, but alas).</p>
<p>You'd most likely see 2 different outcomes here:</p>
<ul>
<li>"An object of same name already exist"-ish error, or</li>
<li>The 2nd invocation of <code>az aks get-credentials</code> will override the existing object with the new values <em>if</em> you use <code>--overwrite-existing</code> command param, causing all API calls to the 1st invocation's cluster to fail with either 401 or 403 related errors (is my guess at least)</li>
</ul>
<p>To counter this you can write to a specific kubeconfig file, again unique to the script's current context/execution, and then use kubectl's global <code>--kubeconfig</code> command param:</p>
<pre class="lang-bash prettyprint-override"><code>$id = "$(new-guid)"
$contextName = $kubeconfig = "$cluster-$id"
az aks get-credentials -n $cluster -g $rg --context $contextName --file $kubeconfig
kubectl get pod -n $tenant --context $contextName --kubeconfig $kubeconfig
</code></pre>
<p>Remember to add "-admin" to your <code>$contextName</code>, if you use <code>--admin</code> (and separate the <code>$contextName</code> and <code>$kubeconfig</code> assignments, obviously).</p>
|
<p>I have a question to ask you about Kubernetes. I need to configure horizontal autoscaling on a service with Kubernetes; specifically, in my YAML file I used the kind HorizontalPodAutoscaler. The problem arises with the logic of uploading files to the volume: in a nutshell, when a pod is released, the data inside is overwritten, so with a new pod the volume is emptied. Am I wrong in my configuration?
Specifically, I set the volume in the Dockerfile used to build the corresponding image of the service launched with Kubernetes.</p>
| <p>It's not a good idea to store data in a Docker image; however, what you can still do is:</p>
<ul>
<li>Copy data to docker image</li>
<li>Inside the Kubernetes POD create the PVC volume</li>
<li>Attach POD to PVC and use the volume mount</li>
<li>Use the <strong>init container</strong> to copy data from the docker path to the volume mount file path</li>
</ul>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM python:latest
COPY data.csv ./tmp/
</code></pre>
<p><strong>Kubernetes YAML</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: POD-name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      initContainers:
        - name: copy
          # use the application image here (rather than busybox) so that the
          # /tmp/data.csv baked in by the Dockerfile above is actually present
          image: image:tag
          command: ["/bin/sh", "-c", "cp /tmp/data.csv /data/data.csv"]
          volumeMounts:
            - name: pvc
              mountPath: /data/
      containers:
        - name: test
          image: image:tag
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
          volumeMounts:
            - mountPath: "/data/"
              name: pvc
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: pvc-claim
</code></pre>
|
<p>I need help with setting up TLS termination (using ingress-nginx) for two services in different backends. Basically, the idea is to go with:</p>
<pre><code>https://some.url.here/serviceOne/ -> http://hostOne/
https://some.url.here/serviceTwo/ -> http://hostTwo/someServiceNameHere/
</code></pre>
<p>I tried doing it with <code>rewrite-target: "/"</code>, <code>rewrite-target: "/$1"</code>, and <code>use-regexp: "true"</code>, but connecting to <code>http://hostTwo/someServiceNameHere/</code> constantly fails.</p>
<p>My ingress definition looks like this:</p>
<pre><code>kind: Ingress
metadata:
  name: rp-some-url-here
  namespace: reverse-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
    # ??? nginx.ingress.kubernetes.io/rewrite-target: "/$1"
    # ??? nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "false"
    nginx.ingress.kubernetes.io/upstream-vhost: "some.url.here"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    # proxy-buffering is now disabled by default,
    # just having it here for reference
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
spec:
  tls:
    - hosts:
        - some.url.here
      secretName: my-tls-secret
  rules:
    - host: some-url-here
      http:
        paths:
          - backend:
              service:
                name: serviceOne
                port:
                  number: 80
            # path: /serviceOne # using this with rewrite-target
            path: /serviceOne/.*
            pathType: Prefix
          - backend:
              service:
                name: serviceTwo
                port:
                  number: 8180
            # path: /serviceTwo # using this with rewrite-target
            path: /serviceTwo/.*
            pathType: Prefix
</code></pre>
<p>The service definitions are as simple as:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: service-one
  namespace: reverse-proxy
spec:
  type: ExternalName
  externalName: some.hostname.here
---
apiVersion: v1
kind: Service
metadata:
  name: service-two
  namespace: reverse-proxy
spec:
  type: ExternalName
  externalName: another.hostname.here
</code></pre>
| <p>Here is a quick example of how it should be done; also make sure that your second service responds to the port that you specified (8180):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: rewrite.bar.com
      http:
        paths:
          - path: /something(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: http-svc1
                port:
                  number: 80
          - path: /something2(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: http-svc2
                port:
                  number: 8180
</code></pre>
|
<p>I have a very big issue. We have a large number of deployments for a component which up until now were using the RollingUpdate deployment strategy type.</p>
<pre><code>spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
</code></pre>
<p>Now a new design decision needs us to move this to a <code>Recreate</code> strategy, which would be fairly easy if we could just write a shell script to call <code>kubectl patch</code> and use the <code>$retainKeys</code> method <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#use-strategic-merge-patch-to-update-a-deployment-using-the-retainkeys-strategy" rel="nofollow noreferrer">described in the docs</a>.</p>
<p>However, we use a custom tool written in Golang to manage our gazillion deployments and we'd like to integrate this into our regular update cycle and avoid this hack. Therefore, I've changed the code to look like this:</p>
<pre><code>deploy := &appsv1.Deployment{
    Spec: appsv1.DeploymentSpec{
        Strategy: appsv1.DeploymentStrategy{
            Type: appsv1.RecreateDeploymentStrategyType,
        },
</code></pre>
<p>Obviously, when we run the update procedure, the tool fails with the following error:</p>
<pre><code>Error updating *v1.Deployment Deployment.apps "cluster-agent" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
</code></pre>
<p>According to the above link, the <code>retainKeys</code> trick has to be used but I have not managed to find out how that can be done from within Golang.</p>
<p>I can see in the k8s api sources that the patch strategy supports the retainKeys method:</p>
<pre><code>Strategy DeploymentStrategy `json:"strategy,omitempty" patchStrategy:"retainKeys"
</code></pre>
<p>But would someone be so kind as to let me know how can I specify the <code>retainKeys</code> list within my Golang structs/code? Thank you very much!</p>
| <p>The latest update from this <a href="https://github.com/gimlet-io/onechart/issues/35" rel="nofollow noreferrer">issue</a> works:</p>
<pre><code>strategy:
  rollingUpdate: null
  type: Recreate
</code></pre>
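<p>If you need the Go equivalent with client-go (as in the question's tooling), one option - a sketch of my own rather than code from the linked issue - is to send a JSON merge patch instead of wrestling with <code>retainKeys</code>: in a merge patch, <code>null</code> removes the <code>rollingUpdate</code> block while the strategy type is switched. The clientset, namespace, and the <code>cluster-agent</code> name are placeholders:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// switchToRecreate sends a JSON merge patch: "rollingUpdate": null deletes the
// field on the server, so the Recreate type no longer conflicts with it.
func switchToRecreate(ctx context.Context, cs *kubernetes.Clientset, namespace string) error {
    patch := []byte(`{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}`)
    _, err := cs.AppsV1().Deployments(namespace).Patch(
        ctx, "cluster-agent", types.MergePatchType, patch, metav1.PatchOptions{})
    return err
}
</code></pre>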
|
<p>I have a new install of K8s master and node both on ubuntu-18. The master is using weave for CNI and all pods are running:</p>
<pre><code>$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-29qg5 1/1 Running 0 31m
kube-system coredns-6d4b75cb6d-kxxc8 1/1 Running 0 31m
kube-system etcd-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-apiserver-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-controller-manager-ubuntu-18-extssd 1/1 Running 2 31m
kube-system kube-proxy-nvqjl 1/1 Running 0 31m
kube-system kube-scheduler-ubuntu-18-extssd 1/1 Running 2 31m
kube-system weave-net-th4kv 2/2 Running 0 31m
</code></pre>
<p>When I execute the <code>kubeadm join</code> command on the node I get the following error:</p>
<pre><code>sudo kubeadm join 192.168.0.12:6443 --token ikk2kd.177ij0f6n211sonl --discovery-token-ca-cert-hash sha256:8717baa3c634321438065f40395751430b4fb55f43668fac69489136335721dc
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0724 16:24:41.009234 8391 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-07-24T16:24:41-06:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>The only problem showing up in <code>journalctl -r -u kubelet</code> is:</p>
<pre><code>kubelet.service: Main process exited, code=exited, status=1/FAILURE
...
Error: failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml
</code></pre>
<p>That is from several minutes before the <code>join</code> failed when kubelet was trying to start. I would expect that config.yaml file to be missing until the node joined a cluster.</p>
<p>The preflight error message says</p>
<pre><code>[ERROR CRI]: container runtime is not running: output: E0724 16:32:41.120653 10509 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
</code></pre>
<p>What is this trying to tell me?</p>
<p>====Edit=====
I am running CrashPlan on the worker node that is failing, but I have <code>fs.inotify.max_user_watches=1048576</code> in /etc/sysctl.conf.</p>
<p>This node worked before both with on-prem master and with GKE with kubernetes 1.20.</p>
| <p>The error <code>[ERROR CRI]: container runtime is not running ... unknown service runtime.v1alpha2.RuntimeService</code> usually means containerd's CRI plugin is disabled by the default <code>config.toml</code> shipped with the containerd package.</p>
<p>Remove that file with <code>rm /etc/containerd/config.toml</code>, restart containerd with <code>systemctl restart containerd</code>, and then you can run the <code>kubeadm</code> command again.</p>
|
<p>I am trying to create PostgreSQL pods on k8s.
After the pods deploy, when I use <code>kubectl get pods</code> I see this output:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pgadmin 1/1 Running 5 9m
postgres 1/1 Running 5 8m
</code></pre>
<p>However, when I run <code>kubectl get pods -o wide</code>, I see this output:</p>
<pre><code>pgadmin 0/1 CrashLoopBackOff 4 7m
postgres 0/1 CrashLoopBackOff 4 7
</code></pre>
<p>I am not sure why I see 2 different outputs.
When I run <code>kubectl logs pgadmin-63545-634536</code> I see the following output:</p>
<pre><code>pgAdmin 4 - Application Initialisation
======================================
[2022-11-15 14:43:28 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2022-11-15 14:43:28 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
[2022-11-15 14:43:28 +0000] [1] [INFO] Using worker: gthread
[2022-11-15 14:43:28 +0000] [90] [INFO] Booting worker with pid: 90
[2022-11-15 14:44:24 +0000] [1] [INFO] Handling signal: term
[2022-11-15 14:44:25 +0000] [90] [INFO] Worker exiting (pid: 90)
[2022-11-15 14:44:26 +0000] [1] [INFO] Shutting down: Master
</code></pre>
<p>Can you please explain this behavior to me and why my pods shut down?
I am very new to K8s.
Thanks in advance.</p>
<p>I tried to inspect the log file.</p>
| <p>To answer why you are seeing two different outputs, you have to understand how a container runs in Kubernetes.</p>
<p>In docker, a container can run and then terminate, but it then stays stopped unless you tell docker you want that container to automatically restart using the <code>--restart=always</code> switch.</p>
<p>In a Kubernetes deployment (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a>), this restart-always behavior is implied -- namely, when a container (or "pod" in the Kubernetes world) exits, regardless of whether the exit was intentional or not, Kubernetes will restart the container and will keep trying to restart it.</p>
<p>The exception to this is when you are running a <code>Job</code> (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/</a>) or <code>CronJob</code> (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a>), which will run and restart until they complete successfully and then terminate, but that is beyond the scope of your question.</p>
<p>You can see from the "Restarts" count in your output that the number of restarts is increasing. Kubernetes will attempt to restart the exited containers as described above, but if it detects that it is restarting it repeatedly, it will start to add a "back off" period (i.e. it will add a delay before it tries to restart the container) -- during this delay period, the status of the pod will be "CrashLoopBackOff"</p>
<p>To answer why this is happening, you should describe the pod using <code>kubectl describe</code>. For example:</p>
<pre><code>kubectl describe pod --namespace {name-of-namespace} pgadmin
</code></pre>
<p>This will give you details of the pod and look under the <code>Events</code> section -- it may have some details of what happened. Most likely the Liveness Probe (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a>) failed and therefore Kubernetes thinks the pods are dead, and restarts them accordingly.</p>
|
<p>I currently am doing something like this</p>
<pre><code>watch, err := s.clientset.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{
    FieldSelector: fmt.Sprintf("spec.myfoo=%s", s.foo),
})

for event := range watch.ResultChan() {
    .......
}
</code></pre>
<p>I am curious: if I have something similar in two different goroutines, will both of the watches get the same events, or might each routine get different events based on who got them first?</p>
| <p>Watch internally establishes a long poll connection with the API server. After establishing a connection, the API server will send a batch of initial events and any subsequent changes. Once a timeout has occurred, the connection will be dropped.</p>
<p>Since your scenario involves two go routines, we cannot guarantee that both will start executing simultaneously and that both long poll connections will be established simultaneously. Furthermore, the connection may drop at some point.</p>
<p>In a large cluster, pods are constantly being killed and created. Thus, it is certainly possible for two go routines to receive different events.</p>
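<p>For illustration, here is a minimal sketch of my own (not code from the question): each goroutine opens its own watch connection and therefore consumes its own event stream, rather than competing for events from a single shared stream. The kubeconfig location and the <code>spec.nodeName</code> field selector are assumptions made for the example:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// watchPods opens its own long-poll watch; what arrives on this ResultChan is
// independent of any other watch opened elsewhere.
func watchPods(ctx context.Context, cs *kubernetes.Clientset, name string) {
    w, err := cs.CoreV1().Pods("").Watch(ctx, metav1.ListOptions{
        FieldSelector: "spec.nodeName=node-1", // hypothetical selector for the example
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        fmt.Printf("[%s] got event: %s\n", name, event.Type)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    go watchPods(ctx, cs, "watcher-1")
    go watchPods(ctx, cs, "watcher-2")
    select {} // block forever; each watcher logs its own stream (demo only)
}
</code></pre>
<p>Note that each connection can still time out and drop independently, as described above, so the two streams can diverge over time unless each goroutine re-establishes its watch.</p>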
|
<p>I am trying to deploy elasticsearch and kibana to kubernetes using <a href="https://github.com/elastic/helm-charts" rel="nofollow noreferrer">this chart</a> and getting this error inside the kibana container, therefore ingress returns 503 error and container is never ready.</p>
<p>Error:</p>
<pre><code>[2022-11-08T12:30:53.321+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. socket hang up - Local: 10.112.130.148:42748, Remote: 10.96.237.95:9200
</code></pre>
<p>The IP address 10.96.237.95 is a valid Elasticsearch service address, and the port is right.</p>
<p>When I curl Elasticsearch from inside the Kibana container, it successfully returns a response.</p>
<p>Am I missing something in my configuration?</p>
<p><strong>Chart version: 7.17.3</strong></p>
<p>Values for elasticsearch chart:</p>
<pre><code>clusterName: "elasticsearch"
nodeGroup: "master"
createCert: false
roles:
master: "true"
data: "true"
ingest: "true"
ml: "true"
transform: "true"
remote_cluster_client: "true"
protocol: https
replicas: 2
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
imageTag: "7.17.3"
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: password
- name: ELASTIC_USERNAME
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: username
clusterHealthCheckParams: "wait_for_status=green&timeout=20s"
antiAffinity: "soft"
resources:
requests:
cpu: "100m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "1Gi"
esJavaOpts: "-Xms512m -Xmx512m"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 30Gi
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
secretMounts:
- name: elastic-certificates
secretName: elastic-certificates
path: /usr/share/elasticsearch/config/certs
</code></pre>
<p>Values for kibana chart:</p>
<pre><code>elasticSearchHosts: "https://elasticsearch-master:9200"
extraEnvs:
- name: ELASTICSEARCH_USERNAME
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: username
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-creds
key: password
- name: KIBANA_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: encryption-key
key: encryption_key
kibanaConfig:
kibana.yml: |
server.ssl:
enabled: true
key: /usr/share/kibana/config/certs/elastic-certificate.pem
certificate: /usr/share/kibana/config/certs/elastic-certificate.pem
xpack.security.encryptionKey: ${KIBANA_ENCRYPTION_KEY}
elasticsearch.ssl:
certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
verificationMode: certificate
protocol: https
secretMounts:
- name: elastic-certificate-pem
secretName: elastic-certificate-pem
path: /usr/share/kibana/config/certs
imageTag: "7.17.3"
ingress:
enabled: true
ingressClassName: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-issuer
kubernetes.io/ingress.allow-http: 'false'
paths:
- path: /
pathType: Prefix
backend:
serviceName: kibana
servicePort: 5601
hosts:
- host: mydomain.com
paths:
- path: /
pathType: Prefix
backend:
serviceName: kibana
servicePort: 5601
tls:
- hosts:
- mydomain.com
secretName: mydomain.com
</code></pre>
<p>UPD: I tried it with another image version (8.4.1) and nothing changed; I am getting the same error. By the way, Logstash is successfully shipping logs to this Elasticsearch instance, so I think the problem is in Kibana.</p>
| <p>Figured it out. It was a complete pain in the ass. I hope these tips will help others:</p>
<ol>
<li><code>xpack.security.http.ssl.enabled</code> should be set to false. I can't find another way around it, but if you do, I'd be glad to hear any advice. As I see it, you don't need security for the http layer since kibana connects to elastic via the transport layer (correct me if I am wrong). Therefore <code>xpack.security.transport.ssl.enabled</code> should still be set to true, but <code>xpack.security.http.ssl.enabled</code> should be set to false (don't forget to change your <code>protocol</code> field for the readinessProbe to http, and also change the protocol for elasticsearch in the kibana chart to http).</li>
<li>The <code>ELASTIC_USERNAME</code> env variable is pointless in the elasticsearch chart; only the password is used, and the user is always <code>elastic</code>.</li>
<li><code>ELASTICSEARCH_USERNAME</code> in the kibana chart should actually be set to the <code>kibana_system</code> user, with the corresponding password for that user.</li>
</ol>
|
<p>I create some pods with containers for which I set <code>ephemeral-storage</code> request and limit, like: (here <code>10GB</code>)</p>
<p><a href="https://i.stack.imgur.com/87ew4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87ew4.png" alt="enter image description here" /></a></p>
<p>Unfortunately, for some containers, the <code>ephemeral-storage</code> will be fully filled for unknown reasons. I would like to understand which dirs/files are responsible for filling it all, but I did not find a solution to do it.</p>
<p>I tried with <code>df -h</code>, but unfortunately, it will give stats for the whole node and not only for the particular pod/container.</p>
<p>Is there a way to retrieve the kubernetes container's <code>ephemeral-storage</code> usage details?</p>
| <p>Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.</p>
<p>Depending on your Kubernetes platform, <strong>you may not be able to easily determine where these files are being written; any filesystem can fill up, but rest assured that disk is being consumed somewhere</strong> (or worse, memory - depending on the specific configuration of your emptyDir and/or Kubernetes platform).</p>
<p>Refer to this <a href="https://stackoverflow.com/questions/70931881"><strong>SO</strong></a> link for more details on how, by default, allocatable ephemeral-storage in a standard Kubernetes environment is sourced from the filesystem (mounted to /var/lib/kubelet).</p>
<p>Also refer to the Kubernetes documentation on how <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral storage can be managed</a> and how <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="nofollow noreferrer">ephemeral storage consumption management works</a>.</p>
<p><strong>I am assuming you're a GCP user; you can get a sense of your ephemeral-storage usage this way:</strong>
Menu>Monitoring>Metrics Explorer>
Resource type: <code>kubernetes node</code> & Metric: <code>Ephemeral Storage</code></p>
<p><strong>Try the below commands to know kubernetes pod/container's ephemeral-storage usage details :</strong></p>
<ol>
<li>Try <strong>du -sh /</strong> [run inside a container]: <code>du -sh</code> will give the space consumed by your container's files. It simply returns the amount of disk space the current directory and everything in it are using as a whole, something like 2.4G.</li>
</ol>
<p>Also you can check the complete file size using the <strong>du -h someDir</strong> command.</p>
<p><a href="https://i.stack.imgur.com/ptxQT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ptxQT.png" alt="enter image description here" /></a></p>
<ol start="2">
<li><a href="https://blog.px.dev/container-filesystems/" rel="nofollow noreferrer">Inspecting container filesystems</a> : You can use <strong>/bin/df</strong> as a tool to monitor <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="nofollow noreferrer">ephemeral storage usage</a> on the volume where ephemeral container data is located, which is /var/lib/kubelet and /var/lib/containers.</li>
</ol>
|
<p>I have AKS with kubernetes version 1.23. I want to activate podsecurity on cluster level by setting it via AdmissionConfiguration as explained here:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/enforce-standards-admission-controller/</a></p>
<p>As I have read, the "PodSecurity feature gate" is enabled on Kubernetes version 1.23 by default. I have created a YAML file based on the configuration that is shown in the link; however, when I apply it I get the following error:</p>
<pre><code>$ k create -f podsecurity.yaml
error: unable to recognize "podsecurity.yaml": no matches for kind "AdmissionConfiguration" in version "apiserver.config.k8s.io/v1"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5",
GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5",
GitCommit:"8211ae4d6757c3fedc53cd740d163ef65287276a", GitTreeState:"clean", BuildDate:"2022-03-31T20:28:03Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I googled a lot but couldn't find a solution for it or what caused it.</p>
<p>I would appreciate it if someone could help.</p>
<p>I am able to activate it at the namespace level as explained here:
<a href="https://kubernetes.io/docs/tutorials/security/ns-level-pss/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/security/ns-level-pss/</a>
by adding a label to the namespace; however, I want to activate it at the cluster level and it doesn't work.</p>
| <p>This is because the AdmissionConfiguration is not a file to be applied inside the Kubernetes cluster.</p>
<blockquote>
<p>It's a <strong>static</strong> file that must be given to the API-server.</p>
</blockquote>
<p>If your Kubernetes cluster is managed by a cloud provider and you don't have access to the API server directly, you can use a <a href="https://github.com/kubernetes/pod-security-admission/tree/master/webhook" rel="nofollow noreferrer">pod security admission webhook</a> in your cluster.<br />
It's very simple to install and works very well.</p>
<p>This way you will be able to edit a configmap containing the cluster wide config.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-security-webhook
  namespace: pod-security-webhook
data:
  podsecurityconfiguration.yaml: |
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: ["kube-system","policy-test1"]
</code></pre>
<p>For more information, I have found the EKS documentation pretty useful: <a href="https://aws.github.io/aws-eks-best-practices/security/docs/pods/" rel="nofollow noreferrer">https://aws.github.io/aws-eks-best-practices/security/docs/pods/</a></p>
<p><em>You should also note that namespace labels will take precedence over the cluster wide config.</em></p>
|
<p>I need to write a <code>golang</code> application with the help of <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> which will listen/watch a particular namespace for any of these events:</p>
<ul>
<li>A new pod has been created</li>
<li>A pod has been deleted</li>
<li>A new container has been added to existing pods</li>
<li>Image for container for any pod has changed</li>
</ul>
<p>And I want to communicate this information to another application running in another namespace.</p>
<p>I am really new to the <code>client-go</code> library and I searched their documentation but couldn't find something similar to <a href="https://kopf.readthedocs.io/en/latest/events/" rel="nofollow noreferrer">Events in Kopf</a></p>
<p><strong>I am new to this library and I couldn't find a method/function for doing this. I don't need the full code for doing this, but I would appreciate pointers on where to look, so I can find my way.</strong></p>
<p>Can someone help me on this?</p>
| <p>You can create a clientset by parsing the kubeconfig file and then use this clientset to create a SharedInformerFactory for your particular namespace. Get an informer for your pods and add event handler functions. Implement those functions according to your requirements. You can check for container updates between oldPod and newPod in the <code>OnUpdate</code> function. Use the clientset however you want to communicate with other applications. I would say explore the methods that the clientset implements to get a detailed idea of how it works.</p>
<pre><code>package main
import (
"flag"
"fmt"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
"k8s.io/klog/v2"
"path/filepath"
"time"
)
func main() {
// parse the .kubeconfig file
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// create config from the kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
// create the clientset
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
// stop signal for the informer
stopper := make(chan struct{})
defer close(stopper)
// create shared informers for resources in all known API group versions with a reSync period and namespace
factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 10*time.Second, informers.WithNamespace("demo"))
podInformer := factory.Core().V1().Pods().Informer()
defer runtime.HandleCrash()
// start informer ->
go factory.Start(stopper)
// start to sync and call list
if !cache.WaitForCacheSync(stopper, podInformer.HasSynced) {
runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
return
}
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: onAdd, // register add eventhandler
UpdateFunc: onUpdate,
DeleteFunc: onDelete,
})
// block the main go routine from exiting
<-stopper
}
func onAdd(obj interface{}) {
pod := obj.(*corev1.Pod)
klog.Infof("POD CREATED: %s/%s", pod.Namespace, pod.Name)
}
func onUpdate(oldObj interface{}, newObj interface{}) {
oldPod := oldObj.(*corev1.Pod)
newPod := newObj.(*corev1.Pod)
klog.Infof(
"POD UPDATED. %s/%s %s",
oldPod.Namespace, oldPod.Name, newPod.Status.Phase,
)
}
func onDelete(obj interface{}) {
pod := obj.(*corev1.Pod)
klog.Infof("POD DELETED: %s/%s", pod.Namespace, pod.Name)
}
</code></pre>
|
<p>I have 2 services deployed in Kubernetes</p>
<ol>
<li>Application A (asp.net core 5 gRPC service)</li>
<li>Application B (asp.net core 5 api)</li>
</ol>
<p>Application B is accessible via ingress-nginx-controller over https from out side of my cluster.</p>
<p>Application A is expose via Service and only accessible inside my cluster.</p>
<p>My question is how can I connect from Application B to Application A over SSL/TLS?</p>
<p><a href="https://i.stack.imgur.com/v1yKy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v1yKy.png" alt="enter image description here" /></a></p>
| <p>For HTTPS communication, you can set up a certificate with <code>dotnet dev-certs https</code>. Each pod will need to set up a self-signed certificate on port 443; this is fine for development purposes, but not recommended.</p>
<p>However, gRPC can actually be used over plain HTTP, with service mesh support for HTTP/2 and gRPC for service-to-service communication. The steps can be:</p>
<ol>
<li><p>Call gRPC using HTTP</p>
<p><a href="https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-3.0#call-insecure-grpc-services-with-net-core-client-2" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-3.0#call-insecure-grpc-services-with-net-core-client-2</a></p>
</li>
<li><p>Setup Linkerd</p>
<p><a href="https://techcommunity.microsoft.com/t5/azure-developer-community-blog/meshing-with-linkerd2-using-grpc-enabled-net-core-services/ba-p/1377867" rel="nofollow noreferrer">https://techcommunity.microsoft.com/t5/azure-developer-community-blog/meshing-with-linkerd2-using-grpc-enabled-net-core-services/ba-p/1377867</a></p>
</li>
</ol>
<p>Hope this helps</p>
|
<p>This is how I'm trying to create a secret for my kubernetes mongodb, which gets deployed using the <a href="https://github.com/bitnami/charts/blob/main/bitnami/mongodb/values.yaml" rel="nofollow noreferrer">bitnami mongodb helm chart</a>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
  namespace: mongodb
  labels:
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: 'encoded value'
  mongodb-passwords: '???'
  mongodb-metrics-password: 'encoded value'
  mongodb-replica-set-key: 'encoded value'
</code></pre>
<p>The helm chart values.yaml says:</p>
<pre><code>auth:
  ## MongoDB(&reg;) custom users and databases
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mongodb#creating-a-user-and-database-on-first-run
  ## @param auth.usernames List of custom users to be created during the initialization
  ## @param auth.passwords List of passwords for the custom users set at `auth.usernames`
  ## @param auth.databases List of custom databases to be created during the initialization
  ##
  usernames: []
  passwords: []
  databases: []
  ## @param auth.existingSecret Existing secret with MongoDB(&reg;) credentials (keys: `mongodb-passwords`, `mongodb-root-password`, `mongodb-metrics-password`, ` mongodb-replica-set-key`)
  ## NOTE: When it's set the previous parameters are ignored.
  ##
  existingSecret: ""
</code></pre>
<p>So <code>passwords</code> is an array of strings for each <code>username</code> and each <code>database</code>.</p>
<p>How do I have to implement these multiple passwords in my secret?</p>
<p>The helm template should give me a hint, but I don't understand it: <a href="https://github.com/bitnami/charts/blob/main/bitnami/mongodb/templates/secrets.yaml" rel="nofollow noreferrer">secret.yaml</a></p>
<p>Or is it a simple string with all passwords separated by <code>,</code> and encoded?</p>
| <p>Should be something like:</p>
<pre><code>auth:
  usernames: ["bob", "alice"]
  passwords: ["bobpass", "alicepass"]
  databases: ["bobdb", "alicedb"]
</code></pre>
<p>If you want to pass those on the cli --set flag instead, you should be able to use curly braces as per this comment: <a href="https://github.com/helm/helm/issues/1987#issuecomment-280497496" rel="nofollow noreferrer">https://github.com/helm/helm/issues/1987#issuecomment-280497496</a> - like:</p>
<pre><code>--set auth.usernames={bob,alice},auth.passwords={bobpass,alicepass},auth.databases={bobdb,alicedb}
</code></pre>
<p>This would produce a secret like following - which you can check with <code>helm template</code> command:</p>
<pre><code>---
# Source: mongodb/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: release-name-mongodb
  namespace: default
  labels:
    app.kubernetes.io/name: mongodb
    helm.sh/chart: mongodb-13.4.4
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongodb
type: Opaque
data:
  mongodb-root-password: "Uk1tZThhYzNFZg=="
  mongodb-passwords: "Ym9icGFzcyxhbGljZXBhc3M="
---
</code></pre>
<p>You can decode mongodb-passwords, using:</p>
<pre><code>echo -n Ym9icGFzcyxhbGljZXBhc3M= | base64 -d
</code></pre>
<p>and notice that it looks as following: <code>bobpass,alicepass</code></p>
<p>Also note that there seems to be an option to have the mongodb.createSecret flag set to false and create that secret manually (which may be more secure depending on the exact workflow).</p>
|
<p>I am attempting to build a simple app with FastAPI and React. I have been advised by our engineering dept, that I should Dockerize it as one app instead of a front and back end...</p>
<p>I have the app functioning as I need without any issues, my current directory structure is.</p>
<pre class="lang-bash prettyprint-override"><code>.
├── README.md
├── backend
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   └── main.py
└── frontend
    ├── Dockerfile
    ├── index.html
    ├── package-lock.json
    ├── package.json
    ├── postcss.config.js
    ├── src
    │   ├── App.jsx
    │   ├── favicon.svg
    │   ├── index.css
    │   ├── logo.svg
    │   └── main.jsx
    ├── tailwind.config.js
    └── vite.config.js
</code></pre>
<p>I am a bit of a Docker noob and have only ever built an image for projects that aren't split into a front end and back end.</p>
<p>I have a <code>.env</code> file in each, only simple things like URLs or hosts.</p>
<p>I currently run the app, with the front end and backend separately as an example.</p>
<pre class="lang-bash prettyprint-override"><code>> ./frontend
> npm run dev
</code></pre>
<pre class="lang-bash prettyprint-override"><code>> ./backend
> uvicorn ....
</code></pre>
<p>Can anyone give me tips/advice on how I can dockerize this as one?</p>
| <p>Following up on Vinalti's answer. I would also recommend using one Dockerfile for the backend, one for the frontend and a docker-compose.yml file to link them together. Given the following project structure, this is what worked for me.</p>
<p>The project runs FastAPI (backend) on port 8000 and ReactJS (frontend) on port 3006.</p>
<pre><code>.
├── README.md
├── docker-compose.yml
├── backend
│   ├── .env
│   ├── Dockerfile
│   ├── app/
│   ├── venv/
│   ├── requirements.txt
│   └── main.py
└── frontend
    ├── .env
    ├── Dockerfile
    ├── package.json
    ├── package-lock.json
    ├── src/
    ├── ...
</code></pre>
<p>backend/Dockerfile</p>
<pre><code>FROM python:3.10
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./ /code/
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>frontend/Dockerfile</p>
<pre><code># pull official base image
FROM node:latest as build
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
# Silent clean install of npm
RUN npm ci --silent
RUN npm install [email protected] -g --silent
# add app
COPY . /app/
# Build production
RUN npm run build
RUN npm install -g serve
## Start the app on port 3006
CMD serve -s build -l 3006
</code></pre>
<p>docker-compose.yml</p>
<pre><code>version: '3.8'

services:
  backend:
    env_file:
      - backend/.env
    build:
      context: ./backend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:8000:8000"
    expose:
      - 8000

  frontend:
    env_file:
      - frontend/.env
    build:
      context: ./frontend/
      dockerfile: ./Dockerfile
    restart: always
    ports:
      - "127.0.0.1:3006:3006"
    expose:
      - 3006
</code></pre>
|
<p>I am experimenting in a small lab created with AutomatedLab that contains Windows Server 2022 machines running ActiveDirectory and SQLServer along with CentOS 8.5 machines running a Kubernetes cluster. My test application is a .Net 6 console application that simply connects to a SQLServer database running in the lab over a trusted connection. It is containerized based on the official aspnet:6.0 image. The Kubernetes POD contains an InitContainer that executes kinit to generate a Kerberos token placed in a shared volume. I have made two versions of the test application: one that uses an OdbcConnection to connect to the database and the second one uses a SqlConnection. The version with the OdbcConnection successfully connects to the database but the one using the SqlConnection crashes when opening the connection to the database.</p>
<p>Here is the code of the application using the OdbcConnection:</p>
<pre><code>using (var connection =
new OdbcConnection(
"Driver={ODBC Driver 17 for SQL Server};Server=sql1.contoso.com,1433;Database=KubeDemo;Trusted_Connection=Yes;"))
{
Log.Information("connection created");
var command = new OdbcCommand
("select * from KubeDemo.dbo.Test", connection);
connection.Open();
Log.Information("Connection opened");
using (var reader = command.ExecuteReader())
{
Log.Information("Read");
while (reader.Read())
{
Console.WriteLine($"{reader[0]}");
}
}
}
</code></pre>
<p>The logs of the container show that it can connect to the database and read its content</p>
<pre><code>[16:24:35 INF] Starting the application
[16:24:35 INF] connection created
[16:24:35 INF] Connection opened
[16:24:35 INF] Read
1
</code></pre>
<p>Here is the code of the application using the SqlConnection:</p>
<pre><code>using (var connection =
new SqlConnection(
"Server=sql1.contoso.com,1433;Initial Catalog=KubeDemo;Integrated Security=True;"))
{
Log.Information("connection created");
var command = new SqlCommand
("select * from KubeDemo.dbo.Test", connection);
connection.Open();
Log.Information("Connection opened");
using (var reader = command.ExecuteReader())
{
Log.Information("Read");
while (reader.Read())
{
Console.WriteLine($"{reader[0]}");
}
}
}
</code></pre>
<p>The container crashes, based on the log when the connection is being opened:</p>
<pre><code>[16:29:58 INF] Starting the application
[16:29:58 INF] connection created
</code></pre>
<p>I have deployed the Kubernetes pod with a command "tail -f /dev/null" so that I could execute the application manually and I get an extra line:</p>
<pre><code>[16:29:58 INF] Starting the application
[16:29:58 INF] connection created
Segmentation fault (core dumped)
</code></pre>
<p>According to Google, this is a C++ error message that indicates an attempt to access an unauthorized memory section. Unfortunately, I have no idea how to work around that. Does anyone have an idea how to get it to work?</p>
<p>To be complete, here is the Dockerfile for the containerized application</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:6.0
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install curl gnupg2 -y
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/11/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install --assume-yes --no-install-recommends --allow-unauthenticated unixodbc msodbcsql17 mssql-tools
RUN apt-get remove curl gnupg2 -y
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
WORKDIR /app
EXPOSE 80
COPY ./ .
ENTRYPOINT ["dotnet", "DbTest.dll"]
</code></pre>
<p>And the POD Helm template:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dbtest
labels:
app: test
spec:
restartPolicy: Never
volumes:
- name: kbr5-cache
emptyDir:
medium: Memory
- name: keytab-dir
secret:
secretName: back01-keytab
defaultMode: 0444
- name: krb5-conf
configMap:
name: krb5-conf
defaultMode: 0444
initContainers:
- name: kerberos-init
image: gambyseb/private:kerberos-init-0.2.0
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
env:
- name: KRB5_CONFIG
value: /krb5
volumeMounts:
- name: kbr5-cache
mountPath: /dev/shm
- name: keytab-dir
mountPath: /keytab
- name: krb5-conf
mountPath: /krb5
containers:
- name: dbtest
image: {{ .Values.image.repository }}:DbTest-{{ .Chart.AppVersion }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: ASPNETCORE_ENVIRONMENT
value: "{{ .Values.environment.ASPNETCORE }}"
- name: KRB5_CONFIG
value: /krb5
{{/* command:*/}}
{{/* - "tail"*/}}
{{/* - "-f"*/}}
{{/* - "/dev/null"*/}}
securityContext:
allowPrivilegeEscalation: true
privileged: true
ports:
- containerPort: 80
volumeMounts:
- name: kbr5-cache
mountPath: /dev/shm
- name: krb5-conf
mountPath: /krb5
- name: keytab-dir
mountPath: /keytab
{{/* - name: kerberos-refresh*/}}
{{/* image: gambyseb/private:kerberos-refresh-0.1.0*/}}
{{/* imagePullPolicy: {{ .Values.image.pullPolicy }}*/}}
{{/* env:*/}}
{{/* - name: KRB5_CONFIG*/}}
{{/* value: /krb5*/}}
{{/* volumeMounts:*/}}
{{/* - name: kbr5-cache*/}}
{{/* mountPath: /dev/shm*/}}
{{/* - name: keytab-dir*/}}
{{/* mountPath: /keytab*/}}
{{/* - name: krb5-conf*/}}
{{/* mountPath: /krb5*/}}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
</code></pre>
| <p>This may not be Auth related.</p>
<p>If you are deploying to a linux container you need to make sure you don't deploy System.Data.SqlClient as this is a Windows only library. It will just blow up your container (as you are experiencing) when it first loads the library.</p>
<p>I found that if I added Microsoft.Data.SqlClient it didn't get added, but I think I was leaving Dapper or EF to add the dependency and it went into the release as System.Data.SqlClient. As the container blew up in AWS, I had very little feedback as to the cause!</p>
<p>See <a href="https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/" rel="nofollow noreferrer">https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/</a></p>
|
<p>Hi, I have added a secret in my HashiCorp Vault at the below path:</p>
<p>cep-kv/dev/sqlpassword</p>
<p>I am trying to access the secret in my manifest as below:</p>
<pre><code>spec:
  serviceAccountName: default
  containers: # List
    - name: cep-container
      image: myimage:latest
      env:
        - name: AppSettings__Key
          value: vault:cep-kv/dev/sqlpassword#sqlpassword
</code></pre>
<p>This is throwing error below</p>
<pre><code>failed to inject secrets from vault: failed to read secret from path: cep-kv/dev/sqlpassword: Error making API request.\n\nURL: GET https://vaultnet/v1/cep-kv/dev/sqlpassword?version=-1\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n" app=vault-env
</code></pre>
<p>Is the path I am trying to access the correct value?</p>
<blockquote>
<p>vault:cep-kv/dev/sqlpassword#sqlpassword</p>
</blockquote>
<p>I tried with the below path too:</p>
<pre><code>value: vault:cep-kv/dev/sqlpassword
</code></pre>
<p>This says the secret was not found at the respective path. Can someone help me get the secret from HashiCorp Vault? Any help would be appreciated. Thanks.</p>
| <p>As you are getting a 403 permission error, you need to configure Kubernetes authentication. You can do that with the following steps:</p>
<ol>
<li>Enable the Kubernetes auth method:</li>
</ol>
<blockquote>
<p><code>vault auth enable kubernetes</code></p>
</blockquote>
<ol start="2">
<li>Configure the Kubernetes authentication method to use the location of the Kubernetes API</li>
</ol>
<pre><code>vault write auth/kubernetes/config \
kubernetes_host=https://192.168.99.100:<your TCP port or blank for 443>
</code></pre>
<ol start="3">
<li><p>Create a named role:</p>
<pre><code>vault write auth/kubernetes/role/demo \
bound_service_account_names=myapp \
bound_service_account_namespaces=default \
policies=default \
ttl=1h
</code></pre>
</li>
<li><p>Write out the "myapp" policy that enables the "read" capability for secrets at the required path:</p>
<pre><code>vault policy write myapp - <<EOF
path "yourpath" {
  capabilities = ["read"]
}
EOF
</code></pre>
</li>
</ol>
<p>Also make sure <code>bound_service_account_names</code> and <code>bound_service_account_namespaces</code> match the service account your pod actually runs as (<code>serviceAccountName: default</code> in your manifest). For more information follow the <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#configuration" rel="nofollow noreferrer">Configuration</a> docs. Here is a <a href="https://blog.cloudthat.com/detailed-guide-to-securely-manage-secrets-for-kubernetes-using-hashicorp-vault/" rel="nofollow noreferrer">blog</a> explaining the usage of secrets in Kubernetes.</p>
|
<p>We have a kubernetes cluster, where a reconcile is triggered in response to a custom event.
Implemented in Golang using the following format:</p>
<pre><code>type reconciler struct {}
func (reconciler) Reconcile(ctx context.Context, o reconcile.Request) (reconcile.Result, error) {
// Implement business logic of reading and writing objects here
return reconcile.Result{}, nil
}
</code></pre>
<p>We have identified possible bottlenecks in the reconcile logic when the custom events are too many. So, the code has been updated to have a non-blocking reconcile logic.
Example:</p>
<pre><code>type reconciler struct {}
func (reconciler) Reconcile(ctx context.Context, o reconcile.Request) (reconcile.Result, error) {
go func() {
// Implement business logic of reading and writing objects here
}()
return reconcile.Result{}, nil
}
</code></pre>
<p>However, there are some places where the non-blocking go routine may return
<code>return ctrl.Result{Requeue: true}, nil</code>
or
<code>return ctrl.Result{RequeueAfter: someTime}, nil</code></p>
<p>How could we requeue such events to the reconcile loop in such scenarios, since the return value would not reach the caller <code>Reconcile()</code>?</p>
| <p>I think a better approach would be to use concurrent reconciles, allowing other requests to be handled while a single slow reconcile request is being processed. This way the reconcile request queue won't be blocked, and idle goroutines are fairly cheap, so it should not affect your performance all that much.</p>
<p>Check out the <code>MaxConcurrentReconciles</code> in <code>controller-runtime</code>:
<a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/controller</a></p>
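<p>For illustration, here is a minimal sketch (assuming a controller-runtime manager <code>mgr</code> and a hypothetical custom resource type <code>myv1.MyResource</code>) of wiring the reconciler up with concurrent workers:</p>
<pre class="lang-golang prettyprint-override"><code>import (
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/controller"
)

// setupController registers the reconciler with up to 4 concurrent workers,
// so one slow request does not block the whole queue.
func setupController(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&myv1.MyResource{}).
        WithOptions(controller.Options{MaxConcurrentReconciles: 4}).
        Complete(reconciler{})
}
</code></pre>
<p>With this in place the business logic can stay inside <code>Reconcile</code> (blocking), so <code>Requeue</code> and <code>RequeueAfter</code> results keep working as usual.</p>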
|
<p>I read a lot about best practices applying DataBase Migrations for Kubernetes Apps. Actually there are three common solutions:</p>
<ol>
<li><strong>Separate CI/CD stage for DB Migrations</strong> (usually we run it before deploy new App version but in some cases we can deploy first, it doesn't matter for current question). Just decouple app and migrations, and run them in different stages one by one.</li>
<li><strong>InitContainer:</strong> we can run DB migrations before the main app container starts. It is a good solution, but we have to be careful to tune it to run migrations once for all pods and replicas, not run them on pod restarts, and check probes (kubelet can kill the container on timeout while the migration is still running)</li>
<li><strong>Separate Kubernetes Job or Helm Hook.</strong> Is close to initContainer, cannot share data with app container (but it is not really necessary for DB Migrations, so it's ok). Must be careful with timeout too - Helm can kill Job before migration is completed.</li>
</ol>
<p><strong>But the question - how to apply rollbacks for these solutions???</strong></p>
<p>Lets try to form my ideas:</p>
<ol>
<li>Separate stage in CI/CD: we can save previous migration name, and then rollback to it in another stage. Pipeline: Migrations -> Deploy -> Test -> Rollback DB and ReDeploy</li>
<li>But for InitContainer and Helm Hook I have no idea how to realise a rollback! Do we need additional containers for that? Does helm rollback affect the DB too (I don't think so)? <strong>What are the best practices for that?</strong></li>
</ol>
<p>I will be very glad to any suggestions!</p>
| <p>I started investigating this issue too, and it looks like all manuals, tutorials and practices are meant to be forward-only. Should you ever need to roll back a database migration, you face the limitations of Helm rollback. There is an open issue in Helm (<a href="https://github.com/helm/helm/issues/5825" rel="nofollow noreferrer">https://github.com/helm/helm/issues/5825</a>) that addresses this very problem, but it looks like there is no solution so far.</p>
<p>So Helm rollback is probably not a suitable mechanism for databases, as you end up creating a batch/v1.Job with a container image that knows nothing about the changes you need to roll back. I use Liquibase and it requires that the changesets you need to roll back are present in the changelog.</p>
<p>I think for now this problem can be solved with forward-only approach:</p>
<ol>
<li>Build the container image that introduces a new changeset into your database.</li>
<li>In release values for Helm specify the container image that should be run to perform a migration, and a command (like <code>liquibase update</code>).</li>
<li>If the migration fails and you see that rollback is required, you do not invoke <code>helm rollback</code>. You deploy a new release, and in its values you specify the container image that should perform the database migration rollback, and the command (like <code>liquibase rollback <tag></code>). I am myself going this way now and it looks like the best solution I could come up with.</li>
</ol>
<p>Hope that helps.</p>
|
<p>I am new to <code>golang</code> and I have the following requirement:</p>
<ol>
<li>I have a deployment running with the image <code>nginx:latest</code></li>
<li>Then manually someone update this image to something else (eg: <code>nginx:1.22</code>)</li>
<li>I need to get the old image version and the new image version</li>
</ol>
<p>So, I researched on the "Shared Informers" in Go Lang. And I wrote this:</p>
<pre><code>
func main() {
. . .
// create shared informers for resources in all known API group versions with a reSync period and namespace
factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 1*time.Hour, informers.WithNamespace(namespace_to_watch))
podInformer := factory.Core().V1().Pods().Informer()
defer runtime.HandleCrash()
// start informer ->
go factory.Start(stopper)
// start to sync and call list
if !cache.WaitForCacheSync(stopper, podInformer.HasSynced) {
runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
return
}
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: onAdd, // register add eventhandler
UpdateFunc: onUpdate,
DeleteFunc: onDelete,
})
}
func onUpdate(oldObj interface{}, newObj interface{}) {
oldPod := oldObj.(*corev1.Pod)
newPod := newObj.(*corev1.Pod)
    for _, container_old := range oldPod.Spec.Containers {
        fmt.Printf("Container Old :%s", container_old.Image)
    }
    for _, container_new := range newPod.Spec.Containers {
        fmt.Printf("Container New :%s", container_new.Image)
    }
}
</code></pre>
<p>As above, according to the <code>onUpdate</code> function, I should be getting the two values, instead, I am getting the same value on each update as below:</p>
<blockquote>
<p>Container Old : nginx:latest</p>
</blockquote>
<blockquote>
<p>Container New : nginx:latest</p>
</blockquote>
<blockquote>
<p>Container Old : nginx:1.22</p>
</blockquote>
<blockquote>
<p>Container New : nginx:1.22</p>
</blockquote>
<blockquote>
<p>Container Old : nginx:1.22</p>
</blockquote>
<blockquote>
<p>Container New : nginx:1.22</p>
</blockquote>
<p>The above output is because the <code>onUpdate</code> function is somehow triggered three times, but as you can see, the old and new values are the same each time.
But I was expecting something like below:</p>
<blockquote>
<p>Container Old : nginx:latest</p>
</blockquote>
<blockquote>
<p>Container New : nginx:1.22</p>
</blockquote>
<p>Can someone help me with this?</p>
| <p>When an image is updated in a deployment, the pods are restarted with new containers and no trace of the old pod spec remains. You should watch the deployments instead of the pods and compare the images between the old and new deployment objects. Here's a sample of how you can modify your <code>onUpdate</code> method.</p>
<pre class="lang-golang prettyprint-override"><code>func onUpdate(oldObj interface{}, newObj interface{}) {
oldDepl := oldObj.(*v1.Deployment)
newDepl := newObj.(*v1.Deployment)
for oldContainerID := range oldDepl.Spec.Template.Spec.Containers {
for newContainerID := range newDepl.Spec.Template.Spec.Containers {
if oldDepl.Spec.Template.Spec.Containers[oldContainerID].Name == newDepl.Spec.Template.Spec.Containers[newContainerID].Name {
if oldDepl.Spec.Template.Spec.Containers[oldContainerID].Image != newDepl.Spec.Template.Spec.Containers[newContainerID].Image {
fmt.Printf("CONTAINER IMAGE UPDATED FROM %s to %s",
oldDepl.Spec.Template.Spec.Containers[oldContainerID].Image, newDepl.Spec.Template.Spec.Containers[newContainerID].Image)
}
}
}
}
}
</code></pre>
|
<p>Kubernetes version:
V1.22.2</p>
<p>Cloud Provider Vsphere version 6.7</p>
<p>Architecture:</p>
<ul>
<li>3 Masters</li>
<li>15 Workers</li>
</ul>
<p>What happened:
One of the pods went down for some "unknown" reason, and when we tried to bring it back up, it couldn't attach the existing PVC.
This only happened to a specific pod, all the others didn't have any kind of problem.</p>
<p>What did you expect to happen:
Pods should dynamically assume PVCs</p>
<p>Validation:
First step: The connection to Vsphere has been validated, and we have confirmed that the PVC exists.
Second step: The Pod was restarted (Statefulset 1/1 replicas) to see if the pod would rise again and assume the pvc, but without success.
Third step: Restarted the control-plane services (kube-controller-manager, kube-apiserver, etc.)
Last step: All workers and masters were rebooted but without success, each time the pod was launched it had the same error ""Multi-Attach error for volume "pvc......" Volume is already exclusively attached to one node and can't be attached to another""</p>
<p>When I delete a pod and try to recreate it, I get this warning:
Multi-Attach error for volume "pvc-xxxxx" The volume is already exclusively attached to a node
and cannot be attached to another</p>
<p>Anything else we need to know:
I have a cluster (3 master and 15 nodes)</p>
<p>Temporary resolution:
Erase the existing PVC and launch the pod again to recreate the PVC.
Since this is data, it is not the best solution to delete the existing PVC.</p>
<blockquote>
<p><strong>Multi-Attach error for volume "pvc-xxx" Volume is already
exclusively attached to one node and can't be attached to another</strong></p>
</blockquote>
<p><a href="https://i.stack.imgur.com/Q68qn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q68qn.png" alt="ERROR" /></a></p>
| <p>A longer-term solution relies on two facts:</p>
<ol>
<li><p>You're using <code>ReadWriteOnce</code> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> where the volume can be mounted as read-write by a single node.</p>
</li>
<li><p>Pods might be scheduled by the K8s scheduler on a different node for multiple reasons.</p>
</li>
</ol>
<p>Consider switching to <code>ReadWriteMany</code>, where the volume can be mounted as read-write by many nodes, provided your storage backend supports that access mode.</p>
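<p>For illustration, a minimal sketch of a claim requesting that access mode (name and size are placeholders, and the storage class you use must actually support RWX):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # placeholder name
spec:
  accessModes:
    - ReadWriteMany        # mountable read-write by many nodes
  resources:
    requests:
      storage: 10Gi        # placeholder size
</code></pre>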
|
<p>I have a private GKE cluster with no public endpoint. I have confirmed that I can authenticate and run <code>kubectl</code> commands against the cluster with my principal account. I am deploying Terraform from the same VM I tested the <code>kubectl</code> commands against and have added that IP address to the cluster's Master Authorized Networks.</p>
<p>Whenever I try to deploy workload identity with Terraform, I receive this error:</p>
<pre><code>Error: Post "https://10.0.0.2/api/v1/namespaces/default/serviceaccounts": context deadline exceeded
on .terraform/modules/gke-workload-identity/modules/workload-identity/main.tf line 48, in resource "kubernetes_service_account" "main":
48: resource "kubernetes_service_account" "main" {
</code></pre>
<p>I have granted the Service Account for Terraform deployment the proper IAM Roles for WI. I am using the standard <code>terraform-google-modules</code> for <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/workload-identity" rel="nofollow noreferrer">workload identity</a> and GKE <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/safer-cluster" rel="nofollow noreferrer">cluster</a>.</p>
<p>Here is also my TF Kubernetes provider block:</p>
<pre><code>provider "kubernetes" {
host = "https://${module.gke.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}
</code></pre>
| <p><strong>Kubectl config view</strong></p>
<p>Displays merged kubeconfig settings or a specified kubeconfig file.</p>
<p>You can use <code>--output jsonpath={...}</code> to extract specific values using a jsonpath expression.</p>
<p><strong>Examples :</strong></p>
<pre><code>kubectl config view --raw
</code></pre>
<p>For Community reference <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_config_view/" rel="nofollow noreferrer">kubectl_config_view</a></p>
|
<p>I'm building a CLI application that would allow me to run an arbitrary command in my shell against any kube cluster in my kubeconfig that matches a given regex. I want to use the official <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> package to accomplish this, but for some reason, switching kube contexts is less than intuitive. So I'm starting by modifying the example out-of-cluster program in the repo, and I'm struggling with just switching the context to the one I specify. Here is the code I started with, which gets the number of pods in the cluster loaded in the kubeconfig:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"flag"
"fmt"
"path/filepath"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
fmt.Printf("There are %d pods in the test cluster\n", len(pods.Items))
}
</code></pre>
<p>Unfortunately, I cannot figure out how to load a specific cluster with a name as defined in my kubeconfig. I would love to have a sort of <code>SwitchContext("cluster-name")</code> function, but the number of Configs, ClientConfigs, RawConfigs, and restclient.Configs is confusing me. Any help would be appreciated!</p>
<p>System: Ubuntu 22.04, Intel, kube server version 1.23.8-gke.1900, client version 1.25.3</p>
| <p>You can override the current context via <code>NewNonInteractiveDeferredLoadingClientConfig</code> method from <code>clientcmd</code> package.</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"flag"
"fmt"
"k8s.io/client-go/rest"
"path/filepath"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// using `contextName` context in kubeConfig
contextName := "gce"
config, err = buildConfigWithContextFromFlags(contextName, *kubeconfig)
if err != nil {
panic(err)
}
// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
fmt.Printf("There are %d pods in the test cluster\n", len(pods.Items))
}
func buildConfigWithContextFromFlags(context string, kubeconfigPath string) (*rest.Config, error) {
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath},
&clientcmd.ConfigOverrides{
CurrentContext: context,
}).ClientConfig()
}
</code></pre>
|
<p>Had an issue recently using the <code>terraform-aws-eks</code> module where we wanted to create the <code>aws-auth</code> config map and set the <code>mapRoles</code> and <code>mapAccounts</code> fields in the module but manage the <code>mapUsers</code> elsewhere.</p>
<p><code>kubernetes_config_map_v1_data</code> seemed tailor made for this but we kept running into conflict issues where the fields created by <code>kubernetes_config_map_v1_data</code> kept wanting to get destroyed by the <code>kubernetes_config_map</code> resource.</p>
<p>We found a solution and put the answer below to anybody else having this issue.</p>
| <p>terraform-aws-eks module version <code>17</code></p>
<p>within the terraform module <code>aws-auth.tf</code>:</p>
<pre><code>resource "kubernetes_config_map" "aws_auth" {
count = var.create_eks && var.manage_aws_auth ? 1 : 0
metadata {
name = "aws-auth"
namespace = "kube-system"
labels = merge(
{
"app.kubernetes.io/managed-by" = "Terraform"
# / are replaced by . because label validator fails in this lib
# https://github.com/kubernetes/apimachinery/blob/1bdd76d09076d4dc0362456e59c8f551f5f24a72/pkg/util/validation/validation.go#L166
"terraform.io/module" = "terraform-aws-modules.eks.aws"
},
var.aws_auth_additional_labels
)
}
lifecycle {
ignore_changes = [
data,
]
}
depends_on = [data.http.wait_for_cluster[0]]
}
resource "kubernetes_config_map_v1_data" "aws_auth" {
count = var.create_eks && var.manage_aws_auth ? 1 : 0
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = yamlencode(
distinct(concat(
local.configmap_roles,
var.map_roles,
))
)
mapAccounts = yamlencode(var.map_accounts)
}
field_manager = "aws-eks-module"
depends_on = [data.http.wait_for_cluster[0], kubernetes_config_map.aws_auth[0]]
}
</code></pre>
<p>From another repo:</p>
<pre><code>resource "kubernetes_config_map_v1_data" "aws_auth_example" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapUsers = yamlencode(var.users)
}
field_manager = "example"
}
</code></pre>
|
<p>I have Apache Nifi working in Kubernetes. I have deployed it with a YAML file.
I have several Python scripts I would like to call on Apache Nifi.</p>
<p>I used this yaml file to deploy Apache Nifi:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-nifi
namespace: test-namespace
labels:
name : test-nifi
app : test-nifi
spec:
replicas: 2
strategy:
type: Recreate
selector:
matchLabels:
app: test-nifi
template:
metadata:
labels:
app: test-nifi
spec:
restartPolicy: Always
containers:
- name: nifi1
image: XXX
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8443
name: nifi1
env:
- name: "NIFI_SENSITIVE_PROPS_KEY"
value: "..."
- name: ALLOW_ANONYMOUS_LOGIN
value: "no"
- name: SINGLE_USER_CREDENTIALS_USERNAME
value: XXX
- name: SINGLE_USER_CREDENTIALS_PASSWORD
value: XXX
- name: NIFI_WEB_HTTPS_HOST
value: "0.0.0.0"
- name: NIFI_WEB_HTTPS_PORT
value: "8443"
- name: NIFI_WEB_PROXY_HOST
value: 0.0.0.0:8443
- name: HOSTNAME
value: "nifi1"
- name: NIFI_ANALYTICS_PREDICT_ENABLED
value: "true"
- name: NIFI_ELECTION_MAX_CANDIDATES
value: "1"
- name: NIFI_ELECTION_MAX_WAIT
value: "20 sec"
- name: NIFI_JVM_HEAP_INIT
value: "3g"
- name: NIFI_JVM_HEAP_MAX
value: "4g"
volumeMounts:
- name: pv-01
mountPath: /opt/nifi/nifi-current/data
subPath: data
livenessProbe:
exec:
command:
- pgrep
- java
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
successThreshold: 1
readinessProbe:
tcpSocket:
port: 8443
initialDelaySeconds: 240
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 3
successThreshold: 1
resources:
requests:
cpu: 400m
memory: 4Gi
limits:
cpu: 500m
memory: 4Gi
imagePullSecrets:
- name: repo-secret
volumes:
- name: pv-01
persistentVolumeClaim:
claimName: pv-01-claim
</code></pre>
<p>The solution I have:</p>
<ul>
<li>Inject these scripts as ConfigMaps; this way the Apache Nifi container will have access to the scripts.</li>
</ul>
<p>How can I do it with ConfigMaps?</p>
| <p>To inject scripts as a ConfigMap you should:</p>
<ol>
<li>Create a Kubernetes ConfigMap and paste the Python scripts you want to have:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: python-scripts-configmap
data:
  python_script_1.py: |
    from bar import foo
foo()
python_script_2.py: |
print('Hello World')
</code></pre>
<ol start="2">
<li>Attach this ConfigMap as a volume to the pod in your Deployment</li>
</ol>
<p>In the <code>volumes:</code> section of your deployment add:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: python-scripts-volume
  configMap:
    name: python-scripts-configmap
</code></pre>
<p>so that it will look like:</p>
<pre class="lang-yaml prettyprint-override"><code>volumes:
- name: pv-01
persistentVolumeClaim:
claimName: pv-01-claim
  - name: python-scripts-volume
    configMap:
      name: python-scripts-configmap
</code></pre>
<p>and in the <code>volumeMounts:</code> section add:</p>
<pre><code>- name: python-scripts-volume
  mountPath: /path/to/scripts
  readOnly: true
</code></pre>
<p>so that it will look like:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: pv-01
mountPath: /opt/nifi/nifi-current/data
subPath: data
  - name: python-scripts-volume
    mountPath: /path/to/scripts
    readOnly: true
</code></pre>
<p>You can create the ConfigMap separately or add it to your YAML like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: ConfigMap
metadata:
  name: python-scripts-configmap
data:
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-nifi
namespace: test-namespace
labels:
name : test-nifi
app : test-nifi
...
---
</code></pre>
|
<p>I am creating</p>
<ul>
<li>A pod every two minutes using cron-job</li>
<li>It has custom python image that reads the data from redis</li>
</ul>
<p>The problem is that it keeps on creating pods every two minutes irrespective of whether the previous ones have terminated or not. I am trying to terminate the pod when the operation running in it returns a response like "Empty"; if it doesn't give a response like this, I want to terminate it within 10 sec.</p>
| <p>You can do something like</p>
<pre><code>kubectl wait --for=condition=complete job/jobname
</code></pre>
<p>and you can also set a timeout:</p>
<pre><code>kubectl wait --for=condition=complete --timeout=10s job/jobname
</code></pre>
<p>Example python script that will be checking the redis</p>
<pre><code>#!/usr/bin/env python
import time
import rediswq
host="redis"
# Uncomment next two lines if you do not have Kube-DNS working.
# import os
# host = os.getenv("REDIS_SERVICE_HOST")
q = rediswq.RedisWQ(name="job2", host=host)
print("Worker with sessionID: " + q.sessionID())
print("Initial queue state: empty=" + str(q.empty()))
while not q.empty():
item = q.lease(lease_secs=10, block=True, timeout=2)
if item is not None:
itemstr = item.decode("utf-8")
print("Working on " + itemstr)
time.sleep(10) # Put your actual work here instead of sleep.
q.complete(item)
else:
print("Waiting for work")
print("Queue empty, exiting")
</code></pre>
<p>ref : <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/</a></p>
|
<p>I installed mongodb as a replicaset with 3 replicas on my k8s cluster using the bitnami helm chart.</p>
<p>So I get these pods:</p>
<pre><code>mongodb-0.mongodb-headless.mongodb.svc.cluster.local:27017
mongodb-1.mongodb-headless.mongodb.svc.cluster.local:27017
mongodb-2.mongodb-headless.mongodb.svc.cluster.local:27017
</code></pre>
<p>Now I would like to get access using mongodb compass.</p>
<p>I set a port forward (at 27017 I'm running a local mongodb)</p>
<pre><code>kubectl port-forward svc/mongodb-headless -n mongodb 27018:27017
</code></pre>
<p>and tried to connect compass with the uri</p>
<pre><code>mongodb://localhost:27018
</code></pre>
<p>But this gives me the error</p>
<pre><code>getaddrinfo ENOTFOUND mongodb-0.mongodb-headless.mongodb.svc.cluster.local
</code></pre>
<p>What am I doing wrong to connect to my k8s cluster mongodb using compass?</p>
<hr />
<p><em>Update</em></p>
<pre><code>% kubectl get all -n mongodb
NAME READY STATUS RESTARTS AGE
pod/mongodb-0 1/1 Running 0 25h
pod/mongodb-1 1/1 Running 0 25h
pod/mongodb-2 1/1 Running 0 25h
pod/mongodb-arbiter-0 1/1 Running 0 2d14h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongodb-arbiter-headless ClusterIP None <none> 27017/TCP 2d14h
service/mongodb-headless ClusterIP None <none> 27017/TCP 2d14h
NAME READY AGE
statefulset.apps/mongodb 3/3 2d14h
statefulset.apps/mongodb-arbiter 1/1 2d14h
</code></pre>
<p><strong>values.yaml for bitnami helm chart</strong></p>
<pre><code>image:
registry: docker.io
repository: bitnami/mongodb
digest: "sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209" # tag: 4.4.15
pullPolicy: IfNotPresent
architecture: replicaset
replicaCount: 2
updateStrategy:
type: RollingUpdate
containerPorts:
mongodb: 27017
auth:
enabled: true
rootUser: root
rootPassword: "password"
usernames: ["user"]
passwords: ["userpass"]
databases: ["db"]
service:
portName: mongodb
ports:
mongodb: 27017
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
enabled: true
livenessProbe:
enabled: false
readinessProbe:
enabled: false
</code></pre>
| <p>I have just recreated your setup, and everything works fine.</p>
<pre><code>$ k create ns mongo-test
namespace/mongo-test created
$ k -n mongo-test create -f mongo-svc-sts.yaml
statefulset.apps/mongo created
service/mongo-headless created
</code></pre>
<p>result</p>
<pre><code>$ k -n mongo-test get all
NAME READY STATUS RESTARTS AGE
pod/mongo-0 1/1 Running 0 44s
pod/mongo-1 1/1 Running 0 40s
pod/mongo-2 1/1 Running 0 27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mongo-headless ClusterIP None <none> 27017/TCP 13m
NAME READY AGE
statefulset.apps/mongo 3/3 45s
</code></pre>
<p>port forward</p>
<pre><code>$ k -n mongo-test port-forward svc/mongo-headless 27018:27017
Forwarding from 127.0.0.1:27018 -> 27017
Forwarding from [::1]:27018 -> 27017
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
Handling connection for 27018
</code></pre>
<p>compass
<a href="https://i.stack.imgur.com/lH9qU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lH9qU.png" alt="enter image description here" /></a></p>
<p>mongo-svc-sts.yaml</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 3
selector:
matchLabels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
template:
metadata:
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
spec:
tolerations:
- operator: Exists
containers:
- name: mongo
image: mongo:latest
args:
- --bind_ip
- 0.0.0.0
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
labels:
app: mongo
service: logging
name: mongo-headless
spec:
type: ClusterIP
clusterIP: None
ports:
- port: 27017
selector:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: backend
</code></pre>
<p>to be able to help you pls use that YAMLs and post the outputs. If it does not work probably you should debug your k8s installation</p>
|
<p>I deployed a Kubernetes cluster in <code>minikube</code> which has one master node and one worker node. When I tried to see the kube-proxy with:</p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
<p>two kube-proxies appear:</p>
<pre><code>kube-proxy-6jxgq
kube-proxy-sq58d
</code></pre>
<p>According to the reference architecture <a href="https://kubernetes.io/docs/concepts/overview/components/">https://kubernetes.io/docs/concepts/overview/components/</a>, kube-proxy is a component of the worker node. I expected to see one kube-proxy, not two. What is the reason?</p>
| <p>kube-proxy runs on every node (it is deployed as a DaemonSet), so a cluster with one master and one worker has two kube-proxy pods. When you use</p>
<pre><code>kubectl get po -n kube-system -o wide
</code></pre>
<p>you will see which node each pod is running on.</p>
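<p>A quick way to confirm this (assuming the default kubeadm/minikube setup, where kube-proxy is deployed as a DaemonSet) is:</p>
<pre><code>kubectl get daemonset kube-proxy -n kube-system
</code></pre>
<p>The DESIRED/CURRENT counts will match the number of nodes in the cluster.</p>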
|
<p>I am having some issues with a Kubernetes CronJob running two containers inside of a GKE cluster.</p>
<p>One of the two containers is actually executing the job that must be done by the CronJob.</p>
<p>This works perfectly fine. It is started when it is supposed to be started, does the job and then terminates. All fine up until this point.</p>
<p>What seems to be causing some issues is the second container, which is a sidecar container used to access a database instance. This won't terminate and seems to be leading to the problem that the CronJob itself won't terminate. Which is an issue, since I see an accumulation of running Job instances over time.</p>
<p>Is there a way to configure a Kubernetes batch CronJob so that it terminates when one of the containers has successfully executed?</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: chron-job-with-a-sidecar
namespace: my-namespace
spec:
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
schedule: "0 8 * * *" # -> Every day At 8AM
jobTemplate:
metadata:
labels:
app: my-label
spec:
template:
containers:
# --- JOB CONTAINER -----------------------------------------------
- image: my-job-image:latest
imagePullPolicy: Always
name: my-job
command:
- /bin/sh
- -c
- /some-script.sh; exit 0;
# --- SIDECAR CONTAINER ----------------------------------------------
- command:
- "/cloud_sql_proxy"
- "-instances=my-instance:antarctica-south-3:user=tcp:1234"
# ... some other settings ...
image: gcr.io/cloudsql-docker/gce-proxy:1.30.0
imagePullPolicy: Always
name: cloudsql-proxy
# ... some other values ...
</code></pre>
| <p>No, strictly speaking there is no way to make Kubernetes stop a sidecar container automatically once a "main" container is done.</p>
<p>The closest "kubernetes-native" solution I can think of is setting CronJob <code>concurrencyPolicy</code> to <code>Replace</code> (see <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/cron-job-v1/#CronJobSpec" rel="nofollow noreferrer">CronJobSpec</a>). It won't stop the sidecar once done, but at least each new job will be overriding the previous one, so the jobs won't be accumulating. Unfortunately, with this solution, you are gonna lose job history.</p>
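<p>For illustration, a minimal sketch of where that setting sits in the CronJob spec from the question:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: chron-job-with-a-sidecar
  namespace: my-namespace
spec:
  schedule: "0 8 * * *"
  concurrencyPolicy: Replace   # a new job replaces a still-running previous one
  jobTemplate:
    # ... unchanged jobTemplate from the question ...
</code></pre>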
<p>If this solution does not fit your needs, you will need to implement some form of communication between containers, but nothing like that is built in into Kubernetes itself. There are some external tools though, that can help, e.g. <a href="https://github.com/karlkfi/kubexit" rel="nofollow noreferrer">kubexit</a>.</p>
|
<p>My docker-compose creates 3 containers: django, celery and rabbitmq. When I run the following commands (docker-compose build and docker-compose up), the containers run successfully.</p>
<p>However, I am having issues with deploying the image. The generated image has an image ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and it exits with 0. Both the django and celery applications have the same image ID.</p>
<pre><code>docker run 24d7638e2aff
</code></pre>
<p><a href="https://i.stack.imgur.com/W0yCJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W0yCJ.png" alt="" /></a></p>
<p>This is not good, as I am unable to deploy this image on kubernetes. My only thought is that the dockerfile has been configured wrongly, but i cannot figure out what is the cause</p>
<p>docker-compose.yaml</p>
<pre><code>version: "3.9"
services:
django:
container_name: testapp_django
build:
context: .
args:
build_env: production
ports:
- "8000:8000"
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
links:
- rabbitmq
- celery
rabbitmq:
container_name: testapp_rabbitmq
restart: always
image: rabbitmq:3.10-management
ports:
- "5672:5672" # specifies port of queue
- "15672:15672" # specifies port of management plugin
celery:
container_name: testapp_celery
restart: always
build:
context: .
args:
build_env: production
command: celery -A testapp worker -l INFO -c 4
depends_on:
- rabbitmq
</code></pre>
<p>Dockerfile</p>
<pre><code>ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specfic python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
&& adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# git for GitPython commands
git-all \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore workdir instruction. All relative dir copies are wrt to the workdir instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
</code></pre>
<p>entrypoint</p>
<pre><code>#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$@"
</code></pre>
<p>How do I build images of these containers so that I can deploy them to k8s?</p>
| <p>The Compose <code>command:</code> overrides the Dockerfile <code>CMD</code>. <code>docker run</code> doesn't look at the <code>docker-compose.yml</code> file at all, and <code>docker run</code> with no particular command runs the image's <code>CMD</code>. You haven't declared anything for that, which is why the container exits immediately.</p>
<p>Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a <code>CMD</code> line to the Dockerfile</p>
<pre class="lang-bash prettyprint-override"><code>CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
</code></pre>
<p>Now plain <code>docker run</code> as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override</p>
<pre class="lang-bash prettyprint-override"><code>docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
</code></pre>
<p>If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use <code>args:</code> in your pod spec to provide the alternate command, not <code>command:</code>.</p>
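<p>For illustration, a minimal sketch of what that would look like for the Celery container in a pod spec (image name is a placeholder):</p>
<pre><code>containers:
  - name: celery
    image: your-registry/testapp:latest   # placeholder image
    # args (not command) so the image's ENTRYPOINT script still wraps the process
    args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]
</code></pre>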
|
<p>Has anyone been able to install Kubernetes using <a href="https://kops.sigs.k8s.io" rel="nofollow noreferrer">kOps</a> on bare metal / a local virtual machine running Ubuntu?</p>
| <p>From one of the kOps maintainers:</p>
<blockquote>
<p>As kOps maintainer, I can at least say that there is no active project to support bare-metal anymore. And the code we once had going in that direction was obsolete and removed.</p>
</blockquote>
<p>Source: <a href="https://github.com/kubernetes/enhancements/issues/360" rel="nofollow noreferrer">https://github.com/kubernetes/enhancements/issues/360</a></p>
<p>If you are looking for installing kubernetes clusters in local bare-metal machine, check out <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a>. It is looking very promising.</p>
|
<p>I am writing this question to share the solution we found in our company.
We migrated Solr from a Docker-only solution to a Kubernetes solution.</p>
<p>On Kubernetes the environment ended up suffering from slowness.
At least for me the solution was atypical.</p>
<p><strong>Environment:</strong></p>
<ul>
<li>solr(8.2.0) with just one node</li>
<li>solr database with 250GB on disk</li>
<li>kubernetes over Rancher</li>
<li>Node with 24vcpus and 32GB of Ram</li>
<li>Node hosts Solr and nginx ingress</li>
<li>Reserved 30GB for the Solr pod in kubernetes</li>
<li>Reserved 25GB for the Solr</li>
</ul>
<p><strong>Expected Load:</strong></p>
<ul>
<li>350 updates/min (pdf documents and html documents)</li>
<li>50 selects/min</li>
</ul>
<p>The result was Solr degrading over time, with high load on the host. The culprit was heavy disk access.</p>
| <p>After one week of frustrated adjustments this is the simple solution we found:</p>
<p>Solr JVM had 25 GB. We decreased the value to 10GB.</p>
<p>This is the command to start solr with the new values:</p>
<pre><code>/opt/solr/bin/solr start -f -force -a '-Xms10g -Xmx10g' -p 8983
</code></pre>
<p>If someone can explain what happened that would be great.
My guess is that Solr was trying to build a cache and Kubernetes was reaping this cache, so Solr ended up continuously reading from the disk trying to rebuild its cache.</p>
|
<p>We're using <code>sasl_tls</code> mechanism with bitnami/kafka helm chart. We're using Let's Encrypt and cert-manager for issuing the certificate. Created a secret out of the Let's Encrypt generated certificate and passed the secret to the <code>existingSecrets</code> parameter in the helm chart. Now when I'm using KafkaJS library to connect to the Kafka broker, with <code>ssl: true</code> it is throwing an error:</p>
<pre><code>KafkaJSConnectionError: Connection error: unable to verify the first certificate
</code></pre>
<p><strong>Detailed Steps/How to generate:</strong></p>
<ul>
<li>Enabled external access to kafka chart so that it gives us an IP at port 9094</li>
</ul>
<pre><code>externalAccess.enabled: true
externalAccess.autoDiscovery.enabled: true
externalAccess.service.type: LoadBalancer
externalAccess.service.ports.external: 9094
externalAccess.service.domain: ""
</code></pre>
<ul>
<li>Bound this IP to a domain <code>xyz.com</code></li>
<li>Bound this domain name to Let's Encrypt certificate issuer to issue certificate for this domain</li>
<li><code>tls.crt</code> and <code>tls.key</code> are generated</li>
<li>Renamed these files and used these to create a secret</li>
</ul>
<pre><code>kubectl create secret generic kafka-tls-0 --from-file=tls.crt=kafka-0.tls.crt --from-file=tls.key=kafka-0.tls.key
</code></pre>
<ul>
<li>Modified chart value to configure tls part</li>
</ul>
<pre><code>tls.type: pem
tls.pemChainIncluded: true
tls.existingSecrets: ["kafka-tls-0"]
</code></pre>
<ul>
<li>Applied the values of the chart (started broker)</li>
<li>Now in KafkaJS client setup, tried to pass value to the <code>brokers</code> parameter in either format <code>ip:9094</code> or <code>xyz.com:9094</code>, also passed <code>ssl:true</code></li>
</ul>
<p><strong>My Questions:</strong></p>
<ul>
<li><p>Is the flow correct? Or are we going in the wrong direction?</p>
</li>
<li><p>What is the reason behind the problem? Is it the certificate chain that is wrong? (It seems like it is!)</p>
</li>
<li><p>Is there any other chart that I can use to achieve my goal?</p>
</li>
</ul>
<p><strong>Followup Question:</strong></p>
<ol>
<li>If we can make it work, what will be the next steps for ensuring auto-renewal of the certificates? Is it managed automatically? Or do we have to maintain a script for Let's Encrypt certificate auto-renewal?</li>
</ol>
| <p>There could be multiple causes. I'll try to list what needs to be true for this to work:</p>
<ol>
<li>Your node.js KafkaJS client should have a certificate store that is able to verify the CA that signed the Let's Encrypt certificate. node.js has a built-in list of certificates, and you can add to it. I have not checked but I expect the Let's Encrypt root CAs to be there.</li>
</ol>
<p>A couple years ago Let's Encrypt switched root CAs, so if you have an old version of node.js that could be it.</p>
<ol start="2">
<li>The Kafka broker must present a certificate chain (not just your signed certificate) that includes a certificate that the node.js client can verify.
Depending on which CAs your client can verify, this could mean the chain needs to go as far as the root CA (as opposed to an intermediate CA).</li>
</ol>
<p>You should check which certificates are in your chain. You can do this with the OpenSSL CLI:
<code>openssl x509 -in cert.pem -text -noout</code></p>
<p>Specific advice for the Bitnami Kafka chart: we've had trouble with how the scripts included with the chart deal with PEM keys and cert chains, where it would sometimes not extract the entire chain as it processes the PEM, and then Kafka would only see a partial chain.</p>
<p>I would try to use the JKS format (Java keystore & truststore) instead and see if that helps. You would create a JKS keystore with your key and a truststore with all the certificates in the chain.</p>
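<p>As a rough sketch (assuming your PEM files are called <code>tls.key</code>/<code>tls.crt</code> and the CA chain is in <code>chain.crt</code>; adjust the file names to your setup), the conversion could look like:</p>
<pre><code># bundle the key and the certificate chain into a PKCS12 file
openssl pkcs12 -export -in tls.crt -inkey tls.key -certfile chain.crt \
  -name kafka -out kafka.p12

# convert the PKCS12 bundle into a JKS keystore
keytool -importkeystore -srckeystore kafka.p12 -srcstoretype PKCS12 \
  -destkeystore kafka.keystore.jks

# build a truststore containing the CA chain
keytool -import -trustcacerts -alias ca -file chain.crt \
  -keystore kafka.truststore.jks
</code></pre>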
<p>Regarding auto-renewal of certificates - you should be able to achieve that with cert-manager, however that might be challenging with the Bitnami Kafka chart as it's not suited to renewing certificates periodically, and is less suited for short-lived certificates from a CA like Let's Encrypt. Normally, you'd use Let's Encrypt with a load balancer like NGINX, you'd usually have a Kubernetes ingress controller that handles noticing the new certificates and reloading the load balancer.</p>
<p>In your case, since you are trying to generate TLS certificates for use by your backend services to communicate with Kafka, you might have an easier time doing this with something that was intended for inter-service communication (which Let's Encrypt is not), like SPIRE and a matching Kubernetes operator.</p>
<p><a href="https://github.com/spiffe/spire" rel="nofollow noreferrer">SPIRE</a>, which is a CNCF project that deals with attesting workload identities and representing them cryptographically - in your case as a TLS keypair for the Kafka server. It takes care of things like renewing the certificates.</p>
<p>To make SPIRE easy to use in Kubernetes, deploy it together with <a href="https://github.com/otterize/spire-integration-operator" rel="nofollow noreferrer">Otterize SPIRE integration operator</a>, which uses SPIRE to generate TLS credentials, saves them in Kubernetes secrets, and takes care of refreshing the secrets as the certificates require renewal by SPIRE. You deploy it in your cluster than annotate pods with what you'd like the secret to be called that holds the certificates, and you can use other annotations to configure things like whether the format is PEM or JKS or what the TTL is. That set of configuration should make it easy to get it working with Bitnami. We use it with the Bitnami chart successfully, and <a href="https://docs.otterize.com/guides/ibac-for-k8s-kafka/" rel="nofollow noreferrer">even have a tutorial for getting it working with Bitnami</a> - stop at the section that configures ACLs if all you want is TLS.</p>
<p>Since you also mentioned you use SASL, you might want to just replace the username/password completely with certificates and switch to mTLS. If you also want to add Kafka ACLs into the mix and allow access to certain topics/operations only for certain workloads, you can also deploy the <a href="https://github.com/otterize/intents-operator" rel="nofollow noreferrer">Otterize intents operator</a>. It lets you declare which topics a workload needs access to, and works together with SPIRE and built-in Kafka ACLs so that workloads can only access what they've declared.</p>
|
<p>I am new to <code>golang</code> and I am trying to list the nodes in my <code>minikube</code> cluster with the <code>client-go</code>. And I encounter the following issue:</p>
<blockquote>
<p>nodeList.Items undefined (type *invalid type has no field or method Items)compilerMissingFieldOrMethod</p>
</blockquote>
<p>And here's my code snippet for this:</p>
<pre><code>package main
import (
"context"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
func main() {
rules := clientcmd.NewDefaultClientConfigLoadingRules()
kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &clientcmd.ConfigOverrides{})
config, err := kubeconfig.ClientConfig()
if err != nil {
panic(err)
}
clientset := kubernetes.NewForConfigOrDie(config)
nodeList, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
if err != nil {
panic(err)
}
for _, node := range nodeList.Items {
fmt.Printf("%s\n", node.Name)
}
}
</code></pre>
<p>Can someone help me figure out what the problem is here?</p>
<p>Thank you!</p>
| <p>It looks like the clientset is not created from your kubeconfig. I would suggest you create the clientset in the following way. I have used out-of-cluster config here to create the clientset. You can also create it using <a href="https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration" rel="nofollow noreferrer">InClusterConfig</a>.</p>
<pre><code>package main
import (
"context"
"flag"
"fmt"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/util/homedir"
"path/filepath"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/clientcmd"
)
func main() {
// parse the .kubeconfig file
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
nodeList, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
if err != nil {
panic(err)
}
for _, node := range nodeList.Items {
fmt.Printf("%s\n", node.Name)
}
}
</code></pre>
|
<p>I am writing a script that receives a Kubernetes context name as an input and outputs the different elements of the cluster -></p>
<pre><code>class GKE:
def __init__(self, context):
s = context.split("_")
self.provider: str = s[0]
self.project: str = s[1]
self.data_center: GKE.DataCenter = GKE.DataCenter(data_center=s[2])
self.cluster_name: str = s[3]
def __str__(self):
return f'provider: {self.provider}, project: {self.project}, {self.data_center}, cluster name: {self.cluster_name}'
class DataCenter:
def __init__(self, data_center: str):
s = data_center.split("-")
self.country: str = s[0]
self.region: str = s[1]
self.zone: str = s[2]
def __str__(self):
return f'country: {self.country}, region: {self.region}, zone: {self.zone}'
class EKS:
# TODO: What are the fields? What is the convention?
pass
class AKS:
# TODO: What are the fields? What is the convention?
pass
if __name__ == '__main__':
print(GKE(context="gke_XXX-YYY-ZZZ_us-central1-c_name"))
</code></pre>
<p>Output:</p>
<pre><code>provider: gke, project: XXX-YYY-ZZZ, country: us, region: central1, zone: c, cluster name: name
</code></pre>
<p>This will support only the three main providers (GKE, EKS, AKS).</p>
<p>My question is:</p>
<p>What are the different elements of EKS and AKS context names?</p>
| <p>You need to differentiate between the correct name of the cluster and the naming schema of a resource.</p>
<p>When I run <code>kubectl config get-contexts</code> on the clusters Aks, Eks, and Gke I get the following results:</p>
<pre><code>NAME AUTHINFO
gke_project-1234_us-central1-c_myGKECluster gke_project-1234_us-central1-c_myGKECluster
myAKSCluster clusterUser_myResourceGroup_myAKSCluster
arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster
</code></pre>
<p>In all three clouds, the correct name of the cluster in this example is <code>my***Cluster</code>.</p>
<p>The naming scheme in <code>~/.kube/config</code> is used to distinguish one cluster (context-wise) from another.
For example, when you want to change the context with kubectl, you have to differentiate between a cluster named <code>myCluster</code> in <code>region-code1</code> and another cluster also named <code>myCluster</code> but in <code>region-code2</code>, and so on; that is what the naming scheme is for.</p>
<p><strong>GKE:</strong></p>
<p>As you wrote, the naming scheme in gke consists of 4 parts: <code>provider_project-id_zone_cluster-name</code><br />
For example <code>gke_project-123_us-central1-c_myGKECluster</code></p>
<ul>
<li>provider: <code>gke</code></li>
<li>project-id: <code>project-123</code></li>
<li>zone: <code>us-central1-c</code></li>
<li>cluster-name: <code>myGKECluster</code></li>
</ul>
<p><strong>AKS:</strong><br />
In aks the naming schema is the name of the cluster.<br />
But the <code>AUTHINFO</code>, (which is actually the configuration of the user in the kubeconfig file), consists of three parts: <code>Resource-type_Resource-group_Resource-name</code><br />
For example <code>clusterUser_myResourceGroup_myAKSCluster</code></p>
<ul>
<li>The Resource-type is <code>clusterUser</code></li>
<li>The Resource-group is <code>myResourceGroup</code></li>
<li>The Resource-name is <code>myAKSCluster</code></li>
</ul>
<p><strong>EKS:</strong></p>
<blockquote>
<p><strong>AWS</strong> requires an ARN when needed to specify a resource unambiguously across all of AWS.</p>
</blockquote>
<p>The ARN format is <code>arn:partition:service:region:account-id:resource-type/resource-id</code><br />
For example <code>arn:aws:eks:eu-west-1:1234:cluster/myEKSCluster</code></p>
<ul>
<li>partition: the partition in which the resource is located (such as <code>aws</code> Regions).</li>
<li>service: The service namespace that identifies the AWS product (such as <code>eks</code>).</li>
<li>region: The Region code (such as <code>eu-west-1</code>).</li>
<li>account-id: The ID of the AWS account that owns the resource(such as <code>1234</code>).</li>
<li>resource-type: The resource type (such as <code>cluster</code>).</li>
<li>resource-id The resource identifier. This is the name of the resource, the ID of the resource, or a resource path (such as <code>myEKSCluster</code>).</li>
</ul>
<hr />
<p>Additional resources:</p>
<p><a href="https://stackoverflow.com/a/63824179/20571972">https://stackoverflow.com/a/63824179/20571972</a>
<a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#aws-resource-eks-cluster-return-values" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#aws-resource-eks-cluster-return-values</a></p>
|
<p>We have OPA installed in our Kubernetes cluster. Not Gatekeeper. The "original" OPA...</p>
<p><strong>I don't understand how I can look at what OPA is receiving as input request from the API-server ?</strong><br />
=> If I knew exactly what the payload looks like then writing the Rego would be simple.</p>
<p>I tried to use <code>-v=8</code> option in <code>kubectl</code> to see the request and response from api-server like so:</p>
<pre><code>$ kubectl get pod -v=8
...
GET https://xxxx.k8s.ovh.net/api/v1/namespaces/default/pods?limit=500
...
Request Headers: Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
...
Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"37801112226"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"name":"Age","type [truncated 4024 chars]
</code></pre>
<p>Unfortunately, the above JSON payload doesn't match what I see in the different tutorials.</p>
<p><em>How is anybody able to write OPA rules for Kubernetes ???</em><br />
Thx</p>
| <p>You have two options:</p>
<ul>
<li><p>Run the OPA server with debug level logging:</p>
<p><code>opa run --server --log-level debug ...</code></p>
<p>This is obviously very noisy, so beware.</p>
</li>
<li><p>Run the server with <a href="https://www.openpolicyagent.org/docs/latest/management-decision-logs/" rel="nofollow noreferrer">decision logging</a> enabled. This is almost always preferable, and allows you to either dump decisions (including input data) to console, or for production deployments, to a remote server aggregating the logs. The decision logging system is really the native way of doing this, and comes with a bunch of features, like masking of sensitive data, etc.. but if you just want something printed to the console, you can run OPA like:</p>
<p><code>opa run --server --set decision_logs.console=true ...</code></p>
</li>
</ul>
|
<p>I have been reading a lot of posts about this but according to them I should have my bases covered so I am not sure what is going on.</p>
<p>I am using Minikube, Docker, Helm, and a local Docker registry on a Mac. After I install with Helm I see the following error.</p>
<pre><code>Failed to pull image "127.0.0.1:5000/hello-world:v1.0": rpc error: code = Unknown desc = Error response from daemon: manifest for 127.0.0.1:5000/hello-world:v1.0 not found: manifest unknown: manifest unknown
</code></pre>
<p>My deployment spec looks like this.</p>
<pre><code>spec:
containers:
- name: hello-world
image: 127.0.0.1:5000/hello-world:v1.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 5000
protocol: TCP
</code></pre>
<p>I get this if I try to pull the image.</p>
<pre><code>$docker pull 127.0.0.1:5000/hello-world:v1.0
v1.0: Pulling from hello-world
Digest: sha256:0534fcc8d406574f7def33e726f0e476ce94866e21d8cbd6ed4c273de113e9d3
Status: Image is up to date for 127.0.0.1:5000/hello-world:v1.0
127.0.0.1:5000/hello-world:v1.0
</code></pre>
<p>Then I check the manifest and it exists.</p>
<pre><code>$curl -X GET 127.0.0.1:5000/v2/hello-world/manifests/sha256:0534fcc8d406574f7def33e726f0e476ce94866e21d8cbd6ed4c273de113e9d3
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 8102,
"digest": "sha256:49188cd9fa7002b28391f01ba5c0ce4328cc467abeb6743d181cbe1a3a2fd37c"
},
...
}
</code></pre>
<p>Any help will be greatly appreciated</p>
<p><strong>Edit:</strong></p>
<p>The exact commands I executed are as follows</p>
<ul>
<li>docker run -d -p 5000:5000 --restart=always --name registry registry:2</li>
<li>docker build . -t hello-world:v1.0</li>
<li>docker tag hello-world:v1.0 127.0.0.1:5000/hello-world:v1.0</li>
<li>docker push 127.0.0.1:5000/hello-world:v1.0</li>
<li>helm install hello-world ./hello-world</li>
</ul>
<p><strong>Edit With Resolution:</strong></p>
<p>I finally got it to work. I had to modify the image key in the podspec of the pod/deployment manifest to <code>host.minikube.internal:5000/hello-world:v1.0</code> as suggested in the thread with @zsolt. Second, I had to start minikube with <code>--insecure-registry="host.minikube.internal:5000"</code> instead of <code>--insecure-registry="127.0.0.1:5000"</code>, running <code>minikube delete</code> before making this change. Thanks for the assistance @zsolt!</p>
| <p>You can enable registry addon in minikube:</p>
<p><code>minikube addons enable registry</code></p>
<p>Also you can enable alias for registry:</p>
<p><code>minikube addons enable registry-aliases # test.com, test.org, example.com, example.org</code></p>
<p>Docs: <a href="https://github.com/kubernetes/minikube/blob/master/deploy/addons/registry-aliases/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/deploy/addons/registry-aliases/README.md</a></p>
<p><strong>UPDATE:</strong></p>
<p>I just reread you question, you want to reach from minikube the registry that is running on your host machine. For that you can use the <code>host.minikube.internal</code> hostname, so <code>host.minikube.internal:5000/hello-world:v1.0</code></p>
<p>Docs: <a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/host-access/</a></p>
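<p>Since the registry on the host runs over plain HTTP (a stock <code>registry:2</code> container), minikube also has to be told to treat it as insecure when the cluster is created; a sketch of the sequence, assuming the registry listens on port 5000 as in your setup:</p>
<pre><code>minikube delete
minikube start --insecure-registry="host.minikube.internal:5000"
</code></pre>
<p>The <code>--insecure-registry</code> flag only takes effect on a freshly created cluster, which is why the <code>minikube delete</code> comes first.</p>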
|
<p>The problem is that I need to modify the StorageClassName but it's not possible because there was a blank StorageClassName assigned.</p>
<p>This PVC is bound to a PV, so if I delete the PVC to create a new one with the StorageClassName, will the data in my AWS volume be deleted?</p>
| <p>You can recreate an existing PVC reusing the same PV with no data loss by using the <a href="https://kubernetes.io/blog/2021/12/15/kubernetes-1-23-prevent-persistentvolume-leaks-when-deleting-out-of-order/" rel="nofollow noreferrer">reclaim policy</a>.</p>
<p>In case of Delete, the PV is deleted automatically when the PVC is removed, and the data on the PVC will also be lost.
In that case, it is more appropriate to use the “Retain” policy. With the “Retain” policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.</p>
<p>Reclaim Policy: Used to tell the cluster what to do with the volume after releasing its claim. Current reclaim policies are:</p>
<ul>
<li>Retain — manual reclamation</li>
<li>Recycle — basic scrub (rm -rf /thevolume/*)</li>
<li>Delete — associated storage assets such as AWS EBS, GCE
PD, Azure Disk, or OpenStack Cinder volume is deleted</li>
</ul>
<p>NOTE: Extremely recommended to use Retain policy for PVCs that store critical data.</p>
<p>Here in this <a href="https://webera.blog/recreate-an-existing-pvc-in-a-new-namespace-but-reusing-the-same-pv-without-data-loss-2c7326c0035a" rel="nofollow noreferrer">blog</a> you have detailed steps to recreate a PVC in another name space similarly you can change the storage class.</p>
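<p>As a concrete sketch (substitute the actual PV name bound to your claim), the reclaim policy of the existing PV can be switched to Retain before the PVC is deleted and recreated:</p>
<pre><code>kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'
</code></pre>
<p>With Retain in place, deleting the PVC leaves the PV (and the EBS volume behind it) intact, so it can later be re-bound to a new claim that carries the desired StorageClassName (typically after clearing the old claimRef, as walked through in the linked blog).</p>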
|
<p>I'm trying to customize the <code>jenkins/jenkins:latest</code> image in order to install Docker so I'm able to run docker within the jenkins pipeline, but when I run the following code using the following files, the pod jenkins-jenkins terminates with "Error" without outputting any meaningful logs.</p>
<p>Dockerfile (custom_image:latest)</p>
<pre><code>FROM jenkins/jenkins:latest
USER jenkins
</code></pre>
<p>(even though this Dockerfile is not installing docker the same error occurs)</p>
<p>values.yaml</p>
<pre><code>jenkins:
name:
image: custom_image:latest
</code></pre>
<pre><code>helm repo add jenkins https://raw.githubusercontent.com/jenkinsci/kubernetes-operator/master/chart
helm install jenkins jenkins/jenkins-operator -n jenkins -f values.yaml
</code></pre>
<p>Outputs...</p>
<p>kubectl describe pod/jenkins-jenkins</p>
<pre><code>...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned jenkins/jenkins-jenkins to minikube
Normal Pulled 11s kubelet Container image "docker-jenkins:latest" already present on machine
Normal Created 11s kubelet Created container jenkins-master
Normal Started 11s kubelet Started container jenkins-master
Normal Pulled 11s kubelet Container image "virtuslab/jenkins-operator-backup-pvc:v0.1.0" already present on machine
Normal Created 11s kubelet Created container backup
Normal Started 11s kubelet Started container backup
Normal Killing 8s kubelet Stopping container backup
</code></pre>
<p>kubectl logs pod/jenkins-jenkins</p>
<pre><code>...
Defaulted container "jenkins-master" out of: jenkins-master, backup
+ '[' '' == true ']'
+ echo 'To print debug messages set environment variable '\''DEBUG_JENKINS_OPERATOR'\'' to '\''true'\'''
+ mkdir -p /var/lib/jenkins/init.groovy.d
To print debug messages set environment variable 'DEBUG_JENKINS_OPERATOR' to 'true'
+ cp -n /var/jenkins/init-configuration/createOperatorUser.groovy /var/lib/jenkins/init.groovy.d
+ mkdir -p /var/lib/jenkins/scripts
+ cp /var/jenkins/scripts/init.sh /var/jenkins/scripts/install-plugins.sh /var/lib/jenkins/scripts
+ chmod +x /var/lib/jenkins/scripts/init.sh /var/lib/jenkins/scripts/install-plugins.sh
Installing plugins required by Operator - begin
+ echo 'Installing plugins required by Operator - begin'
+ cat
+ [[ -z '' ]]
+ install-plugins.sh
WARN: install-plugins.sh has been removed, please switch to jenkins-plugin-cli
</code></pre>
<p>kubectl describe pod/jenkins-jenkins-operator-7c4cd6dc7b-g6m7z</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18h default-scheduler Successfully assigned jenkins/jenkins-jenkins-operator-7c4cd6dc7b-g6m7z to minikube
Normal Pulled 18h kubelet Container image "virtuslab/jenkins-operator:v0.7.1" already present on machine
Normal Created 18h kubelet Created container jenkins-operator
Normal Started 18h kubelet Started container jenkins-operator
Normal SandboxChanged 3m56s kubelet Pod sandbox changed, it will be killed and re-created.
Warning BackOff 3m23s kubelet Back-off restarting failed container
Normal Pulled 3m11s (x2 over 3m55s) kubelet Container image "virtuslab/jenkins-operator:v0.7.1" already present on machine
Normal Created 3m11s (x2 over 3m55s) kubelet Created container jenkins-operator
Normal Started 3m10s (x2 over 3m55s) kubelet Started container jenkins-operator
</code></pre>
<p>kubectl logs jenkins-jenkins-operator-7c4cd6dc7b-g6m7z</p>
<pre><code>2022-11-22T20:00:50.544Z DEBUG controller-jenkins Jenkins HTTP Service is present {"cr": "jenkins"}
2022-11-22T20:00:50.545Z DEBUG controller-jenkins Jenkins slave Service is present {"cr": "jenkins"}
2022-11-22T20:00:50.545Z DEBUG controller-jenkins Kubernetes resources are present {"cr": "jenkins"}
2022-11-22T20:00:50.545Z DEBUG controller-jenkins Jenkins master pod is present {"cr": "jenkins"}
2022-11-22T20:00:50.545Z DEBUG controller-jenkins Jenkins master pod is terminating {"cr": "jenkins"}
2022-11-22T20:00:55.546Z DEBUG controller-jenkins Reconciling Jenkins {"cr": "jenkins"}
2022-11-22T20:00:55.546Z DEBUG controller-jenkins Operator credentials secret is present {"cr": "jenkins"}
2022-11-22T20:00:55.552Z DEBUG controller-jenkins Scripts config map is present {"cr": "jenkins"}
2022-11-22T20:00:55.555Z DEBUG controller-jenkins Init configuration config map is present {"cr": "jenkins"}
2022-11-22T20:00:55.562Z DEBUG controller-jenkins Base configuration config map is present {"cr": "jenkins"}
2022-11-22T20:00:55.562Z DEBUG controller-jenkins GroovyScripts Secret and ConfigMap added watched labels {"cr": "jenkins"}
2022-11-22T20:00:55.562Z DEBUG controller-jenkins ConfigurationAsCode Secret and ConfigMap added watched labels {"cr": "jenkins"}
2022-11-22T20:00:55.562Z DEBUG controller-jenkins createServiceAccount with annotations map[] {"cr": "jenkins"}
2022-11-22T20:00:55.582Z DEBUG controller-jenkins Service account, role and role binding are present {"cr": "jenkins"}
2022-11-22T20:00:55.582Z DEBUG controller-jenkins Extra role bindings are present {"cr": "jenkins"}
2022-11-22T20:00:55.583Z DEBUG controller-jenkins Jenkins HTTP Service is present {"cr": "jenkins"}
2022-11-22T20:00:55.584Z DEBUG controller-jenkins Jenkins slave Service is present {"cr": "jenkins"}
2022-11-22T20:00:55.585Z DEBUG controller-jenkins Kubernetes resources are present {"cr": "jenkins"}
2022-11-22T20:00:55.585Z DEBUG controller-jenkins Jenkins master pod is present {"cr": "jenkins"}
2022-11-22T20:00:55.585Z DEBUG controller-jenkins Jenkins master pod is terminating {"cr": "jenkins"}
</code></pre>
| <p>I don't see any issue in the logs you shared. You may try to install Jenkins using the plain Helm chart instead of the operator.</p>
<p>I summarized how to do that in <a href="https://www.rokpoto.com/jenkins-docker-in-docker-agent/" rel="nofollow noreferrer">Jenkins Docker in Docker Agent</a> post. You may read about using Docker in Jenkins pipelines there as well.</p>
|
<p>I have a secret in my kubernetes cluster which contains a kubeconfig of another Cluster.
I want to be able to connect to that cluster in my Go code.</p>
<p>I only found ways to create a client via a kubeconfig FILE but not with a kubeconfig string.</p>
<p>This is my code so far:</p>
<pre><code> // Read secret
kubeconfigSecret := &corev1.Secret{}
err := apiReader.Get(context.Background(), client.ObjectKey{Namespace: namespace, Name: name}, kubeconfigSecret)
if err != nil {
// error handling
}
kubeconfigBytes, ok := kubeconfigSecret.Data["kubeconfig"]
if !ok {
// error handling
}
kubeconfigString := string(kubeconfigBytes)
</code></pre>
| <p>There are several ways to do this (both snippets below assume the standard client-go imports <code>k8s.io/client-go/tools/clientcmd</code> and <code>k8s.io/client-go/kubernetes</code>), for example:</p>
<pre class="lang-golang prettyprint-override"><code>content := `a kubeconfig string`
tmpfile, err := os.CreateTemp("", "kubeconfig")
if err != nil {
...
}
defer os.Remove(tmpfile.Name())
if err := os.WriteFile(tmpfile.Name(), []byte(content), 0666); err != nil {
...
}
config, err := clientcmd.BuildConfigFromFlags("", tmpfile.Name())
if err != nil {
...
}
clientset,err := kubernetes.NewForConfig(config)
</code></pre>
<p>Another way is to use <code>clientcmd.RESTConfigFromKubeConfig</code>:</p>
<pre class="lang-golang prettyprint-override"><code>content := `a kubeconfig string`
config, err := clientcmd.RESTConfigFromKubeConfig([]byte(content))
if err != nil {
...
}
clientset, err := kubernetes.NewForConfig(config)
</code></pre>
|
<p>I'm trying to deploy an nginx deployment to see if my cluster is working properly, on a basic Kubernetes install on a VPS (kubeadm, Ubuntu 22.04, Kubernetes 1.24, containerd runtime).</p>
<p>I successfully deployed MetalLB via Helm on this VPS and assigned the public IP of the VPS to the address pool using the CRD <code>apiVersion: metallb.io/v1beta1, kind: IPAddressPool</code>.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
nginx LoadBalancer 10.106.57.195 145.181.xx.xx 80:31463/TCP
</code></pre>
<p>My target is to send a request to the public IP of my VPS, 145.181.xx.xx, and get the nginx test page.</p>
<p>The problem is that I am getting timeouts and connection refused when I try to reach this IP address from outside the cluster. Inside the cluster everything works correctly: calling 145.181.xx.xx from within the cluster returns the nginx test page.</p>
<p>There is no firewall issue - I tried to set up a plain nginx without Kubernetes via systemctl and I was able to reach port 80 on 145.181.xx.xx.</p>
<p>any suggestions and ideas what can be the problem or how I can try to debug it?</p>
| <p>I'm facing the same issue.</p>
<p>The Kubernetes cluster is deployed with Kubespray over 3 master and 5 worker nodes. MetalLB is deployed with Helm, and IPAddressPool and L2Advertisement are configured. I'm also deploying a simple nginx pod and a service to check if MetalLB is working.</p>
<p>MetalLB assigns first IP from the pool to nginx service and I'm able to curl nginx default page from any node in the cluster. However, if I try to access this IP address from outside of the cluster, I'm getting timeouts.</p>
<p>But here is the fun part. When I modify nginx manifest (rename deployment and service) and deploy it in the cluster (so 2 nginx pods and services are present), MetalLB assigns another IP from the pool to the second nginx service and I'm able to access this second IP address from outside the cluster.</p>
<p>Unfortunately, I don't have an explanation or a solution to this issue, but I'm investigating it.</p>
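<p>For context, the MetalLB layer-2 configuration referred to above looks roughly like this (names and addresses are placeholders for whatever your pool actually uses):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 145.181.xx.xx/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
</code></pre>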
|
<p>In the dnsutils pod, running <code>ping stackoverflow.com</code> fails:</p>
<pre><code>/ # ping stackoverflow.com
ping: bad address 'stackoverflow.com'
</code></pre>
<p>The <code>/etc/resolv.conf</code> file looks fine from inside the pod:</p>
<pre><code>/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search weika.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p><code>10.96.0.10</code> is the kube-dns service ip:</p>
<pre><code>[root@test3 k8s]# kubectl -n kube-system get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 75d
</code></pre>
<p><code>core dns</code></p>
<pre><code>[root@test3 k8s]# kubectl -n kube-system get pod -o wide | grep core
coredns-6557d7f7d6-5nkv7 1/1 Running 0 10d 10.244.0.14 test3.weikayuninternal.com <none> <none>
coredns-6557d7f7d6-gtrgc 1/1 Running 0 10d 10.244.0.13 test3.weikayuninternal.com <none> <none>
</code></pre>
<p>When I change the nameserver IP to a CoreDNS pod IP, DNS resolution works:</p>
<pre><code>/ # cat /etc/resolv.conf
nameserver 10.244.0.14
#nameserver 10.96.0.10
search weika.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # ping stackoverflow.com
PING stackoverflow.com (151.101.65.69): 56 data bytes
64 bytes from 151.101.65.69: seq=0 ttl=49 time=100.497 ms
64 bytes from 151.101.65.69: seq=1 ttl=49 time=101.014 ms
64 bytes from 151.101.65.69: seq=2 ttl=49 time=100.462 ms
64 bytes from 151.101.65.69: seq=3 ttl=49 time=101.465 ms
64 bytes from 151.101.65.69: seq=4 ttl=49 time=100.318 ms
^C
--- stackoverflow.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 100.318/100.751/101.465 ms
/ #
</code></pre>
<p>Why is it happening?</p>
| <p><em>This might only apply to you if there was trouble during either your initial installation of <code>microk8s</code> or enablement of the <code>dns</code> addon</em>, but it might still be worth a shot. I've invested so much gd time in this I couldn't live with myself if I didn't at least share to help that one person out there.</p>
<p>In my case, the server I provisioned to set up a single-node cluster was too small - only 1GB of memory. When I was setting up microk8s for the first time and enabling all the addons I wanted (dns, ingress, hostpath-storage), I started running into problems that were remedied by just giving the server more memory. Unfortunately though, screwing that up initially seems to have left the addons in some kind of undefined, partially initialized/configured state, such that everything appeared to be running normally as best I could tell (i.e. CoreDNS was deployed and ready, and the <code>kube-dns</code> service showed CoreDNS's ClusterIP as its backend endpoint) but none of my pods could resolve any DNS names, internal or external to the cluster, and I would get <a href="https://stackoverflow.com/questions/59550564/kubelet-does-not-have-clusterdns-ip-configured-in-microk8s">these annoying event logs</a> when I ran <code>kubectl describe <pod></code> suggesting there was a DNS issue of some kind.</p>
<p>What ended up fixing it is resetting the cluster (<code>microk8s reset --destroy-storage</code>) and then re-enabling all my addons (<code>microk8s enable dns ingress hostpath-storage</code>) now that I had enough memory to do so cleanly. After that, CoreDNS and the <code>kube-dns</code> service appeared ready just like before, but DNS queries actually worked like they should from within the pods running in the cluster.</p>
<p><strong>tl;dr; - Your <code>dns</code> addon might have been f'd up during cluster installation. Try resetting your cluster, re-enabling the addons, and re-deploying your resources.</strong></p>
|
<p>I have a <a href="https://konghq.com/" rel="nofollow noreferrer">Kong</a> deployment.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: local-test-kong
labels:
app: local-test-kong
spec:
replicas: 1
selector:
matchLabels:
app: local-test-kong
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: local-test-kong
spec:
automountServiceAccountToken: false
containers:
- envFrom:
- configMapRef:
name: kong-env-vars
image: kong:2.6
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- /bin/sleep 15 && kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8100
name: status
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: status
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: # ToDo
limits:
cpu: 256m
memory: 256Mi
requests:
cpu: 256m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /kong_prefix/
name: kong-prefix-dir
- mountPath: /tmp
name: tmp-dir
- mountPath: /kong_dbless/
name: kong-custom-dbless-config-volume
terminationGracePeriodSeconds: 30
volumes:
- name: kong-prefix-dir
- name: tmp-dir
- configMap:
defaultMode: 0555
name: kong-declarative
name: kong-custom-dbless-config-volume
</code></pre>
<p>I applied this YAML in <strong>GKE</strong>. Then I ran <code>kubectl describe</code> on its pod.</p>
<pre class="lang-yaml prettyprint-override"><code>➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-678598ffc6-ll9s8 1/1 Running 0 25m
➜ kubectl describe pod/local-test-kong-678598ffc6-ll9s8
Name: local-test-kong-678598ffc6-ll9s8
Namespace: local-test-kong
Priority: 0
Node: gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95
Start Time: Wed, 23 Nov 2022 00:12:56 +0800
Labels: app=local-test-kong
pod-template-hash=678598ffc6
Annotations: kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00
Status: Running
IP: 10.128.96.104
IPs:
IP: 10.128.96.104
Controlled By: ReplicaSet/local-test-kong-678598ffc6
Containers:
proxy:
Container ID: containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a
Image: kong:2.6
Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:12:58 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl
Normal Pulled 25m kubelet Container image "kong:2.6" already present on machine
Normal Created 25m kubelet Created container proxy
Normal Started 25m kubelet Started container proxy
➜
</code></pre>
<p>I applied the same YAML in my localhost's <strong>MicroK8S</strong> (on MacOS) and then I ran <code>kubectl describe</code> on its pod.</p>
<pre class="lang-yaml prettyprint-override"><code>➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
local-test-kong-54cfc585cb-7grj8 1/1 Running 0 86s
➜ kubectl describe pod/local-test-kong-54cfc585cb-7grj8
Name: local-test-kong-54cfc585cb-7grj8
Namespace: local-test-kong
Priority: 0
Node: microk8s-vm/192.168.64.5
Start Time: Wed, 23 Nov 2022 00:39:33 +0800
Labels: app=local-test-kong
pod-template-hash=54cfc585cb
Annotations: cni.projectcalico.org/podIP: 10.1.254.79/32
cni.projectcalico.org/podIPs: 10.1.254.79/32
kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00
Status: Running
IP: 10.1.254.79
IPs:
IP: 10.1.254.79
Controlled By: ReplicaSet/local-test-kong-54cfc585cb
Containers:
proxy:
Container ID: containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d
Image: kong:2.6
Image ID: docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
Ports: 8000/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Wed, 23 Nov 2022 00:39:37 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 256m
memory: 256Mi
Requests:
cpu: 256m
memory: 256Mi
Liveness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
kong-env-vars ConfigMap Optional: false
Environment: <none>
Mounts:
/kong_dbless/ from kong-custom-dbless-config-volume (rw)
/kong_prefix/ from kong-prefix-dir (rw)
/tmp from tmp-dir (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kong-prefix-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kong-custom-dbless-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kong-declarative
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 92s default-scheduler Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm
Normal Pulled 90s kubelet Container image "kong:2.6" already present on machine
Normal Created 90s kubelet Created container proxy
Normal Started 89s kubelet Started container proxy
Warning Unhealthy 68s kubelet Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 68s kubelet Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
➜
</code></pre>
<p>It's the exact same deployment YAML. However, the deployment created inside the GKE cluster is running fine with no complaints, while the deployment created inside my localhost microk8s (on macOS) is showing probe failures.</p>
<h2>What could i be missing here while deploying to microk8s (on MacOS)?</h2>
| <p>Your readiness probes are failing on the local pod on port <code>8100</code>. It looks like you have one or more firewall rules preventing internal pod and/or pod-to-pod communication.</p>
<p>As per the <a href="https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#2-deploying-microk8s" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>You may need to configure your firewall to allow pod-to-pod and pod-to-internet communication:</p>
</blockquote>
<pre><code>sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
</code></pre>
|
<p>For the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> we kept adding dashboard configs to the <code>/grafana/dashboards</code> folder to have more and more provisioned dashboards.</p>
<p>Then one day we ran this:</p>
<pre><code>kube-prometheus-stack>helm -n monitoring upgrade prometheus ./ -f ./values-core.yaml
</code></pre>
<p>and got:</p>
<pre><code>Error: UPGRADE FAILED: create: failed to create: Secret "sh.helm.release.v1.prometheus.v16" is invalid: data: Too long: must have at most 1048576 bytes
</code></pre>
<p>What is the intended way to get around this limitation? There is a need to add more and more provisioned dashboards to the chart.</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>As the <a href="https://helm.sh/docs/topics/advanced/#storage-backends" rel="nofollow noreferrer">docs</a> explain, the purpose of the auto-generated secret is to record release information, and by Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/#restriction-data-size" rel="nofollow noreferrer">design</a>, individual Secrets are limited to 1MiB in size. So the secret size is a hard limit imposed by Kubernetes, and the size of the release secret grows with the size of the Helm chart.</p>
<p>In this use case, the main reason the Helm chart is so large is that you use Grafana's dashboardProvider to automatically deploy the ready-made dashboard JSON files. The provider loads every JSON file into kube-prometheus-stack to create the dashboard ConfigMaps. Then one day, when you add a new dashboard and the release secret finally hits the limit, you get this error.</p>
<p>If you don't want to change the Helm storage backend type, there is an alternative way to work around it. The main idea is to separate the creation of the dashboard ConfigMaps from Grafana's dashboardProvider and create the dashboard ConfigMaps ourselves.</p>
<p><strong>First</strong>, we can abandon this kind of declaration in kube-prometheus-stack</p>
<pre><code> dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: 'default'
type: file
disableDeletion: true
editable: true
options:
path: /var/lib/grafana/dashboards/default
dashboards:
default:
{{- range $_, $file := ( exec "bash" (list "-c" "echo -n dashboards/default/*.json") | splitList " " ) }}
{{ trimSuffix (ext $file) (base $file) }}:
json: |
{{- readFile $file }}
{{- end }}
</code></pre>
<p><strong>Then</strong>, we create the dashboard ConfigMaps in a separate Helm chart.</p>
<p>Helm chart template</p>
<pre><code>{{- range $config, $data := .Values.configs }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-dashboard-{{ $config }}
labels:
grafana_dashboard: "1"
annotations:
grafana_folder: {{ $config }}
data:
{{ range $key, $val := $data }}
{{ $key }}.json: |
{{ mustToJson $val }}
{{ end }}
{{- end }}
</code></pre>
<p>Helm values: read the dashboard JSON files and convert them to strings</p>
<pre><code>configs:
default:
{{- range $_, $file := ( exec "bash" ( list "-c" (printf "echo -n dashboards/default/*.json")) | splitList " ") }}
{{ trimSuffix (ext $file) (base $file) }}:
{{ readFile $file }}
{{- end}}
</code></pre>
<p>Now, when we deploy this separate dashboard Helm chart, it automatically generates all the ConfigMaps containing the dashboard JSON.</p>
<p><strong>Finally</strong>, as the last step, we configure the Grafana sidecar so that it picks the dashboards up from the ConfigMaps.</p>
<pre><code>grafana:
defaultDashboardsEnabled: false
sidecar:
dashboards:
enabled: true
label: grafana_dashboard
annotations:
grafana_folder: "Default"
folder: /tmp/dashboards
folderAnnotation: grafana_folder
provider:
foldersFromFilesStructure: true
</code></pre>
<p>After updating kube-prometheus-stack and waiting a while (or by watching the Grafana sidecar pod logs), you will see the dashboard ConfigMaps being loaded into the pod and added as dashboards.</p>
|
<p>Developed a nextjs app and have deployed it to a kubernetes (AKS) cluster. The docker file looks pretty much identical to the sample nextjs docker files (see <a href="https://github.com/vercel/next.js/blob/v12.3.2/examples/with-docker/Dockerfile" rel="nofollow noreferrer">here</a>).</p>
<p>The pod manifest is nothing special either, all I'm doing is setting some environment variables and setting the container up with the right docker image.</p>
<p>The nextjs app refuses to start on a pod in the AKS cluster.</p>
<p>I have pulled the docker image from the container registry and am able to start it up locally on docker desktop - all works fine.</p>
<p>However, it refuses to start on a kubernetes pod, and I am at my wits end! I overrode the entry point to the container on the pod and have manually tried running <code>node server.js</code> via kubectl inside the container, and... nothing happens. Node just exits after a second which seems to indicate that something is causing nodejs to silently crash.</p>
<p>What could be going wrong here? Is there anyway for me to get more output to try and diagnose this issue?</p>
| <p>Ok, it seems that running operations and the final command as the nextjs USER, as specified in the Dockerfile, is the bit that's causing issues in Kubernetes. Removing those and running as the default root user fixes the issue.</p>
|
<p>How can I run kubectl apply commands from Go via client-go?
For example, I have a file called crds.yaml and I want to apply it via client-go.</p>
<p>I can't find any examples of how to do so - can someone please help?</p>
| <p>You need to decode your .yaml file into objects and send them to the API server.</p>
<p><a href="https://github.com/kubernetes/client-go/issues/216" rel="nofollow noreferrer">Here</a> is the complete answer (with a code example).</p>
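<p>For reference, a minimal sketch of that approach using the dynamic client and a discovery-based RESTMapper. It assumes a single-document <code>crds.yaml</code> and a kubeconfig path in the <code>KUBECONFIG</code> environment variable, and uses <code>Create</code> rather than a true server-side apply to keep it short:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"os"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/serializer/yaml"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/restmapper"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig pointed to by $KUBECONFIG.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(memory.NewMemCacheClient(dc))

	data, err := os.ReadFile("crds.yaml")
	if err != nil {
		panic(err)
	}

	// Decode the YAML document into an unstructured object and read its GVK.
	obj := &unstructured.Unstructured{}
	dec := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
	_, gvk, err := dec.Decode(data, nil, obj)
	if err != nil {
		panic(err)
	}

	// Map the kind to a resource so we know which REST endpoint to talk to.
	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		panic(err)
	}

	var ri dynamic.ResourceInterface
	if mapping.Scope.Name() == meta.RESTScopeNameNamespace {
		ri = dyn.Resource(mapping.Resource).Namespace(obj.GetNamespace())
	} else {
		ri = dyn.Resource(mapping.Resource)
	}

	// Create the object; for "apply" semantics you would Patch with
	// types.ApplyPatchType (server-side apply) instead.
	if _, err := ri.Create(context.Background(), obj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
</code></pre>
<p>For a multi-document file you would split on <code>---</code> (e.g. with the YAMLOrJSONDecoder from <code>k8s.io/apimachinery/pkg/util/yaml</code>) and repeat the decode/map/create loop per document.</p>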
|
<p>In Kubernetes I'm practising the example <em>6.1. A pod with two containers sharing the same volume: fortune-pod.yaml</em> from the book <strong>Kubernetes in Action</strong>. In this volumes example my pod contains 2 containers, and one of them is not running. Please guide me on what I'm doing wrong so the pod runs successfully.
On checking the logs of the container I'm getting the below error:</p>
<pre><code>Defaulted container "fortune-cont" out of: fortune-cont, web-server
</code></pre>
<p>Whereas in the pod description, the events look like this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned book/vol-1-fd556f5dc-8ggj6 to minikube
Normal Pulled 40m kubelet Container image "nginx:alpine" already present on machine
Normal Created 40m kubelet Created container web-server
Normal Started 40m kubelet Started container web-server
Normal Created 39m (x4 over 40m) kubelet Created container fortune-cont
Normal Started 39m (x4 over 40m) kubelet Started container fortune-cont
Normal Pulled 38m (x5 over 40m) kubelet Container image "xxxx/fortune:v1" already present on machine
Warning BackOff 25s (x188 over 40m) kubelet Back-off restarting failed container
</code></pre>
<p>here is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: vol-1
namespace: book
spec:
replicas: 1
selector:
matchLabels:
name: fortune-vol-1
type: volume
template:
metadata:
labels:
name: fortune-vol-1
type: volume
spec:
containers:
- image: ****/fortune:v1
name: fortune-cont
volumeMounts:
- name: html
mountPath: /var/htdocs
- image: nginx:alpine
name: web-server
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
readOnly: true
ports:
- containerPort: 80
protocol: TCP
volumes:
- name: html
emptyDir: {}
</code></pre>
<p>Here is my pod description for containers.</p>
<pre><code>Containers:
fortune-cont:
Container ID: docker://3959e47a761b670ee826b2824efed09d8f5d6dfd6451c4c9840eebff018a3586
Image: prav33n/fortune:v1
Image ID: docker-pullable://prav33n/fortune@sha256:671257f6387a1ef81a293f8aef27ad7217e4281e30b777a7124b1f6017a330f8
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 24 Nov 2022 02:05:26 +0530
Finished: Thu, 24 Nov 2022 02:05:26 +0530
Ready: False
Restart Count: 17
Environment: <none>
Mounts:
/var/htdocs from html (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
web-server:
Container ID: docker://37d831a2f7e97abadb548a21ecb20b5c784b5b3d6102cf8f939f2c13cdfd08c0
Image: nginx:alpine
Image ID: docker-pullable://nginx@sha256:455c39afebd4d98ef26dd70284aa86e6810b0485af5f4f222b19b89758cabf1e
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 24 Nov 2022 01:02:55 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from html (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spdq4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
html:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-spdq4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m20s (x281 over 64m) kubelet Back-off restarting failed container
</code></pre>
| <p>Your <em>Pod</em> named <code>vol-1</code> has two <em>containers</em>:</p>
<ul>
<li>One named <code>fortune-cont</code></li>
<li>One named <code>web-server</code></li>
</ul>
<p>If you run <code>kubectl logs vol-1</code>, Kubernetes doesn't know which container you're asking about, so it has to pick one, and tells you:</p>
<blockquote>
<p>Defaulted container "fortune-cont" out of: fortune-cont, web-server</p>
</blockquote>
<p>You can select a container explicitly with the <code>-c</code> option:</p>
<pre><code>kubectl logs vol-1 -c web-server
</code></pre>
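<p>Going by the events and the pod description, <code>fortune-cont</code> is the container that keeps crashing (CrashLoopBackOff), so that is the one whose logs you actually want. The output of its previous, failed run can usually be retrieved with the <code>--previous</code> flag (using the same shorthand pod name as above):</p>
<pre><code>kubectl logs vol-1 -c fortune-cont --previous
</code></pre>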
|
<p>I tried creating just a simple Service account in Kubernetes by running the command on my AWS EC2 cli <code>kubectl create serviceaccount jenkins --dry-run=client -o yaml > jenkins-sa.yaml</code> and I have my <code>kube/config</code> file on my <code>/home/ec2-user</code>.</p>
<p>I applied the new config jenkins-sa.yaml by running <code>kubectl apply -f jenkins-sa.yaml</code> and then I tried to see more info about the newly created service account by running <code>kubectl describe serviceaccount jenkins</code> which displays some information but without the secret token that should be associated to the jenkins service account by default.</p>
<p>I would be grateful if someone could point out what I'm doing wrong, because I'm pretty new to Kubernetes. Below is a screenshot.</p>
<p><a href="https://i.stack.imgur.com/x0jdK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x0jdK.png" alt="enter image description here" /></a></p>
| <p>You have to create the token secret manually. Here is a YAML example:</p>
<pre><code>apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: jenkins-sa-token
namespace: default
annotations:
kubernetes.io/service-account.name: "jenkins"
</code></pre>
<p>This will create a service account token secret named <code>jenkins-sa-token</code> for your jenkins service account.</p>
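<p>Once the token controller has populated the secret, the token itself can be read back with something along these lines:</p>
<pre><code>kubectl describe secret jenkins-sa-token
# or just the token value:
kubectl get secret jenkins-sa-token -o jsonpath='{.data.token}' | base64 --decode
</code></pre>
<p>Note that since Kubernetes 1.24 these long-lived token secrets are no longer created automatically for service accounts, which is why <code>kubectl describe serviceaccount jenkins</code> shows no associated token.</p>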
|
<p>I am trying to enable ingress in minkube. When I run <code>minikube addons enable ingress</code> it hangs for a while then I get the following error message:</p>
<pre><code>❌ Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.15/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
service/ingress-nginx-controller-admission unchanged
stderr:
error: error validating "/etc/kubernetes/addons/ingress-deploy.yaml": error validating data: [ValidationError(Service.spec): unknown field "ipFamilies" in io.k8s.api.core.v1.ServiceSpec, ValidationError(Service.spec): unknown field "ipFamilyPolicy" in io.k8s.api.core.v1.ServiceSpec]; if you choose to ignore these errors, turn validation off with --validate=false
waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 😿 If the above advice does not help, please let us know: │
│ 👉 https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ Please also attach the following file to the GitHub issue: │
│ - /tmp/minikube_addons_2c0e0cafd16ea0f95ac51773aeef036b316005b6_0.log │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
</code></pre>
<p>This is the minikube start command I used:
<code>minikube start --kubernetes-version=v1.19.15 --vm-driver=docker</code></p>
<p>I have tried reinstalling minikube. It was working fine last week when I ran the same command.</p>
<p>If more specific information is needed please let me know and I will edit the question. Does anyone know how I could go about fixing this?</p>
<p>Thanks in advance.</p>
| <p>Bit late, but I hope someone finds this useful. This happens because minikube could not pull the image (ingress-nginx-controller) in time; the way to check is:</p>
<pre><code>kubectl get pod -n ingress-nginx
</code></pre>
<p>If the ingress-nginx-controller-xxxx pod (xxxx is the identifier of the pod) has a status of ImagePullBackOff or something like that, you are in this scenario.</p>
<p>To fix it, you first need to describe your pod:</p>
<pre><code>kubectl describe pod ingress-nginx-controller-xxxxx -n ingress-nginx
</code></pre>
<p>Look under containers/controller/image and copy its value (you don't need to copy the @sha256:... part if it contains it). You must pull the image manually, but before that you should probably delete the related deployment as well:</p>
<pre><code>kubectl delete deployment ingress-nginx-controller -n ingress-nginx
</code></pre>
<p>And then pull the image from the VM itself; in my case it looks like this:</p>
<pre><code>minikube ssh docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.1
</code></pre>
<p>Wait for it to finish, then try "addons enable ingress" again and see if it works; it did for me.</p>
|
<p>I am getting this error from filebeat:</p>
<blockquote>
<p>Failed to connect to backoff(elasticsearch(<a href="https://elk.example.com:9200" rel="noreferrer">https://elk.example.com:9200</a>)): Get <a href="https://elk.example.com:9200" rel="noreferrer">https://elk.example.com:9200</a>: x509: certificate signed by unknown authority</p>
<p>INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(<a href="https://elk.example.com:9200" rel="noreferrer">https://elk.example.com:9200</a>)) with 1468 reconnect attempt(s)</p>
<p>INFO [publish] pipeline/retry.go:189 retryer: send unwait-signal to consumer</p>
<p>INFO [publish] pipeline/retry.go:191 done</p>
<p>INFO [publish] pipeline/retry.go:166 retryer: send wait signal to consumer</p>
<p>INFO [publish] pipeline/retry.go:168 done</p>
</blockquote>
<p>However, Elasticsearch has a valid SSL certificate from Let's Encrypt. (This is not a self-signed certificate.)</p>
<p>Filebeat kubernetes config:</p>
<pre><code>output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
</code></pre>
<p>I tried adding this parameter to the config file and it worked. But why do I need to bypass verification if the certificate is valid?</p>
<pre><code>ssl.verification_mode: "none"
</code></pre>
| <p>The reason is either that the operating system's default truststore is old and does not include your (perfectly valid and well-known) trusted CA chain, or that the Elasticsearch certificate is self-signed or signed by a private CA.</p>
<p>You can choose from a number of solutions:</p>
<ol>
<li><p>Run your filebeat in an environment (server, container, etc) with an updated default truststore that knows the CA that signed your certificate - i.e: upgrade to a newer version of the operating system or updated container image.</p>
</li>
<li><p>Remove your <code>ssl.verification_mode: "none"</code> configuration and add a <code>ssl.certificate_authorities</code> point to one or more PEM files with the to-be trusted CA certificates.</p>
</li>
</ol>
<blockquote>
<pre><code> output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
ssl.certificate_authorities: ["/path/to/ca.pem"]
</code></pre>
</blockquote>
<ol start="3">
<li>Remove your <code>ssl.verification_mode: "none"</code> configuration and add a <code>ssl.certificate_authorities</code> configuration with the embedded to-be trusted CA certificate directly in the YAML configuration.</li>
</ol>
<p>Example from <a href="https://www.elastic.co/guide/en/beats/filebeat/master/configuration-ssl.html" rel="nofollow noreferrer">filebeat configuration documentation</a>.</p>
<blockquote>
<pre><code>certificate_authorities:
- |
-----BEGIN CERTIFICATE-----
MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
/D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
sxSmbIUfc2SGJGCJD4I=
-----END CERTIFICATE-----
</code></pre>
</blockquote>
<p><strong>Disclaimer</strong>: You have not provided a filebeat version so I assumed the latest one. Nevertheless, this kind of configuration will probably be the same across filebeat versions.</p>
|
<p>I am trying to get the cluster utilization values (CPU utilization, memory, file system, network transfer, pod count) in a Go program. Are there APIs to get the cluster data directly from Kubernetes or OpenShift that can be called from a Go program? Or can we get the individual node data and add it up to get the total cluster utilization values?</p>
<p>Executing kubectl commands using os/exec in the program is not able to fetch the values.</p>
| <p>The Kubernetes <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline" rel="nofollow noreferrer">metrics server</a> offers all of this information via the <a href="https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go#L23-L29" rel="nofollow noreferrer">Summary API</a>.</p>
<p>Invoking this API is straight-forward using the command line.</p>
<pre><code>kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/stats/summary"
</code></pre>
<p>And from go, if you are using <a href="https://github.com/kubernetes/client-go/" rel="nofollow noreferrer">client-go</a>, this API can be invoked with:</p>
<pre class="lang-golang prettyprint-override"><code>request := clientset.CoreV1().RESTClient().Get().Resource("nodes").Name(node.Name).SubResource("proxy").Suffix("stats/summary")
responseRawArrayOfBytes, err := request.DoRaw(context.Background())
if err != nil {
return nil, errors.Wrap(err, "failed to get stats from node")
}
</code></pre>
<p>Note that, according to the official metrics server documentation, the <a href="https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#summary-api-source" rel="nofollow noreferrer">endpoint will be changing</a> to <code>/metrics/resource</code> in metrics-server 0.6.x.</p>
|
<p>I've got a spring-boot application deployed in kubernetes, with actuator on port <code>8081</code>. The main application is working on port <code>8080</code>. When I describe the instance, the checks are failing:</p>
<pre><code>$ kubectl describe pod app-844d96f469-8vkbn
...
Warning Unhealthy 29s (x4 over 48s) kubelet Readiness probe failed: Get "http://192.168.13.189:8081/actuator/health/readiness": dial tcp 192.168.13.189:8081: connect: connection refused
Warning Unhealthy 29s (x2 over 39s) kubelet Liveness probe failed: Get "http://192.168.13.189:8081/actuator/health/liveness": dial tcp 192.168.13.189:8081: connect: connection refused
</code></pre>
<p>When I ssh into the pod, the checks show that the application is healthy, both on localhost and the ip addresses from the error message:</p>
<pre><code>$ kubectl exec -it pod/app-844d96f469-8vkbn -- /bin/sh
# curl localhost:8081/actuator/health/liveness
{"status":"UP"}
# curl localhost:8081/actuator/health/readiness
{"status":"UP"}
# curl 192.168.13.189:8081/actuator/health/liveness
{"status":"UP"}
# curl 192.168.13.189:8081/actuator/health/readiness
{"status":"UP"}
</code></pre>
<p>The application was healthy within the 10*10=100 second window, so this isn't just an issue of adjusting the threshold and period.</p>
<p>This is the kubernetes config for the checks from the deployment:</p>
<pre><code> livenessProbe:
httpGet:
path: "/actuator/health/liveness"
port: 8081
failureThreshold: 10
periodSeconds: 10
readinessProbe:
httpGet:
path: "/actuator/health/readiness"
port: 8081
failureThreshold: 10
periodSeconds: 10
</code></pre>
<p>I'd appreciate any help with this issue!</p>
<hr />
<p><strong>Edit:</strong> Adding the Dockerfile, since Thomas asked if 8081 was exposed on the image. I didn't have to do anything for port 8080 to be exposed, but maybe the question wasn't related to the Dockerfile.</p>
<pre><code>FROM eclipse-temurin:17.0.5_8-jre
RUN mkdir /opt/app
ARG JAR_FILE
COPY ${JAR_FILE} /opt/app/app.jar
ENTRYPOINT ["java","-jar","/opt/app/app.jar"]
</code></pre>
| <p>I suppose your application takes too long to start and fails the liveness probe before it becomes ready.
Spring Boot / Java applications do heavy initialization on startup, and when CPU is limited this is slow and takes a long time.</p>
<p>Here is a good explanation:
<a href="https://faun.pub/java-application-optimization-on-kubernetes-on-the-example-of-a-spring-boot-microservice-cf3737a2219c" rel="nofollow noreferrer">https://faun.pub/java-application-optimization-on-kubernetes-on-the-example-of-a-spring-boot-microservice-cf3737a2219c</a></p>
<p>A simple service takes 80 seconds on 0.5 CPU while it only takes 12 seconds on 3 CPUs.</p>
<p>If you have a CPU limit set, just leave it out and only set a request.
And maybe also configure an <code>initialDelaySeconds: 60</code> or so for the liveness probe.</p>
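<p>Mirroring the probe from your deployment, that would look roughly like this (60 is just a starting point to tune):</p>
<pre class="lang-yaml prettyprint-override"><code>livenessProbe:
  httpGet:
    path: "/actuator/health/liveness"
    port: 8081
  initialDelaySeconds: 60
  failureThreshold: 10
  periodSeconds: 10
</code></pre>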
|
<p>For regular pods (in the running state), we can check the <strong>actual</strong> (runtime) resource utilisation using the <code>kubectl top pod <pod_name></code> command.
However, for Job pods whose execution is already complete, is there any way to fetch how many resources were consumed by those pods?</p>
<p>Getting this info does help to better tune the resource allocation and also, whether we over/under provisioning the requests for the job pods.</p>
<p>Kuberenetes version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:23:45Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>If not any direct way, maybe a work-around to get this info.</p>
| <p>There is no built-in command that can show the resource utilisation of a completed Job pod. The only options are an external tool like Prometheus, or a sidecar container that logs resource usage while the Job runs.</p>
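<p>As a sketch of the Prometheus route: if your cluster scrapes the kubelet/cAdvisor metrics (the kube-prometheus stack does this by default), the consumption of a finished Job pod can still be queried for the time window in which it ran, for example via the Prometheus HTTP API (pod name, host, and time range are placeholders):</p>
<pre><code># peak memory working set of the job pod over the last hour
curl -G 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=max_over_time(container_memory_working_set_bytes{pod="my-job-abc12",container!=""}[1h])'

# average CPU usage (cores) of the job pod over the last hour
curl -G 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=avg_over_time(rate(container_cpu_usage_seconds_total{pod="my-job-abc12",container!=""}[5m])[1h:5m])'
</code></pre>
<p>Comparing those numbers against the Job's requests and limits tells you whether the pods are over- or under-provisioned.</p>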
|
<p>I installed GitLab runner via <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">HelmChart</a> on my <code>Kubernetes</code> cluster</p>
<p>While installing via helm I used config <code>values.yaml</code></p>
<p>But my Runner stucks every time at <code>docker login</code> command,
without <code>docker login</code> working good</p>
<p>I have no idea what is wrong :(
<strong>Any help appreciated!</strong></p>
<p><strong>Error:</strong> <code>write tcp 10.244.0.44:50882->188.72.88.34:443: use of closed network connection</code></p>
<p><a href="https://i.stack.imgur.com/vVOn4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vVOn4.png" alt="enter image description here" /></a></p>
<p><code>.gitlab-ci.yaml</code> file</p>
<pre><code>build docker image:
stage: build
image: docker:latest
services:
- name: docker:dind
entrypoint: ["env", "-u", "DOCKER_HOST"]
command: ["dockerd-entrypoint.sh"]
variables:
DOCKER_HOST: tcp://localhost:2375/
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: ""
before_script:
- mkdir -p $HOME/.docker
- echo passwd| docker login -u user https://registry.labs.com --password-stdin
script:
- docker images
- docker ps
- docker pull registry.labs.com/jappweek:a_zh
- docker build -t "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8 .
- docker push "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8
tags:
- k8s
</code></pre>
<p><code>values.yaml</code> file</p>
<pre><code>image:
registry: registry.gitlab.com
#image: gitlab/gitlab-runner:v13.0.0
image: gitlab-org/gitlab-runner
# tag: alpine-v11.6.0
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.somebars.com
runnerRegistrationToken: "GR1348941a7jJ4WF7999yxsya9Arsd929g"
terminationGracePeriodSeconds: 3600
#
concurrent: 10
checkInterval: 30
sessionServer:
enabled: false
## For RBAC support:
rbac:
create: true
rules:
- resources: ["configmaps", "pods", "pods/attach", "secrets", "services"]
verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create", "patch", "delete"]
clusterWideAccess: false
podSecurityPolicy:
enabled: false
resourceNames:
- gitlab-runner
metrics:
enabled: false
portName: metrics
port: 9252
serviceMonitor:
enabled: false
service:
enabled: false
type: ClusterIP
runners:
config: |
[[runners]]
[runners.kubernetes]
namespace = "{{.Release.Namespace}}"
image = "ubuntu:16.04"
privileged: true
cache: {}
builds: {}
services: {}
helpers: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
runAsNonRoot: true
privileged: false
capabilities:
drop: ["ALL"]
podSecurityContext:
runAsUser: 100
# runAsGroup: 65533
fsGroup: 65533
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases: []
podAnnotations: {}
podLabels: {}
priorityClassName: ""
secrets: []
configMaps: {}
volumeMounts: []
volumes: []
</code></pre>
| <p>I bypassed <code>docker login</code> by importing the <code>$HOME/.docker/config.json</code> file, which stores the <code>auth token</code>, from my host machine into GitLab CI.</p>
<pre><code> before_script:
- mkdir -p $HOME/.docker
- echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
</code></pre>
<p><code>$DOCKER_AUTH_CONFIG</code> is <code>$HOME/.docker/config.json</code></p>
<p>That's all; no <code>docker login</code> is required.</p>
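<p>For reference, the value stored in the <code>DOCKER_AUTH_CONFIG</code> CI/CD variable is just the Docker config JSON, roughly of this shape (the <code>auth</code> field is the base64 of <code>user:password</code>, here of <code>user:passwd</code> from the example above):</p>
<pre><code>{
  "auths": {
    "registry.labs.com": {
      "auth": "dXNlcjpwYXNzd2Q="
    }
  }
}
</code></pre>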
|
<p>I want to know when a pod goes into termination and how much time the termination takes.</p>
<p>Also, from which stage does a pod go into termination?</p>
<p>I have a cluster in GKE, so I want to know the details about pod termination in GKE.</p>
| <p>The default grace period is 30 seconds if that is what you are looking for.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination</a></p>
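<p>If 30 seconds is not enough (or too much) for your workload, the grace period can be set per pod; a minimal sketch of the relevant field in the pod spec:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  terminationGracePeriodSeconds: 60
</code></pre>
<p>During that window the pod is in the Terminating state and its containers have received SIGTERM; once the period expires, any remaining processes are killed with SIGKILL.</p>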
|
<p>Hi, I have a question about Go microservices running in Kubernetes containers. For example, I have 2 deployments named A and B.</p>
<ol>
<li>A has 1 pod that sends a request to create a document in service B.</li>
<li>B has an autoscaling config of 1 - 5 pods, waiting for requests from A and processing them to create a document in many collections.</li>
</ol>
<p>Sometimes service A gets an "EOF" error from service B. When I go to check the logs in service B, I can't find any error that would make the pod terminate, nor a CrashLoopBackOff status in the kubectl terminal; it's as if service B just stopped running this process.</p>
<p>I wonder whether the cause of the "EOF" error is the autoscaling config of service B: in my understanding, high traffic to service B scales it up to 5 pods, but when traffic goes down the pods must scale down too.</p>
<p>Is it possible that a process still working in a pod that is about to scale down gets terminated before the process succeeds?</p>
| <p>When pods are terminated (due to scale down), they receive a TERM signal and are deregistered from the load balancer. The problem in your case might be that running processes are terminated immediately and connections get closed. You can try to circumvent the problem with a preStop lifecycle hook for your container, i.e. give your running processes time to finish before the pod gets terminated.</p>
<pre><code> lifecycle:
preStop:
exec:
command:
- "sleep"
- "10"
</code></pre>
<p>With this hook, on scale down the pod will be deregistered from the load balancer immediately, preventing it from receiving further requests, and the preStop lifecycle hook will be executed. In this case it just waits 10 seconds. Only after the hook has finished will the container receive the TERM signal.</p>
<p><a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/</a></p>
|
<p>I am new to K8s, so please be gentle. I have a test hello-world Flask app that I would like to deploy on EKS. I am using the AWS Load Balancer Controller add-on, following the link below. At the end, when I check the deployment, it shows up without any issues, as the link describes.
<a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a></p>
<p>When I apply the three files below they all apply correctly and I see the pods up, but on the ingress I don't see an external IP address and can't access my Flask app.</p>
<p>My goal is to have AWS create a dummy DNS name so I can point my public DNS name to it as a CNAME entry. Also, the Ingress should listen on port 80 and then forward to port 5000 for the Flask app internally.</p>
<p>What am I missing?
Could someone please point me in the right direction?</p>
<p>ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "flask-test-ingress"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
labels:
app: hello-world
spec:
rules:
- host: testing.somesite.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: "hello-world-service"
port:
number: 80
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
spec:
selector:
matchLabels:
app: hello-world
replicas: 2
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: gitlab.privaterepo.com:5050/jmartinez/flask_helloworld:v4
ports:
- containerPort: 5000
protocol: TCP
imagePullSecrets:
- name: regcred
</code></pre>
<p>service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-service
spec:
selector:
app: hello-world
type: NodePort
ports:
- protocol: "TCP"
port: 80
targetPort: 5000
nodePort: 30000
</code></pre>
| <p>Finally got it working. When I realized that the ALB was not created automatically I researched and found the solution. I had to remove the ingress.class value from the annotations as well as remove the host. So now my ingress looks like the following. After deleting the old ingress and reapplying this one, I waited about 10 minutes and my hello world app is now running.</p>
<p>ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "flask-test-ingress"
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
labels:
app: hello-world
spec:
ingressClassName: alb
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: "hello-world-service"
port:
number: 80
</code></pre>
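<p>Once the controller has provisioned the ALB, the generated DNS name (the value to point your CNAME at) shows up in the <code>ADDRESS</code> column of the ingress; the hostname below is only an illustration:</p>
<pre><code>kubectl get ingress flask-test-ingress
NAME                 CLASS   HOSTS   ADDRESS                                            PORTS   AGE
flask-test-ingress   alb     *       k8s-default-flaskte-xxxxxxxxxx.elb.amazonaws.com   80      10m
</code></pre>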
|
<p>Hi, I have a question about a Go microservice running in Kubernetes containers. For example, I have two deployments, named A and B.</p>
<ol>
<li>A has 1 pod that sends a request to create a document in service B.</li>
<li>B has an autoscaling config of 1 - 5 pods waiting for requests from A, and it processes them to create a document in many collections.</li>
</ol>
<p>Sometimes service A gets an "EOF" error from service B. When I go to check the logs in service B I can't find any error that would make the pod terminate, and there is no CrashLoopBackOff status in the kubectl terminal; it's as if service B just stopped running this process.</p>
<p>I wonder whether the cause of the "EOF" error is the autoscaling config in service B. In my understanding, high traffic to service B scales it up to 5 pods, but when traffic goes down the pods must scale down too.</p>
<p>Is it possible that a process running in a pod that is about to scale down gets terminated before the process succeeds?</p>
| <p>Make sure your service B can do a graceful shutdown.</p>
<p>On scale down, or delete, the pod will first receive a TERM signal. Then, after a grace period, it will send a KILL if needed.</p>
<p>Your service should respect TERM, stop all internal processes (goroutines in your case) and then terminate. If you manage that, there will be no need for a KILL signal.</p>
<p>It's described in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">Pod Lifecycle documentation</a></p>
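<p>A minimal sketch of what that can look like for an HTTP service in Go, assuming service B uses <code>net/http</code> (adapt the port and timeout to your setup). Keep the shutdown timeout below <code>terminationGracePeriodSeconds</code>, which defaults to 30 seconds:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    srv := &http.Server{Addr: ":8080"}

    go func() {
        if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("listen: %v", err)
        }
    }()

    // Kubernetes sends SIGTERM when the pod is scaled down or deleted.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
    <-stop

    // Stop accepting new connections and let in-flight requests finish.
    ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
    defer cancel()
    if err := srv.Shutdown(ctx); err != nil {
        log.Printf("graceful shutdown failed: %v", err)
    }
}
</code></pre>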
|
<p>I'm currently setting up a Kubernetes cluster with windows nodes. I accidentally created a local image in the default namespace.</p>
<p>As shown by <code>ctr image ls</code>, my image is in the <strong>default namespace</strong>:</p>
<pre><code>REF TYPE DIGEST SIZE PLATFORMS LABELS
docker.io/library/myimage:latest application/vnd.docker.distribution.manifest.v2+json sha256:XXX 6.3 GiB windows/amd64 -
</code></pre>
<p>Therefore, Kubernetes <strong>cannot find the image while creating the pod</strong> (<code>ErrImageNeverPull</code>, <code>imagePullPolicy</code> is set to <code>Never</code>). The reason for this is, the image isn't in the right <strong>namespace k8s.io</strong>:<br />
The command <code>ctr --namespace k8s.io image ls</code> shows the base Kubernetes images:</p>
<pre><code>REF TYPE DIGEST SIZE PLATFORMS LABELS
mcr.microsoft.com/oss/kubernetes/pause:3.6 application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
mcr.microsoft.com/oss/kubernetes/pause@sha256:DIGEST application/vnd.docker.distribution.manifest.list.v2+json sha256:XXX 3.9 KiB linux/amd64,linux/arm64,windows/amd64 io.cri-containerd.image=managed
...
</code></pre>
<p>The most straight-forward approach I tried, was exporting the image, deleting the image, and importing the image with different namespace. (as mentioned on a <a href="https://github.com/kubernetes-sigs/cri-tools/issues/546#issuecomment-646909445" rel="nofollow noreferrer">Github comment in the cri-tools project</a>)</p>
<pre><code>ctr --namespace k8s.io image import --base-name foo/myimage container_import.tar
</code></pre>
<p>It works. But I wonder if there is any shorter (less time-consuming) way than re-importing the image.
(Maybe by running a simple command or changing a text file.)</p>
<hr />
<p><strong>EDIT:</strong> To clarify my question: I have one node with a container stored in namespace "default". I want to have the same container stored in namespace "k8s.io" on the same node.
What else can I do, instead of running the following two (slow) commands?</p>
<pre><code>ctr -n default image export my-image.tar my-image
ctr -n k8s.io image import my-image.tar
</code></pre>
<p>I assume there is a faster way of renaming the namespace, since it is just editing some metadata.</p>
| <p>As @ P Ekambaram suggested, the <code>podman save</code> and <code>podman load</code> commands let you share images across multiple servers and systems when they aren't available locally or remotely.</p>
<p>You can use <a href="https://www.redhat.com/sysadmin/share-container-image-podman-save#:%7E:text=The%20%60podman%20save%60%20and%20%60,t%20available%20locally%20or%20remotely.&text=Container%20images%20are%20the%20foundations%20that%20containers%20run%20on." rel="nofollow noreferrer">Podman</a> to manage images and containers.</p>
<p>The podman save command saves an image to an archive, making it available to be loaded on another server.</p>
<p>For instance, to save a group of images on a host named servera:</p>
<pre><code>[servera]$ podman save --output images.tar \
docker.io/library/redis \
docker.io/library/mysql \
registry.access.redhat.com/ubi8/ubi \
registry.access.redhat.com/ubi8/ubi:8.5-226.1645809065 \
quay.io/centos7/mysql-80-centos7 docker.io/library/nginx
</code></pre>
<p>Once complete, you can take the file images.tar to serverb and load it with podman load:</p>
<pre><code>[serverb]$ podman load --input images.tar
</code></pre>
<p>The newly released <a href="https://www.redhat.com/sysadmin/podman-transfer-container-images-without-registry" rel="nofollow noreferrer">Podman 4.0</a> includes the new <code>podman image scp</code> command, a useful command to help you manage and transfer container images.</p>
<p>With Podman's podman image scp, you can transfer images between local and remote machines without requiring an image registry.</p>
<p>Podman takes advantage of its SSH support to copy images between machines, and it also allows for local transfer. Registryless image transfer is useful in a couple of key scenarios:</p>
<ul>
<li>Doing a local transfer between users on one system</li>
<li>Sharing images over the network</li>
</ul>
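<p>A short usage sketch of <code>podman image scp</code>, based on the Podman 4.x documentation; the host and image names are placeholders, and it is worth checking <code>podman image scp --help</code> on your version for the exact syntax:</p>
<pre><code># copy an image from the local store to serverb over SSH, no registry involved
podman image scp docker.io/library/myimage:latest root@serverb::
</code></pre>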
|
<p>Could anyone offer advice on how to configure promtail correctly to parse the json output from my logs?
I haven't been able to figure out why promtail is not identifying the json properties and showing them in Grafana.</p>
<p>my logback appender, in logback.xml:</p>
<pre><code><appender name="json"
class="ch.qos.logback.core.ConsoleAppender">
<layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
<jsonFormatter
class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
<prettyPrint>true</prettyPrint>
</jsonFormatter>
<timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
</layout>
</appender>
</code></pre>
<p>promtail.yaml</p>
<pre><code> ...
...
...
scrape_configs:
- job_name: parse-logs
pipeline_stages:
- docker: {}
- json:
expressions:
subscriptionType:
AuthenticationSuccess:
user:
execution_lasted: execution_lasted
time-to-pull(.ms): time-to-pull(.ms)
source: message
- json:
expressions:
logger: logger
level: level
timestamp: timestamp
message: message
- labels:
level:
timestamp:
...
...
...
</code></pre>
<p>logs:</p>
<pre><code>...
{
"timestamp" : "2022-11-24 09:33:03.361",
"level" : "DEBUG",
"thread" : "http-nio-8081-exec-3",
"logger" : "com.vanitysoft.boundariesio.unitedstates.domain.dao.impl.PostalGrabberDAOImpl",
"message" : "{\"time-to-pull(.ms)\":12,\"user\":\"unknown\",\"number-of-postals\":1}",
"context" : "default"
}{
"timestamp" : "2022-11-24 09:33:03.361",
"level" : "DEBUG",
"thread" : "http-nio-8081-exec-3",
"logger" : "com.vanitysoft.boundariesio.unitedstates.domain.dao.impl.PostalGrabberDAOImpl",
"message" : "{\"time-to-pull(.ms)\":12,\"user\":\"unknown\",\"number-of-postals\":1}",
"context" : "default"
}
</code></pre>
<p>After editing the promtail config I perform a rolling update...</p>
<pre><code>kubectl edit configmap loki-promtail --namespace loki-stack
kubectl rollout restart deployment/lke-monitor-grafana
</code></pre>
<p><strong>Update</strong>
promtail parsing requires each log entry to be written on a single line (no pretty print!)</p>
<pre><code><layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
<jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
</jsonFormatter>
<appendLineSeparator>true</appendLineSeparator>
<timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
</layout>
</code></pre>
| <p>Assuming you correctly copied in your <code>promtail.yaml</code>, there is a formatting error: <code>-docker: {}</code> should be <code>- docker: {}</code></p>
<pre><code>---
scrape_configs:
- job_name: parse-logs
pipeline_stages:
- docker: {}
- json:
expressions:
subscriptionType: null
AuthenticationSuccess: null
user: null
execution_lasted: execution_lasted
time-to-pull(.ms): time-to-pull(.ms)
source: message
- json:
expressions:
logger: logger
level: level
timestamp: timestamp
message: message
- labels:
level: null
timestamp: null
</code></pre>
|
<p>I have a Flask web application running as a Docker image that is deployed to a Kubernetes pod running on GKE. There are a few environment variables necessary for the application which are included in the docker-compose.yaml like so:</p>
<pre class="lang-yaml prettyprint-override"><code>...
services:
my-app:
build:
...
environment:
VAR_1: foo
VAR_2: bar
...
</code></pre>
<p>I want to keep these environment variables in the <code>docker-compose.yaml</code> so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my <code>deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: my-app
...
env:
- name: VAR_1
value: foo
- name: VAR_2
value: bar
...
</code></pre>
<p>Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?</p>
<p>I have tried researching this in Kubernetes and Docker documentation and Google searching and the only solutions I can find say to just include the environment variables in the <code>deployment.yaml</code>, but I'd like to retain them in the <code>docker-compose.yaml</code> for the purposes of running the container locally. I couldn't find anything that explained how Docker container environment variables and Kubernetes environment variables interacted.</p>
| <p>Kompose can translate Docker Compose files into Kubernetes resources:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/</a></p>
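<p>A minimal usage sketch, assuming the compose file is named <code>docker-compose.yaml</code>:</p>
<pre><code># generate Kubernetes manifests (Deployment, Service, ...) from the compose file
kompose convert -f docker-compose.yaml -o k8s/
</code></pre>
<p>Kompose translates the <code>environment:</code> section of each service into <code>env:</code> entries on the generated Deployment, so the values only need to be maintained in the compose file.</p>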
|
<p>This is my <code>auth-depl.yaml</code> file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: tester/auth
env:
- name: MONGO_URI
value: 'mongodb://auth-mongo-srv:27017/auth'
- name: JWT_KEY
valueFrom:
secretKeyRef:
name: jwt-secret
key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>And the following is <code>auth-mongo-depl.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-mongo-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth-mongo
template:
metadata:
labels:
app: auth-mongo
spec:
containers:
- name: auth-mongo
image: mongo
---
apiVersion: v1
kind: Service
metadata:
name: auth-mongo-srv
spec:
selector:
app: auth-mongo
ports:
- name: db
protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>When I run <code>skaffold dev</code> it gives me the following error:</p>
<pre><code> - deployment/auth-mongo-depl: container auth-mongo is waiting to start: mongo can't be pulled
- pod/auth-mongo-depl-64f5f58669-dw4hg: container auth-mongo is waiting to start: mongo can't be pulled
- deployment/auth-mongo-depl failed. Error: container auth-mongo is waiting to start: mongo can't be pulled.
</code></pre>
<p>The <code>mongo</code> service is up and running and I can run the <code>mongo</code> command in the terminal. It also starts and runs on its default <code>mongodb://127.0.0.1:27017</code> address. I use Windows 10 and Docker Desktop.</p>
<p>Let me know if more information needs to be added.</p>
| <p>I had a similar issue, and this was how I resolved it. I went to my Docker Desktop app (Mac) and did a clean-up of all the Docker images. I basically deleted all the images, which also included the mongo image.</p>
<p><a href="https://i.stack.imgur.com/rSmrZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rSmrZ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/2h4Uk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2h4Uk.png" alt="enter image description here" /></a></p>
<p>Then I ran <code>skaffold dev</code> and all the images were built again, and a fresh copy of mongo was added to my Docker Desktop app. Now, my code started successfully without mongo error. I hope this helps</p>
|
<p>I wanted to know whether Knative on EKS supports node autoscaling. If yes, do we need to set up the cluster autoscaler, or does Knative itself scale the nodes?</p>
<p>I tried autoscaling the nodes by increasing the pods, but it did not work. My question is: does Knative automatically scale nodes, or do we have to set up any external plugins?</p>
| <p><strong>Knative</strong> supports only two types of auto-scaling</p>
<ul>
<li><strong>KPA</strong> - Knative Pod Autoscaler</li>
<li><strong>HPA</strong> - Horizontal Pod Autoscaler</li>
</ul>
<p>So for node autoscaling, the <strong>Cluster Autoscaler</strong> will need to be installed in EKS.</p>
<p>Ref : <a href="https://knative.dev/docs/serving/autoscaling/autoscaler-types/" rel="nofollow noreferrer">https://knative.dev/docs/serving/autoscaling/autoscaler-types/</a></p>
|
<p>I can see GKE, AKS, EKS all are having nodepool concepts inbuilt but Kubernetes itself doesn't provide that support. What could be the reason behind this?</p>
<p>We usually need different node types for different requirements, such as:</p>
<ul>
<li>Some pods require CPU- or memory-optimized nodes.</li>
<li>Some pods run ML/AI algorithms and need GPU-enabled nodes. These GPU-enabled nodes should be used only by certain pods, as they are expensive.</li>
<li>Some pods/jobs want to leverage spot/preemptible nodes to reduce cost.</li>
</ul>
<p>Is there any specific reason behind Kubernetes not having inbuilt such support?</p>
| <p>Node Pools are cloud-provider specific technologies/groupings.</p>
<p>Kubernetes is intended to be deployed on various infrastructures, including on-prem/bare metal. Node Pools would not mean anything in this case.</p>
<p>Node Pools generally are a way to provide Kubernetes with a group of identically configured nodes to use in the cluster.</p>
<p>You would specify the node you want using node selectors and/or taints/tolerations.</p>
<p>So you could taint nodes with a GPU and then require pods to have the matching toleration in order to schedule onto those nodes. Node Pools wouldn't make a difference here. You could join a physical server to the cluster and taint that node in exactly the same way -- Kubernetes would not see that any differently to a Google, Amazon or Azure-based node that was also registered to the cluster, other than some different annotations on the node.</p>
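<p>A minimal sketch of that pattern; the node name, key and image are just examples:</p>
<pre><code># taint the GPU node so ordinary pods are kept off it, and label it so pods can select it
kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
kubectl label nodes gpu-node-1 gpu=true
</code></pre>
<pre><code># pod that tolerates the taint and is steered onto the GPU node
apiVersion: v1
kind: Pod
metadata:
  name: ml-job
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    gpu: "true"
  containers:
  - name: trainer
    image: my-ml-image:latest
</code></pre>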
|
<p>I deployed a pod and service of a Flask API in Kubernetes.</p>
<p>When I run the Nifi processor InvoqueHTTP that calls the API, I have the error :</p>
<pre><code>File "/opt/app-root/lib64/python3.8/site-packages/psycopg2/__init__.py"
</code></pre>
<p><code>psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above</code></p>
<p>The API connects to a PGaaS database. Locally the connection works fine, but in the Kubernetes pod I need the libpq library, and I'm not finding the right library to install.
I also tried to install psycopg2-binary and it throws the same error.</p>
<p>Do you have any idea how to solve this issue?</p>
<p>Versions tried in requirements: psycopg2==2.9.3 or psycopg2-binary==2.9.5</p>
| <p>For <code>psycopg2.OperationalError: SCRAM authentication requires libpq version 10 or above</code>, try the workarounds below:</p>
<p><strong>Solution :1</strong></p>
<p>Download libpq.dll from <a href="https://www.exefiles.com/en/dll/libpq-dll/" rel="nofollow noreferrer">https://www.exefiles.com/en/dll/libpq-dll/</a>, then replace the old libpq.dll in the PHP directory with the newly downloaded one.</p>
<p><strong>Solution :2</strong></p>
<p>Change authentication to md5, then reset your password and restart the postgresql service and here are step by step:</p>
<ul>
<li>Find file postgresql.conf in C:\Program Files\PostgreSQL\13\data then set password_encryption = md5</li>
<li>Find file pg_hba.conf in C:\Program Files\PostgreSQL\13\data then change all METHOD to md5</li>
<li>Open a command line (cmd, cmder, git bash...) and run <code>psql -U postgres</code>, then enter the password you set when installing PostgreSQL</li>
<li>Then change your password by running <code>ALTER USER postgres WITH PASSWORD 'new-password';</code> in the command line</li>
<li>Restart service postgresql in your Service</li>
</ul>
<p><strong>Solution :3</strong></p>
<p>Check if psycopg is using the additional copy of libpq that may be present on your computer. Recognize that file, then upgrade or remove it. Perhaps psycopg has to be updated for that.</p>
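<p>For the containerised case described in the question, a further hedged option is to ship a recent libpq inside the image itself. A minimal Dockerfile sketch, assuming a Debian-based Python base image; the base image and versions are illustrative:</p>
<pre><code>FROM python:3.10-slim

# current Debian releases ship libpq >= 10; gcc and libpq-dev are needed to build psycopg2 from source
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libpq-dev \
    && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir psycopg2==2.9.3
</code></pre>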
|
<p>I'm trying to get off the ground with Spark and Kubernetes but I'm facing difficulties. I used the helm chart here:</p>
<p><a href="https://github.com/bitnami/charts/tree/main/bitnami/spark" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/main/bitnami/spark</a></p>
<p>I have 3 workers and they all report running successfully. I'm trying to run the following program remotely:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("spark://<master-ip>:<master-port>").getOrCreate()
df = spark.read.json('people.json')
</code></pre>
<p>Here's the part that's not entirely clear. Where should the file people.json actually live? I have it locally where I'm running the python code and I also have it on a PVC that the master and all workers can see at /sparkdata/people.json.</p>
<p>When I run the 3rd line as simply <code>'people.json'</code> then it starts running but errors out with:</p>
<p><code>WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources</code></p>
<p>If I run it as <code>'/sparkdata/people.json'</code> then I get</p>
<p><code>pyspark.sql.utils.AnalysisException: Path does not exist: file:/sparkdata/people.json</code></p>
<p>Not sure where I go from here. To be clear I want it to read files from the PVC. It's an NFS share that has the data files on it.</p>
| <p>Your <code>people.json</code> file needs to be accessible to your driver + executor pods. This can be achieved in multiple ways:</p>
<ul>
<li>having some kind of network/cloud drive that each pod can access</li>
<li>mounting volumes on your pods, and then uploading the data to those volumes using <code>--files</code> in your spark-submit.</li>
</ul>
<p>The latter option might be the simpler to set up. <a href="https://jaceklaskowski.github.io/spark-kubernetes-book/demo/spark-and-local-filesystem-in-minikube/#hostpath" rel="nofollow noreferrer">This page</a> discusses in more detail how you could do this, but we can shortly go to the point. If you add the following arguments to your spark-submit you should be able to get your <code>people.json</code> on your driver + executors (you just have to choose sensible values for the $VAR variables in there):</p>
<pre><code> --files people.json \
--conf spark.kubernetes.file.upload.path=$SOURCE_DIR \
--conf spark.kubernetes.driver.volumes.$VOLUME_TYPE.$VOLUME_NAME.mount.path=$MOUNT_PATH \
--conf spark.kubernetes.driver.volumes.$VOLUME_TYPE.$VOLUME_NAME.options.path=$MOUNT_PATH \
--conf spark.kubernetes.executor.volumes.$VOLUME_TYPE.$VOLUME_NAME.mount.path=$MOUNT_PATH \
--conf spark.kubernetes.executor.volumes.$VOLUME_TYPE.$VOLUME_NAME.options.path=$MOUNT_PATH \
</code></pre>
<p>You can always verify the existence of your data by going inside of the pods themselves like so:</p>
<pre><code>kubectl exec -it <driver/executor pod name> bash
(now you should be inside of a bash process in the pod)
cd <mount-path-you-chose>
ls -al
</code></pre>
<p>That last <code>ls -al</code> command should show you a <code>people.json</code> file in there (after having done your spark-submit of course).</p>
<p>Hope this helps!</p>
|
<p>I'm trying to generate some env variables when I'm deploying my code with Kubernetes. What I'm trying to do is generate a ConfigMap to get my variables, but it's not working.
I'm using Azure Pipelines to do my build and publish steps.</p>
<p>Dockerfile:</p>
<pre><code>FROM node:14-alpine
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm cache clean --force
RUN npm install
RUN npm run build
EXPOSE 80
CMD ["npm", "start"]
</code></pre>
<p>My azure-pipelines.yml:</p>
<pre><code>stages:
#Build Dev
- stage: BuildDev
displayName: Build and Push Dev
jobs:
- job: Development
displayName: Build and Push Dev
timeoutInMinutes: 0
pool:
vmImage: ubuntu-18.04
steps:
- checkout: self
- task: Docker@1
displayName: Build Image
inputs:
azureSubscriptionEndpoint: my-subscription
azureContainerRegistry: my-container-registry
command: build
imageName: tenant/front/dev:$(Build.BuildId)
includeLatestTag: true
buildContext: '**'
- task: Docker@1
displayName: Push Image
inputs:
azureSubscriptionEndpoint: my-subscription
azureContainerRegistry: my-container-registry
command: push
imageName: tenant/front/dev:$(Build.BuildId)
buildContext: '**'
#Deploy Dev
- stage: DeployDev
displayName: Deploy Dev
jobs:
- deployment: Deploy
displayName: Deploy Dev
timeoutInMinutes: 0
pool:
vmImage: ubuntu-18.04
environment: Development-Front
strategy:
runOnce:
deploy:
steps:
- task: Kubernetes@1
displayName: 'kubectl apply'
inputs:
kubernetesServiceEndpoint: 'AKS (standard subscription)'
command: apply
useConfigurationFile: true
configurationType: inline
inline: |
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: $(appNameDev)
labels:
app: $(appNameDev)
spec:
replicas: 1
selector:
matchLabels:
app: $(appNameDev)
template:
metadata:
labels:
app: $(appNameDev)
spec:
containers:
- name: $(appNameDev)
image: tenant/front/dev:$(Build.BuildId)
imagePullPolicy:
env:
- name: NEXT_PUBLIC_APP_API
value: development
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- name: environment-variables
mountPath: /usr/src/app/.env
readOnly: true
volumes:
- name: environment-variables
configMap:
name: environment-variables
items:
- key: .env
path: .env
---
apiVersion: v1
kind: Service
metadata:
name: $(appNameDev)
labels:
app: $(appNameDev)
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: $(appNameDev)
---
apiVersion: v1
kind: ConfigMap
metadata:
name: environment-variables
data:
.env: |
NEXT_PUBLIC_APP_API=development
API=http://another.endpoint.com/serverSide
</code></pre>
<p>When I'm trying to access this NEXT_PUBLIC_APP_API variable, I'm receiving undefined. In my next.config.js, I'm exporting the variable as publicRuntimeConfig.</p>
| <p>If you are using GitHub actions, the first thing is to add a step in your image build process to include dynamic variables</p>
<pre><code> - name: Create variables
id: vars
run: |
branch=${GITHUB_REF##*/}
echo "API_URL=API_${branch^^}" >> $GITHUB_ENV
echo "APP_ENV=APP_${branch^^}" >> $GITHUB_ENV
echo "BASE_URL=BASE_${branch^^}" >> $GITHUB_ENV
sed -i "s/GIT_VERSION/${{ github.sha }}/g" k8s/${branch}/api-deployment.yaml
</code></pre>
<p>The second step is to build the Docker image with extra arguments. If you are using another CI, just add the variables directly in the build args as below:</p>
<pre><code>--build-arg PROD_ENV=NEXT_PUBLIC_API_URL=${{ secrets[env.API_URL] }}\nNEXT_PUBLIC_BASE_URL=${{ secrets[env.BASE_URL]}}\nNEXT_PUBLIC_APP_ENV=${{ secrets[env.APP_ENV] }}
</code></pre>
<p>Pay attention to the <code>\n</code> used to separate the lines so that Docker understands you are sending multiple variables to the build process.</p>
<p>The last thing is to add the extra args inside the Dockerfile</p>
<pre><code># Install dependencies only when needed
FROM node:16.13.0-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Rebuild the source code only when needed
FROM node:16.13.0-alpine AS builder
ARG PROD_ENV=""
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
RUN printf "$PROD_ENV" >> .env.production
RUN yarn build
# Production image, copy all the files and run next
FROM node:16.13.0-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/.env* ./
COPY --from=builder /app/next-i18next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
RUN npx next telemetry disable
CMD ["yarn", "start"]
</code></pre>
<p>I send as extra args PROD_ENV and then build a .env.production file on the fly with the required values.</p>
<p>Mark as answer if it helps you</p>
|
<p>I have a cron that's I'm testing for another project that is supposed to delete a namespace called "test" every minute.</p>
<p>I have set up a ServiceAccount, ClusterRole, and RoleBinding as shown below:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: test
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["delete", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test # This will bind the role and service account
subjects:
- kind: ServiceAccount
name: test
roleRef:
kind: ClusterRole
name: test
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>My Cronjob is as follows:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: kill-ns
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: test
containers:
- name: kill-ns
image: bitnami/kubectl:latest
command:
- kubectl
args:
- delete
- namespace
- test
restartPolicy: Never
</code></pre>
<p>I get the following error from the job.</p>
<blockquote>
<p>Error from server (Forbidden): namespaces "test" is forbidden: User
"system:serviceaccount:default:test" cannot delete resource
"namespaces" in API group "" in the namespace "test"</p>
</blockquote>
<p>I thought my apiGroups was the issue, but it was not; apiGroups is [""] for namespaces.</p>
<p>I'm currently using docker desktop locally.</p>
<p>Edit: I am deploying the job in the default namespace.</p>
| <p>You have to use ClusterRoleBinding instead of RoleBinding. RoleBinding only allows access to resources in the namespace of the service account while ClusterRoleBinding allows cluster-wide access.</p>
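<p>A minimal sketch of the binding, keeping the names from the question and assuming the ServiceAccount lives in the <code>default</code> namespace:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: default
roleRef:
  kind: ClusterRole
  name: test
  apiGroup: rbac.authorization.k8s.io
</code></pre>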
|
<p>I want to create a cluster inside a VNET, so that later I can create a connection to the on-premises resources.
I have the following problem:</p>
<pre><code>│ Error: Unsupported argument
│
│   on main.tf line 130, in resource "azurerm_kubernetes_cluster" "aks":
│  130:     user_assigned_identity_id = azurerm_user_assigned_identity.identity.id
│
│ An argument named "user_assigned_identity_id" is not expected here.
</code></pre>
<p>My main.tf:</p>
<pre><code>data "azurerm_resource_group" "aks-rg" {
name = var.resource_group_name
}
resource "azurerm_role_assignment" "role_acrpull" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity.0.object_id
#skip_service_principal_aad_check = true
}
resource "azurerm_container_registry" "acr" {
name = var.acr_name
resource_group_name = data.azurerm_resource_group.aks-rg.name
location = var.location
sku = "Standard"
admin_enabled = false
}
resource "azurerm_network_security_group" "pusg" {
name = "Public_Security_Group"
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
}
resource "azurerm_virtual_network" "puvnet" {
name = "Public_VNET"
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
address_space = ["10.19.0.0/16"]
dns_servers = ["10.19.0.4", "10.19.0.5"]
}
resource "azurerm_subnet" "osubnet" {
name = "Outer_Subnet"
resource_group_name = data.azurerm_resource_group.aks-rg.name
address_prefixes = ["10.19.1.0/24"]
virtual_network_name = azurerm_virtual_network.puvnet.name
}
resource "azurerm_subnet" "isubnet" {
name = "Inner_Subnet"
resource_group_name = data.azurerm_resource_group.aks-rg.name
address_prefixes = ["10.19.2.0/24"]
virtual_network_name = azurerm_virtual_network.puvnet.name
}
resource "azurerm_subnet" "firewall_subnet" {
name = "AzureFirewallSubnet"
resource_group_name = data.azurerm_resource_group.aks-rg.name
virtual_network_name = azurerm_virtual_network.puvnet.name
address_prefixes = ["10.19.3.0/24"]
}
resource "azurerm_user_assigned_identity" "identity" {
resource_group_name = data.azurerm_resource_group.aks-rg.name
location = var.location
name = "identityh"
}
resource "azurerm_role_assignment" "networkRole" {
scope = data.azurerm_resource_group.aks-rg.id
role_definition_name = "Network Contributor"
principal_id = azurerm_user_assigned_identity.identity.principal_id
}
resource "azurerm_route_table" "routeTable" {
name = "rt-FWall"
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
}
resource "azurerm_route" "route1" {
name = "dg-route1"
resource_group_name = data.azurerm_resource_group.aks-rg.name
route_table_name = azurerm_route_table.routeTable.name
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.firewall.ip_configuration.0.private_ip_address
}
resource "azurerm_subnet_route_table_association" "base" {
subnet_id = azurerm_subnet.osubnet.id
route_table_id = azurerm_route_table.routeTable.id
}
resource "azurerm_public_ip" "firewall_public_ip" {
name = "pip-firewall"
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_firewall" "firewall" {
name = "public_firewall"
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
sku_name = "AZFW_VNet"
sku_tier = "Standard"
ip_configuration {
name = "ipconfig"
subnet_id = azurerm_subnet.firewall_subnet.id
public_ip_address_id = azurerm_public_ip.firewall_public_ip.id
}
}
resource "azurerm_kubernetes_cluster" "aks" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
dns_prefix = var.cluster_name
network_profile {
network_plugin = "azure"
outbound_type = "userDefinedRouting"
}
default_node_pool {
name = "system"
node_count = var.system_node_count
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.osubnet.id
}
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_assigned_identity.identity.id
}
}
</code></pre>
<p>Region is Germany West Central. Kubernetes version 1.24.3, Azurerm 3.33, Terraform 1.3</p>
<p>I have tried different approaches I found on the internet, but none seemed to work. In the best case, creation just takes too long and Terraform stops without creating the cluster.</p>
| <p>That argument does not seem to be a part of the identity block. The argument that can be used inside of the identity block is <code>identity_ids</code> [1]:</p>
<pre><code>resource "azurerm_kubernetes_cluster" "aks" {
name = var.cluster_name
kubernetes_version = var.kubernetes_version
location = var.location
resource_group_name = data.azurerm_resource_group.aks-rg.name
dns_prefix = var.cluster_name
network_profile {
network_plugin = "azure"
outbound_type = "userDefinedRouting"
}
default_node_pool {
name = "system"
node_count = var.system_node_count
vm_size = "Standard_DS2_v2"
vnet_subnet_id = azurerm_subnet.osubnet.id
}
identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.identity.id]
}
}
</code></pre>
<p>Note that the <code>identity_ids</code> argument is a list, hence the square brackets around the identity.</p>
<hr />
<p>[1] <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#identity_ids" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#identity_ids</a></p>
|
<p>I am new to k8s and trying to set up Prometheus monitoring for k8s. I used
"helm install" to set up Prometheus. Now:</p>
<ol>
<li>two pods are still in pending state:
<ul>
<li>prometheus-server</li>
<li>prometheus-alertmanager</li>
</ul></li>
<li>I manually created a persistent volume for both.</li>
</ol>
<p>Can anyone help me with how to map these PVs to the PVCs created by the Helm chart?</p>
<pre><code>[centos@k8smaster1 ~]$ kubectl get pod -n monitoring
NAME READY STATUS RESTARTS AGE
prometheus-alertmanager-7757d759b8-x6bd7 0/2 Pending 0 44m
prometheus-kube-state-metrics-7f85b5d86c-cq9kr 1/1 Running 0 44m
prometheus-node-exporter-5rz2k 1/1 Running 0 44m
prometheus-pushgateway-5b8465d455-672d2 1/1 Running 0 44m
prometheus-server-7f8b5fc64b-w626v 0/2 Pending 0 44m
</code></pre>
<pre><code>[centos@k8smaster1 ~]$ kubectl get pv
prometheus-alertmanager 3Gi RWX Retain Available 22m
prometheus-server 12Gi RWX Retain Available 30m
</code></pre>
<pre><code>[centos@k8smaster1 ~]$ kubectl get pvc -n monitoring
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending 20m
prometheus-server Pending 20m
</code></pre>
<pre><code>[centos@k8smaster1 ~]$ kubectl describe pvc prometheus-alertmanager -n monitoring
Name: prometheus-alertmanager
Namespace: monitoring
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-8.15.0
component=alertmanager
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 116s (x83 over 22m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: prometheus-alertmanager-7757d759b8-x6bd7
</code></pre>
<p>I am expecting the pods to get into running state</p>
<p>!!!UPDATE!!!</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
prometheus-alertmanager Pending local-storage 4m29s
prometheus-server Pending local-storage 4m29s
</code></pre>
<pre><code>[centos@k8smaster1 prometheus_pv_storage]$ kubectl describe pvc prometheus-server -n monitoring
Name: prometheus-server
Namespace: monitoring
StorageClass: local-storage
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-8.15.0
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 11s (x22 over 4m59s) persistentvolume-controller waiting for first consumer to be created before binding
Mounted By: prometheus-server-7f8b5fc64b-bqf42
</code></pre>
<p>!!UPDATE-2!!</p>
<pre><code>[centos@k8smaster1 ~]$ kubectl get pods prometheus-server-7f8b5fc64b-bqf42 -n monitoring -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-08-18T16:10:54Z"
generateName: prometheus-server-7f8b5fc64b-
labels:
app: prometheus
chart: prometheus-8.15.0
component: server
heritage: Tiller
pod-template-hash: 7f8b5fc64b
release: prometheus
name: prometheus-server-7f8b5fc64b-bqf42
namespace: monitoring
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: prometheus-server-7f8b5fc64b
uid: c1979bcb-c1d2-11e9-819d-fa163ebb8452
resourceVersion: "2461054"
selfLink: /api/v1/namespaces/monitoring/pods/prometheus-server-7f8b5fc64b-bqf42
uid: c19890d1-c1d2-11e9-819d-fa163ebb8452
spec:
containers:
- args:
- --volume-dir=/etc/config
- --webhook-url=http://127.0.0.1:9090/-/reload
image: jimmidyson/configmap-reload:v0.2.2
imagePullPolicy: IfNotPresent
name: prometheus-server-configmap-reload
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-server-token-7h2df
readOnly: true
- args:
- --storage.tsdb.retention.time=15d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
image: prom/prometheus:v2.11.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /-/healthy
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
name: prometheus-server
ports:
- containerPort: 9090
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /-/ready
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
- mountPath: /data
name: storage-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-server-token-7h2df
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
serviceAccount: prometheus-server
serviceAccountName: prometheus-server
terminationGracePeriodSeconds: 300
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: prometheus-server
name: config-volume
- name: storage-volume
persistentVolumeClaim:
claimName: prometheus-server
- name: prometheus-server-token-7h2df
secret:
defaultMode: 420
secretName: prometheus-server-token-7h2df
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-08-18T16:10:54Z"
message: '0/2 nodes are available: 1 node(s) didn''t find available persistent
volumes to bind, 1 node(s) had taints that the pod didn''t tolerate.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: BestEffort
</code></pre>
<p>Also I have the volumes created and assigned to local storage</p>
<pre><code>[centos@k8smaster1 prometheus_pv]$ kubectl get pv -n monitoring
prometheus-alertmanager 3Gi RWX Retain Available local-storage 2d19h
prometheus-server 12Gi RWX Retain Available local-storage 2d19h
</code></pre>
| <p>If you are on EKS, your nodes need to have the following permission:</p>
<pre><code>arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
</code></pre>
<p>and the <strong>Amazon EBS CSI Driver</strong> add-on installed.</p>
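<p>A hedged sketch of how that could be wired up from the CLI; the role and cluster names are placeholders, and your IAM setup (e.g. IRSA) may call for a different approach:</p>
<pre><code># attach the policy to the IAM role used by the worker nodes
aws iam attach-role-policy \
  --role-name <node-instance-role> \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

# install the EBS CSI driver as an EKS add-on
eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name>
</code></pre>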
|
<p>Assuming I have the following <code>skaffold.yaml</code></p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
metadata:
name: myapp
build:
local:
push: true
artifacts:
- image: pkaramol/my-image
docker:
dockerfile: Dockerfile
deploy:
helm:
releases:
- name: myapp
chartPath: charts
kubectl:
manifests:
- ./k8s/*
</code></pre>
<p>How can I instruct <code>skaffold</code> to avoid uploading (and then downloading) <code>pkaramol/myimage</code> to Docker Hub, but rather build it locally and use it directly within the cluster I am connected to?</p>
| <p>You can instruct Skaffold to build the image locally by using the <code>local</code> build mode in the <code>build</code> section of the skaffold.yaml file, like this:</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
metadata:
name: myapp
build:
local:
push: false
artifacts:
- image: pkaramol/my-image
docker:
dockerfile: Dockerfile
deploy:
helm:
releases:
- name: myapp
chartPath: charts
kubectl:
manifests:
- ./k8s/*
</code></pre>
<p>The <code>push</code> parameter should be set to <code>false</code> to prevent Skaffold from uploading the image to a registry. This will tell Skaffold to build the image locally and use it directly in the cluster that you are connected to.</p>
|
<p>Is there a way to restart a Kubernetes deployment using go-client? I have no idea how to achieve this; please help me!</p>
| <p>If you run <code>kubectl rollout restart deployment/my-deploy -v=10</code> you see that kubectl actually sends a PATCH request to the APIServer and sets the <code>.spec.template.metadata.annotations</code> with something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl.kubernetes.io/restartedAt: '2022-11-29T16:33:08+03:30'
</code></pre>
<p>So, you can do this with the client-go:</p>
<pre class="lang-golang prettyprint-override"><code>clientset, err :=kubernetes.NewForConfig(config)
if err != nil {
// Do something with err
}
deploymentsClient := clientset.AppsV1().Deployments(namespace)
data := fmt.Sprintf(`{"spec": {"template": {"metadata": {"annotations": {"kubectl.kubernetes.io/restartedAt": "%s"}}}}}`, time.Now().Format("20060102150405"))
deployment, err := deploymentsClient.Patch(ctx, deployment_name, k8stypes.StrategicMergePatchType, []byte(data), v1.PatchOptions{})
if err != nil {
// Do something with err
}
</code></pre>
|
<p>I am new to Docker. From what I understood both Docker and Kubernetes manage multiple containers.</p>
<p>Kubernetes is more a large-scale container management with an abstraction layer compared to Docker. Isn't it possible to duplicate this scenario of abstraction in Docker? If so, then why do we say that it is purely different from each other? How exactly does Kubernetes differ from Docker with its application?</p>
<p>I went through some explanations but I am still confused about this question. Could anyone help me with a good explanation or provide some materials that would help me clear this up?</p>
<p>Thanks in advance.</p>
| <p><strong>Docker</strong> is a <strong>container runtime tool</strong> first; Kubernetes cannot be compared to Docker. The PODs are scheduled or managed by the orchestration platform known as Kubernetes.</p>
<p>You may compare <strong>Kubernetes</strong> with <strong>Docker swarm</strong> in several ways.</p>
<p>Kubernetes has some better options, like:</p>
<ul>
<li>Distribution of containers based on the requirement (Run ML container on Graphics card Nodes, CPU required container on CPU intensive VMs or Node pool)</li>
<li>Scaling based on Metrics - <strong>HPA & VPA</strong></li>
<li>Observability across platform - <strong>Easy to add/integrate other solutions</strong></li>
<li>RBAC & Security options - <strong>Firewall rules, Network policies</strong></li>
<li>Loadbalancing internally with <strong>Service</strong></li>
<li>Traffic routing option & pluggable solutions - <strong>Istio, LinkerD</strong></li>
<li>Managed services by cloud providers - <strong>EKS, GKE, OKE</strong></li>
<li>Variety of Volume options - <strong>PV, PVC backed by SSD, NFS, CSI driver options</strong></li>
</ul>
|
<p>We have an application running on EKS 1.21 and we are in the process of upgrading to EKS 1.22 since 1.21 will reach end of support in Feb 2023. I am trying to figure out a kubectl command or something similar that can query the entire cluster for any v1beta1 API versions that are deprecated and replaced by GA v1 as per :-</p>
<p><a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="nofollow noreferrer">https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/</a></p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#update-1.22" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html#update-1.22</a></p>
<p><a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22</a></p>
<p>Is there a comprehensive consolidated kubectl command that can be used to see if there is any YAML manifest in the cluster that still uses v1beta1?</p>
| <p>The command below returns the name, namespace, kind, and apiVersion of every resource in your cluster. You can pipe it through grep to filter however you want.</p>
<pre><code>kubectl api-resources --no-headers |while read type ignore; do kubectl get $type -A -o go-template='{{range $index,$pod := .items}}{{(or .metadata.name "-")}} {{(or .metadata.namespace "-")}} {{ (or .kind "-")}} {{ (or .apiVersion "-") }} {{"\n"}}{{end}}' 2>/dev/null; done
</code></pre>
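<p>For example, appending a <code>grep</code> surfaces anything still served from a <code>v1beta1</code> API version:</p>
<pre><code>kubectl api-resources --no-headers |while read type ignore; do kubectl get $type -A -o go-template='{{range $index,$pod := .items}}{{(or .metadata.name "-")}} {{(or .metadata.namespace "-")}} {{ (or .kind "-")}} {{ (or .apiVersion "-") }} {{"\n"}}{{end}}' 2>/dev/null; done | grep v1beta1
</code></pre>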
|
<p>So I am currently connected to a Contabo-hosted Kubernetes cluster. There I have Kafka and OpenSearch/OpenSearch Dashboards deployments running. I am trying to run Logstash so that I can get the data from a Kafka topic into OpenSearch; <a href="https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin" rel="nofollow noreferrer">https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin</a> is the image that I use for Logstash (<a href="https://justpaste.it/47676" rel="nofollow noreferrer">https://justpaste.it/47676</a> is my Logstash configuration), and the following is my OpenSearch configuration: <a href="https://justpaste.it/a090p" rel="nofollow noreferrer">https://justpaste.it/a090p</a>.
When I deploy Logstash, I successfully get the data from the Kafka topic, so my input plugin is working as expected, but the output is not: I am failing to output data to OpenSearch from Logstash. The following are the logs from the Logstash pod: <a href="https://justpaste.it/620g4" rel="nofollow noreferrer">https://justpaste.it/620g4</a>.</p>
<p>This is the output of "kubectl get services"</p>
<pre><code>NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
dashboards-opensearch-dashboards ClusterIP 10.96.114.252 <none> 5601/TCP 5d20h
grafana ClusterIP 10.107.83.28 <none> 3000/TCP 44h
logstash-service LoadBalancer 10.102.132.114 <pending> 5044:31333/TCP 28m
loki ClusterIP 10.99.30.246 <none> 3100/TCP 43h
loki-headless ClusterIP None <none> 3100/TCP 43h
my-cluster-kafka-0 NodePort 10.101.196.50 <none> 9094:32000/TCP 53m
my-cluster-kafka-1 NodePort 10.96.247.75 <none> 9094:32001/TCP 53m
my-cluster-kafka-2 NodePort 10.98.203.5 <none> 9094:32002/TCP 53m
my-cluster-kafka-bootstrap ClusterIP 10.111.178.24 <none> 9091/TCP,9092/TCP,9093/TCP 53m
my-cluster-kafka-brokers ClusterIP None <none> 9090/TCP,9091/TCP,9092/TCP,9093/TCP 53m
my-cluster-kafka-external-bootstrap NodePort 10.109.134.74 <none> 9094:32100/TCP 53m
my-cluster-zookeeper-client ClusterIP 10.98.157.173 <none> 2181/TCP 54m
my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 54m
opensearch-cluster-master ClusterIP 10.98.55.121 <none> 9200/TCP,9300/TCP 19h
opensearch-cluster-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 19h
prometheus-operated ClusterIP None <none> 9090/TCP 25m
prometheus-operator ClusterIP None <none> 8080/TCP 50m
</code></pre>
<p>What am I doing wrong and how do I establish this connection?</p>
| <p>I figured it out. I think that it was expecting an ssl certificate and that is why it was refusing the connection. The way that I "fixed" this (because I don't need the ssl certification for this project for now) is that I changed the logstash configuration in this way.</p>
<pre><code> logstash.conf: |
input {
kafka{
codec => json
bootstrap_servers => "10.111.178.24:9092"
topics => ["t_events"]
}
}
output {
opensearch {
hosts => ["https://10.102.102.109:9200"]
ssl_certificate_verification => false
user => "admin"
password => "admin"
index => "logstash-logs-%{+YYYY.MM.dd}"
}
}
</code></pre>
<p>So I have added the "ssl_certificate_verification => false" line to the config, and that enabled me to connect from Logstash to OpenSearch and send the data. I still have encryption in transit by using the HTTPS protocol, but I am lacking SSL certificate verification, which I am fine with for this project.</p>
|
<p>We are trying to get the logs of pods after multiple restarts, but we don't want to use any external solution like EFK.</p>
<p>I tried the below config but it's not working. Does the below command run on the pod, or will it run at the node level?</p>
<pre><code>lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "kubectl logs appworks-0 > /container-stoped.txt"]
</code></pre>
| <blockquote>
<p>i tried below config but its not working. does the below cmd run on
the pod or it will run on node level</p>
</blockquote>
<p>It will run at the POD level, not at the <strong>Node</strong> level.</p>
<p>You can use a <strong>hostPath</strong> volume in the POD configuration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: alpine
name: test-container
command: ["tail"]
args: ["-f", "/dev/null"]
volumeMounts:
- mountPath: /host
name: test-volume
volumes:
- name: test-volume
hostPath:
path: /
type: Directory
</code></pre>
<p>A <strong>hostPath</strong> volume will create a directory at the Node level and save the logs there. If you don't want this solution you can also add your <strong>lifecycle hook</strong> approach; however, when you can write the app logs directly to the host, there is no need for an extra lifecycle hook.</p>
<p><strong>Note</strong>: keep in mind that if your Node goes down, you will lose the <strong>hostPath</strong> or <strong>emptyDir</strong> logs.</p>
|
<p>I am very new to the Kubernetes and Docker, and I am following the tutorials in the Kubernetes documentation.</p>
<p>I got the below output after running the <code>minikube start</code> command.</p>
<p><a href="https://i.stack.imgur.com/TnDmq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TnDmq.png" alt="enter image description here" /></a></p>
<p>The process was paused for some time and after I entered <code>p</code> it continued, and the configuration was successfully completed.</p>
<p>Can I know what is the meaning for the two options given as p and s in the above output?
Thanks in advance.</p>
| <p>p/s means "per second"</p>
<p>That line is your machine pulling the relevant docker image(s) to start minikube.</p>
|
<p>I'm trying to create an assignment for students to do that contains the following :</p>
<ol>
<li>A docker image with issues that have to be scanned and remedied. (using an opensource scanner in kubernetes)</li>
<li>(Maybe) A sample attack scenario that can exploit those vulnerabilities.</li>
</ol>
<p>The problem arises when I try to find a suitable vulnerable image or create one. I cannot find a database of security issues at all. I racked my brain thinking of a suitable search phrase for Google, but everything leads merely to some blog posts about how to scan an image.
I expected a database that might contain multiple security issues and what causes them. I'd also expect some way to discern which are the most popular ones.</p>
<p>Do you have the source I require?
Maybe you could just offer me 3-4 common security issues that are good to know and educational when having your first brush with Docker? (And how to create those issues?)</p>
<p>The whole situation would have been probably easier if I myself would have been an expert in the field, but the thing I do is also my assignment as a student. (So as students we design assignments for each other. )</p>
| <p>It looks like you are looking for <strong>container security hardening</strong> and Kubernetes security options.</p>
<p>You can use some tools like</p>
<ul>
<li><a href="https://github.com/controlplaneio/kubesec" rel="nofollow noreferrer">kubesec</a> - Security risk analysis for Kubernetes resources</li>
<li><a href="https://github.com/bridgecrewio/checkov" rel="nofollow noreferrer">checkov</a> - Prevent cloud misconfigurations and find vulnerabilities during build-time in infrastructure as code, container images and open-source packages</li>
<li><a href="https://github.com/aquasecurity/trivy" rel="nofollow noreferrer">Trivy</a> - vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more</li>
</ul>
<p>If you are looking for some questions you can set like, this is CKS (Certified Kubernetes Security) exam question</p>
<ul>
<li>There are a number of pods/container running in the "spectacle" namespace.
Identify and delete the pods which have CRITICAL vulnerabilities.</li>
</ul>
<p>For this, the <strong>trivy</strong> open-source tool comes into the picture to scan the images that you will be using in your Kubernetes or Docker deployments:</p>
<pre><code>trivy image --severity CRITICAL nginx:1.16   # image running in the container
</code></pre>
<p>Here is a list of a few questions you can create a lab out of: <a href="https://github.com/moabukar/CKS-Exercises-Certified-Kubernetes-Security-Specialist/tree/main/7-mock-exam-questions" rel="nofollow noreferrer">https://github.com/moabukar/CKS-Exercises-Certified-Kubernetes-Security-Specialist/tree/main/7-mock-exam-questions</a></p>
|
<p>There are a lot of sources that explain how to connect into the cluster, but when searching for information about how to connect from inside the cluster to the outside, the search results are polluted with the opposite information.</p>
<p>I want to connect my custom service that is running inside the cluster to my Keycloak, which is outside of the cluster on my local machine. There is no option of running Keycloak inside the cluster. I am using Docker Desktop for Mac to run Kubernetes.</p>
| <blockquote>
<p>There is no option of running keycloak inside the cluster.</p>
</blockquote>
<p>Not sure whether you have checked, but you can run <strong>Keycloak</strong> on Kubernetes.</p>
<p>You can refer my Github repo to get YAML files and deployment keycloak on Kubernetes : <a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment</a></p>
<p>Generally, you can use an <strong>ExternalName</strong> service if you are diverting traffic out of the cluster.</p>
<p>If you are running on a public K8s cluster with a cloud provider, your workloads can reach the external endpoint directly over its IP, even without creating such a service.</p>
<p><strong>Example</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: local-host
spec:
type: ExternalName
externalName: 192.168.0.1
</code></pre>
<p>If you don't want to put the IP in the ExternalName service, you can also map a domain name in the <strong>/etc/hosts</strong> file. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ref doc</a></p>
<p>Now, if your Keycloak is running on the host machine (localhost), you can use the host's IP directly.</p>
<p>If you are using <strong>minikube</strong>, you can use <code>host.minikube.internal</code> as the host. <a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/" rel="nofollow noreferrer">ref doc</a></p>
<pre><code>host.minikube.internal:<Port of keycloak>
</code></pre>
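<p>Since the question mentions Docker Desktop for Mac: Docker Desktop exposes a similar host alias, <code>host.docker.internal</code>, which usually also resolves from inside pods. A minimal sketch of an ExternalName Service pointing at it (the name <code>keycloak-host</code> is only an example; if the alias does not resolve in your cluster, fall back to the host IP as shown above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: keycloak-host
spec:
  type: ExternalName
  externalName: host.docker.internal
</code></pre>
<p>Your service can then reach Keycloak via the DNS name <code>keycloak-host</code> (within the same namespace), on whatever port Keycloak listens on.</p>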
|
<p>I am on Kubernetes v1.22.13. While trying to delete a namespace that was stuck in status <code>terminating</code>, I deleted the API service <code>v1.networking.k8s.io</code> by mistake with:</p>
<pre><code>kubectl delete apiservices.apiregistration.k8s.io v1.networking.k8s.io
</code></pre>
<p>And now I don't have the API resources under <code>v1.networking.k8s.io</code>, such as <code>Ingress</code>. When I try to install the ingress controller it gives the error:</p>
<blockquote>
<p>error: resource mapping not found for name: "nginx" namespace: "" from "https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1"</p>
</blockquote>
<p>How can I undo that operation? Or how can I bring back the API resource <code>v1.networking.k8s.io</code>?</p>
<p>I tried to find a way to undo it and install it manually, but I couldn't find the manifest for it.</p>
| <p>You can recreate it via the following:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
kube-aggregator.kubernetes.io/automanaged: onstart
name: v1.networking.k8s.io
spec:
group: networking.k8s.io
groupPriorityMinimum: 17200
version: v1
versionPriority: 15
EOF
</code></pre>
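<p>Afterwards you can confirm that the group is served again and that built-in kinds such as <code>Ingress</code> are back, for example with:</p>
<pre><code>kubectl get apiservices.apiregistration.k8s.io v1.networking.k8s.io
kubectl api-resources --api-group=networking.k8s.io
</code></pre>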
|
<p>I was wondering if it is possible to include a configmap with its own values.yml file with a helm chart repository that I am not managing locally. This way, I can uninstall the resource with the name of the chart.</p>
<p>Example:</p>
<p>I am using New Relics Helm chart repository and installing the helm charts using their repo name. I want to include a configmap used for infrastructure settings with the same helm deployment without having to use a kubectl apply to add it independently.</p>
<p>I also want to avoid having to manage the repo locally, as I am pinning the version and other values separately from the <code>helm upgrade --install --set</code> triggers.</p>
| <p>What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.</p>
<p>I'm using the kube-prometheus-stack helm chart, but add some more custom resources like a SecretProviderClass.</p>
<p>kustomization.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: kube-prometheus-stack
repo: https://prometheus-community.github.io/helm-charts
version: 39.11.0
releaseName: prometheus
namespace: prometheus
valuesFile: values.yaml
includeCRDs: true
resources:
- secretproviderclass.yaml
</code></pre>
<p>I can then build the Kustomize yaml by running <code>kustomize build . --enable-helm</code> from within the same folder as where my kustomization.yaml file is.</p>
<p>I use this with my gitops setup, but you can use this standalone as well.</p>
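<p>Standalone, that simply means piping the rendered manifests into kubectl:</p>
<pre><code>kustomize build . --enable-helm | kubectl apply -f -
</code></pre>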
<p>My folder structure would look something like this:</p>
<pre><code>.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml
</code></pre>
|
<p>I have 2 k8s applications running in a single cluster. When they communicate via the service name, it takes 12 seconds to transfer 1 GB of data. When they communicate via the NodePort, it takes 8 seconds. Why is the service name slower?</p>
<p>Why is the NodePort faster? Shouldn't the service name actually be faster?</p>
| <p>It is expected to be faster when using the <code>NodePort</code> since the request does not rely on the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-proxy" rel="nofollow noreferrer">kube-proxy</a> to forward the traffic to the target.</p>
<p>It also depends on the way the kube-proxy is configured.</p>
<blockquote>
<p><strong>User space</strong>: This mode gets its name because the service routing takes place in kube-proxy in the user process space instead of in the kernel network stack. It is not commonly used as it is slow and outdated.</p>
<p><strong>iptables</strong>: This mode uses Linux kernel-level Netfilter rules to configure all routing for Kubernetes Services. This mode is the default for kube-proxy on most platforms. When load balancing for multiple backend pods, it uses unweighted round-robin scheduling.</p>
<p><strong>IPVS (IP Virtual Server)</strong>: Built on the Netfilter framework, IPVS implements Layer-4 load balancing in the Linux kernel, supporting multiple load-balancing algorithms, including least connections and shortest expected delay. This kube-proxy mode became generally available in Kubernetes 1.11, but it requires the Linux kernel to have the IPVS modules loaded. It is also not as widely supported by various Kubernetes networking projects as the iptables mode.</p>
</blockquote>
<p>ref: <a href="https://www.stackrox.io/blog/kubernetes-networking-demystified/#kube-proxy" rel="nofollow noreferrer">https://www.stackrox.io/blog/kubernetes-networking-demystified/#kube-proxy</a></p>
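<p>If you want to check which mode your kube-proxy is actually running in, two common ways (the ConfigMap path assumes a kubeadm-style setup) are:</p>
<pre><code># from the kube-proxy configuration (kubeadm-style clusters)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"

# or on a node, from kube-proxy's metrics endpoint
curl http://localhost:10249/proxyMode
</code></pre>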
<p>On a side note, recently a new feature gate was introduced <a href="https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/" rel="nofollow noreferrer">local service-traffic-policy</a>. While it's still going through the kube-proxy, it will also reduce roundtrips since it routes traffic only to the same node. Maybe you want to test this as an experiment.</p>
|
<p>Currently we have a set of microservices hosted on a Kubernetes cluster. We are setting HPA values based on rough estimates. I am planning to monitor horizontal pod autoscaling behavior using Grafana, to ensure we are not over/under-allocating resources like CPU/memory, and to come up with possible cost-optimization recommendations.</p>
<p>I am new to the Kubernetes world. I need directions on how to achieve this.</p>
| <p><strong>tl;dr</strong></p>
<ol>
<li>Monitor resource consumption of each pod.</li>
<li>Monitor pod restarts and number of replicas.</li>
<li>Use a load test.</li>
</ol>
<p><strong>Memory</strong></p>
<p>As a starting point, you could monitor CPU- and memory-consumption of each pod. For example you can do something like this:</p>
<pre><code>sum by (pod) (container_memory_usage_bytes{container=...})
  /
sum by (pod) (kube_pod_container_resource_requests{container=...})
</code></pre>
<p>If you follow the advice given in <a href="https://blog.kubecost.com/blog/requests-and-limits/" rel="nofollow noreferrer">A Practical Guide to Setting Kubernetes Requests and Limits</a>, the limit setting is related to the request setting. With such a query you can analyse, if the requested memory per pod is roughly realistic. Depending on the configuration of the autoscaler, this could be helpful. You could define some grafana alert rule that triggers an alarm if the desired ratio between used and requested memory exceeds some threshold.</p>
<p><strong>Restarts</strong></p>
<p>If the pod exceeds a given memory limit, the pod will crash and kubernetes will trigger a restart. With the following metric you can monitor restarts:</p>
<pre><code>sum by (pod) (increase(kube_pod_container_status_restarts_total{...}[1h]))
</code></pre>
<p><strong>CPU</strong></p>
<p>CPU usage is also relevant:</p>
<pre><code>process_cpu_usage{container="..."}
</code></pre>
<p>For additional queries, have a look at <a href="https://stackoverflow.com/q/55143656/11934850">Prometheus queries to get CPU and Memory usage in kubernetes pods</a>.</p>
<p><strong>Replicas</strong></p>
<p>Now, as you have basic metrics in place, what about the autoscaler itself? You'll be able to count the number of active pods like this:</p>
<pre><code>kube_horizontalpodautoscaler_status_current_replicas{}
</code></pre>
<p>Note that you might need to filter this metric by label <code>horizontalpodautoscaler</code>. But I recommend that you first run the metric without filters to get information about all running autoscalers.</p>
<p>To have better cost control, autoscaling is usually limited to a maximum number of replicas. If you are constantly running at the maximum, you might want to check whether the given maximum is too low. With kubectl you can check the status like this:</p>
<pre><code>kubectl describe hpa
</code></pre>
<p>Have a look at condition <code>ScalingLimited</code>.</p>
<p>With grafana:</p>
<pre><code>kube_horizontalpodautoscaler_status_condition{condition="ScalingLimited"}
</code></pre>
<p>A list of kubernetes metrics can be found at <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs#exposed-metrics" rel="nofollow noreferrer">kube-state-metrics</a>. Have a look at <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/horizontalpodautoscaler-metrics.md" rel="nofollow noreferrer">Horizontal Pod Autoscaler Metrics</a> and <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/replicationcontroller-metrics.md" rel="nofollow noreferrer">ReplicationController metrics</a>.</p>
<p><strong>Use a load test</strong></p>
<p>In the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HorizontalPodAutoscaler Walkthrough</a> there is a point where you need to increase the load on your application. There are several tools, that you may use for this, such as Apache Bench or JMeter.</p>
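<p>For a quick test you can also generate load with a simple busybox loop, as in the walkthrough linked above (replace <code>php-apache</code>, the walkthrough's example service, with your own service URL):</p>
<pre><code>kubectl run -i --tty load-generator --rm --image=busybox --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
</code></pre>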
<p>In my experience, upscaling is easy to achieve, the tricky part is the downscaling. Therefore, you need to play with increasing <em>and decreasing</em> the load.</p>
|
<p>I'm trying to listen to Secret changes using the operator-sdk.
The problem is that I'm not getting the reconcile event when I apply a Secret with the labels that I defined in the operator.</p>
<p>I did the following</p>
<pre><code>mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
…
NewCache: cache.BuilderWithOptions(cache.Options{
SelectorsByObject: cache.SelectorsByObject{
&corev1.Secret{}: {
Label: labels.SelectorFromSet(labels.Set{"foo": "bar"}),
},
},
}),
</code></pre>
<p>I run the operator and apply the following secret and the <strong>reconcile is not invoked</strong>, any idea?</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
labels:
foo: bar
name: mysecret
namespace: dev
type: Opaque
data:
USER_NAME: YWRtaW4=
PASSWORD: dGVzdBo=
</code></pre>
| <p>The <code>cache.Options.SelectorsByObject</code> field only restricts which Secrets end up in the manager's cache; it does not, by itself, register a watch that enqueues reconcile requests for them.</p>
<p>To get reconcile events for those Secrets you also need to register a watch with a label predicate when the controller is built (the manager itself has no <code>Watch</code> method), for example via the controller-runtime builder API; <code>yourv1.YourCR</code> below is a placeholder for your controller's primary resource:</p>
<pre class="lang-golang prettyprint-override"><code>// packages used: sigs.k8s.io/controller-runtime/pkg/{builder,client,handler,predicate,source}
err := ctrl.NewControllerManagedBy(mgr).
    For(&yourv1.YourCR{}). // placeholder primary resource
    Watches(
        &source.Kind{Type: &corev1.Secret{}},
        &handler.EnqueueRequestForObject{},
        builder.WithPredicates(predicate.NewPredicateFuncs(func(obj client.Object) bool {
            // enqueue only Secrets carrying the expected label
            return obj.GetLabels()["foo"] == "bar"
        })),
    ).
    Complete(r)
</code></pre>
|
<p>I'm deploying ELK on k8s but getting error on <code>Filebeat</code></p>
<p><code>kubectl describe pod filebeat-filebeat-rpjbg -n elk</code> ///</p>
<p><strong>Error:</strong></p>
<pre><code>Warning Unhealthy 8s (x5 over 48s) kubelet Readiness probe failed: logstash: logstash:5044...
connection...
parse host... OK
dns lookup... ERROR lookup logstash on 10.245.0.10:53: no such host
</code></pre>
<p>In the logstash <code>values.yaml</code>, maybe this is causing the error?</p>
<pre><code>logstashConfig:
logstash.yml: |
http.host: 0.0.0.0
xpack.monitoring.enabled: false
</code></pre>
<p><strong>PODS:</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 146m
filebeat-filebeat-rpjbg 0/1 Running 0 5m45s
filebeat-filebeat-v4fxz 0/1 Running 0 5m45s
filebeat-filebeat-zf5w7 0/1 Running 0 5m45s
logstash-logstash-0 1/1 Running 0 14m
logstash-logstash-1 1/1 Running 0 14m
logstash-logstash-2 1/1 Running 0 14m
</code></pre>
<p><strong>SVC:</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.245.205.251 <none> 9200/TCP,9300/TCP 172m
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 172m
logstash-logstash ClusterIP 10.245.104.163 <none> 5044/TCP 16m
logstash-logstash-headless ClusterIP None <none> 9600/TCP 16m
</code></pre>
<p><a href="https://github.com/ConorGrocock/DigitalOcean-Kubernetes-Challenge/blob/main/elasticsearch/values.yaml" rel="nofollow noreferrer">elasticsearch - values.yaml</a></p>
<p><a href="https://github.com/ConorGrocock/DigitalOcean-Kubernetes-Challenge/blob/main/logstash/values.yaml" rel="nofollow noreferrer">logstash - values.yaml</a></p>
<p><a href="https://github.com/ConorGrocock/DigitalOcean-Kubernetes-Challenge/tree/main/filebeat" rel="nofollow noreferrer">filebeat - values.yaml</a></p>
| <p>Filebeat is trying to resolve "logstash", but you don't have a service with that name. You have "logstash-logstash". Try to change filebeat config (line 49 and 116 in filebeat values.yaml) or change the name of your logstash service accordingly.</p>
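<p>In the filebeat values that change would look roughly like this (a sketch; the exact location of the key depends on the chart version):</p>
<pre><code>output.logstash:
  hosts: ["logstash-logstash:5044"]
</code></pre>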
|
<p>I was wondering if it is possible to include a configmap with its own values.yml file with a helm chart repository that I am not managing locally. This way, I can uninstall the resource with the name of the chart.</p>
<p>Example:</p>
<p>I am using New Relics Helm chart repository and installing the helm charts using their repo name. I want to include a configmap used for infrastructure settings with the same helm deployment without having to use a kubectl apply to add it independently.</p>
<p>I also want to avoid having to manage the repo locally, as I am pinning the version and other values separately from the <code>helm upgrade --install --set</code> triggers.</p>
| <p>Using only Helm without any 3rd party tools like kustomize there are two solutions:</p>
<ol>
<li><p>Depend on the configurability of the Chart you are using as described by @Akshay in the other answer</p>
</li>
<li><p>Declare the Chart you are looking to add a ConfigMap to as a <a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">dependency</a></p>
</li>
</ol>
<p>You can manage the Chart dependencies in the <code>Chart.yaml</code> file:</p>
<pre><code># Chart.yaml
dependencies:
- name: nginx
version: "1.2.3"
repository: "https://example.com/charts"
</code></pre>
<p>With the dependency in place, you can add your own resource files (e.g., the ConfigMap) to the chart. During Helm install, all dependencies and your custom files will be merged into a single Helm deployment.</p>
<pre><code>my-nginx-chart/:
values.yaml # defines all values including the dependencies
Chart.yaml # declares the dependencies
templates/ # custom resources to be added on top of the dependencies
configmap.yaml # the configmap you want to add
</code></pre>
<p>To configure values for a dependency, you need to prefix the parameters in your <code>values.yaml</code>:</p>
<pre><code>my-configmap-value: Hello World
nginx: #<- refers to "nginx" dependency
image: ...
</code></pre>
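<p>Before installing, pull the declared dependency into the chart's <code>charts/</code> folder and then install your wrapper chart as usual (the release and namespace names below are placeholders):</p>
<pre><code>helm dependency update ./my-nginx-chart
helm upgrade --install my-release ./my-nginx-chart --namespace my-namespace --create-namespace
</code></pre>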
|
<p>I'm installing jupyterhub on k8s using <a href="https://z2jh.jupyter.org/en/stable/jupyterhub/installation.html#install-jupyterhub" rel="nofollow noreferrer">helm</a>.</p>
<pre><code>helm upgrade --cleanup-on-fail --install jupyterhub jupyterhub-2.0.0/jupyterhub/ --namespace my-NS --create-namespace --version=2.0.0 --values my-values.yaml --timeout 30m --debug
</code></pre>
<p>It's failing with an error creating the hook-image-awaiter pods.</p>
<p>Error from helm debug:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>upgrade.go:142: [debug] preparing upgrade for jupyterhub
upgrade.go:150: [debug] performing update for jupyterhub
upgrade.go:322: [debug] creating upgraded release for jupyterhub
client.go:310: [debug] Starting delete for "hook-image-puller" DaemonSet
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "hook-image-awaiter" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "hook-image-awaiter" Job
client.go:128: [debug] creating 1 resource(s)
client.go:540: [debug] Watching for changes to Job hook-image-awaiter with timeout of 30m0s
client.go:568: [debug] Add/Modify event for hook-image-awaiter: ADDED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 1, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 2, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 3, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 4, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 5, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 6, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
upgrade.go:434: [debug] warning: Upgrade "jupyterhub" failed: pre-upgrade hooks failed: job failed: BackoffLimitExceeded
Error: UPGRADE FAILED: pre-upgrade hooks failed: job failed: BackoffLimitExceeded
helm.go:84: [debug] pre-upgrade hooks failed: job failed: BackoffLimitExceeded
UPGRADE FAILED</code></pre>
</div>
</div>
</p>
<p>There are some answers on the web which didn't help to resolve the issue. I have tried the following:</p>
<ul>
<li>Increase helm install time out - didn't work</li>
<li>Failed to pull image: I can see jupyterhub/k8s-image-awaiter:2.0.0 got pulled and I do not see any errors when pods are described</li>
</ul>
<p><a href="https://i.stack.imgur.com/Nnscp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nnscp.png" alt="enter image description here" /></a></p>
<p>hook-image-puller:
<a href="https://i.stack.imgur.com/L22Oy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L22Oy.png" alt="enter image description here" /></a>
hook-image-awaiter:
<a href="https://i.stack.imgur.com/iUOkM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iUOkM.png" alt="enter image description here" /></a></p>
<p>Event logs:
<a href="https://i.stack.imgur.com/MjOFc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MjOFc.png" alt="enter image description here" /></a></p>
<p>PS: I disabled rbac in values.yaml (create=false) and am installing jupyterhub in a new namespace while another one is running in a different namespace.</p>
| <p>Disable the pre-pull hook:</p>
<p><a href="https://z2jh.jupyter.org/en/latest/administrator/optimization.html#pulling-images-before-users-arrive" rel="nofollow noreferrer">https://z2jh.jupyter.org/en/latest/administrator/optimization.html#pulling-images-before-users-arrive</a></p>
<pre><code>prePuller:
  hook:
    enabled: false
</code></pre>
|
<p>If I use -XX:MaxRAMPercentage for Java running in a container, and in the K8S definition I provide request and limit values for the container's memory, what will happen?</p>
<p>For example, if the request is 1GB and the limit 2GB, what will be the initial value for the JVM's max memory?</p>
| <p>The memory limit applies to the container's cgroup. So, assuming you are using JVM version 10+, where <code>-XX:+UseContainerSupport</code> is enabled by default, your JVM's max memory will be the configured percentage of the limit, i.e. if <code>-XX:MaxRAMPercentage</code> has a value of 100 then your max memory will be 2GB.</p>
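<p>For illustration, with the values from the question this could look as follows (a sketch; <code>JAVA_TOOL_OPTIONS</code> is just one convenient way to pass the flag):</p>
<pre><code>resources:
  requests:
    memory: "1Gi"   # used for scheduling only
  limits:
    memory: "2Gi"   # the cgroup limit the JVM sees
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-XX:MaxRAMPercentage=100.0"   # max heap = 100% of the 2Gi limit; in practice leave headroom for non-heap memory
</code></pre>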
<p>The memory request is only used for scheduling of new pods, i.e. Kubernetes won't schedule a pod on a node that has less than the requested memory available.</p>
<p>Reference:</p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run</a></p>
|
<p>I've got a Kubernetes node, control-plane, which is untainted so pods can be deployed to it. I've got a docker image <code>sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2</code>.</p>
<p>I'm signed into the docker CLI. The daemon.json is set with the insecure registry, and I can verify this with <code>docker info</code>:</p>
<pre><code> Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: sdmay2342
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
sdmay23-42.ece.iastate.edu:5000
127.0.0.0/8
Live Restore Enabled: false
</code></pre>
<p>I can pull the image:</p>
<pre><code>Status: Image is up to date for sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2
sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2
</code></pre>
<p>I can build a container from the image:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad582a4d514b sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2 "docker-entrypoint.s…" 6 seconds ago Up 6 seconds 3000/tcp test-frontend
</code></pre>
<p>When I deploy it to the node from a YAML manifest, I get an error.
Basic manifest:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2
ports:
- containerPort: 6379
</code></pre>
<p>command: <code>sudo kubectl create -f test-deploy.yaml</code>
response: <code>pod/test-pod created</code></p>
<p>the description of the pod:</p>
<pre><code>Name: test-pod
Namespace: default
Priority: 0
Service Account: default
Node: sdmay23-42/10.29.160.55
Start Time: Sun, 27 Nov 2022 18:46:54 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.116
IPs:
IP: 10.244.0.116
Containers:
test-container:
Container ID:
Image: sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2
Image ID:
Port: 6379/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bvwzz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-bvwzz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned default/test-pod to sdmay23-42
Normal BackOff 23s (x2 over 50s) kubelet Back-off pulling image "sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2"
Warning Failed 23s (x2 over 50s) kubelet Error: ImagePullBackOff
Normal Pulling 12s (x3 over 50s) kubelet Pulling image "sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2"
Warning Failed 12s (x3 over 50s) kubelet Failed to pull image "sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2": rpc error: code = Unknown desc = failed to pull and unpack image "sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2": failed to resolve reference "sdmay23-42.ece.iastate.edu:5000/canvas-lti-frontend:v2": failed to do request: Head "https://sdmay23-42.ece.iastate.edu:5000/v2/canvas-lti-frontend/manifests/v2": http: server gave HTTP response to HTTPS client
Warning Failed 12s (x3 over 50s) kubelet Error: ErrImagePull
</code></pre>
| <p>The solution was to configure containerd to accept insecure registries. I had thought Kubernetes was using Docker; you can run <code>kubectl get nodes -o wide</code> to see which container runtime each node actually uses.</p>
<p>Steps for configuring containerd are here: <a href="https://stackoverflow.com/questions/72419513/how-to-pull-docker-image-from-a-insecure-private-registry-with-latest-kubernetes">How to pull docker image from a insecure private registry with latest Kubernetes</a>.</p>
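<p>For reference, on containerd 1.x the older inline registry configuration in <code>/etc/containerd/config.toml</code> looks roughly like this (a sketch using the registry host from the question; newer containerd versions prefer a <code>config_path</code>/<code>hosts.toml</code> layout):</p>
<pre><code>[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."sdmay23-42.ece.iastate.edu:5000"]
    endpoint = ["http://sdmay23-42.ece.iastate.edu:5000"]

# afterwards, on the node: sudo systemctl restart containerd
</code></pre>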
|
<p>My GKE cluster is displaying "Scale down blocked by pod" note, and clicking it then going to the Logs Explorer it shows a filtered view with log entries for the pods that had the incident: <code>no.scale.down.node.pod.not.enough.pdb</code> . But that's really strange since the pods on the log entries having that message do have PDB defined for them. So it seems to me that GKE is wrongly reporting the cause of the blocking of the node scale down. These are the manifests for one of the pods with this issue:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: ms-new-api-beta
name: ms-new-api-beta
namespace: beta
spec:
ports:
- port: 8000
protocol: TCP
targetPort: 8000
selector:
app: ms-new-api-beta
type: NodePort
</code></pre>
<p>The Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ms-new-api-beta
name: ms-new-api-beta
namespace: beta
spec:
selector:
matchLabels:
app: ms-new-api-beta
template:
metadata:
annotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
labels:
app: ms-new-api-beta
spec:
containers:
- command:
- /deploy/venv/bin/gunicorn
- '--bind'
- '0.0.0.0:8000'
- 'newapi.app:app'
- '--chdir'
- /deploy/app
- '--timeout'
- '7200'
- '--workers'
- '1'
- '--worker-class'
- uvicorn.workers.UvicornWorker
- '--log-level'
- DEBUG
env:
- name: ENV
value: BETA
image: >-
gcr.io/.../api:${trigger['tag']}
imagePullPolicy: Always
livenessProbe:
failureThreshold: 5
httpGet:
path: /rest
port: 8000
scheme: HTTP
initialDelaySeconds: 120
periodSeconds: 20
timeoutSeconds: 30
name: ms-new-api-beta
ports:
- containerPort: 8000
name: http
protocol: TCP
readinessProbe:
httpGet:
path: /rest
port: 8000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 2
resources:
limits:
cpu: 150m
requests:
cpu: 100m
startupProbe:
failureThreshold: 30
httpGet:
path: /rest
port: 8000
periodSeconds: 120
imagePullSecrets:
- name: gcp-docker-registry
</code></pre>
<p>The Horizontal Pod Autoscaler:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: ms-new-api-beta
namespace: beta
spec:
maxReplicas: 5
minReplicas: 2
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: ms-new-api-beta
targetCPUUtilizationPercentage: 100
</code></pre>
<p>And finally, the Pod Disruption Budget:</p>
<pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: ms-new-api-beta
namespace: beta
spec:
minAvailable: 0
selector:
matchLabels:
app: ms-new-api-beta
</code></pre>
| <p><code>no.scale.down.node.pod.not.enough.pdb</code> is not complaining about the lack of a PDB. It is complaining that, if the pod is scaled down, it will be <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#noscaledown-node-level-reasons" rel="nofollow noreferrer"><em>in violation</em> of the existing PDB(s)</a>.</p>
<p>The "budget" is how much disruption the Pod can permit. The platform will not take any intentional action which violates that budget.</p>
<p>There may be another PDB in place that would be violated. To check, make sure to review pdbs in the pod's namespace:</p>
<pre><code>kubectl get pdb
</code></pre>
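<p>The <code>ALLOWED DISRUPTIONS</code> column shows how much voluntary disruption is currently permitted, which is what the autoscaler checks before draining a node. For the namespace in the question that would be, for example:</p>
<pre><code>kubectl get pdb -n beta
kubectl describe pdb ms-new-api-beta -n beta
</code></pre>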
|
<p>I have a situation where the node has <code>4GB memory</code> and the actual memory usage looks as below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Pod</th>
<th>Memory Requested</th>
<th>Memory Limit</th>
<th>Memory Used</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2.0 GiB</td>
<td>3.0 GiB</td>
<td>1.0 GiB</td>
</tr>
<tr>
<td>2</td>
<td>2.0 GiB</td>
<td>3.0 GiB</td>
<td>1.0 GiB</td>
</tr>
<tr>
<td>Free</td>
<td>0.0 GiB</td>
<td>0.0 GiB</td>
<td>2.0 GB</td>
</tr>
</tbody>
</table>
</div>
<p>Since there is free memory, nothing gets evicted.</p>
<p>But now let's say both pods 1 and 2 start doing real work, and the situation changes to</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Pod</th>
<th>Memory Requested</th>
<th>Memory Limit</th>
<th>Memory Used</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2.0 GiB</td>
<td>3.0 GiB</td>
<td>3.0 GiB</td>
</tr>
<tr>
<td>2</td>
<td>2.0 GiB</td>
<td>3.0 GiB</td>
<td>2.0 GiB</td>
</tr>
</tbody>
</table>
</div>
<p>and the Kubernetes eviction algorithm gets triggered.</p>
<p>In such a situation, which pod will be evicted? Will it be <code>pod1</code> or <code>pod2</code>?</p>
<p>I have already checked <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction" rel="nofollow noreferrer">pod selection rules</a>, but still not able to get an understanding of how eviction will work in this case.</p>
| <p>In your example, pod 1 will get evicted. The Pod that is not using more memory than it requested will not get evicted.</p>
<p>This is mentioned in the Kubernetes documentation you link to:</p>
<blockquote>
<p>The kubelet uses the following parameters to determine the pod eviction order:</p>
<ol>
<li>Whether the pod's resource usage exceeds requests</li>
<li>Pod Priority</li>
<li>The pod's resource usage relative to requests</li>
</ol>
</blockquote>
<p>In your example, pod 2's resource usage does not exceed requests (memory requested=2 GiB, actual use=2 GiB) so it is removed from the algorithm. That leaves pod 1 as the only pod remaining, and it gets evicted.</p>
<p>Say pod 2 is also above its request. Then for both pods, subtract the request from the actual utilization, and the pod that is the most over its request gets evicted.</p>
<p>Let's look at a little more complex example on a hypothetical 8 GiB node:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Pod</th>
<th>Requested</th>
<th>Actual</th>
<th>Excess use</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>4.0 GiB</td>
<td>4.0 GiB</td>
<td>0.0 GiB</td>
</tr>
<tr>
<td>2</td>
<td>1.0 GiB</td>
<td>2.0 GiB</td>
<td><strong>1.0 GiB</strong></td>
</tr>
<tr>
<td>3</td>
<td>1.0 GiB</td>
<td>1.3 GiB</td>
<td>0.3 GiB</td>
</tr>
<tr>
<td>4</td>
<td>0.0 GiB</td>
<td>0.8 GiB</td>
<td>0.8 GiB</td>
</tr>
</tbody>
</table>
</div>
<p>Pod 1 is using the most memory, but it is within its requests, so it is safe. Subtracting requests from actual use, pod 2 has the most excess memory and it is the one that will get evicted. Pod 4 hasn't declared resource requests at all, and while it's safe in this scenario, it's at risk in general; absent pod 2, it's the pod using the most memory above its requests, even though it's using the least absolute memory.</p>
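<p>As a side note, you can check where a given pod falls by its QoS class; pods that declare no requests at all (like pod 4) are <code>BestEffort</code> and are generally among the first candidates under node pressure:</p>
<pre><code>kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
</code></pre>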
|
<p>I'm testing a trigger I set on my k8s cluster. What it does is detect whether a pod is evicted. So to test it I ran</p>
<p><code>kubectl drain <NODENAME> --ignore-daemonsets --force</code></p>
<p>but the evicted pods are just automatically deleted instead of staying in the cluster.
Can you help me with how to evict a pod?
I'm using kind to test this locally on my computer.
Actually, I want to read the evicted pod's YAML after it is evicted so that I can build that trigger.</p>
| <p>First, to answer the title question: "How to evict pods in kubernetes?" There are a number of additional ways to trigger pod eviction, but two easy ones:</p>
<h3>API Based Eviction</h3>
<p>There is an API you can use to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#create-eviction-pod-v1-core" rel="nofollow noreferrer">evict a pod manually</a>. To use the API:</p>
<ul>
<li>via curl: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/api-eviction/</a></li>
<li>with a tool: <a href="https://github.com/rajatjindal/kubectl-evict-pod" rel="nofollow noreferrer">https://github.com/rajatjindal/kubectl-evict-pod</a></li>
</ul>
<h3>Apply a NoExecute Taint</h3>
<p>A <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts" rel="nofollow noreferrer">NoExecute taint</a> that your pod does not tolerate will cause it to be evicted.</p>
<pre><code>kubectl taint nodes node1 key1=value1:NoExecute
</code></pre>
<p>Now, the text of your question asks a slightly different question: reading the pod resource <em>after</em> a successful eviction. At least in my environment and k8s version, as soon as a pod is evicted, it is deleted by the service account associated with the node it was running on.</p>
<p>In practice, many things can delete pods - including <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/" rel="nofollow noreferrer">garbage collectors</a>, so querying a pod after it has been terminated is fraught with potential race conditions.</p>
<p>Instead, I would recommend setting up a programmatic watch for pod events. <code>client-go</code> offers <a href="https://medium.com/codex/explore-client-go-informer-patterns-4415bb5f1fbd" rel="nofollow noreferrer">the Informer pattern</a> to accomplish this with a bit of boilerplate.</p>
<p>Create an informer for Pods, define an <code>UpdateFunc</code> for your event handler, and you will reliably receive pod state changes as they are processed by the API -- which you can then assess to determine if they were evicted.</p>
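<p>While you are developing the trigger, a quick way to watch for evictions from the CLI (the kubelet records an event with reason <code>Evicted</code>) is:</p>
<pre><code>kubectl get events -A --watch --field-selector reason=Evicted
</code></pre>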
|
<p>The application that my team and I are working on uses Ruby on Rails, hosted in an OpenShift pod. We use delayed_job to handle background processes, and for delayed_job logging the application writes the log into the <code>log/delayed_job.log</code> file. However, if a new pod for the application is created, the pod will create a new file. Also, the company is using Kibana to collect every log from the OpenShift pods.</p>
<p>What we tried so far is put this code in <code>delayed_job_config.rb</code></p>
<p><code>Delayed::Worker.logger = Logger.new(STDERR)</code></p>
<p>For processes other than delayed_job, we write to the OpenShift pod log using code like the following, e.g.:</p>
<p><code>Rails.logger.info "Result: #{ldap.get_operation_result.code}"</code></p>
<p>However, the delayed_job log still does not appear in the Logs tab of the pod (which is what is needed for the log to appear in Kibana).</p>
<p>The goal is to get the log into the Logs tab of the OpenShift pod.</p>
<p><a href="https://i.stack.imgur.com/EcYlJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EcYlJ.png" alt="enter image description here" /></a></p>
<p>Any help is appreciated.</p>
<p>UPDATE :</p>
<p>We tried to put <code>Delayed::Worker.logger = Rails.logger</code> in <code>delayed_job_config.rb</code>, but it still did not work.</p>
| <p>In case it might help anyone:</p>
<p>We ended up muting the DelayedJob ActiveRecord-related logs, working from <a href="https://github.com/collectiveidea/delayed_job/issues/477#issuecomment-800341818" rel="nofollow noreferrer">this delayed_job repo post</a>, and also made sure that if an error occurs we restore the previous log level.</p>
<p>And within our container, we tail the actual delayed_job log file to standard output to get the actual job logs:</p>
<blockquote>
<p>tail -f /app/log/production.log &</p>
</blockquote>
<p>It's not optimal, but at least, we get to see the logs!</p>
|
<p>I'm managing Kubernetes + nginx.</p>
<p>I'd like to install dynamic modules on nginx that are provided by Nginx Ingress Controller.
Those dynamic modules are not offered by Nginx Ingress Controller official configmap (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/</a>)</p>
<p>So I believe, I need to build my own Docker container of Nginx Ingress Controller.
(Could be added at this? <a href="https://github.com/kubernetes/ingress-nginx/blob/8951b7e22ad3952c549150f61d7346f272c563e1/images/nginx/rootfs/build.sh#L618-L632" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/8951b7e22ad3952c549150f61d7346f272c563e1/images/nginx/rootfs/build.sh#L618-L632</a> )</p>
<p>Do you know how we can customize the controller and manage it by helm chart? I'm thinking about making a Fork branch from the controller master repo on Github.
But I don't have any idea on how we install a customized version of the controller on terraform + helm chart.</p>
<p>However, I would prefer to use a non-customizable solution (because of some annotation settings)</p>
<p>Environment:</p>
<ul>
<li>Kubernetes</li>
<li>Nginx Ingress Controller, installed by helm chart + terraform: <a href="https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx</a></li>
</ul>
<p>Terraform:</p>
<pre><code>resource "helm_release" "nginx-ingress-controller" {
name = "nginx-ingress-controller"
chart = "ingress-nginx/ingress-nginx"
namespace = "kube-system"
version = "3.34.0"
}
</code></pre>
<p>dynamic modules
<a href="https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/" rel="noreferrer">https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/</a>
(install process might be using <code>--add-dynamic-module</code> option, and set <code>load_module modules/something.so</code> on <code>nginx.conf</code> via <code>ingress.yaml</code>)</p>
<p>Thank you.</p>
| <h1>TL;DR</h1>
<p>Extend the official image with the dynamic modules, and update the <code>helm_release</code> <code>terraform</code> resource to <code>set</code> the <code>controller.image.registry</code>, <code>controller.image.image</code>, <code>controller.image.tag</code>, <code>controller.image.digest</code>, and <code>controller.image.digestChroot</code> for your custom image along with a <code>controller.config.main-snippet</code> to load the dynamic module(s) in the main context.</p>
<hr />
<p>This is similar to my previous <a href="https://stackoverflow.com/a/57741684/1423507">answer for building modules using the official nginx image</a>. You can extend the <code>ingress-nginx/controller</code> image, build the modules in one stage, extend the official image with the dynamic modules in another stage, and use the image in your <code>helm_release</code>. An example for extending the <code>ingress-nginx/controller</code> with the <a href="https://github.com/openresty/echo-nginx-module" rel="nofollow noreferrer"><code>echo-nginx-module</code></a> e.g.:</p>
<h2>Docker</h2>
<pre><code>ARG INGRESS_NGINX_CONTROLLER_VERSION
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION} as build
ARG INGRESS_NGINX_CONTROLLER_VERSION
ENV INGRESS_NGINX_CONTROLLER_VERSION=${INGRESS_NGINX_CONTROLLER_VERSION}
USER root
RUN apk add \
automake \
ca-certificates \
curl \
gcc \
g++ \
make \
pcre-dev \
zlib-dev
RUN NGINX_VERSION=$(nginx -V 2>&1 |sed -n -e 's/nginx version: //p' |cut -d'/' -f2); \
    mkdir -p /tmp/nginx && \
    curl -L "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" | tar -C /tmp/nginx --strip-components=1 -xz
WORKDIR /src/echo-nginx-module
RUN curl -L https://github.com/openresty/echo-nginx-module/archive/refs/tags/v0.63.tar.gz | tar --strip-components=1 -xz
WORKDIR /tmp/nginx
RUN ./configure --with-compat --add-dynamic-module=/src/echo-nginx-module && \
make modules
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION}
COPY --from=build /tmp/nginx/objs/ngx_http_echo_module.so /etc/nginx/modules/
</code></pre>
<p>... build and push the image e.g.: <code>docker build --rm -t myrepo/ingress-nginx/controller:v1.5.1-echo --build-arg INGRESS_NGINX_CONTROLLER_VERSION=v1.5.1 . && docker push myrepo/ingress-nginx/controller:v1.5.1-echo</code></p>
<h2>Terraform</h2>
<p>Update the <code>terraform</code> <code>helm_release</code> resource to install the charts using the custom image and adding a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#main-snippet" rel="nofollow noreferrer"><code>main-snippet</code></a> to set the <a href="https://nginx.org/en/docs/ngx_core_module.html#load_module" rel="nofollow noreferrer"><code>load_module</code></a> directive in the <code>main</code> context:</p>
<pre class="lang-hcl prettyprint-override"><code>resource "helm_release" "ingress-nginx" {
name = "ingress-nginx"
namespace = "kube-system"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
version = "3.34.0"
set {
name = "controller.image.registry"
value = "myrepo"
}
set {
name = "controller.image.image"
value = "ingress-nginx/controller"
}
set {
name = "controller.image.tag"
value = "v1.5.1-echo"
}
set {
name = "controller.image.digest"
value = "sha256:1b32b3e8c983ef4a32d87dead51fbbf2a2c085f1deff6aa27a212ca6beefcb72"
}
set {
name = "controller.image.digestChroot"
value = "sha256:f2e1146adeadac8eebb251284f45f8569beef9c6ec834ae1335d26617da6af2d"
}
set {
name = "controller.config.main-snippet"
value = <<EOF
load_module /etc/nginx/modules/ngx_http_echo_module.so;
EOF
}
}
</code></pre>
<p>The <code>controller.image.digest</code> is the image <code>RepoDigest</code>: <code>docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format '{{range .RepoDigests}}{{println .}}{{end}}' |cut -d'@' -f2</code></p>
<p>The <code>controller.image.digestChroot</code> is the <code>Parent</code> sha: <code>docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format {{.Parent}}</code></p>
<h2>Test</h2>
<ol>
<li>Create a <code>nginx</code> pod: <code>kubectl run nginx --image=nginx</code></li>
<li>Expose the pod: <code>kubectl expose pod nginx --port 80 --target-port 80</code></li>
<li>Create an ingress with a <code>server-snippet</code>:</li>
</ol>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/server-snippet: |
location /hello {
echo "hello, world!";
}
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: echo.example.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: nginx
port:
number: 80
tls:
- hosts:
- echo.example.com
secretName: tls-echo
EOF
</code></pre>
<blockquote>
<p>Using <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer"><code>cert-manager</code></a> for TLS certificates issuance and <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer"><code>external-dns</code></a> for DNS management.</p>
</blockquote>
<ol start="4">
<li>Test using <code>curl</code>:</li>
</ol>
<p><a href="https://i.stack.imgur.com/ZoDrF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZoDrF.png" alt="curl echo module test on publicly exposed app" /></a></p>
|
<p>We are doing a case study with Control-M to monitor a Kubernetes job.
On successful completion of the job, Control-M is able to recognize that the job completed.
However, when it fails, Control-M never recognizes the failure and shows the job as still running; I suspect the job never gets completed in Kubernetes.</p>
<p>Here are the job, the pod status and the Kubernetes YAML file.</p>
<p>My question: is there a way for a Kubernetes job to complete with a failure,
or is this the default behavior of Kubernetes?</p>
<pre><code># kubectl -n ns-dev get job
NAME COMPLETIONS DURATION AGE
job-pod-failure-policy-example 0/1 3m39s 3m39s
# kubectl -n ns-dev get pods
NAME READY STATUS RESTARTS AGE
job-pod-failure-policy-example-h86bp 0/1 Error 0 82s
Yaml file:
apiVersion: batch/v1
kind: Job
metadata:
name: job-pod-failure-policy-example
spec:
template:
spec:
restartPolicy: Never
containers:
- name: main
image: docker.io/library/bash:5
command: ["bash"] # example command simulating a bug which triggers the FailJob action
args:
- -c
- echo "Hello world!" && sleep 5 && exit 1
backoffLimit: 0
podFailurePolicy:
rules:
- action: Terminate
onExitCodes:
containerName: main
operator: In
values: [1]
</code></pre>
<p>I have gone through below link to help to set the backoff limit to zero which helped stop retriggering multiple times.</p>
<p><strong><a href="https://stackoverflow.com/questions/62443173/kubernetes-job-keeps-spinning-up-pods-which-end-up-with-the-error-status">Kubernetes job keeps spinning up pods which end up with the 'Error' status</a></strong></p>
| <blockquote>
<p>My question: is there a way for a Kubernetes job to complete with a failure,
or is this the default behavior of Kubernetes?</p>
</blockquote>
<p>You can mostly manage this from the application code: if there is an error, shut down gracefully and exit with a proper non-zero exit code so Kubernetes can mark the pod as failed.</p>
<p>A <strong>Kubernetes</strong> Job has only two terminal statuses: <strong>Failed</strong> or <strong>Complete</strong>.</p>
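<p>To see which of the two a Job ended in, inspect its conditions, e.g.:</p>
<pre><code>kubectl -n ns-dev describe job job-pod-failure-policy-example
kubectl -n ns-dev get job job-pod-failure-policy-example -o jsonpath='{.status.conditions}'
</code></pre>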
<p>But you can also mark the job as complete yourself by sending a PATCH request to the API server:</p>
<pre><code>curl <Api server>/apis/batch/v1/namespaces/<namespacename>/jobs/<job name>/status -XPATCH -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" -d '{"status": {"succeeded": 1}}'
</code></pre>
|
<p>I'm unable to find any references other than this link that confirms that the failure has to be consecutive. <a href="https://github.com/kubernetes/website/issues/37414" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/37414</a></p>
<p>Background: Our Java application is getting restarted every day because of liveness probe failures. The application's access logs don't show 3 consecutive failures, so I wanted to understand the behavior of the probes.</p>
| <p>The liveness check is created when Kubernetes creates the pod and is recreated each time that pod is restarted. For example, with <code>initialDelaySeconds: 20</code> Kubernetes waits 20 seconds after the container starts and then begins calling the liveness probe; after 3 consecutive failures (the default <code>failureThreshold: 3</code>) it restarts the pod according to its RestartPolicy. You will also find this recorded in the pod's events.</p>
<p>When you are using <code>kubectl get events</code> you are getting events only from the last hour.</p>
<pre><code>Kubectl get events
LAST SEEN TYPE REASON OBJECT
47m Normal Starting node/kubeadm
43m Normal Scheduled pod/liveness-http
43m Normal Pulling pod/liveness-http
43m Normal Pulled pod/liveness-http
43m Normal Created pod/liveness-http
43m Normal Started pod/liveness-http
4m41s Warning Unhealthy pod/liveness-http
40m Warning Unhealthy pod/liveness-http
12m20s Warning BackOff pod/liveness-http
</code></pre>
<p>same command after ~1 hour:</p>
<pre><code>LAST SEEN TYPE REASON OBJECT
43s Normal Pulling pod/liveness-http
8m40s Warning Unhealthy pod/liveness-http
20m Warning BackOff pod/liveness-http
</code></pre>
<p>So that might be the reason you are seeing only one failure.</p>
<p>Liveness probe can be configured using the fields below:</p>
<ul>
<li><p>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</p>
</li>
<li><p>periodSeconds: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.</p>
</li>
<li><p>timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p>
</li>
<li><p>successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p>
</li>
<li><p>failureThreshold: When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the container. In case of a readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</p>
</li>
</ul>
<p>If you set the minimal values for periodSeconds, timeoutSeconds, successThreshold and failureThreshold you can expect more frequent checks and faster restarts.</p>
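<p>As an illustration, a probe configured like this (values are only an example) needs 3 consecutive failed checks, i.e. roughly 3 × <code>periodSeconds</code> of continuous failure, before the container is restarted:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
</code></pre>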
<p>Liveness probe :</p>
<ul>
<li>Kubernetes will restart a container in a pod after <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">failureThreshold</a> times. By default it is 3 times - so after 3 failed probes.</li>
<li>Depending on your configuration of the container, time needed for container termination could be very differential</li>
<li>You can adjust both <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">failureThreshold</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">terminationGracePeriodSeconds</a> period parameters, so the container will be restarted immediately after every failed probe</li>
</ul>
<p>In <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness probe configuration</a> and <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">best practices</a> you can find more information.</p>
|
<p>I had this happen to me a couple of times:</p>
<p>I change something in /etc/kubernetes/manifests/kube-apiserver.yaml and check for the API server process. I see that Docker container exited with error code 1. I check the logs of the container and all there is is a single line with:</p>
<pre><code>Shutting down, got signal: Terminated
</code></pre>
<p>I don’t know where to begin in troubleshooting this as there’s nowhere to start. In a lab environment I just recreate the cluster but I’m afraid this might happen in a production environment.</p>
<p>How can I troubleshoot a kube-apiserver that fails to start like this (with no exit reason besides the code) and that is deployed with kubeadm and such is running in a container?</p>
| <p>I had a similar issue with pods of my kube-system randomly being terminated, preventing the cluster from starting.</p>
<p>I found the solution here : <a href="https://discuss.kubernetes.io/t/why-does-etcd-fail-with-debian-bullseye-kernel/19696" rel="nofollow noreferrer">https://discuss.kubernetes.io/t/why-does-etcd-fail-with-debian-bullseye-kernel/19696</a></p>
<p>TLDR: It was a cgroup driver/version problem. Kubelet had a hard time figuring out what's running versus what should be running and was killing my legitimate pods. The following config fixed the thing :</p>
<pre><code># Content of file /etc/containerd/config.toml
version = 2
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
</code></pre>
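<p>After changing this, restart the runtime and the kubelet on the node (and make sure the kubelet itself is also set to the <code>systemd</code> cgroup driver, which recent kubeadm versions use by default):</p>
<pre><code>sudo systemctl restart containerd
sudo systemctl restart kubelet
</code></pre>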
|
<p><em>SOS</em>
I'm trying to deploy the ELK stack on my Kubernetes cluster.</p>
<p>Elasticsearch, Metricbeat, Filebeat and Kibana are running on Kubernetes, but in Kibana there are <code>no Filebeat index logs</code>.
Kibana is accessible: <strong>URL</strong><a href="http://logging.halykmart.online/" rel="nofollow noreferrer"> here</a>.
Only the <strong><a href="http://logging.halykmart.online/app/kibana#/management/elasticsearch/index_management/home?_g=()" rel="nofollow noreferrer">MetricBeat</a></strong> index is available.</p>
<p><a href="https://i.stack.imgur.com/TuDvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TuDvU.png" alt="enter image description here" /></a></p>
<p><strong>I don't know where the issue is; please help me figure it out.
Any ideas?</strong></p>
<p><strong>Pods</strong>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 21h
es-mono-0 1/1 Running 0 19h
filebeat-4446k 1/1 Running 0 11m
filebeat-fwb57 1/1 Running 0 11m
filebeat-mk5wl 1/1 Running 0 11m
filebeat-pm8xd 1/1 Running 0 11m
kibana-86d8ccc6bb-76bwq 1/1 Running 0 24h
logstash-deployment-8ffbcc994-bcw5n 1/1 Running 0 24h
metricbeat-4s5tx 1/1 Running 0 21h
metricbeat-sgf8h 1/1 Running 0 21h
metricbeat-tfv5d 1/1 Running 0 21h
metricbeat-z8rnm 1/1 Running 0 21h
</code></pre>
<p><strong>SVC</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch LoadBalancer 10.245.83.99 159.223.240.9 9200:31872/TCP,9300:30997/TCP 19h
kibana NodePort 10.245.229.75 <none> 5601:32040/TCP 24h
kibana-external LoadBalancer 10.245.184.232 <pending> 80:31646/TCP 24h
logstash-service ClusterIP 10.245.113.154 <none> 5044/TCP 24h
</code></pre>
<p><strong>Logstash logs <a href="https://github.com/shukurew/logs/blob/main/logs.logstash" rel="nofollow noreferrer">logstash (Raw)</a></strong></p>
<p><strong>filebeat <a href="https://github.com/shukurew/logs/blob/main/logs.filebeat-4446k" rel="nofollow noreferrer">logs (Raw)</a></strong></p>
<p><code>kibana.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elk
labels:
run: kibana
spec:
replicas: 1
selector:
matchLabels:
run: kibana
template:
metadata:
labels:
run: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:6.5.4
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch.elk:9200/
- name: XPACK_SECURITY_ENABLED
value: "true"
#- name: CLUSTER_NAME
# value: elasticsearch
#resources:
# limits:
# cpu: 1000m
# requests:
# cpu: 500m
ports:
- containerPort: 5601
name: http
protocol: TCP
#volumes:
# - name: logtrail-config
# configMap:
# name: logtrail-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elk
labels:
#service: kibana
run: kibana
spec:
type: NodePort
selector:
run: kibana
ports:
- port: 5601
targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana-external
spec:
type: LoadBalancer
selector:
app: kibana
ports:
- name: http
port: 80
targetPort: 5601
</code></pre>
<p><code>filebeat.yaml</code></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elk
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ['logstash-service:5044']
setup.kibana.host: "http://kibana.elk:5601"
setup.kibana.protocol: "http"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: elk
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
        #- name: data
        #  mountPath: /usr/share/filebeat/data
        #  subPath: filebeat/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
#- name: data
# persistentVolumeClaim:
# claimName: elk-pvc
---
</code></pre>
<p><code>Metricbeat.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: elk
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
setup.kibana:
host: "kibana.elk:5601"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
namespace: elk
labels:
k8s-app: metricbeat
data:
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
serviceAccountName: metricbeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.5.4
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: data
mountPath: /usr/share/metricbeat/data
subPath: metricbeat/
volumes:
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-daemonset-modules
- name: data
persistentVolumeClaim:
claimName: elk-pvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: elk
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
</code></pre>
<p><code>Logstash.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: elk
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch.elk:9200"]
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elk
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elk
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
</code></pre>
<p><strong>Full src files<a href="https://github.com/shukurew/ELK-Stack-kubernetes" rel="nofollow noreferrer">(GitHub)</a></strong></p>
| <p>Try using Fluentd to ship the container logs instead; the DaemonSet below reads the container log files on each node and sends them directly to Elasticsearch.
<code>fluentd.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: elk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.elk.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
resources:
limits:
memory: 512Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre>
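<p>A minimal way to roll this out and check that logs start flowing, assuming the manifest above is saved as <code>fluentd.yaml</code> and Elasticsearch is reachable on the LoadBalancer IP shown in the question (this fluentd image typically writes Logstash-style <code>logstash-*</code> indices by default):</p>
<pre><code># deploy the DaemonSet and make sure one fluentd pod is Running per node
kubectl apply -f fluentd.yaml
kubectl -n elk get pods -l app=fluentd

# tail the fluentd pods and watch for a successful connection to Elasticsearch
kubectl -n elk logs -l app=fluentd --tail=20

# new indices should appear alongside metricbeat-*
curl -s 'http://159.223.240.9:9200/_cat/indices?v'
</code></pre>
<p>Once an index shows up, create the matching index pattern in Kibana to see the container logs.</p>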
|