<p><strong>Description</strong>
Trying to deploy the Triton Docker image as a container on a Kubernetes cluster</p>
<p><strong>Triton Information</strong>
What version of Triton are you using? -> 22.10</p>
<p><strong>Are you using the Triton container or did you build it yourself?</strong>
I used the server repo with the following command:</p>
<pre><code>python3 compose.py --backend onnxruntime --backend python --backend tensorflow2 --repoagent checksum --container-version 22.10
</code></pre>
<p>then created a new Triton image with the following Dockerfile:</p>
<pre><code>FROM tritonserver:latest
RUN apt install python3-pip -y
RUN pip install tensorflow==2.7.0
RUN pip install transformers==2.11.0
RUN pip install tritonclient
RUN pip install tritonclient[all]
</code></pre>
<p>and the Dockerfile is built with the following command:</p>
<pre><code>docker build -t customtritonimage -f ./DockerFiles/DockerFile .
</code></pre>
<p><strong>To Reproduce</strong>
directory structure:
parent directory -> tritonnludeployment
files in it -> DockerFiles (folder containing the Dockerfiles), k8_trial.yaml, model_repo_triton (all the models, laid out in the Triton-supported repository structure with the required files)</p>
<p>I am using this 'k8_trial.yaml' file to start the deployment with kubectl:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: flower
  labels:
    app: flower
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flower
  template:
    metadata:
      labels:
        app: flower
    spec:
      volumes:
      - name: models
        hostPath:
          # server: 216.48.183.17
          path: /root/Documents/tritonnludeployment
          # readOnly: false
          type: Directory
      containers:
      - name: flower
        ports:
        - containerPort: 8000
          name: http-triton
        - containerPort: 8001
          name: grpc-triton
        - containerPort: 8002
          name: metrics-triton
        image: "customtritonimage:latest"
        imagePullPolicy: Never
        volumeMounts:
        - mountPath: /root/Documents/tritonnludeployment
          name: models
        command: ["/bin/sh", "-c"]
        args: ["cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models/model_repo_triton --allow-gpu-metrics=false --strict-model-config=false"]
        # resources:
        #   requests:
        #     memory: "500Mi"
        #     cpu: "500Mi"
        #   limits:
        #     memory: "900Mi"
        #     cpu: "900Mi"
        #     nvidia.com/gpu: 1
</code></pre>
<p>Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).</p>
<p><strong>Expected behavior</strong>
kubectl deployment should start, with triton container as one of the pods</p>
<p>Which step am I doing wrong?</p>
| <p>And what is the error message you are getting? Some of the issues I noticed:</p>
<ul>
<li>use the expected file name known to <code>docker</code>, i.e. <code>Dockerfile</code> not <code>DockerFile</code></li>
<li>make sure base image exists (<code>tritonserver:latest</code> does not, you probably want one of <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver/tags" rel="nofollow noreferrer">these</a>)</li>
<li>first update the sources (<code>RUN apt install</code> ... -> <code>RUN apt update && apt install </code>...)</li>
<li>reduce the number of layers by installing multiple Python packages at once</li>
<li><code>tritonclient[all]</code> already includes <code>tritonclient</code></li>
<li>don't run containers as <code>root</code> (<code>tritonserver</code> does not require it anyway)</li>
<li>make sure you pull the image first time (<code>imagePullPolicy</code>: <code>Never</code> -> <code>IfNotPresent</code>)</li>
<li>remove multiple and unnecessary commands from <code>args</code> (such as <code>cd /models</code>)</li>
<li><code>tritonserver</code> can import all subfolders, so <code>--model-repository=/models</code> is probably better</li>
</ul>
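<p>Putting several of these points together, a corrected Dockerfile could look roughly like this. This is only a sketch: it assumes the stock 22.10 NGC image as base; substitute the tag of the image you actually produced with <code>compose.py</code> if that is what you want to extend.</p>
<pre><code># assumption: stock NGC image; replace with your compose.py output tag if needed
FROM nvcr.io/nvidia/tritonserver:22.10-py3
RUN apt update && apt install -y python3-pip
# one layer, and tritonclient[all] already pulls in tritonclient
RUN pip install tensorflow==2.7.0 transformers==2.11.0 "tritonclient[all]"
</code></pre>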
|
<p>I have read about the various ways to run tasks periodically in a K8s cluster, but none of them seem to work well for this specific case. I have a deployment "my-depl" that can run an arbitrary number of pods and the task needs to execute periodically inside each pod (basically a shell command that "nudges" the main application once a week or so).</p>
<p>The Kubernetes Cronjob functionality starts a task in its own container. This K8s task does not know how many pods are currently running for "my-depl" and cannot run anything in those pods. Conceivably, I could run kubectl within this K8s Cronjob, but that seems incredibly hacky and dangerous.</p>
<p>The second alternative would be to have crond (or an alternative tool like <a href="https://github.com/dshearer/jobber" rel="nofollow noreferrer">Jobber</a> or <a href="https://github.com/ess/cronenberg" rel="nofollow noreferrer">Cronenberg</a>) run as part of the pod. But that would mean that two processes are running and the container might not die, if only the cron process dies.</p>
<p>The third option is to run a multi-process container via a special init process like <a href="https://github.com/just-containers/s6-overlay" rel="nofollow noreferrer">s6-overlay</a>. This can be made to die if one of the child processes dies, but it seems fairly involved and hardly a first-class feature.</p>
<p>The fourth option I could think of was "don't do this, it's stupid. Redesign your application so it doesn't need to be 'nudged' once a week". That's a sound suggestion, but a lot of work and I need at least a temporary solution in the meantime.</p>
<p>So, does anyone have a better idea than those detailed here?</p>
| <p>I think the simplest solution is to run <code>crond</code> (or an alternative of your choice) in a sidecar container (that is, another container in the same pod). Recall that all containers in a pod share the same network namespace, so <code>localhost</code> is the same thing for all containers.</p>
<p>This means your cron container can happily run a <code>curl</code> or <code>wget</code> command (or whatever else is necessary) to ping your API over the local port.</p>
<p>For example, something like this, in which our cron task simply runs <code>wget</code> against the web server running in the <code>api</code> container:</p>
<pre><code>apiVersion: v1
data:
  root: |
    * * * * * wget -O /tmp/testfile http://127.0.0.1:8080 2> /tmp/testfile.err
kind: ConfigMap
metadata:
  labels:
    app: cron-example
  name: crontabs-ghm86fgddg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cron-example
  name: cron-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cron-example
  template:
    metadata:
      labels:
        app: cron-example
    spec:
      containers:
      - image: docker.io/alpinelinux/darkhttpd:latest
        name: api
      - command:
        - /bin/sh
        - -c
        - |
          crontab /data/crontabs/root
          exec crond -f -d0
        image: docker.io/alpine:latest
        name: cron
        volumeMounts:
        - mountPath: /data/crontabs
          name: crontabs
      volumes:
      - configMap:
          name: crontabs-ghm86fgddg
        name: crontabs
</code></pre>
|
<p>I've installed the Zalando PostgreSQL Operator v1.8.1 in bare-metal K8s using Helm,</p>
<pre><code>helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator --version 1.8.1
</code></pre>
<p>but I'm not sure how to properly upgrade it to v1.8.2?</p>
<p>There is a documentation but it's too high level for me: <a href="https://github.com/zalando/postgres-operator/blob/master/docs/administrator.md#upgrading-the-operator" rel="nofollow noreferrer">https://github.com/zalando/postgres-operator/blob/master/docs/administrator.md#upgrading-the-operator</a></p>
<p>Could somebody provide step-by-step command line how to upgrade operator using Helm or without Helm?</p>
| <p>Hope it is not too late; I found myself doing this a couple of months ago as well. All you have to do is change the Docker image as explained in the documentation above, more specifically the image used by the operator's <code>deployment</code>. Assuming some basic understanding of Helm: it is pretty common to keep a custom values.yaml file in which you edit the configuration to fit your needs; you just need to pass this file when running the upgrade with the <code>-f</code> flag.</p>
<pre class="lang-yaml prettyprint-override"><code>image:
  registry: registry.opensource.zalan.do
  repository: acid/postgres-operator
  tag: v1.8.2
  pullPolicy: "IfNotPresent"
</code></pre>
<p>and then <code>helm upgrade <your-release> <path-to-your-chart> -f values.yaml -n <your-ns></code></p>
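<p>Concretely, if you installed from the chart repository exactly as in the question, that could look like this sketch (it assumes your overrides live in a file called values.yaml):</p>
<pre><code>helm repo update
helm search repo postgres-operator-charts/postgres-operator --versions   # confirm 1.8.2 is available
helm upgrade postgres-operator postgres-operator-charts/postgres-operator --version 1.8.2 -f values.yaml
</code></pre>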
<p>That should be it (but be sure to double-check the release notes for breaking changes, including changes in the CRDs).</p>
|
<p>I am experiencing a <code>pending-update</code> Helm issue, ie. in CI pipelines we sometimes deploy a particular application just one time after another. The first helm deployment is still in the <code>pending-update</code> state, while the other deployment of the same application (ran eg. 1 min later) cannot be deployed causing a well-known error: <code>Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress</code>.</p>
<p>I know it is a well-known issue and one possible solution could be Helm-related k8s secret deletion (automatically in the CI pipeline) just before the <code>helm upgrade</code> command but is it a good way of solving this? How do you deal with those scenarios in your CI pipelines?</p>
| <p>Yes, you can solve it with a <strong>secret deletion</strong>, but in one of our cases even that did not work.</p>
<p>So I would recommend checking <code>kubectl get events</code>; it will point out the error even when you cannot enable debug logging.</p>
<p>I once faced this because the Helm deployment created a <strong>service</strong> of <code>Type: LoadBalancer</code> and we were out of <strong>quota</strong> for <strong>LoadBalancers</strong> in the <strong>tenancy/account</strong>.</p>
<p>We mostly check the <strong>pod status</strong>, but Helm might be stuck on a <strong>service</strong>, <strong>secret</strong>, <strong>configmap</strong>, etc., so make sure you debug properly instead of guessing.</p>
<p>Another workaround is to <strong>roll back</strong> to the previous release revision, which will change the status from <strong>pending-update</strong> to <strong>deployed</strong>, even if you are deploying for the first time.</p>
<p>If anything is still pending it may get installed, and the release will be marked as deployed, so the next upgrade or install proceeds instead of failing with that error.</p>
<pre><code>helm -n [NAMESPACE] rollback [RELEASE] [REVISION]
</code></pre>
<p>Example</p>
<pre><code>helm -n default rollback service 1
</code></pre>
<p>This will mark your release as <strong>deployed</strong>, as a workaround, if it is stuck in that state.</p>
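<p>For reference, a quick way to inspect what is actually stuck before deciding between a rollback and a secret deletion; this is only a sketch and assumes Helm v3 with the default Secret storage backend (the label names come from that backend):</p>
<pre><code>helm -n <namespace> history <release>                             # shows which revision is stuck in a pending state
kubectl -n <namespace> get secrets -l owner=helm,name=<release>   # one Secret per release revision
helm -n <namespace> rollback <release> <last-deployed-revision>
</code></pre>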
|
<p>We have been experimenting with the number of Ignite server pods to see the impact on performance.</p>
<p>One thing that we have noticed is that if the number of Ignite server pods is increased after client nodes have established communication, the new pod will just fail in a loop with the error below.</p>
<p>If however the grid is destroyed (bringing down all client and server nodes) and then the desired number of server nodes is launched, there are no issues.</p>
<p>Also, the above procedure is not fully dependable for anything other than launching a single Ignite server.</p>
<p>From reading [this Stack Overflow post][1] and [this documentation][2], it looks like the issue may be that we are not launching the "Kubernetes service".</p>
<blockquote>
<p>Ignite's KubernetesIPFinder requires users to configure and deploy a special Kubernetes service that maintains a list of the IP addresses of all the alive Ignite pods (nodes).</p>
</blockquote>
<p>However this is the only documentation I have found and it says that it is no longer current.</p>
<p>Is this information still relevant for Ignite 2.11.1?
If not, is there some more recent documentation?
If this service is indeed needed, are there some more concrete examples and information on setting them up?</p>
<p>Error on new Server pod:</p>
<pre><code>[21:37:55,793][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, addressFilter=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@78422efb], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:281)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:952)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:851)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:721)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with the same ID was found in node IDs history or existing node in topology has the same ID (fix configuration and restart local node) [localNode=TcpDiscoveryNode [id=000e84bb-f587-43a2-a662-c7c6147d2dde, consistentId=8751ef49-db25-4cf9-a38c-26e23a96a3e4, addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 127.0.0.1, fd00:85:4001:5:f831:8cc:cd3:f863%eth0], sockAddrs=HashSet [nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local/fd00:85:4001:5:f831:8cc:cd3:f863:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=0, lastExchangeTime=1676497065109, loc=true, ver=2.11.1#20211220-sha1:eae1147d, isClient=false], existingNode=000e84bb-f587-43a2-a662-c7c6147d2dde]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.duplicateIdError(TcpDiscoverySpi.java:2083)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1201)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
... 13 more
</code></pre>
<p>Server DiscoverySpi Config:</p>
<pre><code><property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                <property name="namespace" value="myNameSpace"/>
                <property name="serviceName" value="myServiceName"/>
            </bean>
        </property>
    </bean>
</property>
</code></pre>
<p>Client DiscoverySpi Configs:</p>
<pre><code><bean id="discoverySpi" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder" ref="ipFinder" />
</bean>

<bean id="ipFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
    <property name="shared" value="false" />
    <property name="addresses">
        <list>
            <value>myServiceName.myNameSpace:47500</value>
        </list>
    </property>
</bean>
</code></pre>
<p>Edit:</p>
<p>I have experimented more with this issue. As long as I do not deploy any clients (using the static TcpDiscoveryVmIpFinder above) I am able to scale up and down server pods without any issue. However as soon as a single client joins I am no longer able to scale the server pods up.</p>
<p>I can see that the server pods have ports 47500 and 47100 open, so I am not sure what the issue is. Does the TcpDiscoveryKubernetesIpFinder still need the port to be specified in the client config?</p>
<p>I have tried to change my client config to use the TcpDiscoveryKubernetesIpFinder below, but I am getting a discovery timeout failure (see below).</p>
<pre><code><property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                <property name="namespace" value="680e5bbc-21b1-5d61-8dfa-6b27be10ede7"/>
                <property name="serviceName" value="nkw-mnomni-ignite-1-1"/>
            </bean>
        </property>
    </bean>
</property>
</code></pre>
<pre><code>24-Feb-2023 14:15:02.450 WARNING [grid-timeout-worker-#22%igniteClientInstance%] org.apache.ignite.logger.java.JavaLogger.warning Thread dump at 2023/02/24 14:15:02 UTC
Thread [name="main", id=1, state=WAITING, blockCnt=78, waitCnt=3]
Lock [object=java.util.concurrent.CountDownLatch$Sync@45296dbd, ownerName=null, ownerId=-1]
at [email protected]/jdk.internal.misc.Unsafe.park(Native Method)
at [email protected]/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
at [email protected]/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230)
at o.a.i.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:324)
at o.a.i.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at o.a.i.i.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
at o.a.i.i.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at o.a.i.i.IgniteKernal.startManager(IgniteKernal.java:1985)
at o.a.i.i.IgniteKernal.start(IgniteKernal.java:1331)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance@57ac9100
at o.a.i.i.IgnitionEx.start0(IgnitionEx.java:1172)
at o.a.i.i.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:952)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:851)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:721)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:690)
at o.a.i.Ignition.start(Ignition.java:353)
</code></pre>
<p>Edit 2:
I also spoke with an admin about opening client side ports in case that was the issue. He indicated that should not be needed as clients should be able to open ephemeral ports to communicate with the server nodes.<br />
[1]: <a href="https://stackoverflow.com/questions/52032669/ignite-not-discoverable-in-kubernetes-cluster-with-tcpdiscoverykubernetesipfinde">Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder</a>
[2]: <a href="https://apacheignite.readme.io/docs/kubernetes-ip-finder" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/kubernetes-ip-finder</a></p>
| <p>It's hard to say precisely what the root cause is, but in general it's something related to the network or domain names resolution.</p>
<p>A public address is assigned to a node on a startup and is exposed to other nodes for communication. Other nodes store that address and nodeId in their history. Here is what is happening: a new node is trying to enter the cluster, it connects to a random node, then this request is transferred to the coordinator. The coordinator issues TcpDiscoveryNodeAddedMessage that must circle across the topology ring and be ACKed by all other nodes. That process didn't finish during a join timeout, so the new node is trying to re-enter the topology by starting the same joining process but with a new ID. But, other nodes see that this address is already registered by another nodeId, causing the original duplicate nodeId error.</p>
<p>Some recommendations:</p>
<ul>
<li><p>If the issue is reproducible on a regular basis, I'd recommend collecting more information by enabling DEBUG logging for the following package:
<code>org.apache.ignite.spi.discovery</code> (discovery-related events tracing)</p>
</li>
<li><p>Take thread dumps from affected nodes (could be done by kill -3). Check for discovery-related issues. Search for "lookupAllHostAddr".</p>
</li>
<li><p>Check that it's not a DNS issue and that all public addresses for your node, such as <code>nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local</code>, resolve quickly. I was asking about the provider because in OpenShift there seems to be a hard limit on DNS resolution time.</p>
</li>
<li><p>Check GC and safepoints.</p>
</li>
<li><p>To hide the underlying issue you can play around with the Ignite configuration: increase the network timeout and join timeout, or reduce the failure detection timeout (see the sketch after this list). But I recommend finding the real root cause instead of treating the symptoms.</p>
</li>
</ul>
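<p>As a sketch of that last point, the timeouts live on the IgniteConfiguration and on the discovery SPI from the question; the values below are arbitrary examples, not recommendations:</p>
<pre><code><bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- example value only; tune failure detection as suggested above -->
    <property name="failureDetectionTimeout" value="10000"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="networkTimeout" value="10000"/>
            <property name="joinTimeout" value="60000"/>
            <!-- ipFinder configuration as in the question -->
        </bean>
    </property>
</bean>
</code></pre>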
|
<p>I want to setup cassandra,elasticsearch and janusgraph on ubuntu 20.04 using kubernetes.</p>
<p>can anyone suggest me links or provide me yaml file for setup?</p>
<p>I have a 3-node Kubernetes cluster (1 master and 2 worker nodes) running Kubernetes 1.26.1.</p>
| <p>You can leverage Operators to create these databases on Kubernetes.</p>
<p><strong>Operators</strong> will manage and create the database for you.</p>
<p><strong>Elasticsearch</strong> - <a href="https://www.elastic.co/downloads/elastic-cloud-kubernetes" rel="nofollow noreferrer">https://www.elastic.co/downloads/elastic-cloud-kubernetes</a></p>
<p><strong>Cassandra</strong> - <a href="https://github.com/instaclustr/cassandra-operator" rel="nofollow noreferrer">https://github.com/instaclustr/cassandra-operator</a></p>
<p><strong>Janusgraph</strong> - <a href="https://github.com/IBM/janusgraph-operator" rel="nofollow noreferrer">https://github.com/IBM/janusgraph-operator</a></p>
<p>You can also leverage the available Helm charts to set up these databases; most operators ship a Helm chart for easy setup.</p>
<p>If you don't want to go with an operator, you can install a plain Helm chart directly, which will create the <strong>Deployment</strong>, <strong>Pods</strong> and <strong>Services</strong> for you, along with <strong>PVCs</strong> and other required objects. The best practice, however, is to go with the <strong>operator</strong>.</p>
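<p>For example, the Elasticsearch operator (ECK) linked above is commonly installed from its Helm chart; a sketch (the chart and namespace names may differ in your environment):</p>
<pre><code>helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
</code></pre>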
|
<p>We have on-premises Kubernetes clusters.</p>
<p>Our data is in S3-compatible cloud storage (e.g. Amazon s3).</p>
<p>We want to mount S3 bucket to Kubernetes pods in ReadWriteMany (RWX) mode via a CSI driver.</p>
<p>What non-dead Kubernetes CSI RWX S3 drivers exist in 2022?</p>
| <p>There is no official AWS support for S3 as a CSI driver.
There is a FUSE-based driver from Yandex Cloud here:</p>
<p><a href="https://github.com/yandex-cloud/k8s-csi-s3" rel="nofollow noreferrer">https://github.com/yandex-cloud/k8s-csi-s3</a></p>
|
<p>This is my first time using GCP, I have always been an AWS person before this.</p>
<p>As a small team without dedicated DevOps, should we separate our prod and sandbox environments into 2 clusters / VPCs under one GCP project, or should we be going through the hassle of entirely separate GCP projects as well?</p>
<p>What is the industry standard here? If we have IAM setup correctly, what would be the con to having them both in the same project?</p>
| <p>A common approach is to use separate projects for staging and production.</p>
<p>With that, you get <strong>isolation</strong> and a clear <strong>separation</strong> between the <strong>staging</strong> and <strong>production</strong> environments. It prevents accidental changes or testing from causing production downtime.</p>
<p><strong>Resource management</strong>: with a project per environment you get a much clearer picture of the resources in each one, including a per-project list of resources and billing details.</p>
<p><strong>Access management</strong>: separate projects also give better access control, as only specific users get access to the prod project, so not everyone can view or update its resources.</p>
<p>But separate projects also come with extra <strong>admin work</strong>, so since you mentioned you are a <strong>small team</strong> without <strong>DevOps</strong>, it may be better to go with a <strong>single project</strong> with <strong>multiple clusters</strong>, managed with proper <strong>labels, networks & IAM roles</strong>.</p>
|
<p>Issue type: Kubernetes on Docker Desktop stopped working due to expired kuber-apiserver certificates</p>
<p>OS Version/build: Windows 10 version - 1909 and OS Build - 18363</p>
<p>App version: Docker Desktop 3.03</p>
<p>Steps to reproduce:</p>
<ol>
<li>Install Docker Desktop</li>
<li>Enable Kubernetes</li>
<li>Change the Windows PC time to ahead by 1 year</li>
<li>Kubernetes cluster will stop working saying the kube-apiserver certificates are expired</li>
<li>The applications/workloads deployed on the Kubernetes cluster will also stop working.</li>
</ol>
<p>As the Kubernetes certificates are issued for 1 year, they expire after that period and this breaks Kubernetes.</p>
<p><a href="https://i.stack.imgur.com/DaaR2.png" rel="nofollow noreferrer">certificate snapshot</a></p>
<p>Need help:
Requesting information on how to renew the kube-apiserver certificates without affecting Kubernetes and the installed applications.</p>
| <p>I also had the issue showing in the etcd logs:</p>
<pre><code>{"level":"warn","ts":"2023-02-14T11:47:26.260Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.65.4:34996","server-name":"","error":"tls: failed to verify client certificate: x509: certificate has expired or is not yet valid: current time 2023-02-14T11:47:26Z is after 2023-02-03T12:24:57Z"}
</code></pre>
<p>I found the answer for windows here:
<a href="https://forums.docker.com/t/kubernetes-on-docker-desktop-fails-to-launch-after-kube-apiserver-certificate-expiry/106570/2" rel="nofollow noreferrer">https://forums.docker.com/t/kubernetes-on-docker-desktop-fails-to-launch-after-kube-apiserver-certificate-expiry/106570/2</a></p>
<p>On macOS, the path is <code>~/Library/Containers/com.docker.docker</code>. Delete or back up the <code>pki</code> folder and restart Docker; Kubernetes should come back.</p>
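<p>As a rough sketch of those macOS steps (backing up rather than deleting, in case you need to restore):</p>
<pre><code># quit Docker Desktop first
cd ~/Library/Containers/com.docker.docker
mv pki pki.backup
# start Docker Desktop again; Kubernetes regenerates the certificates
</code></pre>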
|
<p>I was under the impression that the main point of a ClusterIssuer is that it is cluster-scoped (not namespaced) and doesn't have to be recreated for different resources; in general there could be one main ClusterIssuer that manages all ingresses across the cluster.</p>
<p>From what I am seeing, the ClusterIssuer can only create one secret, and if it is in use by one ingress the second won't be created properly because it is already taken.</p>
<p>Is there any way to create one ClusterIssuer to manage all ingresses across the cluster?</p>
<p>Code included below</p>
<h3>Cluster-issuer.yaml</h3>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-grafana
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-grafana
    solvers:
    - selector:
        dnsZones:
        - "foo.com"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: foo
          accessKeyID: foo
          secretAccessKeySecretRef:
            name: aws-route53-creds
            key: password.txt
</code></pre>
<h3>Ingress.yaml</h3>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: loki
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-grafana
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grafana.foo.com
    secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret.
  rules:
  - host: grafana.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: loki-grafana
            port:
              number: 80
</code></pre>
| <p>@Harsh Manvar, while I do appreciate your answer, I found something that better suits my needs.</p>
<p><a href="https://cert-manager.io/docs/tutorials/syncing-secrets-across-namespaces/" rel="nofollow noreferrer">Cert-manager documentation</a> contains multiple options to sync secrets across namespaces</p>
<p>The one I chose was <a href="https://github.com/emberstack/kubernetes-reflector" rel="nofollow noreferrer">reflector</a>. The installation steps are included in the documentation, but for completeness I'll post them here as well.</p>
<h5>Requirements: <a href="https://helm.sh/docs/helm/helm_install/" rel="nofollow noreferrer">Helm</a></h5>
<h4>Installation:</h4>
<pre><code>helm repo add emberstack https://emberstack.github.io/helm-charts
helm repo update
helm upgrade --install reflector emberstack/reflector
</code></pre>
<h3>Setup:</h3>
<p>Add the following annotation to your secret <code>reflector.v1.k8s.emberstack.com/reflection-allowed: "true"</code>, it should look like the following</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: source-secret
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
</code></pre>
<p>Done! Your secret should be replicated within all namespaces. For multiple ingress configurations within the same namespace you could edit your ingress.yaml like this</p>
<h4>Ingress.yaml</h4>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-global
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - jenkins.foo.com
    - nginx.foo.com
    secretName: letsencrypt-global # < cert-manager will store the created certificate in this secret.
  rules:
  - host: jenkins.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 80
  - host: nginx.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
</code></pre>
|
<p>This is my Dockerfile:</p>
<pre><code>FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY ls .
COPY tail .
COPY test .
COPY manager .
ENTRYPOINT ["/manager"]
</code></pre>
<p>after</p>
<pre><code>[root@master go-docker-test]# docker build -t strangething:v1.13 .
[root@master go-docker-test]# docker run -d strangething:v1.13
[root@master go-docker-test]# docker logs b2
</code></pre>
<p>it shows:</p>
<pre><code>exec /manager: no such file or directory
</code></pre>
<p>I'm pretty sure it is there. I use dive to see it:</p>
<pre><code>[Layers]───────────────────────────────────────────────────────────────────  [● Current Layer Contents]────────────────────────────────────────────────
Cmp   Image ID                    Size  Command                              Permission     UID:GID      Size  Filetree
      sha256:cb60fb9b862c6a89f9 2.3 MB  FROM sha256:cb60fb9b862c6a89f9       drwxr-xr-x         0:0    2.3 MB  ├── .
      sha256:3e884d7c2d4ba9bac6 118 kB  COPY ls . # buildkit                 drwxr-xr-x         0:0       0 B  │   ├── bin
      sha256:e75e9da8f1605f7944  67 kB  COPY tail . # buildkit               drwxr-xr-x         0:0       0 B  │   ├── boot
      sha256:7a0f1970f36a364672 1.8 MB  COPY test . # buildkit               drwxr-xr-x         0:0       0 B  │   ├── dev
      sha256:c9ab59cb1ce11477ca  47 MB  COPY manager . # buildkit            drwxr-xr-x         0:0    220 kB  │   ├── etc
                                                                             drwxr-xr-x 65532:65532       0 B  │   ├── home
[Layer Details]────────────────────────────────────────────────────────────  drwxr-xr-x         0:0       0 B  │   ├── lib
                                                                             drwxr-xr-x         0:0       0 B  │   ├── proc
Digest: sha256:c9ab59cb1ce11477cac4d634bb81cf7316c344b50f01a62a8e5ddcf355d5fecf
                                                                             drwx------         0:0       0 B  │   ├── root
                                                                             drwxr-xr-x         0:0       0 B  │   ├── run
Tar ID: 998c57d00785ccffaf3b308a529c7f816633897097d1ef6519269a8e3c5af59b     drwxr-xr-x         0:0       0 B  │   ├── sbin
Command:                                                                     drwxr-xr-x         0:0       0 B  │   ├── sys
COPY manager . # buildkit                                                    drwxrwxrwx         0:0       0 B  │   ├── tmp
                                                                             drwxr-xr-x         0:0    2.1 MB  │   ├── usr
[Image Details]────────────────────────────────────────────────────────────  drwxr-xr-x         0:0    1.8 kB  │   └── var
                                                                             -rwxr-xr-x         0:0    118 kB  ├── ls
Total Image size: 51 MB                                                      -rwxr-xr-x         0:0     47 MB  ├── manager
Potential wasted space: 0 B                                                  -rwxr-xr-x         0:0     67 kB  ├── tail
Image efficiency score: 100 %                                                -rwxr-xr-x         0:0    1.8 MB  └── test
Count  Total Space  Path
</code></pre>
<p>This file tree confuses me more or less.</p>
<p>Anyway, the strange thing is that <code>ls</code>, <code>tail</code> and <code>manager</code> can't exec, but <code>test</code> can.</p>
<p>after I change <code>ENTRYPOINT ["/manager"]</code> to <code>ENTRYPOINT ["/test"]</code>:</p>
<pre><code>[root@master go-docker-test]# docker logs c11
empty
</code></pre>
<p><code>test</code> is a program built from test.go, which I wrote myself:</p>
<pre><code>package main

import (
    "fmt"
)

type Per struct {
    Name string
    Age  int
}

type Person struct {
    Name string
    Age  int
    Lov  Per
}

func main() {
    var one Person
    one.Name = "abc"
    one.Age = 11

    var two Per
    one.Lov = two

    if one.Lov != (Per{}) {
        fmt.Println("not empty!")
    } else {
        fmt.Println("empty")
    }
}
</code></pre>
| <p>After reading silh's comment and the kubebuilder <code>Dockerfile</code> again, I somehow solved this problem by adding <code>CGO_ENABLED=0</code> to <code>go build</code>. It switches the build from dynamic linking to static linking.</p>
<pre><code>CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
</code></pre>
<p>At least the <code>no such file or directory</code> error is now gone.</p>
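<p>If you want to double-check the result before building the image, something like this on the build host should confirm the binary is statically linked and can therefore run on <code>distroless/static</code>:</p>
<pre><code>file manager   # should report "statically linked"
ldd manager    # should print "not a dynamic executable"
</code></pre>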
|
<p>I am building a new Helm chart (<strong>mychart</strong>) that I'm trying to install.</p>
<p>A <code>values.yaml</code> exists and its contents specify the fullnameOverride:</p>
<pre class="lang-yaml prettyprint-override"><code>fullnameOverride: "myapp"
</code></pre>
<p>I run the following command</p>
<p><code>helm install --dry-run -f "mychart-stack/values.yaml" mychart-stack1 ./mychart-stack</code></p>
<p>And it's giving me the error:</p>
<blockquote>
<p>template: mychart-stack/templates/persistentvolume.local-storage.range.yml:5:14: executing "mychart-stack/templates/persistentvolume.local-storage.range.yml" at <include "mychart-stack.fullname" .>: error calling include: template: mychart-stack/templates/_helpers.tpl:14:14: executing "mychart-stack.fullname" at <.Values.fullnameOverride>: nil pointer evaluating interface {}.fullnameOverride</p>
</blockquote>
<p>The <code>mychart-stack/templates/_helpers.tpl:14:14</code> reference points at the helper that Helm pre-generates when you scaffold an example chart.</p>
<p>The error (14:14) is associated with the first line of the following auto-generated code:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
</code></pre>
<hr />
<p>A little more context, as it's throwing an error while checking the persistentvolume.local-storage.range.yml, here are the contents of the file:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- range .Values.persistentVolume.localStorage }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-{{ include "mychart-stack.fullname" }}-{{ .name }}
spec:
capacity:
storage: 20Gi
# le champ volumeMode requiert l'activation de la "feature gate" Alpha BlockVolume
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage-{{ include "mychart-stack.fullname" }}--{{ .name }}
local:
path: {{ .Values.persistentVolume.basePath }}/{{ .name }}
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- {{ .Values.hostName }}
{{- end }}
</code></pre>
<p>I don't know what's wrong; the error seems to indicate that the value is not defined properly.
I tried running it in --debug mode but it doesn't help (same error).</p>
| <p>In the end the problem wasn't a values.yaml that was set incorrectly, but rather the way it was used within the template.</p>
<p>When using <code>include</code> with a definition coming from a .tpl file (this one was auto-generated by Helm), we must be careful about being inside a range.</p>
<p>I was creating a range of assets, so the included code runs in the context of the range rather than the global scope.</p>
<blockquote>
<p>Your conditional logic is being evaluated inside a range loop. This means the <code>.</code> you're using to access Values is not the one you expect it to be, as it's overridden for each range iteration.</p>
</blockquote>
<p>ref: <a href="https://stackoverflow.com/questions/57475521/ingress-yaml-template-returns-error-in-renderring-nil-pointer-evaluating-int">ingress.yaml template returns error in renderring --> nil pointer evaluating interface {}.service</a></p>
<p>That means that we should use <code>$</code> instead of <code>.</code> notation because it references the global scope.</p>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- include "mychart-stack.fullname" $ }}
</code></pre>
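<p>Applied to the template from the question, that means passing <code>$</code> everywhere the global scope is needed inside the range, roughly like this sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- range .Values.persistentVolume.localStorage }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-{{ include "mychart-stack.fullname" $ }}-{{ .name }}
spec:
  # ... unchanged fields ...
  storageClassName: local-storage-{{ include "mychart-stack.fullname" $ }}--{{ .name }}
  local:
    path: {{ $.Values.persistentVolume.basePath }}/{{ .name }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - {{ $.Values.hostName }}
{{- end }}
</code></pre>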
|
<p>We have created two machine deployments.</p>
<pre><code>kubectl get machinedeployment -A
NAMESPACE NAME REPLICAS AVAILABLE-REPLICAS PROVIDER OS KUBELET AGE
kube-system abc 3 3 hetzner ubuntu 1.24.9 116m
kube-system vnr4jdxd6s-worker-tgl65w 1 1 hetzner ubuntu 1.24.9 13d
</code></pre>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
abc-b6647d7cb-bcprj Ready <none> 62m v1.24.9
abc-b6647d7cb-llsq8 Ready <none> 65m v1.24.9
abc-b6647d7cb-mtlsl Ready <none> 58m v1.24.9
vnr4jdxd6s-worker-tgl65w-59ff7fc46c-d9tm6 Ready <none> 13d v1.24.9
</code></pre>
<p>We know that we can add a label to a specific node</p>
<pre><code>kubectl label nodes abc-b6647d7cb-bcprj key=value
</code></pre>
<p>But our nodes are autoscaled.
We would like to install, for example, MariaDB Galera only on the nodes of a specific machinedeployment.
Is it somehow possible to annotate all nodes belonging to a particular machinedeployment?</p>
| <p>To annotate all nodes of a particular machinedeployment, you can use the <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/" rel="nofollow noreferrer">kubectl annotate</a> command with a label selector that matches only that machinedeployment's nodes. For example, assuming those nodes carry a label such as <code>deployment=nginx-deployment</code>, you can run the following command:</p>
<pre><code>kubectl annotate nodes -l deployment=nginx-deployment key=value
</code></pre>
<p>This will annotate all nodes carrying that label with the specified key-value pair.</p>
<p>For more information follow this <a href="https://www.kubermatic.com/blog/annotating-machine-deployment-for-autoscaling/" rel="nofollow noreferrer">blog by Seyi Ewegbemi</a>.</p>
|
<p>When a ClusterIP is internal to the cluster network, why am I able to ping one of the ClusterIP services from the host/node where Kubernetes is installed? Have a look at IP 10.101.210.88, which is a ClusterIP: by definition it should be reachable only from other pods, but I can still reach it from my Ubuntu host/node machine.</p>
<pre><code>/root#kgs
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16d
ricinfra service-tiller-ricxapp ClusterIP 10.98.94.194 <none> 44134/TCP 7d7h
ricplt aux-entry ClusterIP 10.105.149.143 <none> 80/TCP,443/TCP 7d7h
ricplt r4-influxdb-influxdb2 ClusterIP 10.110.14.243 <none> 80/TCP 7d7h
ricplt r4-infrastructure-kong-proxy NodePort 10.107.12.178 <none> 32080:32080/TCP,32443:32443/TCP 7d7h
ricplt r4-infrastructure-prometheus-alertmanager ClusterIP 10.104.86.76 <none> 80/TCP 7d7h
ricplt r4-infrastructure-prometheus-server ClusterIP 10.102.224.176 <none> 80/TCP 7d7h
ricplt service-ricplt-a1mediator-http ClusterIP 10.105.45.1 <none> 10000/TCP 7d7h
ricplt service-ricplt-a1mediator-rmr ClusterIP 10.108.188.147 <none> 4561/TCP,4562/TCP 7d7h
ricplt service-ricplt-alarmmanager-http ClusterIP 10.111.239.130 <none> 8080/TCP 7d7h
ricplt service-ricplt-alarmmanager-rmr ClusterIP 10.106.30.195 <none> 4560/TCP,4561/TCP 7d7h
ricplt service-ricplt-appmgr-http ClusterIP 10.110.110.91 <none> 8080/TCP 7d7h
ricplt service-ricplt-appmgr-rmr ClusterIP 10.110.96.28 <none> 4561/TCP,4560/TCP 7d7h
ricplt service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 7d7h
ricplt service-ricplt-e2mgr-http ClusterIP 10.101.210.88 <none> 3800/TCP 7d7h
ricplt service-ricplt-e2mgr-rmr ClusterIP 10.101.245.34 <none> 4561/TCP,3801/TCP 7d7h
ricplt service-ricplt-e2term-prometheus-alpha ClusterIP 10.97.95.213 <none> 8088/TCP 7d7h
ricplt service-ricplt-e2term-rmr-alpha ClusterIP 10.100.36.142 <none> 4561/TCP,38000/TCP 7d7h
ricplt service-ricplt-e2term-sctp-alpha NodePort 10.108.215.136 <none> 36422:32222/SCTP 7d7h
ricplt service-ricplt-o1mediator-http ClusterIP 10.96.196.67 <none> 9001/TCP,8080/TCP,3000/TCP 7d7h
ricplt service-ricplt-o1mediator-tcp-netconf NodePort 10.104.237.252 <none> 830:30830/TCP 7d7h
ricplt service-ricplt-rtmgr-http ClusterIP 10.105.27.42 <none> 3800/TCP 7d7h
ricplt service-ricplt-rtmgr-rmr ClusterIP 10.110.0.158 <none> 4561/TCP,4560/TCP 7d7h
ricplt service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 7d7h
ricplt service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 7d7h
ricplt service-ricplt-vespamgr-http ClusterIP 10.98.139.191 <none> 8080/TCP,9095/TCP 7d7h
ricxapp aux-entry ClusterIP 10.99.152.66 <none> 80/TCP,443/TCP 7d7h
ricxapp service-ricxapp-bouncer-xapp-http ClusterIP 10.99.222.68 <none> 8080/TCP 25h
ricxapp service-ricxapp-bouncer-xapp-rmr ClusterIP 10.97.149.3 <none> 4560/TCP,4561/TCP 25h
ricxapp service-ricxapp-example-http ClusterIP 10.101.14.166 <none> 8080/TCP 2d11h
ricxapp service-ricxapp-example-rmr ClusterIP 10.97.129.85 <none> 4560/TCP,4561/TCP 2d11h
/root#
/root#
/root#ping 10.101.210.88 --------> this is a cluster ip and i am pinging it from my host/node machine
PING 10.101.210.88 (10.101.210.88) 56(84) bytes of data.
64 bytes from 10.101.210.88: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 10.101.210.88: icmp_seq=2 ttl=64 time=0.032 ms
^C
--- 10.101.210.88 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1019ms
rtt min/avg/max/mdev = 0.032/0.046/0.061/0.014 ms
/root#
</code></pre>
<p>thanks</p>
| <p>Yes, you will be able to access the <strong>ClusterIP</strong> from <strong>Kubernetes</strong> worker nodes.</p>
<p>When you create a Service of type <strong>ClusterIP</strong> in Kubernetes, it assigns a <strong>virtual IP</strong> to the <strong>Service</strong>; that virtual IP address is accessible only within the K8s cluster.</p>
<p>Kubernetes uses <strong>iptables</strong> <strong>rules</strong> to forward traffic from the <strong>virtual IP</strong> to the pods the Service routes traffic to.</p>
<p>Kubernetes nodes are part of the cluster, so they are able to access the virtual IP. The <strong>ClusterIP</strong> is therefore reachable from any <strong>worker node</strong> inside the cluster, but you won't be able to access it from <strong>outside</strong> the cluster.</p>
<p>If you really want to dig deeper into the bridge, IP assignment and forwarding, you can refer to this nice article: <a href="https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/" rel="nofollow noreferrer">https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/</a></p>
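<p>If you want to see this on the node itself, the programmed rules can be inspected directly. A sketch (the <code>KUBE-SVC-...</code> chain name is a placeholder you read from the first command's output, and the second block only applies if kube-proxy runs in IPVS mode):</p>
<pre><code># iptables mode: list the NAT rules for this ClusterIP, then follow the matching chain
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.101.210.88
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n

# IPVS mode: the ClusterIP shows up as a virtual server with the pod IPs as real servers
sudo ipvsadm -Ln | grep -A 3 10.101.210.88
</code></pre>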
|
<p>I have a problem with a Kubernetes service. My service only sends requests to one pod, ignoring the other pods. I don't know why, and I don't know how to debug it. It should distribute requests in a round-robin way. It seems to me that something is wrong with the service, but I don't know how to debug it. Outputs of kubectl describe for the service and pod, along with the endpoints, are below.</p>
<pre><code>
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30002
  selector:
    app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web-app
        image: webimage
        ports:
        - containerPort: 80
        imagePullPolicy: Never
        resources:
          limits:
            cpu: "0.5"
          requests:
            cpu: "0.5"

Name: web-svc
Namespace: default
Labels: <none>
Annotations: Selector: app=webpod
Type: NodePort
IP: 10.111.23.112
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30002/TCP
Endpoints: 10.244.1.7:80,10.244.1.8:80,10.244.1.9:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 172.18.0.3:6443
Session Affinity: None
Events: <none>
Name: web-depl-5c87b748f-kvtqr
Namespace: default
Priority: 0
Node: kind-worker/172.18.0.2
Start Time: Mon, 04 May 2020 04:20:34 +0000
Labels: app=webpod
pod-template-hash=5c87b748f
Annotations: <none>
Status: Running
IP: 10.244.1.8
IPs:
IP: 10.244.1.8
Controlled By: ReplicaSet/web-depl-5c87b748f
Containers:
web:
Container ID: containerd://8b431d80fd729c8b0d7e16fa898ad860d1a223b3e191367a68e3b65e330fe61a
Image: web
Image ID: sha256:16a4c5d1a652b1accbacc75807abc1d9a05e2be38115dc8a5f369a04a439fad2
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 04 May 2020 04:20:36 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 500m
Requests:
cpu: 500m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c9tgf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-c9tgf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c9tgf
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
=========
Name: iweblens-svc
Namespace: default
Labels: <none>
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-05-04T04:20:36Z
Subsets:
Addresses: 10.244.1.7,10.244.1.8,10.244.1.9
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 80 TCP
Events: <none>
</code></pre>
| <p>Is the client using a persistent/long-lived connection? Because the service endpoint will only distribute the new connections in a round-robin manner as per your setting. Kubernetes doesn't offer any built-in mechanism to load balance long-lived connections. For long-lived connections, you can handle the load balancing on the client side or use a reverse proxy (service mesh/traefik ingress) which can take care of the load balancing responsibility.</p>
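<p>A quick way to check whether connection reuse is the culprit: issue several independent requests (each <code>curl</code> invocation opens and closes its own TCP connection) and then look at which pods served them. A sketch using the label and NodePort from the manifests above (adjust the node IP and whatever your app logs):</p>
<pre><code>for i in $(seq 1 10); do
  curl -s http://<node-ip>:30002/ > /dev/null
done
kubectl logs -l app=web --prefix --tail=5
</code></pre>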
|
<p>I have the following directory structure:</p>
<pre><code>templates/
    a.tmpl
services/
    service.go
main.go
</code></pre>
<p>Now inside the <code>service.go</code> file i am calling the below function:</p>
<pre><code>dir, err := filepath.Abs(filepath.Dir("./templates/"))
if err != nil {
    return nil, err
}
baseFile := filepath.Join(dir, "a.tmpl")
tmpl, err := template.New("base").ParseFiles(baseFile)
</code></pre>
<p>Now the above code parses my <code>a.tmpl</code> file as expected.</p>
<p>But once this service is running on Docker and Kubernetes, I am no longer able to open the file, since the file does not exist.</p>
<p>Why is that?</p>
<p>UPDATE:</p>
<pre><code>FROM golang:1.16-buster AS builder
# Copy the code from the host and compile it
WORKDIR $GOPATH/src/github.com/me/report
COPY . ./
# pack templates to binary
RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app .
FROM xyz.amazonaws.com/common/platform/base:latest as prod
COPY --from=builder /app ./
ADD ./migrations /migrations
ENTRYPOINT ["/app"]
</code></pre>
| <p>Just copy the templates folder from the builder, like:</p>
<pre><code>COPY --from=builder /app ./
ADD ./migrations /migrations
COPY --from=builder ./templates /templates
</code></pre>
<p>or add the folder like:</p>
<pre><code>COPY --from=builder /app ./
ADD ./migrations /migrations
ADD ./templates /templates
</code></pre>
|
<p>I have the following directory structure:</p>
<pre><code>templates/
    a.tmpl
services/
    service.go
main.go
</code></pre>
<p>Now inside the <code>service.go</code> file i am calling the below function:</p>
<pre><code>dir, err := filepath.Abs(filepath.Dir("./templates/"))
if err != nil {
    return nil, err
}
baseFile := filepath.Join(dir, "a.tmpl")
tmpl, err := template.New("base").ParseFiles(baseFile)
</code></pre>
<p>Now the above code parses my <code>a.tmpl</code> file as expected.</p>
<p>But once this service is running on Docker and Kubernetes, I am no longer able to open the file, since the file does not exist.</p>
<p>Why is that?</p>
<p>UPDATE:</p>
<pre><code>FROM golang:1.16-buster AS builder
# Copy the code from the host and compile it
WORKDIR $GOPATH/src/github.com/me/report
COPY . ./
# pack templates to binary
RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app .
FROM xyz.amazonaws.com/common/platform/base:latest as prod
COPY --from=builder /app ./
ADD ./migrations /migrations
ENTRYPOINT ["/app"]
</code></pre>
| <p>When you <a href="https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependencies" rel="nofollow noreferrer">build</a> your binary, Go only includes the Go files needed to make your program work. It does not know that your <code>templates</code> directory is necessary for running the program.</p>
<p>There are several solutions to your problem:</p>
<ul>
<li>Create an environment variable pointing to where the templates are and use it at runtime.</li>
<li>Embed the <code>templates</code> directory into your binary using the <a href="https://golang.google.cn/pkg/embed/" rel="nofollow noreferrer">embed package</a> so that you can access the files at runtime (a sketch follows below).</li>
</ul>
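<p>A minimal sketch of the second option using <code>go:embed</code> (this assumes Go 1.16+, which the builder image satisfies, and that the file containing the directive sits next to the <code>templates/</code> directory, since embed paths are resolved relative to that file's package directory):</p>
<pre><code>package main

import (
    "embed"
    "os"
    "text/template"
)

// The embed pattern is relative to this source file's directory.
//
//go:embed templates/*.tmpl
var templateFS embed.FS

func main() {
    // The template files are compiled into the binary, so nothing extra
    // has to be copied into the runtime image.
    tmpl, err := template.New("base").ParseFS(templateFS, "templates/a.tmpl")
    if err != nil {
        panic(err)
    }
    // ParseFS registers templates by base file name.
    if err := tmpl.ExecuteTemplate(os.Stdout, "a.tmpl", nil); err != nil {
        panic(err)
    }
}
</code></pre>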
|
<p>We are planning to move from on-prem to GCP, but the design process is taking some time. We want to deploy our Docker containers on GCE VMs. We don't want to manually increase the storage every time the containers fill up the disk; we know this can be done with autoscaling, but we don't have enough expertise in it. So we found out that a GCS bucket can be mounted on the VM and we can run our containers on the mounted path, but when we try touching files or running containers we get a permission denied error. Can anyone help us resolve it? We tried many docs and tutorials but they are a little bit confusing.</p>
<p>We used gcsfuse to mount the bucket on the GCE VM.</p>
| <p>In our case we were using gcsfuse and mounted the bucket on the GCE instance at /root/bucketmount. Since the bucket was mounted under the root filesystem, I had to use the escalated-privilege option, which is not recommended, so we instead changed the ownership of the folder using the chown command. This helped us, so in our pipeline we created 3 steps:</p>
<p>1. create a GCS bucket and mount it on the VM
2. check the permissions of the mounted path and update them
3. deploy the Docker containers.</p>
<p>As of now the path is hardcoded; I want to make it randomised. I am verifying options and will let you know if I succeed.</p>
<p><strong>Update</strong></p>
<p>I have created a script which is triggered by an n8n webhook and creates a random name for the mount directory; the same script writes the directory name to a CSV file, from which my CI/CD fetches the details and deploys the containers.</p>
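<p>For reference, a minimal sketch of this kind of mount (the bucket name, mount path and uid/gid are placeholders; the <code>--uid</code>/<code>--gid</code> flags make gcsfuse present the files as owned by the container user, so no privilege escalation is needed):</p>
<pre><code>mkdir -p /mnt/bucketmount
chown 1000:1000 /mnt/bucketmount        # owner of the (still empty) mount point
gcsfuse --implicit-dirs --uid 1000 --gid 1000 -o allow_other my-bucket /mnt/bucketmount
</code></pre>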
|
<p>I have a question regarding ServiceEntry in Istio.
As I can see in the guides, we can use <a href="https://istio.io/latest/docs/reference/config/networking/sidecar/#WorkloadSelector" rel="nofollow noreferrer">workloadSelector</a> to select one or more Kubernetes pods of a MESH_INTERNAL service and direct traffic to them.
Does that mean we can route traffic directly to a Kubernetes pod without using a Kubernetes Service?</p>
<p>Let's say for example that meshservice1 wants to call meshservice2.</p>
<p>The below are the labels of meshservice2 pods.</p>
<pre><code>kind: Pod
labels:
  app.name: meshservice2
  name: meshservice2
<p>and below is its service-entry</p>
<pre><code>kind: ServiceEntry
metadata:
  labels:
    app.name: meshservice2
spec:
  hosts:
  - meshservice2.test
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: STATIC
  workloadSelector:
    labels:
      app.name: meshservice2
</code></pre>
<p>Are the above two enough to call meshservice2.test successfully from meshservice1 pods without defining a Kubernetes Service for meshservice2?</p>
<p>I tried it that way; it connects to the other service but I receive a 503 response code.
When I add a Kubernetes Service for meshservice2, everything works as expected.</p>
<p>I am just wondering if I can safely drop the Kubernetes Services from my app for mesh-internal calls.</p>
| <p>I don't think that is how it works. The meshservice1 sidecar needs to know where to contact meshservice2, and it uses the Kubernetes Service to get the latest endpoint IPs of meshservice2.</p>
<p>Alternatively, you can use spec.endpoints, but then you need to update it every time the Pod IP changes.</p>
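<p>A rough sketch of that <code>spec.endpoints</code> variant (the pod IP is a placeholder, which is exactly why this is painful to maintain):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: meshservice2
spec:
  hosts:
  - meshservice2.test
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 80
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.244.1.23   # a pod IP; must be updated whenever the pod is rescheduled
</code></pre>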
|
<p>I'm trying to pull my Docker Image from my private Docker Registry in Kubernetes, but I got this error: ImagePullBackOff</p>
<pre><code>NAME READY STATUS RESTARTS AGE
nginx-994fc8fb7-f24sv 2/2 Running 0 2d22h
portals-app-669b654d87-lk258 0/1 ImagePullBackOff 0 66m
portals-app-669b654d87-p87c6 0/1 ImagePullBackOff 0 67m
portals-app-7775d445-c5762 0/1 ImagePullBackOff 0 66m
</code></pre>
<p>So I used describe command to view the error detail, here is the error:</p>
<pre><code>Name: portals-app-669b654d87-lk258
Namespace: default
Priority: 0
Service Account: default
Node: client-portal-nodepool-qjfch/10.127.0.2
Start Time: Sat, 25 Feb 2023 20:30:56 +1100
Labels: app=app
pod-template-hash=669b654d87
Annotations: <none>
Status: Pending
IP: 10.244.0.60
IPs:
IP: 10.244.0.60
Controlled By: ReplicaSet/portals-app-669b654d87
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-s2j6z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 2m34s (x284 over 67m) kubelet Back-off pulling image
"xichen9718/portals_docker_repository:latest"
</code></pre>
<p>I feel this error message is not that clear, and I guess it might have something to do with the Docker private registry authentication. So I created an individual Pod. This is my Pod YAML file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: xichen9718/portals_docker_repository:latest
  imagePullSecrets:
  - name: regcred
</code></pre>
<p>and I ran the describe command again; this time I got this:</p>
<pre><code>Name: private-reg
Namespace: default
Priority: 0
Service Account: default
Node: client-portal-nodepool-qjfch/10.127.0.2
Start Time: Sat, 25 Feb 2023 21:15:52 +1100
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.79
IPs:
IP: 10.244.0.79
Containers:
private-reg-container:
Container ID:
Image: xichen9718/portals_docker_repository:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gvvj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-7gvvj:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/private-reg to client-portal-nodepool-qjfch
Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Warning Failed 28s (x2 over 46s) kubelet Error: ErrImagePull
Normal BackOff 15s (x2 over 45s) kubelet Back-off pulling image "xichen9718/portals_docker_repository:latest"
Warning Failed 15s (x2 over 45s) kubelet Error: ImagePullBackOff
Normal Pulling 2s (x3 over 49s) kubelet Pulling image "xichen9718/portals_docker_repository:latest"
</code></pre>
<p>But I think I set the secret successfully: when I run
<code>kubectl get secret regcred --output=yaml</code>,
I can view my .dockerconfigjson and other data. I also tried to pull the image locally and it works, so the image name and tag are correct.
I'm really confused now, can anybody help me with this please?
Thanks a lot.</p>
<p>Update:</p>
<p>I deleted my secret, deployment and pod and re-created them, and then it worked. I guess it had something to do with the order in which I created things.</p>
| <p>This:</p>
<pre><code>spec:
containers:
- name: private-reg-container
image: xichen9718/portals_docker_repository:latest
</code></pre>
<p>Is pulling from docker.io and not a private repo.</p>
<p>Also, this:</p>
<pre><code> Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
</code></pre>
<p>Says authorization failed, presumably because you're trying to authenticate against Docker Hub and not your private repo.</p>
<p>If you are using a private repo, you need to include the hostname of your repo in the image, e.g.</p>
<pre><code>spec:
containers:
- name: private-reg-container
image: myprivaterepo.com/xichen9718/portals_docker_repository:latest
</code></pre>
<p>If you don't, it will assume Docker Hub.</p>
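<p>For completeness, the <code>regcred</code> secret also has to be created against the same registry host that appears in the image name. A typical way to (re)create it looks roughly like this; the server, username, password and email are placeholders:</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=myprivaterepo.com \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>
<p>If the repository is actually a private repository on Docker Hub, use <code>https://index.docker.io/v1/</code> as the server instead.</p>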
|
<p>I have this query:</p>
<pre><code>100 * (1 - ((avg_over_time(node_memory_MemFree_bytes[10m]) + avg_over_time(node_memory_Cached_bytes[10m]) + avg_over_time(node_memory_Buffers_bytes[10m])) / avg_over_time(node_memory_MemTotal_bytes[10m])))
</code></pre>
<p>However it only returns the namespace where Prometheus is installed:</p>
<pre><code>{instance="10.240.0.11:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"}
5.58905365516873
{instance="10.240.0.11:9100", job="node-exporter"}
5.588556605118522
{instance="10.240.0.42:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"}
5.093870850709847
{instance="10.240.0.42:9100", job="node-exporter"}
5.09401539556571
{instance="10.240.0.90:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"}
5.103046564234582
{instance="10.240.0.90:9100", job="node-exporter"}
</code></pre>
<p>Is it possible to have a similar query that queries the entire cluster, all nodes and namespaces? If yes, how?</p>
<p>With the node-exporter installed as a DaemonSet, you have node metrics for the entire cluster.</p>
<p>To have the overall cluster memory usage, in percentage:</p>
<pre><code>100 * (
sum(node_memory_MemTotal_bytes{service="node-exporter"}) -
sum(node_memory_MemAvailable_bytes{service="node-exporter"})
) / sum(node_memory_MemTotal_bytes{service="node-exporter"})
</code></pre>
<p>Result, for example:</p>
<pre><code>{} 37.234674067149946
</code></pre>
<p>Memory usage by node:</p>
<pre><code>100 * (
sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"}) -
sum by (instance) (node_memory_MemAvailable_bytes{service="node-exporter"})
) / sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"})
</code></pre>
<p>Result, for example:</p>
<pre><code>{instance="x.x.x.x:9100"} 42.51742364002058
{instance="y.y.y.y:9100"} 38.26956501095188
{instance="z.z.z.z:9100"} 36.57150031634585
</code></pre>
<p>Memory usage for a specific namespace:</p>
<pre><code>100 * sum(container_memory_working_set_bytes{namespace="my-namespace"}) /
sum(node_memory_MemTotal_bytes)
</code></pre>
<p>Result, for example:</p>
<pre><code>{} 4.212481093513011
</code></pre>
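<p>If you also want the per-namespace breakdown across the whole cluster in a single query, something along these lines should work, assuming the cAdvisor metrics carry the usual <code>namespace</code> label in your setup:</p>
<pre><code>100 * sum by (namespace) (container_memory_working_set_bytes{container!=""})
  / scalar(sum(node_memory_MemTotal_bytes{service="node-exporter"}))
</code></pre>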
|
<p>I have set up the Kubernetes MongoDB operator according to this guide: <a href="https://adamtheautomator.com/mongodb-kubernetes/" rel="nofollow noreferrer">https://adamtheautomator.com/mongodb-kubernetes/</a> and it works well. However, when I try to update the MongoDB version to 6.0.4, I get the following error:</p>
<pre><code>{
"error":"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR:
Location4926900: Invalid featureCompatibilityVersion document in admin.system.version:
{ _id: \"featureCompatibilityVersion\", version: \"4.4\" }.
See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.
:: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0.
See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.).
If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at
https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures."}
</code></pre>
<p>I have followed this guide: <a href="https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#upgrade-your-mongodbcommunity-resource-version-and-feature-compatibility-version" rel="nofollow noreferrer">https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#upgrade-your-mongodbcommunity-resource-version-and-feature-compatibility-version</a></p>
<p>This means that my <code>config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml</code> file looks like this:</p>
<pre><code>apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mdb0
spec:
members: 2
type: ReplicaSet
version: "6.0.4"
featureCompatibilityVersion: "6.0"
security:
...
</code></pre>
<p>The rest is set according to the linked guide (in the first link above).</p>
<p>The error that is thrown suggests that, for whatever reason, the <code>featureCompatibilityVersion</code> field is ignored, even though I have explicitly set it to <code>"6.0"</code>. However, since the documentation clearly states that this is a possible configuration, this shouldn't be the case. My question then is: am I doing something wrong, or is this a bug?</p>
| <p>After a couple of days' research, I managed to find a way to do this, and it is annoyingly simple...</p>
<p>The key to all of this lies in the documentation <a href="https://www.mongodb.com/docs/manual/reference/command/setFeatureCompatibilityVersion/#default-values" rel="nofollow noreferrer">here</a>. Basically, in order to update from mongo 4.4.0 to 6.0.4, you need to do it in steps:</p>
<p>First, change the mongo version from "4.4.0" to e.g. "5.0.4", whilst setting the featureCompatibilityVersion to "5.0":</p>
<pre><code>apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mdb0
spec:
version: "5.0.4"
featureCompatibilityVersion: "5.0"
...
</code></pre>
<p>After having applied this, verify that the featureCompatibilityVersion is indeed 5.0 and that all MongoDB pods are "5.0.4". If the MongoDB pods aren't "5.0.4", you need to restart the service (See "<strong>Restarting everything</strong>" below). You can now run the second step:</p>
<p>Update the mongo version to "6.0.4" and the featureCompatibilityVersion to "6.0":</p>
<pre><code>apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mdb0
spec:
version: "6.0.4"
featureCompatibilityVersion: "6.0"
...
</code></pre>
<p>Apply this change and verify that the featureCompatibilityVersion is indeed 6.0, and that all MongoDB pods are "6.0.4". Once again, if the pods aren't "6.0.4", <strong>Restart everything</strong> according to the procedure below.</p>
<hr />
<h3>Checking feature compatibility version</h3>
The easiest way to do this is to:
<ol>
<li>Port-forward the mongodb connection to your host: <code>kubectl port-forward service/mdb0-svc -n mongodb 27017:27017</code> (according to the guide).</li>
<li>Install mongosh on your host (if you haven't done so already).</li>
<li>Run the following query: <code>mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})'</code> (if you're using the same credentials as the guide).</li>
</ol>
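<p>Normally the operator applies the value from the CRD for you, but if the reported version ever lags behind what you set, it can also be set manually against the replica set (same credentials and port-forward as above; adjust the version string for the step you are on):</p>
<pre><code>mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({setFeatureCompatibilityVersion: "5.0"})'
</code></pre>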
<h3>Restarting everything</h3>
During my development process, I had multiple occasions where I had to restart everything. Here's my way of doing that:
<ol>
<li>Delete the config's resources: <code>kubectl delete -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb </code>.</li>
<li>Whilst (1) is pending, execute the following lines of code to make (1) being able to finish:</li>
</ol>
<pre><code>kubectl patch pv data-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv data-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv data-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
kubectl patch pv logs-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb
</code></pre>
<ol start="3">
<li>Run the following lines of code:</li>
</ol>
<pre><code>kubectl delete deployments.apps mongodb-kubernetes-operator -n mongodb
kubectl delete crd mongodbcommunity.mongodbcommunity.mongodb.com
kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
kubectl apply -k config/rbac/ -n mongodb
kubectl create -f config/manager/manager.yaml -n mongodb
kubectl apply -f new-user.yaml -n mongodb
kubectl apply -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb
</code></pre>
|
<p>I am using Google Kubernetes Engine and some of my deployments are more important than others (For example, the staging environment can run on less replicas or even stop if needed).</p>
<p>I want to dynamically change the amount of replicas of a deployment depending on the available resources, especially memory.</p>
<p>Ideally, I would set a default number of replicas which is used normally and if the cluster is running low on resources, it should reduce the number of replicas of that deployment.</p>
<p>This should happen for some deployments but not all of them.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-deployment
name: my-deployment
namespace: my-namespace
spec:
replicas: 3 # This should be lower if the cluster is running on low resources (memory or CPU)
selector:
matchLabels:
app: my-deployment
template:
metadata:
creationTimestamp: null
labels:
app: my-deployment
spec:
containers:
- image: my/image:version
name: my-deployment
</code></pre>
<p>It should even be possible to reduce the number of replicas down to 0 for some deployments but not for others.</p>
<p>Note that my deployments are distributed across multiple namespaces (if that matters).</p>
| <blockquote>
<p>I want to dynamically change the amount of replicas of a deployment
depending on the available resources, especially memory.</p>
</blockquote>
<p>You can use the Kubernetes HPA (Horizontal Pod Autoscaler), which dynamically changes the number of replicas based on <strong>CPU/Memory utilization</strong>.</p>
<p>You can also set a different baseline per environment: for <strong>dev</strong> a minimum of <strong>1</strong> replica scaling to <strong>3</strong>, for <strong>staging</strong> a minimum of <strong>3</strong> scaling up to <strong>5</strong>, and for <strong>prod</strong> a minimum of <strong>4</strong> scaling to <strong>10</strong>, and so on.</p>
<p>Read more about the HPA : <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
<p><strong>Example</strong></p>
<pre><code>kubectl autoscale deployment <Deployment-name> --cpu-percent=50 --min=1 --max=10
</code></pre>
<p>Example ref : <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
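<p>If you prefer a declarative version, a minimal HPA manifest for the deployment from the question could look like the sketch below. It assumes metrics-server is installed and that the containers have resource requests set; the thresholds and replica counts are just examples:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70
</code></pre>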
<p>When there is less traffic, the HPA <strong>scales down</strong> automatically, and based on CPU usage it can <strong>scale up</strong> pods again without user input. Note that the standard HPA does not scale all the way down to zero; for scale-to-zero you would need an event-driven autoscaler such as KEDA.</p>
<p>If you are not looking for autoscaling and only need per-environment templating, ignore the above and use <strong>Skaffold</strong> or <strong>Helm</strong> to manage the <strong>YAML</strong> templating dynamically.</p>
<p><strong>Deployment.yaml</strong></p>
<pre><code>replicas: {{ .Values.replicas}}
</code></pre>
<p><strong>values-dev.yaml</strong></p>
<pre><code>replicas=3
</code></pre>
<p><strong>values-staging.yaml</strong></p>
<pre><code>replicas=5
</code></pre>
<p>based on the environment you can pass the <strong>values-*.yaml</strong> replica to helm and it will create the template for you.</p>
|
<p>I want to run some gpu workloads on my bare metal k8s cluster. So I have installed the nvidia containerd runtime engine on my cluster. But the cilium cni pods crashes when I make nvidia the default runtime. (I'll post about that some other place)</p>
<p>I'm thinking I should be able to work around this problem by scheduling only the gpu pods on the nvidia runtime and leave runc as the default. Is it possible to specify different runtime engines for different workloads? Is this a good workaround? If so, how do I configure it?</p>
<p>This is how I've install the nvidia drivers and containerd runtime <a href="https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm</a></p>
<p>I found this documentation, but it's a little dry <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/runtime-class/</a></p>
| <p>well... I feel dumb for not reading the docs more closely. Here I am to answer my own question.</p>
<ol>
<li>create a RuntimeClass like this:</li>
</ol>
<pre><code>kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: nvidia
handler: nvidia
</code></pre>
<ol start="2">
<li>add <code>runtimeClassName: nvidia</code> to the pod spec of any pods whose containers you want to run with the nvidia containerd engine (see the example after this list).</li>
</ol>
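<p>For reference, a minimal pod using the nvidia runtime class could look like this (the image tag and GPU request are just examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: nvidia
  containers:
    - name: cuda
      image: nvidia/cuda:12.0.0-base-ubuntu22.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
</code></pre>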
<p>That's all. It just works.</p>
|
<p>I tried the following but failed to get a list of all pods like these:</p>
<pre><code>my-pod-1
my-pod-19
my-pod-24
</code></pre>
<p>etc.</p>
<pre><code>kgp | grep -E my-pod-[0-9]+
kgp | grep -E my-pod-[0-9]
kgp | grep -i -E my-pod-[0-9]
kgp | grep -i -E "my-pod-[0-9]"
kgp | grep -i -E "^my-pod-[0-9]"
kgp | grep "^my-pod-[0-9]"
kgp | grep -v 'my-pod-[0-9]'
kgp == kubectl get pods
</code></pre>
<hr />
<p>Also following kind Timor's suggestion I tried the following:</p>
<pre><code>user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my-
pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod
my-pod-0 1/1 Running 0 4d9h
my-pod-1 1/1 Running 0 22h
my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m
my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h
</code></pre>
<hr />
<p>Please advise.</p>
| <p>Just use the options to the <code>kubectl</code> command to get the name column as the only output.</p>
<pre><code>kubectl get pods -o custom-columns=NAME:.metadata.name
</code></pre>
<p>Then use a grep/sed to filter out what you do or do not want.</p>
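<p>For example, combining both (the pod name pattern is the one from the question):</p>
<pre><code>kubectl get pods -o custom-columns=NAME:.metadata.name --no-headers | grep -E '^my-pod-[0-9]+$'
</code></pre>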
|
<p>I'm developing a NATS based solution with deterministic subject partitioning, I use this type of mapping:</p>
<blockquote>
<p>service.* --> service.*.<number of partition></p>
</blockquote>
<p>Now I need a way to subscribe only one of my replicas per partition, what's the right way to do that?</p>
<p>I was thinking about K8s ordinal index, but all the replicas should be stateless.</p>
<p>One way to ensure that only one replica per partition processes messages is to use a queue group subscription in NATS.
When multiple subscribers are part of the same queue group, only one of them will receive each message. This allows you to ensure that only one replica per partition processes a given message at a time.</p>
<p>Example:</p>
<ol>
<li><p>Assign a unique identifier to each replica, such as the pod name.</p>
</li>
<li><p>If there are multiple subscribers in the queue group, NATS will distribute messages to them in a round-robin fashion.</p>
</li>
<li><p>If there is only one replica subscribed in the queue group, it will receive all the messages for the partition.</p>
</li>
</ol>
<p>With this approach only one replica per partition receives each message, and even if that replica goes down, NATS will automatically redistribute messages to the remaining members of the group.</p>
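<p>As a quick sketch with the <code>nats</code> CLI, two replicas responsible for partition 1 would both join the same queue group, and NATS would deliver each message to only one of them (the subject and queue names below simply follow the partition mapping from the question and are illustrative):</p>
<pre><code># run on replica A
nats subscribe "service.*.1" --queue partition-1

# run on replica B
nats subscribe "service.*.1" --queue partition-1
</code></pre>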
<p>For more information please check this <a href="https://docs.nats.io/using-nats/developer/receiving/queues" rel="nofollow noreferrer">official page</a>.</p>
|
<p>I tried the following but failed to get a list of all pods like these:</p>
<pre><code>my-pod-1
my-pod-19
my-pod-24
</code></pre>
<p>etc.</p>
<pre><code>kgp | grep -E my-pod-[0-9]+
kgp | grep -E my-pod-[0-9]
kgp | grep -i -E my-pod-[0-9]
kgp | grep -i -E "my-pod-[0-9]"
kgp | grep -i -E "^my-pod-[0-9]"
kgp | grep "^my-pod-[0-9]"
kgp | grep -v 'my-pod-[0-9]'
kgp == kubectl get pods
</code></pre>
<hr />
<p>Also following kind Timor's suggestion I tried the following:</p>
<pre><code>user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my-
pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod
my-pod-0 1/1 Running 0 4d9h
my-pod-1 1/1 Running 0 22h
my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m
my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h
</code></pre>
<hr />
<p>Please advise.</p>
| <p>Use <code>grep</code> like so:</p>
<pre><code>kubectl get pods | grep -P '^my-pod-\d+$'
</code></pre>
<p>Here, GNU <a href="https://www.gnu.org/software/grep/manual/grep.html" rel="nofollow noreferrer"><code>grep</code></a> uses the following option:<br />
<code>-P</code> : Use Perl regexes.</p>
<p><code>^my-pod-\d+$</code> : matches the start of the line, followed by <code>my_pod-</code>, then 1 or more digits until the end of the line.</p>
<p><strong>SEE ALSO:</strong><br />
<a href="https://perldoc.perl.org/perlre" rel="nofollow noreferrer">perlre - Perl regular expressions</a></p>
|
<p>I want to expose 2 services:</p>
<ul>
<li>web application at <strong>example.com</strong> port <strong>80</strong></li>
<li>web sockets at <strong>example.com/app/ws</strong> port <strong>8000</strong></li>
</ul>
<p>with given configuration:</p>
<hr />
<pre><code>apiVersion: "v1"
kind: "Service"
metadata:
name: "web-app-service"
annotations:
cloud.google.com/load-balancer-type: "Internal"
spec:
type: "LoadBalancer"
selector:
app.kubernetes.io/instance: "web-app"
ports:
- protocol: "TCP"
port: 80
targetPort: 80
name: "http"
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "web-sockets-service"
spec:
type: "NodePort"
selector:
app.kubernetes.io/instance: "web-sockets"
ports:
- protocol: "TCP"
port: 8000
targetPort: 8000
name: "http"
---
apiVersion: "networking.k8s.io/v1"
kind: "Ingress"
metadata:
name: "web-app-ingress"
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- host: "example.com"
http:
paths:
- path: "/app/ws"
pathType: "Prefix"
backend:
service:
name: "web-sockets-service"
port:
number: 8000
- path: "/"
pathType: "Prefix"
backend:
service:
name: "web-app-service"
port:
number: 80
---
</code></pre>
<p>I can reach the web application at <strong>example.com</strong>, but trying to communicate with <strong>example.com/app/ws</strong> results in a failed connection (timeout). A curl call to the internal service IP at port 8000 succeeds, so I have misconfigured something in the ingress configuration for the web sockets.</p>
<pre><code>curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" example.com/app/ws:8000
</code></pre>
<p>results with timeout</p>
<p>internal cluster call (different pod than websocket server):</p>
<pre><code>curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" <web-sockets-service ip>/app/ws:8000
</code></pre>
<p>result with success websocket server response</p>
<p>Websocat:</p>
<p>request to service:</p>
<pre><code>> websocat ws://<web-sockets-service ip>:8000/app/ws
{"event":"connection_established","data":"{\"socket_id\":\"620572367.722478523\",\"activity_timeout\":30}"}
</code></pre>
<p>request to ingress:</p>
<pre><code>> websocat ws://example.com:8000/app/ws
websocat: WebSocketError: I/O Failure
websocat: error running
</code></pre>
<p>I found that the problem was the port 8000 used for the websocket server. That port was not open on the firewall; after changing the websocket server port to 80, everything started working.</p>
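<p>For anyone who needs to keep a non-standard port instead, opening it on the GCP firewall would look roughly like this (the rule name and network are placeholders; the source ranges shown are the Google Cloud load balancer/health check ranges):</p>
<pre><code>gcloud compute firewall-rules create allow-websocket-8000 \
  --network=default \
  --allow=tcp:8000 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16
</code></pre>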
|
<p>After running a pipeline job in Jenkins that runs in my k8s cluster</p>
<p>I am getting this error -</p>
<pre><code>βJenkinsβ doesnβt have label βjenkins-eks-podβ.
</code></pre>
<p>What am I missing in my configuration?</p>
<p>Pod Logs in k8s-</p>
<pre><code> 2023-02-20 14:37:03.379+0000 [id=1646] WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: jenkins-eks-agent-h4z6t, template=PodTemplate{id='05395ad55cc56972ee3e4c69c2731189bc03a75c0b51e637dc7f868fa85d07e8', name='jenkins-eks-agent', namespace='default', slaveConnectTimeout=100, label='jenkins-non-prod-eks-global-slave', serviceAccount='default', nodeUsageMode=NORMAL, podRetention='Never', containers=[ContainerTemplate{name='jnlp', image='805787217936.dkr.ecr.us-west-2.amazonaws.com/aura-jenkins-slave:ecs-global-node_master_57', alwaysPullImage=true, workingDir='/home/jenkins/agent', command='', args='', ttyEnabled=true, resourceRequestCpu='512m', resourceRequestMemory='512Mi', resourceRequestEphemeralStorage='', resourceLimitCpu='512m', resourceLimitMemory='512Mi', resourceLimitEphemeralStorage='', envVars=[KeyValueEnvVar [getValue()=http://jenkins-non-prod.default.svc.cluster.local:8080/, getKey()=JENKINS_URL]], livenessProbe=ContainerLivenessProbe{execArgs='', timeoutSeconds=0, initialDelaySeconds=0, failureThreshold=0, periodSeconds=0, successThreshold=0}}]}
java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=0}
at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.checkTerminatedContainers(KubernetesLauncher.java:275)
at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:225)
at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:298)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:48)
at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:82)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2023-02-20 14:37:03.380+0000 [id=1646] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-eks-agent-h4z6t
2023-02-20 14:37:03.380+0000 [id=1646] SEVERE o.c.j.p.k.KubernetesSlave#_terminate: Computer for agent is null: jenkins-eks-agent-h4z6t
</code></pre>
<p>This error might be due to the label 'jenkins-eks-pod' not being configured on the Jenkins server.</p>
<p>To create a label on the Jenkins server:</p>
<blockquote>
<p>Go to Manage Jenkins > Manage Nodes and Clouds > Labels and then enter
the label name.</p>
</blockquote>
<p>After creating this label, try to run the job again and check if it works.</p>
<p>Refer to this <a href="https://devopscube.com/jenkins-build-agents-kubernetes/" rel="nofollow noreferrer">Blog by Bibin Wilson</a>.</p>
<p>If you also see the error <code>No httpclient implementations found on the context classloader</code>, you need to upgrade the Jenkins Kubernetes plugin to version 3802, where this error is fixed. Refer to <a href="https://github.com/jenkinsci/kubernetes-plugin/releases/tag/3802.vb_b_600831fcb_3" rel="nofollow noreferrer">jenkinsci/kubernetes-plugin/releases/tag/3802</a>.</p>
|
<p>I want to check if pod in the cluster running as <code>privileged pods</code>, which can indicate that we may have security issue, so I check if
<code>privileged: true</code></p>
<p>However under the
<code>securityContext:</code> spec there is additional fields like</p>
<ul>
<li><code>allowPrivilegeEscalation</code></li>
<li><code>RunAsUser</code></li>
<li><code>ProcMount</code></li>
<li><code>Capabilities</code>
etc</li>
</ul>
<p>Which may be risky (not sure about it) ,</p>
<p>My question is in case the pod is marked as <code>privileged:false</code> and the other fields are true like the following example,if this indicate some security issue ? Does this pods can do some operation on <strong>other pods</strong> etc , access external data?</p>
<p><strong>For example</strong> the following configuration which indicate the the pod is not privileged but <code>allowPrivilegeEscalation: true</code></p>
<pre><code>securityContext:
allowPrivilegeEscalation: true
privileged: false
</code></pre>
<p>I want to know which <code>securityContext</code> combination of pod config can <strong>control other</strong> <code>pods/process</code> in the cluster ?</p>
| <p>The <code>securityContext</code> are more related to the container itself and some access to the host machine.</p>
<p>The <code>allowPrivilegeEscalation</code> flag allows a process to gain more permissions than its parent process. This is more related to setuid/setgid flags in binaries, and inside a container there is not much to worry about.</p>
<p>You can only control other containers on the host machine from inside a container if you have a <code>hostPath</code> volume, or something similar, that lets you reach a container runtime socket such as <code>/run/crio/crio.sock</code> or <code>docker.sock</code>. Obviously, if you are concerned about this, allowing requests to the Docker API over the network should also be disabled.</p>
<p>Of course, all of these accesses are ruled by DAC and MAC restrictions. This is why podman's <strong>uidmap</strong> is better, because root inside the container does not have the same uid as root outside the container.</p>
<p>From the Kubernetes point of view, you don't need this kind of privilege; all you need is a <code>ServiceAccount</code> and the correct RBAC permissions to control other things inside Kubernetes. A <code>ServiceAccount</code> bound to the <code>cluster-admin</code> <code>ClusterRole</code> can do anything in the API and much more, like adding ssh keys to the hosts.</p>
<p>If you are concerned about pods executing things in Kubernetes or in the host, just force the use of <code>nonRoot</code> containers, avoid indiscriminate use of <code>hostPath</code> volumes, and control your RBAC.</p>
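<p>As a rough example, a restrictive container <code>securityContext</code> along those lines could look like this:</p>
<pre><code>securityContext:
  runAsNonRoot: true
  runAsUser: 10001
  privileged: false
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
</code></pre>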
<p>Openshift uses a very nice restriction by default:</p>
<ul>
<li>Ensures that pods cannot run as privileged</li>
<li>Ensures that pods cannot mount host directory volumes</li>
<li>Requires that a pod is run as a user in a pre-allocated range of UIDs (openshift feature, random uid)</li>
<li>Requires that a pod is run with a pre-allocated MCS label (selinux related)</li>
</ul>
<p>I don't answer exactly what you want, because I shifted the attention to RBAC, but I hope this can give you a nice idea.</p>
|
<p>My NodeJS microservice is deployed to k8s cluster.</p>
<p>I would like this microservice to access the k8s API server. For that, I guess I need to create a <code>ServiceAccount</code> for it. So I did this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-app-service-account
namespace: myapp-ns
</code></pre>
<p>Then, I also created a <code>ClusterRole</code> to define the permissions:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: myapp-cluster-role
namespace: myapp-ns
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
</code></pre>
<p>Finally, I created a <code>ClusterRoleBinding</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-app-role-binding
namespace: myapp-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myapp-cluster-role
subjects:
kind: ServiceAccount
name: my-app-service-account
</code></pre>
<p>When I deploy them (I use Ansible to do the deployment), I get the following error:</p>
<pre><code>"error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"ClusterRoleBinding in version \\\\\"v1\\\\\" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject\",\"reason\":\"BadRequest\",\"code\":400}\\n'",
</code></pre>
<p>Why this error? Where am I wrong?</p>
| <p>I'd reckon the issue is with the resources, not with Ansible.</p>
<p>Take a look:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: myapp-cluster-role
namespace: myapp-ns # <-- NOT A NAMESPACED RESOURCE
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: my-app-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myapp-cluster-role
subjects:
- kind: ServiceAccount # <-- Added (-)
name: my-app-service-account
namespace: myapp-ns # <-- MOVED FROM METADATA
</code></pre>
<p>To summarize:</p>
<ul>
<li><code>ClusterRole</code> is not a namespaced resource, hence you should not specify a namespace for it</li>
<li>You've missed a <code>-</code> in the <code>.subjects</code></li>
<li>You should move <code>.namespace</code> from the <code>ClusterRoleBinding</code>'s <code>.metadata</code> to its <code>.subjects</code> entry</li>
</ul>
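<p>Once the corrected manifests are applied, you can sanity-check what the service account is allowed to do (names as in the question):</p>
<pre><code>kubectl auth can-i list pods --as=system:serviceaccount:myapp-ns:my-app-service-account
kubectl auth can-i '*' '*' --as=system:serviceaccount:myapp-ns:my-app-service-account
</code></pre>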
<p>More explanation on namespaced/non namespaced resources:</p>
<ul>
<li><code>kubectl api-resources </code></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>NAME SHORTNAMES APIVERSION
roles rbac.authorization.k8s.io/v1 true Role
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
</code></pre>
<p>I encourage you to check on the following docs:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Access Authn Authz: RBAC</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Overview: Working with objects: Namespaces: Not all objects are in a namespace</a></em></li>
</ul>
|
<p>I need to automate the provisioning of a complex application in Kubernetes. It's a complex, multi-step process that involves provisioning of some cluster-wide resources and some app-specific resources. The cluster-wide resources are:</p>
<ul>
<li>Istio</li>
<li>A few Operators (Cert Manager, Prometheus Operator, Postgres Operator, among others)</li>
</ul>
<p>Then I want to create an application (let's call it <code>Foo</code>) which leverages Istio and the aforementioned operators. It will create statefulsets, services, Certificates, a Postgres database, Istio gateways, Prometheus PodMonitors, etc.</p>
<p>There will be multiple <code>Foo</code>'s created, each configured differently (since the Kubernetes cluster will be used to provide <code>Foo</code> applications as a multi-tenant service).</p>
<p>What's the idiomatic way to do this? I think I should write a <code>Foo</code> controller which assumes that Istio and the other operators (prometheus, cert-manager, postgres, etc) already exist.</p>
<p>Is it possible to write a meta <code>ClusterOfFoos</code> operator that installs Istio, installs the required operators, and then installs the <code>Foo</code> controller?</p>
<p>If so, how does one go about provisioning operators (normally installed through Helm) from within a controller?</p>
<p>So far I have looked into using helm to do this, but there are too many dependencies and Helm just tends to create all resources at once, which makes some things fail (eg. when a deployment refers to a Secret that hasn't yet been created by cert-manager).</p>
| <p>The <a href="https://olm.operatorframework.io/docs/getting-started/" rel="nofollow noreferrer">Operator Lifecycle Manager</a> is really well suited for the task.</p>
<p>When you create operator <code>Foo</code>, you can package it in the OLM way by creating a bundle which contains the <a href="https://olm.operatorframework.io/docs/tasks/creating-operator-manifests/#writing-your-operator-manifests" rel="nofollow noreferrer">ClusterServiceVersion</a> needed to inform OLM of dependencies that need to be resolved before install and during upgrades. These can just be a list of APIs you need - and OLM will find and install the set of latest versions of the operators that own each API.</p>
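<p>One way to express such dependencies is a <code>metadata/dependencies.yaml</code> file inside the bundle; a sketch could look like this (the package names, version ranges and GVKs here are purely illustrative):</p>
<pre><code>dependencies:
  - type: olm.package
    value:
      packageName: cert-manager
      version: ">=1.5.0"
  - type: olm.gvk
    value:
      group: monitoring.coreos.com
      kind: Prometheus
      version: v1
</code></pre>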
<p>All your dependencies are operators available in the <a href="https://operatorhub.io/" rel="nofollow noreferrer">Operatorhub.io Catalog</a> so they are available for install and dependency resolution as soon as you install OLM.</p>
<p>You can also configure certain dependencies by including these objects in the bundle itself. According to <a href="https://olm.operatorframework.io/docs/tasks/creating-operator-manifests/#packaging-additional-objects-alongside-an-operator" rel="nofollow noreferrer">the docs</a>, the following objects are supported as of the time of this post:</p>
<pre><code>Secret
ClusterRole
ClusterRoleBinding
ConfigMap
ServiceAccount
Service
Role
RoleBinding
PrometheusRule
ServiceMonitor
PodDisruptionBudget
PriorityClass
VerticalPodAutoscaler
ConsoleYAMLSample
ConsoleQuickStart
ConsoleCLIDownload
ConsoleLink
</code></pre>
<p>The <a href="https://sdk.operatorframework.io/docs/olm-integration/" rel="nofollow noreferrer">Operator SDK</a> can help you with bootstrapping the bundle.</p>
|
<p>I'm trying to deploy kafka on local k8s, then I need to connect to it by application and using offset explorer</p>
<p>so, using kubectl I created zookeeper service and deployment using this yml file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
spec:
type: NodePort
ports:
- name: zookeeper-port
port: 2181
nodePort: 30091
targetPort: 2181
selector:
app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: bitnami/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
</code></pre>
<p>Then, I created kafka service and deployment using this yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-service
name: kafka-service
spec:
type: NodePort
ports:
- name: kafka-port
port: 9092
nodePort: 30092
targetPort: 9092
selector:
app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kafka-broker
name: kafka-broker
spec:
replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
hostname: kafka-broker
containers:
- image: bitnami/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper-service:2181"
- name: KAFKA_LISTENERS
value: PLAINTEXT://localhost:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://localhost:9092
# Creates a topic with one partition and one replica.
- name: KAFKA_CREATE_TOPICS
value: "bomc:1:1"
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
</code></pre>
<p>Both services and deployments are created and running
<a href="https://i.stack.imgur.com/Zh5Rl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zh5Rl.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/RekR0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RekR0.png" alt="enter image description here" /></a></p>
<p>And I have ingress for this services</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /health
pathType: Prefix
backend:
service:
name: health-app-service
port:
number: 80
- path: /actuator
pathType: Prefix
backend:
service:
name: health-app-service
port:
number: 80
- path: /jsonrpc
pathType: Prefix
backend:
service:
name: core-service
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # Name of your Kafka service
            port:
              number: 9092 # Port used for Kafka
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # Name of your Kafka service
            port:
              number: 30092 # Port used for Kafka
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # Name of your Kafka service
            port:
              name: kafka-port # Name of the port used for Kafka
- path: /
pathType: Prefix
backend:
service:
name: zookeeper-service
port:
name: zookeeper-port
</code></pre>
<p><a href="https://i.stack.imgur.com/KZVw1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZVw1.png" alt="enter image description here" /></a></p>
<p>But when I try to connect to this Kafka using the Offset Explorer tool, there is a connection error.</p>
<p><a href="https://i.stack.imgur.com/KK2zV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KK2zV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/1y7bk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1y7bk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/D73lC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D73lC.png" alt="enter image description here" /></a></p>
<p>When I use localhost:30092 as the bootstrap server, I get an error with the following logs:</p>
<pre><code> 12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings
12/ΠΌΠ°Ρ/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings
12/ΠΌΠ°Ρ/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings
12/ΠΌΠ°Ρ/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [localhost:30092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
12/ΠΌΠ°Ρ/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null).
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication:
12/ΠΌΠ°Ρ/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:
12/ΠΌΠ°Ρ/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config.
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204
12/ΠΌΠ°Ρ/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized
12/ΠΌΠ°Ρ/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now.
12/ΠΌΠ°Ρ/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1
12/ΠΌΠ°Ρ/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
12/ΠΌΠ°Ρ/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
12/ΠΌΠ°Ρ/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
12/ΠΌΠ°Ρ/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
12/ΠΌΠ°Ρ/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
12/ΠΌΠ°Ρ/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
12/ΠΌΠ°Ρ/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196)
at java.lang.Thread.run(Thread.java:748)
12/ΠΌΠ°Ρ/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
12/ΠΌΠ°Ρ/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
12/ΠΌΠ°Ρ/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>It seems you are missing some broker listener configuration. For reference, this is the configuration I usually expose from my Docker Compose setup when I run Kafka on CI:</p>
<pre><code> KAFKA_LISTENERS: 'LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://:9092'
KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://localhost:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL
</code></pre>
<p>And when some other service needs to connect internally within the Docker Compose network, it uses this configuration:</p>
<pre><code> KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092
KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper-kafka:2181
</code></pre>
<p>This is also explained at <a href="https://www.confluent.io/blog/kafka-listeners-explained/" rel="nofollow noreferrer">https://www.confluent.io/blog/kafka-listeners-explained/</a>. I hope this helps.</p>
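<p>Translated to the Deployment from the question, a rough sketch of the listener settings could look like the block below. Note that the bitnami image generally expects <code>KAFKA_CFG_</code>-prefixed variables (verify the exact names for your image version), the <code>node-ip</code> value is a placeholder for whatever address external clients such as Offset Explorer will use, and the Service would also need to expose the internal listener port for in-cluster clients:</p>
<pre><code>env:
  - name: KAFKA_CFG_ZOOKEEPER_CONNECT
    value: "zookeeper-service:2181"
  - name: KAFKA_CFG_LISTENERS
    value: "INTERNAL://:29092,EXTERNAL://:9092"
  - name: KAFKA_CFG_ADVERTISED_LISTENERS
    value: "INTERNAL://kafka-service:29092,EXTERNAL://<node-ip>:30092"
  - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
  - name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"
  - name: ALLOW_PLAINTEXT_LISTENER
    value: "yes"
</code></pre>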
|
<p>On a Kubernetes cluster, I have multiple <code>Deployment</code> resources. For security, I am using a sidecar proxy pattern where the <code>Service</code> will proxy traffic to the sidecar, which will ensure authentication before passing on to the deployed application.</p>
<p>I am trying to set up Kustomize to do this. Since the sidecar definition is likely environment specific, I don't want to include the sidecar in my base manifests, but would like it to be an overlay. Since I have multiple deployments that will need to attach that sidecar, it seemed appropriate to have the sidecar specification be a common shared component. This seemed like appropriate use of the Kustomize <code>Component</code> resource, but perhaps I'm wrong.</p>
<p>I have something similar to the following:</p>
<pre><code>.
βββ base
β βββ app1
β β βββ deployment.yaml
β β βββ kustomization.yaml
β βββ app2
β β βββ deployment.yaml
β β βββ kustomization.yaml
β βββ app3
β βββ deployment.yaml
β βββ kustomization.yaml
βββ components
β βββ sidecar
β βββ deployment-sidecar.yaml
β βββ kustomization.yaml
βββ overlays
βββ dev
βββ kustomization.yaml
</code></pre>
<p>I'd like the sidecar component to be applied to the 3 app deployments, but I can't seem to find a way to do this. Am I misusing components here?</p>
<p>My <code>components/sidecar/kustomization.yaml</code> file looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: deployment-sidecar.yaml
target:
labelSelector: xxx
</code></pre>
<p>This works, however it specifies the target of the patch in the component, whereas I would like to leave the component more generic and instead specify the target in <code>overlays/dev</code>.</p>
<p>Is there a better way to be handling this? In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to.</p>
| <blockquote>
<p>In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to.</p>
</blockquote>
<p>My first thought was that you could have a label that means "apply the sidecar patch", and use that in the Component:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: deployment-sidecar.yaml
target:
labelSelector: "inject-sidecar=true"
</code></pre>
<p>And then in your overlay(s), use a patch to apply that label to specific deployments:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../sidecar
patches:
- target:
kind: Deployment
labelSelector: "app=app1"
patch: |
- op: add
path: /metadata/labels/inject-sidecar
value: "true"
</code></pre>
<p>Unfortunately, this won't work because patches are applied <strong>after</strong> processing all resources and components.</p>
<p>We can still do this, but it requires an intermediate stage. We can get that by creating another component inside the <code>dev</code> overlay that is responsible for applying the labels. In <code>overlays/dev/apply-labels/kustomization.yaml</code> you have a <code>kustomization.yaml</code> that contains the logic for applying the <code>inject-sidecar</code> label to specific Deployments (using a label selector, name patterns, or other criteria):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
kind: Deployment
labelSelector: "app=app1"
patch: |
- op: add
path: /metadata/labels/inject-sidecar
value: "true"
</code></pre>
<p>And then in <code>overlays/dev/kustomization.yaml</code> you have:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
components:
- apply-labels
- ../../components/sidecar
</code></pre>
<p>This gets you what you want:</p>
<ul>
<li>The sidecar patch is specified in a single place</li>
<li>Your overlay determines to which deployments you apply the sidecar patch</li>
</ul>
<p>There's a level of complexity here that is only necessary if:</p>
<ul>
<li>You have multiple overlays</li>
<li>You want to selectively apply the sidecar only to some deployments</li>
<li>You want the overlay to control to which deployments the patch is applied</li>
</ul>
<p>If any of those things aren't true you can simplify the configuration.</p>
|
<p>I am trying to deploy PostgreSQL to GKE and here is my PersistentVolumeClaim definition:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: db
labels:
app: imgress-db
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 400Mi
</code></pre>
<p>and this is deployment/service definition:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: imgress-db
namespace: db
spec:
serviceName: imgress-db
replicas: 1
selector:
matchLabels:
app: imgress-db
template:
metadata:
labels:
app: imgress-db
spec:
containers:
- name: imgress-db
image: postgres
env:
- name: POSTGRES_HOST
valueFrom:
configMapKeyRef:
name: db-configmap
key: DATABASE_HOST
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: db-configmap
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: db-configmap
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: POSTGRES_PASSWORD
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pvc
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: imgress-db
namespace: db
spec:
selector:
app: imgress-db
ports:
- name: postgres
port: 5432
</code></pre>
<p>First I run:</p>
<pre><code>kubectl apply -f postgres-pvc.yaml
</code></pre>
<p>and then:</p>
<pre><code>kubectl apply -f postgres-deployment.yaml
</code></pre>
<p>but I get this notorious error when I run <code>kubectl get pods -A</code>:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
db imgress-db-0 0/1 CrashLoopBackOff 6 (2m15s ago) 8m26s
</code></pre>
<p>For <code>kubectl describe pvc postgres-pvc -n db</code> I get this result:</p>
<pre><code>Name: postgres-pvc
Namespace: db
StorageClass: standard
Status: Bound
Volume: pvc-c6369764-1106-4a7d-887c-0e4009968115
Labels: app=imgress-db
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: imgress-db-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 31m persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
Normal Provisioning 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f External provisioner is provisioning volume for claim "db/postgres-pvc"
Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f Successfully provisioned volume pvc-c6369764-1106-4a7d-887c-0e4009968115
</code></pre>
<p>and for <code>kubectl describe pod imgress-db-0 -n db</code> I get this result (please pay attention to <code>Back-off restarting failed container</code> on the last line):</p>
<pre><code>Name: imgress-db-0
Namespace: db
Priority: 0
Service Account: default
Node: gke-imgress-default-pool-e9bdef38-hjhv/10.156.0.5
Start Time: Fri, 24 Feb 2023 13:44:15 +0500
Labels: app=imgress-db
controller-revision-hash=imgress-db-7f557d4b88
statefulset.kubernetes.io/pod-name=imgress-db-0
Annotations: <none>
Status: Running
IP: 10.84.2.49
IPs:
IP: 10.84.2.49
Controlled By: StatefulSet/imgress-db
Containers:
imgress-db:
Container ID: containerd://96140ec0b0e369ca97822361a770abcb82e27b7924bc90e17111ab354e51d6aa
Image: postgres
Image ID: docker.io/library/postgres@sha256:901df890146ec46a5cab7a33f4ac84e81bac2fe92b2c9a14fd649502c4adf954
Port: 5432/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 24 Feb 2023 13:50:09 +0500
Finished: Fri, 24 Feb 2023 13:50:11 +0500
Ready: False
Restart Count: 6
Environment:
POSTGRES_HOST: <set to the key 'DATABASE_HOST' of config map 'db-configmap'> Optional: false
POSTGRES_DB: <set to the key 'POSTGRES_DB' of config map 'db-configmap'> Optional: false
POSTGRES_USER: <set to the key 'POSTGRES_USER' of config map 'db-configmap'> Optional: false
POSTGRES_PASSWORD: <set to the key 'POSTGRES_PASSWORD' in secret 'db-secret'> Optional: false
Mounts:
/var/lib/postgresql/data from postgres-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfsf9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
postgres-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-pvc
ReadOnly: false
kube-api-access-tfsf9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m51s default-scheduler Successfully assigned db/imgress-db-0 to gke-imgress-default-pool-e9bdef38-hjhv
Normal SuccessfulAttachVolume 6m46s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c6369764-1106-4a7d-887c-0e4009968115"
Normal Pulled 6m42s kubelet Successfully pulled image "postgres" in 365.355391ms
Normal Pulled 6m39s kubelet Successfully pulled image "postgres" in 236.532115ms
Normal Pulled 6m26s kubelet Successfully pulled image "postgres" in 278.12995ms
Normal Created 5m59s (x4 over 6m42s) kubelet Created container imgress-db
Normal Started 5m59s (x4 over 6m41s) kubelet Started container imgress-db
Normal Pulled 5m59s kubelet Successfully pulled image "postgres" in 348.694043ms
Normal Pulling 5m10s (x5 over 6m42s) kubelet Pulling image "postgres"
Normal Pulled 5m9s kubelet Successfully pulled image "postgres" in 288.848351ms
Warning BackOff 95s (x25 over 6m38s) kubelet Back-off restarting failed container
</code></pre>
<p>Result of <code>kubectl logs imgress-db-0 -n db</code>:</p>
<pre><code>The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: error: directory "/var/lib/postgresql/data" exists but is not empty
initdb: detail: It contains a lost+found directory, perhaps due to it being a mount point.
initdb: hint: Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
</code></pre>
<p>I've already seen some related questions asked on SO but I guess it can be caused by different problems.</p>
<p><strong>What I tried</strong>:</p>
<ol>
<li>adding user and password as an env variables</li>
<li>deleting everything in the namespace and recreating</li>
<li>deleting livenessprobe and readinessprobe from the description</li>
</ol>
| <p>I was able to solve the issue by adding two things to the manifest:</p>
<ol>
<li><p><code>PGDATA</code> env variable inside containers:</p>
<pre><code> env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
</code></pre>
</li>
<li><p><code>subPath: postgres</code> under <code>volumeMounts</code></p>
</li>
</ol>
<p>Both changes point <code>initdb</code> at a subdirectory instead of the mount point itself, so the <code>lost+found</code> directory created by the filesystem no longer trips it up. Now my deployment file looks like this:</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: imgress-db
namespace: db
spec:
serviceName: imgress-db
replicas: 1
selector:
matchLabels:
app: imgress-db
template:
metadata:
labels:
app: imgress-db
spec:
containers:
- name: imgress-db
image: postgres
env:
- name: POSTGRES_HOST
valueFrom:
configMapKeyRef:
name: db-configmap
key: DATABASE_HOST
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: db-configmap
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: db-configmap
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: POSTGRES_PASSWORD
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
subPath: postgres
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pvc
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: imgress-db
namespace: db
spec:
selector:
app: imgress-db
ports:
- name: postgres
port: 5432
</code></pre>
|
<p>I am following the instructions in the official Kubernetes documentation to install the Kubernetes package repository. I run</p>
<pre><code>sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
</code></pre>
<p>but I get a 500 error:</p>
<pre><code>curl: (22) The requested URL returned error: 500
</code></pre>
<p>Visiting the URL or <code>curl</code>ing it returns a 500 error:</p>
<pre><code>curl https://packages.cloud.google.com/apt/doc/apt-key.gpg
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 500 (Internal Server Error)!!1</title>
<style>
*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
</style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>500.</b> <ins>That's an error.</ins>
<p> <ins>That's all we know.</ins>
</code></pre>
<p>Is Google having issues at the moment or am I missing something?</p>
<p>This is the official documentation URL (Debian distribution tab)
<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</a></p>
<p><strong>UPDATE</strong>
There is a GitHub issue opened as well <a href="https://github.com/kubernetes/kubernetes/issues/116068" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/116068</a></p>
| <p>Indeed, there seems to be an issue. The docs still suggest that URL should be used, yet I see the 500 too.</p>
<p>Checking the Google status page (<a href="https://status.cloud.google.com/" rel="nofollow noreferrer">https://status.cloud.google.com/</a>), there could be an issue with Filestore (file access issues for non-root users), with a bunch of affected locations.</p>
<hr />
<p>And right now it works again.</p>
|
<p>I have a Ruby on Rails deployment that I want to use from the frontend deployment, so I created a service exposing port 3000 called "flicron-backend-service".</p>
<p>here is the description of the service</p>
<pre><code>kubectl describe svc flicron-backend-service
Name: flicron-backend-service
Namespace: default
Labels: io.kompose.service=flicron-backend-service
Annotations: kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.28.0 (c4137012e)
Selector: io.kompose.service=flicron-backend
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.112.244
IPs: 10.107.112.244
Port: 3000 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.244.0.144:3000
Session Affinity: None
Events: <none>
</code></pre>
<p>I am trying to use the service name, but it does not get resolved.
I have tried from inside minikube to curl the backend service name, which did not work:</p>
<pre><code>curl flicron-backend-service:3000
curl: (6) Could not resolve host: flicron-backend-service
curl flicron-backend-service.default.svc.cluster.local:3000
curl: (6) Could not resolve host: flicron-backend-service.default.svc.cluster.local
</code></pre>
<p>but if I use the IP, it works fine:</p>
<pre><code>curl 10.107.112.244:3000
# some HTML
</code></pre>
<p>Here is my DNS configuration</p>
<pre><code>kubectl describe cm coredns -n kube-system
Name: coredns
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
Corefile:
----
.:53 {
log
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
hosts {
192.168.49.1 host.minikube.internal
fallthrough
}
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
BinaryData
====
Events: <none>
</code></pre>
<p>FrontEnd Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: flicron-frontend-service
name: flicron-frontend-service
spec:
selector:
app: flicron-frontend
type: NodePort
ports:
- name: http
port: 4000
targetPort: 4000
nodePort: 32123
</code></pre>
<p>backend service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yml
kompose.version: 1.28.0 (c4137012e)
creationTimestamp: null
labels:
io.kompose.service: flicron-backend-service
name: flicron-backend-service
spec:
# clusterIP: None
ports:
- name: "3000"
port: 3000
targetPort: 3000
selector:
io.kompose.service: flicron-backend # this maps this service to that deployment
</code></pre>
| <blockquote>
<p>I have tried from inside minikube to curl the backend service name, which did not work</p>
</blockquote>
<p>If you explicitly meant that you've:</p>
<ul>
<li>connected to your minikube instance (i.e. <code>minikube ssh</code>)</li>
<li><code>curl</code> the service name directly from <code>minikube</code> instance.</li>
</ul>
<p>Then it should <strong>not work</strong>.</p>
<p>It will work from a Pod/container that is targeting internal Kubernetes DNS resolution.</p>
<hr />
<p>Let me explain:</p>
<p>DNS resolution within the Kubernetes environment is handled by <code>coredns</code>, residing in the <code>kube-system</code> namespace. It is a containerized service hosted inside Kubernetes. Information on how to connect to it is injected into Pods via the kubelet.</p>
<p>You can see it by:</p>
<ul>
<li><code>kubectl run -it basic-pod --image=nginx -- /bin/bash</code></li>
<li><code>cat /etc/resolv.conf</code></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>nameserver 10.96.0.10 # <-- SERVICE KUBE-DNS IN KUBE-SYSTEM (CLUSTER-IP)
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
<p>Minikube itself does not have CoreDNS configured as its resolver (its own <code>/etc/resolv.conf</code> does not point at the cluster DNS), so Service names cannot be resolved from the minikube host.</p>
<p>Try to contact your <code>Service</code> with an actual <code>Pod</code>:</p>
<ul>
<li><code>kubectl run -it basic-pod --image=nginx -- /bin/bash</code></li>
<li><code>apt update && apt install dnsutils -y</code> - <code>nginx</code> image used for simplicity</li>
<li><code>nslookup nginx</code> - there is a <code>Service</code> named <code>nginx</code> in my <code>minikube</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>root@basic-pod:/# nslookup nginx
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: nginx.default.svc.cluster.local
Address: 10.109.51.22
</code></pre>
<p>I encourage you to take a look on following documentation:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: DNS Pod Service</a></em></li>
</ul>
|
<p>I'm trying to deploy Spark (PySpark) on Kubernetes using spark-submit, but I'm getting the following error:</p>
<blockquote>
<p>Exception in thread "main" org.apache.spark.SparkException: Please specify spark.kubernetes.file.upload.path property.Β Β Β Β at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:330)Β Β Β Β at org.apache.spark.deploy.k8s.KubernetesUtils$.renameMainAppResource(KubernetesUtils.scala:300)Β Β Β Β at</p>
</blockquote>
<p>Since I'm packing my dependencies through a virtual environment, I don't need to specify a remote location to retrieve them from, so I'm not setting the parameter <code>spark.kubernetes.file.upload.path</code>.</p>
<p>I tried to include that parameter anyway with an empty value, but it doesn't work.</p>
<p>My spark-submit command (which I trigger from a Python script) is as follows:</p>
<pre><code> cmd = f""" {SPARK_HOME}/bin/spark-submit
--master {SPARK_MASTER}
--deploy-mode cluster
--name spark-policy-engine
--executor-memory {EXECUTOR_MEMORY} \
--conf spark.executor.instances={N_EXECUTORS}
--conf spark.kubernetes.container.image={SPARK_IMAGE}
--conf spark.kubernetes.file.upload.path=''
--packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1
--archives pyspark_venv.tar.gz#environment {spark_files}
--format_id {format_id}
"""
</code></pre>
<p>As shown, I'm including the parameter within a --conf flag (as described in <a href="https://spark.apache.org/docs/3.0.0-preview/running-on-kubernetes.html#:%7E:text=It%20can%20be%20found%20in,use%20with%20the%20Kubernetes%20backend.&text=This%20will%20build%20using%20the%20projects%20provided%20default%20Dockerfiles%20." rel="nofollow noreferrer">https://spark.apache.org/docs/3.0.0-preview/running-on-kubernetes.html#:~:text=It%20can%20be%20found%20in,use%20with%20the%20Kubernetes%20backend.&text=This%20will%20build%20using%20the%20projects%20provided%20default%20Dockerfiles%20.</a>), but whether it is present or not, it just doesn't work.</p>
| <p>You need to specify a real path, not an empty string. Let's say in your image you have a tmp folder under /opt/spark; then the conf should be set like this:</p>
<pre><code>--conf spark.kubernetes.file.upload.path='local:///opt/spark/tmp'
</code></pre>
<p>If you don't want to use the upload path at all, point spark-submit at an application resource that already exists inside the image via a <code>local://</code> URI, so nothing needs to be uploaded:</p>
<pre><code> cmd = f""" {SPARK_HOME}/bin/spark-submit
--master {SPARK_MASTER}
--deploy-mode cluster
--name spark-policy-engine
--executor-memory {EXECUTOR_MEMORY} \
--conf spark.executor.instances={N_EXECUTORS}
--conf spark.kubernetes.container.image={SPARK_IMAGE}
--packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1
--archives pyspark_venv.tar.gz#environment {spark_files}
--format_id {format_id}
local:///opt/spark/work-dir/xxx.jar
"""
</code></pre>
|
<p>I'm using podman 4.5-dev.
I have two pods deployed using
<em>podman kube play foo.yaml</em> and
<em>podman kube play bar.yaml</em>.</p>
<p>I specified the pods' hostnames in the files, but they won't get resolved inside the containers.
I verified that the pods are in the same network.</p>
<p>Is there some DNS configuration missing? Should I use Services? The official docs lack a precise indication about this topic.</p>
<p>Here's one of the two pods' YAML (the other one has the same keys with different values):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
app: postgres
name: postgres
spec:
hostname: postgres
containers:
- name: pgadmin-container
image: docker.io/dpage/pgadmin4:latest
ports:
- containerPort: 80
hostPort: 9876
</code></pre>
<p>Here's some terminal output that could be useful:</p>
<pre class="lang-bash prettyprint-override"><code>[daniele@localhost]$ podman pod inspect xptssrv |grep -ni network
25: "HostNetwork": false,
34: "Networks": [
35: "podman-default-kube-network"
37: "NetworkOptions": null,
[daniele@localhost]$ podman pod inspect postgres |grep -ni network
25: "HostNetwork": false,
34: "Networks": [
35: "podman-default-kube-network"
37: "NetworkOptions": null,
[daniele@localhost]$ podman network inspect podman-default-kube-network |grep "" -n
1:[
2: {
3: "name": "podman-default-kube-network",
4: "id": "a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc",
5: "driver": "bridge",
6: "network_interface": "cni-podman1",
7: "created": "2023-02-21T23:03:25.800256942+01:00",
8: "subnets": [
9: {
10: "subnet": "10.89.0.0/24",
11: "gateway": "10.89.0.1"
12: }
13: ],
14: "ipv6_enabled": false,
15: "internal": false,
16: "dns_enabled": false,
17: "ipam_options": {
18: "driver": "host-local"
19: }
20: }
21:]
[daniele@localhost pods]$ podman exec xptssrv-xptssrv-container cat /etc/resolv.conf
nameserver 192.168.1.6
nameserver 8.8.8.8
</code></pre>
| <p>Edit your network settings in <code>/etc/containers/net.d/podman-default-kube-network.conflist</code>.
Change this line:</p>
<pre><code>"dns_enabled": false,
</code></pre>
<p>To this:</p>
<pre><code>"dns_enabled": true,
</code></pre>
<p>Then reboot, bring your YAML up again, and try to resolve this:</p>
<pre><code>postgres_pgadmin-container_1
</code></pre>
<p>Or maybe this:</p>
<pre><code>postgres_postgres_1
</code></pre>
<h2>Edit1</h2>
<p>Just copy the config file:</p>
<pre><code>sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
</code></pre>
<p>Then in the file change the network backend to netavark using the following command:</p>
<pre><code>sed -i "/^\s*\#*\s*network_backend\s*=.*$/ s/^.*$/network_backend = \"netavark\"/" /etc/containers/containers.conf
</code></pre>
<p><strong>Notice:</strong> I think it's better to restart your system in order to apply the changes.</p>
|
<p>I have seen both <code>serviceAccount</code> and <code>serviceAccountName</code> being used in a pod manifest. What is the difference?</p>
| <p>There is no difference: <code>serviceAccount</code> is deprecated, and you should use <code>serviceAccountName</code> instead.</p>
<p>Quoting from the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podspec-v1-core" rel="nofollow noreferrer">Kubernetes API docs > pod spec</a>:</p>
<blockquote>
<p>serviceAccount: <strong>Deprecated</strong>ServiceAccount is a deprecated alias for ServiceAccountName: Deprecated: <strong>Use serviceAccountName instead</strong></p>
</blockquote>
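<p>For reference, here is a minimal sketch of where the field goes (the pod name, service account name, and image are just placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: my-service-account   # preferred field
  # serviceAccount: my-service-account     # deprecated alias of the same thing
  containers:
  - name: app
    image: nginx
</code></pre>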
|
<p>I am a student and have to write a Bachelor thesis for my school. Is it possible to make a hybrid Kubernetes cluster, and if so, how?</p>
<p>Is there a good application I can run in this cluster to show that it works?</p>
<p>I have made an AKS cluster and an on-prem cluster. Then I made an nginx load balancer and load-balanced the two, but the application isn't synced (which is logical). I tried using Rancher, but somehow I always got errors while trying to make a local cluster. Is it possible to have the storage synced somehow and be able to control the two clusters from one place, or even just make them one cluster? I have found you can use Azure Arc with Azure AKS; is this a viable solution? Should I use a VPN instead?</p>
| <p>If by hybrid k8s cluster you mean a cluster that has nodes across different cloud providers, then yes, that is entirely possible.</p>
<p>You can create a simple example cluster of this by using <a href="https://docs.k3s.io/" rel="nofollow noreferrer">k3s</a> (lightweight Kubernetes) and then using the <a href="https://docs.k3s.io/installation/network-options#distributed-hybrid-or-multicloud-cluster" rel="nofollow noreferrer">--node-external-ip</a> flag. This tells your nodes to talk to eachother via their public IP.</p>
<p>This sort of setup is described in <a href="https://kubernetes.io/docs/setup/best-practices/multiple-zones/" rel="nofollow noreferrer">Running in Multiple Zones</a> in the Kubernetes documentation. You will have to configure the different locations where you place nodes as different zones.<br />
You can fix storage on a cluster like this by using CSI drivers for the different environments you use, like AWS, GCP, AKS, etc. When you then deploy a PVC and it creates a PV at AWS, for example, any pod that mounts this volume will always be scheduled in the zone the PV resides in; otherwise scheduling would be impossible.</p>
<p>I personally am not running this setup in production, but I am using a technique that also suits this multiple-zones idea with regard to networking. To save money on my personal cluster, I tell my Nginx ingress controller not to create a LoadBalancer resource and to run the controllers as a DaemonSet. The Nginx controller pods have a HostPort open on the node they run on (since it's a DaemonSet there won't be more than one of those pods per node), and this HostPort opens ports 80 and 443 on the host. When you then add more nodes, every node with an ingress controller pod on it becomes an ingress entrypoint. Just set up your DNS records to include all of those nodes and you'll have them load balanced.</p>
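<p>As a rough, heavily trimmed sketch of that networking trick (a real ingress controller also needs its usual args, RBAC and config, and the image tag here is an assumption), the relevant part is just the DaemonSet plus <code>hostPort</code>:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.8.1  # tag is an assumption
        ports:
        - containerPort: 80
          hostPort: 80    # exposes port 80 directly on every node running this pod
        - containerPort: 443
          hostPort: 443   # same for TLS
</code></pre>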
|
<p>I'm trying to construct a Kubernetes informer outside of the EKS cluster that it's watching. I'm using <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="nofollow noreferrer">aws-iam-authenticator</a> plugin to provide the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins" rel="nofollow noreferrer">exec-based credentials</a> to the EKS cluster. For the plugin to work, I'm assuming an IAM role and passing the AWS IAM credentials as environment variables.</p>
<p>The problem is that these credentials expire after an hour and cause the informer to fail with</p>
<blockquote>
<p>E0301 23:34:22.167817 582 runtime.go:79] Observed a panic: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get pods)", Reason:"Unauthorized", Details:(*v1.StatusDetails)(0xc0005b0300), Code:401}} (the server has asked for the client to provide credentials (get pods))</p>
</blockquote>
<p>Is there a better way of getting <code>ClientConfig</code> and <code>aws-iam-authenticator</code> to refresh the credentials?</p>
<p>Here's a rough skeleton of my code:</p>
<pre class="lang-golang prettyprint-override"><code>credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...))
creds, err := credentialsProvider.Retrieve(ctx)
config := clientcmdapi.NewConfig()
// ...
config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{
Exec: &clientcmdapi.ExecConfig{
Command: "aws-iam-authenticator",
Args: []string{
"token",
"-i",
clusterName,
},
// These env vars are static! :(
Env: []clientcmdapi.ExecEnvVar{
{
Name: "AWS_ACCESS_KEY_ID",
Value: creds.AccessKeyID,
},
{
Name: "AWS_SECRET_ACCESS_KEY",
Value: creds.SecretAccessKey,
},
{
Name: "AWS_SESSION_TOKEN",
Value: creds.SessionToken,
},
},
APIVersion: "client.authentication.k8s.io/v1beta1",
InteractiveMode: clientcmdapi.NeverExecInteractiveMode,
},
}
restConfig, err := config.ClientConfig()
clientset, err = kubernetes.NewForConfig(restConfig)
informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30)
podInformer := cw.informerFactory.Core().V1().Pods().Informer()
</code></pre>
<p>Here are a couple similar threads I found:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/74563117/kubernetes-client-go-informers-getting-unauthorized-error-after-15-mins">Kubernetes client-go informers getting "Unauthorized" error after 15 mins</a></li>
<li><a href="https://github.com/kubernetes/client-go/issues/1189" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/1189</a></li>
</ul>
| <p>My solution was to write the credentials to a file and start a background goroutine to refresh that file. I can then tell <code>aws-iam-authenticator</code> to read the credentials from the file via the <code>AWS_SHARED_CREDENTIALS_FILE</code> environment variable.</p>
<p>This might also be possible using <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> to save some steps, but I didn't look further.</p>
<p>The updated code looks like this:</p>
<pre><code>func updateCredentials(ctx context.Context) error {
        creds, err := credentialsProvider.Retrieve(ctx)
        if err != nil {
                return err
        }
        s := fmt.Sprintf(`[default]
aws_access_key_id=%s
aws_secret_access_key=%s
aws_session_token=%s`, creds.AccessKeyID, creds.SecretAccessKey, creds.SessionToken)
        // Rewrite the shared credentials file that aws-iam-authenticator reads.
        return os.WriteFile(credentialsFile.Name(), []byte(s), 0666)
}
func updateCredentialsLoop(ctx context.Context) {
        for {
                if err := updateCredentials(ctx); err != nil {
                        // log/handle the error; the file keeps the previous credentials until the next tick
                }
                time.Sleep(5 * time.Minute)
        }
}
credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...))
credentialsFile, err := os.CreateTemp("", "credentials")
updateCredentials(ctx)
go updateCredentialsLoop(ctx)
config := clientcmdapi.NewConfig()
// ...
config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{
Exec: &clientcmdapi.ExecConfig{
Command: "aws-iam-authenticator",
Args: []string{
"token",
"-i",
clusterName,
},
Env: []clientcmdapi.ExecEnvVar{
{
Name: "AWS_SHARED_CREDENTIALS_FILE",
Value: credentialsFile.Name(),
},
},
APIVersion: "client.authentication.k8s.io/v1beta1",
InteractiveMode: clientcmdapi.NeverExecInteractiveMode,
},
}
restConfig, err := config.ClientConfig()
clientset, err = kubernetes.NewForConfig(restConfig)
informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30)
podInformer := cw.informerFactory.Core().V1().Pods().Informer()
</code></pre>
|
<p>In Nomad, we have an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container. Is there a similar env variable in Kubernetes for pods to get the pod index?</p>
<p>Could you please provide your inputs?</p>
<p>Thanks,
Sarita</p>
| <p>Not really, unless you are using <a href="https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/" rel="nofollow noreferrer">indexed jobs (Kubernetes 1.21, Apr. 2021)</a>.</p>
<p>For indexed jobs, the index is exposed to each Pod in the <code>batch.kubernetes.io/job-completion-index</code> annotation and the <code>JOB_COMPLETION_INDEX</code> environment variable.</p>
<p>Official documentation: "<strong><a href="https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/" rel="nofollow noreferrer">Indexed Job for Parallel Processing with Static Work Assignment</a></strong>"</p>
<p>You can use the builtin <code>JOB_COMPLETION_INDEX</code> environment variable set by the Job controller for all containers.<br />
Optionally, you can <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">define your own environment variable through the downward API</a> to publish the index to containers</p>
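<p>For example, here is a minimal Indexed Job sketch on a recent Kubernetes version (image and command are just placeholders) where each Pod reads its own index:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed        # each Pod gets JOB_COMPLETION_INDEX (0, 1, 2)
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo My index is $JOB_COMPLETION_INDEX"]
</code></pre>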
<hr />
<p>There is also <a href="https://github.com/kubernetes/enhancements/pull/2630/files" rel="nofollow noreferrer"><code>kubernetes/enhancements</code> PR2630</a>, where the Pods' hostnames are set to <code>$(job-name)-$(index)</code>.</p>
<p>This is not yet integrated into Kubernetes, but it would mean you can derive the pod hostname from the job name and index, allowing you to get its IP. That means pods could address each other with a DNS lookup and communicate directly using Pod IPs.</p>
|
<p>I have run into an issue where <code>helm install</code>ing my charts will work fine, but when I go to restart the system, the nvidia gpu operator will fail to validate.</p>
<p>Bootstrapping is simple:</p>
<p><code>$ microk8s enable gpu</code></p>
<p>< watching dashboard for all the pods to turn green ></p>
<p><code>$ microk8s helm install -n morpheus morpheus-ai-engine morpheus-ai-engine</code></p>
<p>< watching for the morpheus pods to turn green ></p>
<p>Now I can check if the <code>ai-engine</code> pod has GPU access:</p>
<pre><code>$ kubectl exec ai-engine-897d65cff-b2trz -- nvidia-smi
Wed Feb 22 16:35:32 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro P400 Off | 00000000:04:00.0 Off | N/A |
| 0% 38C P8 N/A / 30W | 98MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
</code></pre>
<p>Running the test vector-add pod returns a <code>Test PASSED</code>.</p>
<p>The trouble comes when I restart microk8s. The <code>nvidia-device-plugin-validator</code> pod fails to load with an <code>UnexpectedAdmissionError</code> claiming that no GPUs are available. And running <code>nvidia-smi</code> in the <code>ai-engine</code> pod returns a "command not found". The vector-add test pod won't start due to insufficient GPUs.</p>
<p>But if I uninstall the <code>ai-engine</code> chart and restart microk8s (waiting for the gpu operator pods to all turn green), I can then reinstall <code>ai-engine</code> and it works fine again, as does the vector-add test.</p>
| <p>This is an issue I am coming across too, which led me here. It looks like it was just recently fixed with this patch: <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2</a></p>
<p>It evicts pods requesting GPUs while the operator starts up again.
This should solve your issue as it did mine.</p>
|
<h1>Problem</h1>
<p>After moving from RHEL 8.4 to 8.5, we started having an issue with K8s pod failures.</p>
<pre><code>spec.template.spec.containers[0].env[52].name: Invalid value: "BASH_FUNC_which%%": a valid environment variable name must
consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*')
</code></pre>
<p>The <strong>env</strong> command in the login shell shows <code>BASH_FUNC_which%%</code> defined as below.</p>
<pre><code>BASH_FUNC_which%%=() { ( alias;
eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@"
}
</code></pre>
<p>It appears that <code>/etc/profile.d/which2.sh</code> is the one that sets up <code>BASH_FUNC_which%%</code>.</p>
<ul>
<li>/etc/profile.d/which2.sh</li>
</ul>
<pre><code>
# shellcheck shell=sh
# Initialization script for bash, sh, mksh and ksh
which_declare="declare -f"
which_opt="-f"
which_shell="$(cat /proc/$$/comm)"
if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then
which_declare="typeset -f"
which_opt=""
fi
which ()
{
(alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@"
}
export which_declare
export ${which_opt} which
</code></pre>
<p>Removing it fixed the issue.</p>
<h1>Question</h1>
<p>Please help me understand where exactly <code>BASH_FUNC_which%%</code> is set up in RHEL 8.5, what the purpose of this <code>BASH_FUNC_which%%</code> is, and why it has been introduced in RHEL.</p>
| <p>Answering the first part:</p>
<p>Output of <code>rpm -q --changes which-2.21-28.el9.x86_64</code></p>
<pre><code>* Fri Apr 15 14:00:00 2022 Than Ngo <[email protected]> - 2.21-28
- Resolves: #2050996, error on login when using ksh as the default shell
* Tue Aug 10 14:00:00 2021 Mohan Boddu <[email protected]> - 2.21-27
- Rebuilt for IMA sigs, glibc 2.34, aarch64 flags
Related: rhbz#1991688
* Fri May 7 14:00:00 2021 Than Ngo <[email protected]> - 2.21-26
- Related: #1940464, fixed unbound variable
* Fri Apr 16 14:00:00 2021 Mohan Boddu <[email protected]> - 2.21-25
- Rebuilt for RHEL 9 BETA on Apr 15th 2021. Related: rhbz#1947937
* Tue Mar 23 13:00:00 2021 Than Ngo <[email protected]> - 2.21-24
- fix regression in zsh
* Sun Mar 21 13:00:00 2021 Than Ngo <[email protected]> - 2.21-23
- improved which2.sh
</code></pre>
<p>Possibly, [email protected] can elaborate on what is going on?
Or, since you use RHEL, you could contact Red Hat support.</p>
|
<p>I have a little Go app which creates YAML resources for me that I then deploy to a Kubernetes cluster.</p>
<p>It worked quite well, but since a few hours ago (?) it fails saying:</p>
<blockquote>
<p>error: Could not automatically download and install resource plugin 'pulumi-resource-kubernetes', install the plugin using <code>pulumi plugin install resource kubernetes</code>.
Underlying error: 401 HTTP error fetching plugin from <a href="https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest</a></p>
</blockquote>
<p>Executing <code>pulumi plugin install resource kubernetes</code> manually returns the same result:</p>
<blockquote>
<p>error: 401 HTTP error fetching plugin from <a href="https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest</a></p>
</blockquote>
<p>I have no idea what's wrong; I don't really get the message and more detail is not available - as far as I can see. The link (<a href="https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest</a>) seems to work properly. If a GitHub API throttle is the reason, I don't see where to place an API key.</p>
<p>I am running MacOS Ventura 13.1 (22C65) on a MacBook Pro 2019. No recent changes here. Go has version 1.19 and the lib is github.com/pulumi/pulumi-kubernetes/sdk/v3 v3.24.1</p>
<p>Any hint is highly appreciated.</p>
| <p>The issue can be solved by providing a valid GitHub API token via the <code>GITHUB_TOKEN</code> environment variable.</p>
<p>Thanks a lot <em><strong>Christian Nunciato</strong></em> for the hint.</p>
<p>However, if anybody knows some background about this and why this happened so surprisingly, I would be very interested in some insights/explanations.</p>
|
<p>I've just enabled Docker Desktop's Kubernetes feature on my M2 MacBook Air, and I observed that the <code>vpnkit-controller</code> pod under the <code>kube-system</code> namespace has restarted several times since enabling the feature. Is this normal, or did I miss some of the settings?</p>
<p><a href="https://i.stack.imgur.com/HnGmH.png" rel="nofollow noreferrer">screen shot of pod status</a></p>
| <h2>Update:</h2>
<p>This problem was resolved after upgrading Docker-Desktop to version <code>4.20.1</code></p>
<p>which uses a new <code>docker/desktop-vpnkit-controller</code> image tagged with <code>dc331cb22850be0cdd97c84a9cfecaf44a1afb6e</code></p>
<hr />
<h2>Original Answer</h2>
<p>You can get the log of the crashed <code>vpnkit-controller</code> pod via the following command:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl logs -p -n kube-system vpnkit-controller
</code></pre>
<p>For me it shows the following:</p>
<pre><code>2023/03/09 03:45:54 Starting kube-vpnkit-forwarder...
W0309 04:03:37.888472 1 reflector.go:334] github.com/moby/vpnkit/go/cmd/kube-vpnkit-forwarder/main.go:49: watch of *v1.Service ended with: The resourceVersion for the provided watch is too old.
log: exiting because of error: log: cannot create log: open /tmp/kube-vpnkit-forwarder.vpnkit-controller.unknownuser.log.WARNING.20230309-040337.1: no such file or directory
</code></pre>
|
<p>How does a StatefulSet ensure that a PersistentVolume and Pod will always be provisioned in the same Availability Zone? I understand that each pod in a StatefulSet has a storage identity, and that each pod will remember the PVC it is using, but I am struggling to find the official documentation to support this.</p>
| <blockquote>
<p>pod will remember the PVC it is using, but I am struggling to find the
official documentation to support this</p>
</blockquote>
<p>I think this is the exact line you are looking for:</p>
<blockquote>
<p>You can set the .spec.volumeClaimTemplates which can provide stable
storage using PersistentVolumes provisioned by a PersistentVolume
Provisioner.</p>
</blockquote>
<p>Ref doc : <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates</a></p>
<p>The general flow goes like this:</p>
<p>You have a <strong><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a></strong> (backed by a CSI driver) that lists the <strong>zones</strong>; Kubernetes goes <strong>round robin</strong> over them when creating the <strong>PV</strong>.</p>
<p>The Pod then accesses the <strong>volume</strong> through its <strong>PVC</strong> and gets scheduled into the AZ where that PV lives.</p>
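<p>As a hedged sketch of that flow (the provisioner name is only an example of a topology-aware CSI driver; use whatever your cluster provides):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal
provisioner: ebs.csi.aws.com               # example CSI driver; swap in your own
volumeBindingMode: WaitForFirstConsumer    # PV gets created in the zone the Pod lands in
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: zonal
      resources:
        requests:
          storage: 1Gi
</code></pre>
<p>Each replica (web-0, web-1, ...) gets its own PVC from the template, and once its PV exists the Pod keeps being rescheduled into the zone where that PV was provisioned.</p>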
|
<p>Below is the manifest file I used to enable the Calico CNI for k8s. Pods are able to communicate over IPv4, but I am unable to reach outside using IPv6 (k8s version v1.14, Calico version v3.11).
Am I missing some settings?</p>
<p>Forwarding is enabled on the host with "sysctl -w net.ipv6.conf.all.forwarding=1".</p>
<pre><code> ---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "vxlan"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam",
"assign_ipv4": "true",
"assign_ipv6": "true"
},
"container_settings": {
"allow_ip_forwarding": true
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: calico/cni:v3.11.3
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
securityContext:
privileged: true
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.11.3
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.11.3
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.11.3
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s"
# Enable IPIP
- name: CALICO_IPV4POOL_VXLAN
value: "Always"
- name: CALICO_IPV6POOL_VXLAN
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_VXLAN
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.128.1/18"
- name: CALICO_IPV6POOL_CIDR
value: "fd00::/80"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "true"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
#- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
#- -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.11.3
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
</code></pre>
<p>also i saw that all calico interfaces on host got same ipv6
fe80::ecee:eeff:feee:eeee/64
which is default ipv6 gateway for all pods</p>
<p>I also observed that when VXLAN was enabled, it was not adding routes:</p>
<pre><code>vxlan.calico Link encap:Ethernet  HWaddr 66:6a:cb:79:4e:d7
          inet addr:192.168.191.64  Bcast:192.168.191.64  Mask:255.255.255.255
          inet6 addr: fe80::646a:cbff:fe79:4ed7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1410  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:125 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 eth0
10.243.0.0      *               255.255.224.0   U     0      0        0 eth0
192.168.0.0     *               255.255.192.0   U     0      0        0 eth0
192.168.191.70  *               255.255.255.255 UH    0      0        0 cali1c1c9f58b42
192.168.191.71  *               255.255.255.255 UH    0      0        0 cali2fc31b4251c
192.168.191.72  *               255.255.255.255 UH    0      0        0 cali5be518cf856
192.168.191.73  *               255.255.255.255 UH    0      0        0 cali5e3a68f7b5f
192.168.191.74  *               255.255.255.255 UH    0      0        0 cali59474b079db
</code></pre>
| <p>We faced the same issue: we were able to reach out over IPv4, but it was not working with IPv6.</p>
<p>We did the following:</p>
<pre><code># Reset the Cluster
kubeadm reset
# Create the cluster by adding the ipv6 CIDR along with the ipv4 address also enable IPv6DualStack
# NOTE: The ipv6 given below matches the first 4 octet with Host ipv6 address ie., fde1
kubeadm init --kubernetes-version=v1.21.1 --apiserver-advertise-address=172.16.2.1 --feature-gates="IPv6DualStack=true" --pod-network-cidr=172.16.0.0/24,fde1::/64 --service-cidr=172.16.1.0/24,fde1::/112
</code></pre>
<p>At this point the interfaces in all our pods get an IPv6 address starting with "fde1:".</p>
<p>Even after configuring the above, we were not able to ping external IPv6 addresses. We found that the packet reaches the target machine and comes back to the k8s host machine, but does not reach the respective pod.</p>
<p>We then found that we were missing some ip6tables rules, so we enabled natOutgoing as below:</p>
<pre><code># cat /etc/calico/default-ipv6-ippool.yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
name: default-ipv6-ippool
spec:
blockSize: 122
cidr: fde1::/64
ipipMode: Never
nodeSelector: all()
natOutgoing: true
vxlanMode: Never
# calicoctl apply -f /etc/calico/default-ipv6-ippool.yaml
</code></pre>
<p>The above created the required entries in ip6tables, and we are now able to ping external IPv6 addresses.</p>
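<p>A quick way to verify that the NAT rule is in place (a sketch; calico's chain names may differ per version):</p>
<pre><code># look for a MASQUERADE rule for the fde1::/64 pool in the IPv6 NAT table
ip6tables -t nat -S | grep -i -E "masquerade|cali"
</code></pre>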
|
<p>Good afternoon!</p>
<p>I am relatively new to Kubernetes. I would like to ask for your help in knowledge! What is the best way to deploy Postgresql or MySql in kubernetes for production? Suggest articles or resources, I would be very grateful.</p>
<p>I only see examples and information on how to deploy Postgresql dev, very little information on how to set up a production environment. Thank you!!!</p>
| <p>We have been running multiple databases on K8s, such as Elasticsearch, Redis and some message brokers, so you can definitely run them in production.</p>
<p>I would suggest going with an Operator-managed <strong>database deployment</strong>. In production you will likely need <strong>HA</strong> and data replication, which are non-trivial to set up and operate yourself. In simple words, the <strong>operator</strong> manages the <strong>deployment</strong> for you and performs these operations as needed.</p>
<p><a href="https://postgres-operator.readthedocs.io/en/latest/" rel="nofollow noreferrer">Here</a> is the documentation to follow for the <strong>postgres</strong> operator; it has the details on how the <strong>CRD</strong>, <strong>stateful sets</strong> and volumes work together.</p>
<p>Zalando postgres-operator : <a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">https://github.com/zalando/postgres-operator</a></p>
<p>Comparison between the available operators: <a href="https://blog.palark.com/comparing-kubernetes-operators-for-postgresql/" rel="nofollow noreferrer">https://blog.palark.com/comparing-kubernetes-operators-for-postgresql/</a></p>
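<p>For illustration, with an operator such as Zalando's you describe the desired cluster declaratively as a custom resource and the operator creates the stateful sets, volumes, users and databases for you. A minimal sketch, loosely based on the operator's minimal example (team, user, database names and sizes below are placeholders, not something from your setup):</p>
<pre><code>apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2        # one primary + one replica for HA
  volume:
    size: 10Gi                # persistent volume per instance
  users:
    app_user:                 # role created by the operator
    - superuser
    - createdb
  databases:
    app_db: app_user          # database owned by app_user
  postgresql:
    version: "15"
</code></pre>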
|
<p>I'm trying to deploy kafka on local k8s; then I need to connect to it from an application and using Offset Explorer.</p>
<p>So, using kubectl, I created the zookeeper service and deployment using this yml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: zookeeper-service
name: zookeeper-service
spec:
type: NodePort
ports:
- name: zookeeper-port
port: 2181
nodePort: 30091
targetPort: 2181
selector:
app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: zookeeper
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
spec:
containers:
- image: bitnami/zookeeper
imagePullPolicy: IfNotPresent
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
</code></pre>
<p>Then, I created kafka service and deployment using this yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: kafka-service
name: kafka-service
spec:
type: NodePort
ports:
- name: kafka-port
port: 9092
nodePort: 30092
targetPort: 9092
selector:
app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kafka-broker
name: kafka-broker
spec:
replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
hostname: kafka-broker
containers:
- image: bitnami/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper-service:2181"
- name: KAFKA_LISTENERS
value: PLAINTEXT://localhost:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://localhost:9092
# Creates a topic with one partition and one replica.
- name: KAFKA_CREATE_TOPICS
value: "bomc:1:1"
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
</code></pre>
<p>Both services and deployments are created and running:
<a href="https://i.stack.imgur.com/Zh5Rl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zh5Rl.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/RekR0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RekR0.png" alt="enter image description here" /></a></p>
<p>And I have an ingress for these services:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /health
pathType: Prefix
backend:
service:
name: health-app-service
port:
number: 80
- path: /actuator
pathType: Prefix
backend:
service:
name: health-app-service
port:
number: 80
- path: /jsonrpc
pathType: Prefix
backend:
service:
name: core-service
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # the name of your Kafka Service
            port:
              number: 9092 # the port used for Kafka
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # the name of your Kafka Service
            port:
              number: 30092 # the port used for Kafka
- path: /
pathType: Prefix
backend:
service:
            name: kafka-service # the name of your Kafka Service
            port:
              name: kafka-port # the name of the port used for Kafka
- path: /
pathType: Prefix
backend:
service:
name: zookeeper-service
port:
name: zookeeper-port
</code></pre>
<p><a href="https://i.stack.imgur.com/KZVw1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZVw1.png" alt="enter image description here" /></a></p>
<p>But when I try to connect to this kafka using the Offset Explorer tool, I get a connection error.</p>
<p><a href="https://i.stack.imgur.com/KK2zV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KK2zV.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/1y7bk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1y7bk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/D73lC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D73lC.png" alt="enter image description here" /></a></p>
<p>When I use localhost:30092 as the bootstrap server, I get an error with these logs:</p>
<pre><code> 12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8
12/ΠΌΠ°Ρ/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1
12/ΠΌΠ°Ρ/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings
12/ΠΌΠ°Ρ/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings
12/ΠΌΠ°Ρ/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings
12/ΠΌΠ°Ρ/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [localhost:30092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
12/ΠΌΠ°Ρ/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null).
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication:
12/ΠΌΠ°Ρ/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication:
12/ΠΌΠ°Ρ/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
12/ΠΌΠ°Ρ/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:
12/ΠΌΠ°Ρ/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config.
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018
12/ΠΌΠ°Ρ/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204
12/ΠΌΠ°Ρ/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized
12/ΠΌΠ°Ρ/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now.
12/ΠΌΠ°Ρ/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1
12/ΠΌΠ°Ρ/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
12/ΠΌΠ°Ρ/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
12/ΠΌΠ°Ρ/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
12/ΠΌΠ°Ρ/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
12/ΠΌΠ°Ρ/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
12/ΠΌΠ°Ρ/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
12/ΠΌΠ°Ρ/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196)
at java.lang.Thread.run(Thread.java:748)
12/ΠΌΠ°Ρ/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected.
12/ΠΌΠ°Ρ/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
12/ΠΌΠ°Ρ/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
12/ΠΌΠ°Ρ/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
| <p>You configured the NodePort of <code>30092</code> for your kafka broker so to be able to connect to it you should use <code><node-ip>:30092</code> as your bootstrap server address.</p>
<p>If you're running minikube, you can find the node ip with the <code>minikube ip</code> command.<br />
(Although you mentioned that the connection to your zookeeper at <code>localhost:30181</code> was successful so you might as well try <code>localhost:30092</code> for your kafka.)</p>
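<p>A quick sketch of how to find that address (assuming a local minikube or a plain multi-node cluster):</p>
<pre><code># minikube
minikube ip

# any cluster: the INTERNAL-IP / EXTERNAL-IP columns show the node addresses
kubectl get nodes -o wide

# then point Offset Explorer at <node-ip>:30092
</code></pre>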
<p>Also, you don't need to port forward via <code>kubectl</code> in this case since NodePort is exposed to the outside world anyway.</p>
<p>To debug the connection problems with Offset Explorer, enable debug logging in its settings, make an attempt to connect to your broker and check the tool's debug logs for the details of the error. The logs should be located in the same directory as the tool executable if you're running on Windows.</p>
|
<p>Alice deploys a chart from her computer to a k8s cluster.
Bob runs helm ls -a on that cluster and sees Alice's chart.</p>
<p>I am looking for any configmaps on my cluster that could hold chart info but can't find any.</p>
<p>Where is it then?</p>
| <p>When you apply a Helm release, Helm renders the templates into K8s resources like <strong>deployment</strong>, <strong>configmap</strong>, <strong>secret</strong> and <strong>statefulset</strong>, and those objects get applied and stored in the <strong>etcd</strong> datastore of <strong>K8s</strong>.</p>
<p>In addition, <strong>helm</strong> also creates a <strong>secret</strong> for each <strong>release</strong>, so if you run <code>kubectl -n <namespace> get secret</code> you will see those <strong>secrets</strong>.</p>
<p>Secret name will be something like : <code>sh.helm.release.v1.<release-name>.<revision></code></p>
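<p>For example (assuming a Helm 3 release; the payload stored in the secret is base64-encoded and gzip-compressed):</p>
<pre><code># list the Helm release secrets in a namespace
kubectl -n <namespace> get secrets --field-selector type=helm.sh/release.v1

# decode the stored release data of a specific revision
kubectl -n <namespace> get secret sh.helm.release.v1.<release-name>.v1 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d
</code></pre>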
|
<p>Can anyone explain in simple terms what the difference is between deploying Kafka through a Kubernetes operator (for example Strimzi) and a Kafka helm chart or manifest file?</p>
<p>Previously in my project we used a helm chart for kafka, but now we've received a requirement to move to the kafka strimzi-operator. I can't reach the people who decided this, and my colleagues also don't know the reason.</p>
<p>So please explain why the kafka strimzi operator is better (or maybe worse) than a kafka helm chart?</p>
| <p>Helm is like a package manager. It can install applications on your cluster, but it has only some basic logic for updates to its configuration or for version upgrades. You control it through the <code>helm</code> commands and call it when you need it. So it helps you with some tasks, but it is still up to you to run your Kafka cluster day-to-day.</p>
<p>Operators on the other hand are (usually) more sophisticated. They don't handle only the installation but also <em>day-2 operations</em>. They essentially try to encode the knowledge and the tasks a human operator running someting like a Kafka cluster would need and do into an application (= the operator). The operator runs all the time in your cluster, and constantly monitors the Kafka cluster to see what is happening in it, if some actions should be taken, and so on. For something like Kafka, the Strimzi operator for example incorporates the rolling update knowledge such as that the controller broker should be rolled last and partition replicas kept in-sync, it deals with upgrades which in Kafka usually consist of multiple rolling updates, handles certificate renewals, and much more.</p>
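<p>To make this concrete, with Strimzi you describe the desired cluster declaratively as a <code>Kafka</code> custom resource and the operator reconciles the running cluster towards it. A minimal sketch, roughly following the project's ephemeral (non-production) example:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral        # for real use, switch to persistent storage
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}        # manage topics via KafkaTopic resources
    userOperator: {}         # manage users via KafkaUser resources
</code></pre>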
<p>So an operator will normally do a lot more things for you than a Helm Chart as it operates the Kafka cluster for you. For stateful applications such as Kafka or for example databases, this can often make a huge difference. But it is usually also more opinionated as it does things the way it was programmed to which might be different from what you were used to. Helm Charts normally give you a lot of freedom to do things any way you want.</p>
<p><em>Note: Different operators have different features and levels of maturity. So they might or might not support different tasks.</em></p>
<p>If you google for it, you will find many different articles, videos, or conference talks about the <em>Kubernetes operator pattern</em> and compare it with Helm Charts which will explain the differences.</p>
<p><em>(Disclaimer: I'm one of the Strimzi project maintainers)</em></p>
|
<p>We would like to pack as many pods into each node in our cluster as possible, to decrease the number of nodes we have in some of our environments. I saw the <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/descheduler</a> HighNodeUtilization strategy, which seems to fit the bill for what we need. However, it seems the cluster needs to use the scoring strategy <strong>MostAllocated</strong> for this to work.</p>
<p>I believe that the kube-scheduler in EKS is inaccessible for configuration. How do I then configure the MostAllocated scoring strategy?</p>
<p>Better yet, how do I configure this automated packing of pods onto as few nodes as possible in a cluster without the use of the Descheduler?</p>
<p>I tried deploying the descheduler as-is, without the MostAllocated scoring strategy configured. It obviously did not provide the expected results.</p>
<p>Much of my digging online pointed to creating a custom scheduler, but I have found few/unclear resources on how to do so.</p>
| <p>EKS does not provide the ability to override the default scheduler configuration, which means that actually configuring the <code>default-scheduler</code> profile with the <code>MostAllocated</code> scoring strategy is not an option. However, you may run your own scheduler <em>alongside</em> the default scheduler, and <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">this one may be configured how you like</a>. Once you create a custom scheduler, you can override <em>that</em> scheduler's configuration with the <code>MostAllocated</code> scoring strategy and then instruct your workloads to use that scheduler.</p>
<p>In order to run multiple schedulers, you have to set up several Kubernetes Objects. These objects are documented in the guide linked above:</p>
<ul>
<li>ServiceAccount</li>
<li>ClusterRoleBinding x2</li>
<li>RoleBinding</li>
<li>ConfigMap</li>
<li>Deployment</li>
</ul>
<p>The deployment will use the standard <code>kube-scheduler</code> image provided by Google, <a href="https://www.youtube.com/watch?v=IYcL0Un1io0" rel="nofollow noreferrer">unless you'd like to create your own</a>. I wouldn't recommend it.</p>
<h3>Major Note: Ensure your version of the kube-scheduler is the same version as the control plane. This will not work otherwise.</h3>
<p>In addition, ensure that your version of the <code>kube-scheduler</code> is compatible with the version of the configuration objects that you use to configure the scheduler profile. <code>v1beta2</code> is safe for <code>v1.22.x</code> -> <code>v1.24.x</code> but only <code>v1beta3</code> or <code>v1</code> is safe for <code>v.1.25+</code>.</p>
<p>For example, here's a working version of a deployment manifest and config map that are used to create a custom scheduler compatible with <code>k8s</code> <code>v.1.22.x</code>. Note you'll still have to create the other objects for this to work:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-scheduler
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
name: custom-scheduler
template:
metadata:
labels:
component: scheduler
name: custom-scheduler
tier: control-plane
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --config=/etc/kubernetes/custom-scheduler/custom-scheduler-config.yaml
env: []
image: registry.k8s.io/kube-scheduler:v1.22.16
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
name: custom-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10259
scheme: HTTPS
volumeMounts:
- mountPath: /etc/kubernetes/custom-scheduler
name: custom-scheduler-config
serviceAccountName: custom-scheduler
volumes:
- configMap:
name: custom-scheduler-config
name: custom-scheduler-config
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
data:
custom-scheduler-config.yaml: |
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
leaderElection:
leaderElect: false
profiles:
- pluginConfig:
- args:
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: NodeResourcesFitArgs
scoringStrategy:
resources:
- name: cpu
weight: 1
- name: memory
weight: 1
type: MostAllocated
name: NodeResourcesFit
plugins:
score:
enabled:
- name: NodeResourcesFit
weight: 1
schedulerName: custom-scheduler
metadata:
name: custom-scheduler-config
namespace: kube-system
</code></pre>
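<p>Once the custom scheduler is running, workloads opt in to it via <code>spec.schedulerName</code>. A minimal sketch (the pod name and image are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: packed-pod
spec:
  schedulerName: custom-scheduler   # must match the profile name in the ConfigMap above
  containers:
  - name: app
    image: nginx:1.23
</code></pre>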
|
<p>I know docker has options like syslog <code>log-driver</code> and <code>log-opts</code> so that it can be used to send the logs to say, the UDP server.</p>
<p>Marathon is the docker orchestrator here and a config file has the below:</p>
<pre><code> {
"key": "log-driver",
"value": "syslog"
},
{
"key": "log-opt",
"value": "syslog-address=udp://some-udp-server:port"
},
</code></pre>
<p>The existing setup is such that certain downstream systems/entities take the information received on this UDP server to create visualisations on Grafana.</p>
<p>How do I achieve the same in a k8s manifest file that I'm deploying via helm3? Or is there a third-party application I need to use? Basically, I want to send the logs that come in the <code>kubectl logs -f <pod_name></code> command to this UDP server with minimal intrusion. I would only like to replace this part of the flow so that I don't have to disturb any of the downstream systems.</p>
| <p>As David suggested, there is no option to control the log target directly. However, since a log collector application was requested, I am writing this answer.</p>
<p>If your <strong>application</strong> is <strong>streaming</strong> the <strong>UDP</strong> logs, you can use the open-source <strong><a href="https://www.graylog.org/products/source-available/" rel="nofollow noreferrer">Graylog</a></strong>. It uses <strong>Mongo & Elasticsearch</strong> as backend databases. We have been using <strong>Graylog</strong> to collect logs from the application pods.</p>
<p>Now, regarding a <strong>log collector</strong> for <code>kubectl logs -f <POD></code>: you can push all these logs from the <strong>Worker Node</strong> file system using the <strong>fluentd</strong> collector. The <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#log-location-node" rel="nofollow noreferrer">log location</a> will be <code>/var/log/pods</code>.</p>
<p>You can use the <strong>Fluentd</strong> collector along with the <strong>Graylog</strong> Gelf UDP input</p>
<pre><code>Fluentd -> pushing over gelf UDP -> Graylog input saving to Elasticsearch
</code></pre>
<p>Here is the ref you can follow : <a href="https://docs.fluentd.org/how-to-guides/graylog2" rel="nofollow noreferrer">https://docs.fluentd.org/how-to-guides/graylog2</a></p>
<p>The guide above uses <strong>Graylog2</strong>; the newer <a href="https://www.graylog.org/products/source-available/" rel="nofollow noreferrer">Graylog3</a> version is now available as open source, and I would suggest checking that out.</p>
<p>You can refer my Github repo : <a href="https://github.com/harsh4870/OCI-public-logging-uma-agent" rel="nofollow noreferrer">https://github.com/harsh4870/OCI-public-logging-uma-agent</a></p>
<p>It will give you a better idea of how a deployment sets up log files on the <strong>Node's filesystem</strong> and how they then get processed by a <strong>collector</strong>; it does not use <strong>fluentd</strong>, but it is useful as a reference.</p>
<p>The Oracle <strong>OCI</strong> <strong>UMA</strong> agent does a similar job to the <strong>fluentd</strong> collector: parsing & pushing logs to the backend.</p>
|
<p>I am having a problem with my GitLab pipeline when trying to mount a docker volume inside a container. Before I explain the problem, first I will describe my whole setup, because I think that is very essential to understand the problem, because I think that this is the reason why I am having this problem.</p>
<h2>Setup</h2>
<p>Okay, so to start off, I have a kubernetes cluster. This cluster runs my <code>gitlab/gitlab-ee:15.8.0-ee.0</code> image. I installed a GitLab runner in this cluster as well, so that I am able to run pipelines of course. Then the last thing I installed is a docker instance, because I saw that you can mount the <code>docker.sock</code> from your host machine to the gitlab pipeline, but this is not recommended, because the entire cluster relies on that <code>docker.sock</code>, so I have another instance of docker running and I am mounting that <code>docker.sock</code> for pipelines only. These 3 deployments are used by me to run GitLab pipelines.</p>
<h2>The problem</h2>
<p>I am happy with the way everything is setup, but I think I am still missing some configuration, because the mounting of docker volumes are not working properly in pipelines. I have this script to test this, which contains this code:</p>
<pre><code>image: docker:20.10.16-dind
variables:
DOCKER_HOST: "tcp://docker-service:2375" # <-- Address to reach the docker instance from my cluster
DOCKER_COMPOSE_CMD: "docker-compose -f docker-compose-test.yml"
stages:
- test
test:
stage: test
script:
- $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
- $DOCKER_COMPOSE_CMD build
- $DOCKER_COMPOSE_CMD --env-file .env.pipeline up -d
- $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test sh -c "ls"
</code></pre>
<p>With the following docker-compose-test.yml:</p>
<pre><code>version: '3.7'
services:
laravel-api-test:
build:
context: .
dockerfile: docker/development/Dockerfile
volumes:
- .:/var/www/html
environment:
- COMPOSER_MEMORY_LIMIT=-1
depends_on:
- database-test
database-test:
image: postgres:15.1-alpine
ports:
- ${DB_PORT}:5432
environment:
POSTGRES_DB: ${DB_DATABASE}
POSTGRES_PASSWORD: ${DB_PASSWORD_SECRET}
POSTGRES_USER: ${DB_USERNAME_SECRET}
redis-test:
image: redis:7.0.8
ports:
- ${REDIS_PORT}:6379
networks:
default:
name: application
</code></pre>
<p>Now what this pipeline does, it builds the docker containers and then starts them. Then it runs the <code>ls</code> command which prints out all the files in the working-dir of the container. However, this working-dir is empty. This is caused by the volume mount in the <code>docker-compose-test.yml</code> with this line:</p>
<pre><code>volumes:
- .:/var/www/html
</code></pre>
<p>In the Dockerfile I also have this:</p>
<pre><code>COPY . /var/www/html/
</code></pre>
<p>So when I remove the volume mount in <code>docker-compose-test.yml</code>, all files are there, so the copying does work for the <code>Dockerfile</code>, but not mounting it later on. I saw <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41227" rel="nofollow noreferrer">this</a> thread and tried some of their solutions and tested it with their test script:</p>
<pre><code>variables:
SHARED_PATH: /builds/shared/$CI_PROJECT_PATH
script:
- mkdir -p ${SHARED_PATH}
- touch ${SHARED_PATH}/test_file
- docker run -v ${SHARED_PATH}:/mnt ubuntu ls /mnt
</code></pre>
<p>But this still resulted in an empty <code>/mnt</code> dir, while the <code>test_file</code> should have been there. In the GitLab runner I added this line to the config:</p>
<pre><code>volumes = ["/cache", "/builds:/builds"]
</code></pre>
<p>Unfortunately, this did not change anything. I am not sure, but my guess is that I need to access the /builds from my other docker instance, because I have a feeling that I am mounting the /builds from the host machine, which is not the docker I am using in my pipeline. If this is the case, I am not sure how to configure my Kubernetes cluster to use the other one. The weird thing is that when
I do <code>cd /builds/projects/laravel-api</code> (my repo is named <code>laravel-api</code> and its inside the <code>projects</code> group) and then <code>ls</code> in my pipeline, I do see my repository containing all the files. But when I try to mount that directory in my docker-compose-test.yml I still get an empty dir. So I mean this:</p>
<pre><code>volumes:
- /builds/projects/laravel-api:/var/www/html
</code></pre>
<p>So every way of mounting volumes after builds are resulting in empty directories...</p>
<h2>Wrap up</h2>
<p>So to summarize the problem. Every form of mounting I do in my pipeline results eventually in an empty directory. When copying files from a Dockerfile only the directory does work, but that is not something I can work with.</p>
<p>I hope this covers the entire problem. Some help is really appreciated! If there are any questions about the setup or something like that, please ask I will respond ASAP!</p>
| <p>This is a tricky problem requiring in-depth explanation of GitLab and Kubernetes dynamics.</p>
<p><strong>Summary of your problem:</strong></p>
<blockquote>
<pre><code> volumes:
- .:/var/www/html
# or
# - /builds/projects/laravel-api:/var/www/html
</code></pre>
</blockquote>
<p>The Docker service you are using does not share the same filesystem than your job's Pod. When you instruct Docker to mount <code>/builds/projects/laravel-api</code> (or <code>.</code> which resolves to the same), Docker will mount this directory <em>from its own filesystem</em> which is indeed empty.</p>
<p><img src="https://i.stack.imgur.com/JBdvN.png" alt="" /></p>
<p>As you already pointed out, you must somehow share the <code>/builds</code> directory between job's Pod and Docker service.</p>
<h2>Solution 1: share Persistent Volume between Docker service and job's Pod</h2>
<p>Create a <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#persistentvolumeclaim-volume" rel="nofollow noreferrer">Persistent Volume Claim (PVC)</a> so that they share the <code>/builds</code> directory:</p>
<ul>
<li>Create a PVC such as:
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitlab-claim
spec:
accessModes:
- ReadWriteOnce
# or ReadWriteMany
resources:
requests:
storage: 8Gi
</code></pre>
</li>
<li>Configure GitLab Runner Kubernetes executor to mount the PVC at <code>/builds</code>. For example:
<pre class="lang-ini prettyprint-override"><code>[[runners]]
[runners.kubernetes]
[[runners.kubernetes.volumes.pvc]]
name = "gitlab-claim"
mount_path = "/builds"
</code></pre>
</li>
<li>Configure Docker deployment to mount the PVC at <code>/builds</code>. This depends on how you configured Docker service, but you'll probably have to configure a Container spec such as:
<pre class="lang-yaml prettyprint-override"><code>spec:
volumes:
- name: job-volume
persistentVolumeClaim:
claimName: gitlab-claim
containers:
- name: docker
image: docker:dind
# ...
volumeMounts:
- name: job-volume
mountPath: /builds
</code></pre>
</li>
</ul>
<p>Your setup will look something like this. Both pods will share the same Volume mounted at <code>/builds</code>.</p>
<p><img src="https://i.stack.imgur.com/4OLW4.png" alt="" /></p>
<p>Important note: choose carefully between <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer"><code>ReadWriteOnce</code> / <code>ReadWriteMany</code> access mode</a>:</p>
<ul>
<li>Use <code>ReadWriteMany</code> if your provider supports it, it will allow the same volume to be shared across multiple nodes.</li>
<li>If not, <code>ReadWriteOnce</code> will require your job's Pods to be running on the same node as Docker service as volume won't be shareable across Kubernetes nodes.</li>
</ul>
<h2>Solution 2: use GitLab service to run Docker-in-Docker (DinD)</h2>
<p>This setup is a bit different as you'll deploy a fresh Docker service every build. However data won't be persisted across jobs as Docker service will be recreated for each job.</p>
<p><a href="https://docs.gitlab.com/runner/executors/kubernetes.html#mount-volumes-on-service-containers" rel="nofollow noreferrer"><em>Volumes defined for the build container are also automatically mounted for all services containers.</em></a>. You can then share an <code>emptyDir</code> between job's pod and a Docker DinD service:</p>
<ul>
<li>Configure a <a href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html" rel="nofollow noreferrer">DinD service on your CI</a> such as:
<pre class="lang-yaml prettyprint-override"><code>image: docker:20.10.16-dind
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
services:
- docker:20.10.16-dind
</code></pre>
</li>
<li>Configure Kubernetes executor to mount an <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#mount-volumes-on-service-containers" rel="nofollow noreferrer"><code>emptyDir</code> at <code>/builds</code></a> which will be shared between job's and services containers:
<pre class="lang-ini prettyprint-override"><code>[runners.kubernetes]
[[runners.kubernetes.volumes.empty_dir]]
name = "builds-data"
mount_path = "/builds"
</code></pre>
</li>
</ul>
<p>In this case, you'll have a single Pod with both job's container and DinD service container, both sharing the <code>emptyDir</code> volume <code>/builds</code>. Your <code>laravel</code> and other containers will run <em>inside</em> the DinD service container.</p>
<p><img src="https://i.stack.imgur.com/lp94D.png" alt="enter image description here" /></p>
<h2>Which solution is better?</h2>
<p>Depends on context:</p>
<p>Efficiency-wise, Solution 1 is better as your Docker service will remain static and keep build caches, downloaded images, etc., allowing much faster Docker builds and deployments. However, your Docker service may be shared with multiple projects/entities, causing a security risk: anyone with access to the Docker service may also access containers not related to their project or scope and eventually retrieve sensitive data from within another project's container/volume/...</p>
<p>For example: consider project A and project B using Docker service to deploy testing apps, each configured to access AWS account A & B. Entity with access to Project A may easily grab credentials from a container running for Project B and access AWS account B to start a crypto miner on EC2.</p>
<p>It's possible to mitigate that risk by deploying Docker services dedicated per project or scope depending on your need, and/or securing via SSH/TLS to only allow certain entities to use it.</p>
<p>Solution 2 is much easier to set up and secure, but will be less efficient: a new CI Docker service will be started each build, so you'll have to download Docker images again every pipeline and any Docker build cache will be lost. You may optimize the CI config to download caches or such, but it requires a more complex setup (at which point you might as well go for Solution 1 if you're ready to add complexity).</p>
<hr />
<p>This is all a bit complex and plays with lots of various concepts. Do not hesitate to ask questions, I'll answer or edit as possible.</p>
|
<p>Kubernetes cluster is running on a node (Virtual Machine) which gets its network configuration from a DHCP server (this includes DNS settings).</p>
<p>From within a pod I need to be able to reach external resources identified by FQDN. The DNS on the K8 node (VM) is able to solve the FQDN to an IP address but how could I make it accessible to the pod? Also I need to support the scenario when the DNS settings change in the VM where K8 cluster is running.</p>
<p>The FQDN resources are not known upfront.</p>
| <p>You can configure Kubernetes to use your host's DNS resolver by setting the <code>dnsPolicy</code> for the pod, selecting either <code>Default</code> or <code>ClusterFirstWithHostNet</code>.</p>
<ul>
<li>With <code>Default</code>, the Pod inherits the name-resolution configuration from the node it runs on, i.e. it uses the node's (DHCP-provided) nameservers directly.</li>
<li><code>ClusterFirstWithHostNet</code> is intended for Pods running with <code>hostNetwork: true</code>: names inside the cluster domain are resolved by the cluster DNS first, and anything else is forwarded to the upstream nameservers inherited from the node.</li>
</ul>
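<p>A minimal sketch of the first option (pod name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dns-from-node
spec:
  dnsPolicy: Default        # inherit the node's resolv.conf (the DHCP-provided DNS settings)
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
</code></pre>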
<p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a>
<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/</a></p>
|
<p>I got a freeze of the Sklearn-classifier in MLRun (the job is still running after 5, 10, 20, ... minutes); see the log output:</p>
<pre><code>2023-02-21 13:50:15,853 [info] starting run training uid=e8e66defd91043dda62ae8b6795c74ea DB=http://mlrun-api:8080
2023-02-21 13:50:16,136 [info] Job is running in the background, pod: training-tgplm
</code></pre>
<p>See the freeze/pending issue in the Web UI:</p>
<p><a href="https://i.stack.imgur.com/6IhBB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IhBB.png" alt="enter image description here" /></a></p>
<p>I used this source code, and <code>classifier_fn.run(train_task, local=False)</code> generates the freeze:</p>
<pre><code># Import the Sklearn classifier function from the function hub
classifier_fn = mlrun.import_function('hub://sklearn-classifier')
# Prepare the parameters list for the training function
training_params = {"model_name": ['risk_xgboost'],
"model_pkg_class": ['sklearn.ensemble.GradientBoostingClassifier']}
# Define the training task, including the feature vector, label and hyperparams definitions
train_task = mlrun.new_task('training',
inputs={'dataset': transactions_fv.uri},
params={'label_column': 'n4_pd30'}
)
train_task.with_hyper_params(training_params, strategy='list', selector='max.accuracy')
# Specify the cluster image
classifier_fn.spec.image = 'mlrun/mlrun'
# Run training
classifier_fn.run(train_task, local=False)
</code></pre>
<p>Did you have and solve the same issue?</p>
| <p>I solved the same issue; the problem was a different MLRun version on the client side and the server side. I had MLRun on the client in version <strong>1.2.1rc2</strong> and on the server side in version <strong>1.2.1</strong> (these versions have different interfaces, which generates the freeze issue).</p>
<p><strong>Please sync the MLRun versions between client and server and it will work.</strong></p>
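<p>For example, on the client (assuming a pip-managed installation; pin to whatever version your server reports):</p>
<pre><code>pip show mlrun            # check the client version
pip install mlrun==1.2.1  # align it with the server version
</code></pre>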
<p>BTW: Your code seems to be based on this original sample: <a href="https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html" rel="nofollow noreferrer">https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html</a></p>
|
<p>I am running a load test against a kubernetes pod and I want to sample its CPU and memory usage every 5 minutes.
So far I have been manually using the Linux <code>top</code> command against the kubernetes pod.</p>
<p>Is there any way, given a <code>kubernetes pod</code>, to fetch the CPU/memory usage every X minutes and append it to a file?</p>
| <p>Try this one-liner:</p>
<pre class="lang-bash prettyprint-override"><code>while [ true ]; do echo $(date) $(date +%s) $(kubectl top -n your-namespace pod $(kubectl get pods -n your-namespace -l your-label-name=your-label-value -o jsonpath='{..metadata.name}') | tail -n 1) | tee -a /path/to/save/your/logs.txt; done
</code></pre>
<p>Add <code>sleep 300</code> to sample it every 5 minutes instead of continuously.</p>
<p>It will find a pod in namespace <code>your-namespace</code> with label <code>your-label-name</code> that has value <code>your-label-value</code>, take its name, and will take only the last one such pod, if you have multiple pods with the same label (that's what <code>| tail -n 1</code> for). This way you won't have to determine the name of a pod manually. Then it'll print something like this:</p>
<pre><code>Sun, Mar 12, 2023 4:59:05 PM 1678640345 your-pod-name-5c64678fc6-rsldm 47m 657Mi
</code></pre>
<p>Where <code>1678640345</code> is the Unix timestamp in seconds written by <code>$(date +%s)</code>. The output will be printed to the console (stdout) and mirrored in the <code>/path/to/save/your/logs.txt</code> file.</p>
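<p>For completeness, a sketch of the same loop with the 5-minute interval added (substitute the label-based pod-name lookup from the one-liner above if you prefer selecting by label):</p>
<pre><code>while true; do
  echo "$(date) $(date +%s) $(kubectl top -n your-namespace pod your-pod-name | tail -n 1)" | tee -a /path/to/save/your/logs.txt
  sleep 300
done
</code></pre>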
|
<p>Can anyone explain in simple terms what the difference is between deploying Kafka through a Kubernetes operator (for example Strimzi) and a Kafka helm chart or manifest file?</p>
<p>Previously in my project we used a helm chart for kafka, but now we've received a requirement to move to the kafka strimzi-operator. I can't reach the people who decided this, and my colleagues also don't know the reason.</p>
<p>So please explain why the kafka strimzi operator is better (or maybe worse) than a kafka helm chart?</p>
| <p>Some generic thoughts on Helm charts vs. operators:</p>
<p>A Helm chart directly includes Kubernetes YAML files using the Go <a href="https://pkg.go.dev/text/template" rel="nofollow noreferrer"><code>text/template</code></a> language. An operator has actual code, frequently written in Go using the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Kubernetes SDK</a>, that creates the same objects.</p>
<p>If you're going to install some application using a Helm chart, it's possible to inspect the chart or use a tool like <code>helm template</code> to see what it's going to do. You do not have that level of control over an operator: you can assign it some very broad permissions to create and edit StatefulSets and Secrets and it will do...something. A Helm chart will visibly fail quickly if some configuration is wrong, but an operator can only report its state via the <code>status:</code> in its custom resources, so you can have limited visibility into what's going wrong if an operator isn't working.</p>
<p>As an implementer, if you're familiar with the Kubernetes YAML syntax already, it's a straightforward transition to turn it into a Helm chart. The template language is Turing-complete, though, and it's possible to write arbitrarily complex logic. Testing the templated logic becomes tricky. You also need to carefully manage whitespace and YAML layout concerns in the output of your templates. Once you've gotten up to this level of complexity, the Go native <code>testing</code> package with the support tools in packages like <a href="https://kubebuilder.io" rel="nofollow noreferrer">Kubebuilder</a> make testing an operator much easier.</p>
<p>Operators and controllers do have some additional capabilities. They run arbitrary code, can edit objects in the cluster (given the right RBAC permissions), can inspect external state, and keep running after the initial installation. It is straightforward to layer operators by having one operator create the resource that triggers another (as in standard Kubernetes where a Deployment creates ReplicaSets which create Pods). Helm's dependency system is a little more robustly defined, but runs into trouble when you do try to have nested dependencies.</p>
<p>If most of your environment is in Helm anyways, it might make sense to prefer Helm charts for everything. Tools like <a href="https://helmfile.readthedocs.org" rel="nofollow noreferrer">Helmfile</a> can make installing multiple Helm charts more straightforward. If you're not already invested in Helm and are using other tools, and you don't mind not being able to see what the operator is doing, then a controller will likely be simpler to use.</p>
<p>(In my day job, I maintain both Helm charts and custom operators. My application uses Kafka, but I do not maintain the Kafka installation. Our Helmfile-oriented developer setup installs Kafka using a Helm chart.)</p>
|
<p>I have a strange result from using nginx and IIS server together in single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page -
<a href="https://i.stack.imgur.com/lJ1IG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lJ1IG.png" alt="enter image description here" /></a></p>
<p>However when I try to go through the reverse proxy I see this partial result -
<a href="https://i.stack.imgur.com/zA6hc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zA6hc.png" alt="enter image description here" /></a></p>
<p>Here are the files:</p>
<p>nginx.conf:</p>
<pre><code>events {
worker_connections 4096; ## Default: 1024
}
http{
server {
listen 81;
#Using variable to prevent nginx from checking hostname at startup, which leads to a container failure / restart loop, due to nginx starting faster than IIS server.
set $target "http://127.0.0.1:80/";
location / {
proxy_pass $target;
}
}
}
</code></pre>
<p>deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
...
name: ...
spec:
replicas: 1
selector:
matchLabels:
pod: ...
template:
metadata:
labels:
pod: ...
name: ...
spec:
containers:
- image: claudiubelu/nginx:1.15-1-windows-amd64-1809
name: nginx-reverse-proxy
volumeMounts:
- mountPath: "C:/usr/share/nginx/conf"
name: nginx-conf
imagePullPolicy: Always
- image: some-repo/proprietary-server-including-iis
name: ...
imagePullPolicy: Always
nodeSelector:
kubernetes.io/os: windows
imagePullSecrets:
- name: secret1
volumes:
- name: nginx-conf
persistentVolumeClaim:
claimName: pvc-nginx
</code></pre>
<p>Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. New configs can be swapped in using <code>kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/</code>.</p>
<p>Busybox pod (used to access the PVC):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx-busybox-pod
namespace: default
spec:
containers:
- image: busybox
command:
- sleep
- "360000"
imagePullPolicy: Always
name: busybox
volumeMounts:
- name: nginx-conf
mountPath: "/mnt/nginx/conf"
restartPolicy: Always
volumes:
- name: nginx-conf
persistentVolumeClaim:
claimName: pvc-nginx
nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>And lastly the PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nginx
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
storageClassName: azurefile
</code></pre>
<p>Any ideas why?</p>
| <p>After some testing, here is a working nginx.conf -</p>
<pre><code>http{
server {
listen 81;
set $target "http://127.0.0.1:80";
location / {
proxy_pass $target;
proxy_set_header Host $host;
}
}
}
</code></pre>
<ul>
<li>New directive - <code>proxy_set_header Host $host;</code></li>
<li>Trailing slash removed from the <code>target</code> variable used by the proxy_pass directive.</li>
<li>(Specific to my application) Other endpoints on the server are better reachable using <code>$host:$server_port</code> in place of <code>$host</code>. This is caused by the app server redirecting incoming requests to different URIs, losing the proxy's port (81) in the process.</li>
</ul>
|
<p>As an IT contractor, I was tasked with fixing an issue in a client's software which made simple use of a third-party library to encrypt or decrypt byte strings. For reasons relating to AWS temporary access tokens, the library required occasional reinitialisation for a fresh access token to be acquired (via AWS EKS) and used.</p>
<p>I came up with a simple solution in which initialization and use of this library was relegated to a child process forked for the purpose, with input and output strings passed each way in hex via a couple of pipes. Then to reinitialize the library the parent could simply kill the child process and fork a new one.</p>
<p>Seems pretty standard stuff, used everywhere in Unix. But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. Is he correct, or is he (as I suspect) being an over-cautious pearl clutcher?</p>
<p>If he is right then what kind of problems can arise, and how can these be avoided?</p>
| <blockquote>
<p>But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things.</p>
</blockquote>
<p>There is nothing special about kubernetes with respect to child processes and access rights. It's just Unix processes: a child runs with the same credentials as the parent and can access the same files and other resources as the parent.</p>
<p>The process monitoring question is worth exploring in a little more detail. Typically, we say that containers in general -- not just in Kubernetes, but in docker/podman/etc as well -- should have a single entrypoint. In other words, you don't want to create a single container running multiple services, like a webserver and a database. This is because in a multi-entrypoint container, the failure of a service is hidden from the container management tools, so the container runtime can't destroy and re-create the container in response to the service failure.</p>
<p>As long as your application is able to respond properly to the child process dying unexpectedly -- both by calling <code>wait()</code> on it to clean up the process entry and properly respawning it when necessary -- you're in good shape.</p>
|
<p>Is there any easy command line option to export my entire ETCD database to json file but also decode the keys and values automatically from base64?</p>
<p>What I have managed so far is this (example shows one key/value):</p>
<pre><code> ./etcdctl get "" --prefix -w json | jq -r ".[] | .[] "
{
"key": "YnktZGV2L21ldGEvc25hcHNob3RzL3Jvb3QtY29vcmQvcGFydGl0aW9ucy80NDAwNDc0MjQ2MTgzNjUxNzAvNDQwMDQ3NDI0NjE4MzY1MTcxX3RzNDQwMDQ5NDg5ODkxODE5NTI0",
"create_revision": 44536,
"mod_revision": 44536,
"version": 1,
"value": "CPOB0OXRmdeNBhIIX2RlZmF1bHQYhIDgxN/V140GIPKB0OXRmdeNBg=="
}
</code></pre>
<p>But I need to decode the entire database keys and values to human readable format?</p>
<p>Thanks</p>
<p>P.S.
Final solution after @Jeff Mercado help:</p>
<pre><code>1. /etcdctl get "" --prefix -w json | jq '.[]' > etcd_filter.txt
2. Clear output to form array of objects [{},{} ...{}]
3. cat etcd_filter.txt | jq '.[] | (.key, .value) |= @base64d'
</code></pre>
<p><a href="https://jqplay.org/s/rglpDglWHNB" rel="nofollow noreferrer">jq playground</a></p>
| <p>If the encoded data is a string and not binary data, you can decode it to a UTF-8 string using the <code>@base64d</code> filter. This should be available in jq 1.6.</p>
<pre><code>$ ./etcdctl ... | jq '.[][] | (.key, .value) |= @base64d'
{
"key": "by-dev/meta/snapshots/root-coord/partitions/440047424618365170/440047424618365171_ts440049489891819524",
"create_revision": 44536,
"mod_revision": 44536,
"version": 1,
"value": "\bοΏ½οΏ½οΏ½ΡΧ\u0006\u0012\b_default\u0018οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½Χ\u0006 οΏ½οΏ½οΏ½ΡΧ\u0006"
}
</code></pre>
<p>It appears the value is not a UTF-8 string in your example so beware. Unfortunately, it doesn't return a byte array so it may not be very useful for these cases.</p>
<p><a href="https://jqplay.org/s/HV4PtqYSnPi" rel="nofollow noreferrer">jqplay</a></p>
|
<p><code>error: Couldn't get available api versions from server: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }</code></p>
| <p>I had to set output format to <code>json</code>.</p>
<pre><code>>kubectl get nodes
Unable to connect to the server: getting credentials: decoding stdout: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
>aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [eu-north-1]:
Default output format [text]: json
>kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-20-202.eu-north-1.compute.internal Ready <none> 370d v1.21.5-eks-9017834
</code></pre>
|
<p>I have a set of pods belonging to different deployments. All are configured to run as single replicas. Further, I have 2 nodes in my cluster. Now, when I try to schedule my pods, all pods get deployed to the same node. It is very rare that I see my pods going to the other node.</p>
<p>Because of this, one node is always under memory pressure with utilization near 90%, while the other node's utilization is near 30%. As a result, if my pods try to consume more than 80% of their limits, they are killed by k8s saying the node does not have enough resources.</p>
<p>How can I spread my pods equally across the nodes, or what could possibly be wrong with my cluster? I have read through topology spread constraints, but they only talk about spreading pods belonging to one deployment.</p>
| <p>You are right, <strong>topology spread constraints</strong> are good for a single deployment. There could be many reasons behind that behavior of Kubernetes.</p>
<p>One could be that you have set <strong>resource</strong> <strong>requests</strong> & <strong>limits</strong> that K8s thinks are fine to run together on a <strong>single node</strong>, so it schedules both pods on the <strong>same</strong> node. Or you may not have set anything at all, which could be another reason.</p>
<p>Try increasing the <strong>requests & limits</strong> of the Deployments and you will see the scheduling change.</p>
<p>Meanwhile, you can also use <strong>affinity</strong> <strong>(node/pod affinity)</strong>; <strong>taints</strong> & <strong>tolerations</strong> are also a good option to separate out the pods onto different available <strong>nodes</strong>. Affinity will work across deployments as well.</p>
<p>Ref : <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- zone-east1
containers:
- name: with-node-affinity
image: registry.k8s.io/pause:2.0
</code></pre>
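<p>Since the goal here is to spread pods from <em>different</em> deployments, here is a hedged sketch of the pod-anti-affinity variant. It assumes you add a shared label such as <code>tier: web</code> (a made-up name) to every deployment's pod template, so pods carrying that label prefer nodes that don't already run one:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
        tier: web              # shared label across all deployments you want spread
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    tier: web
                topologyKey: kubernetes.io/hostname
      containers:
        - name: web
          image: nginx:1.25
</code></pre>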
|
<p>I have setup a backend and frontend service running on Kubernetes. Frontend would be <code>www.<myDomain>.com</code> and backend would be <code>api.<myDomain>.com</code></p>
<p>I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <code><myDomain>.com</code> should cover both <code>www.</code> and <code>api.</code>.</p>
<p>Pretty normal use case, right? But when all these normal pieces come together, I couldn't figure out the combined YAML. I was able to get a single service, <code>www.<myDomain>.com</code>, working with HTTPS. Things stopped working when I tried to add <code>api.<myDomain>.com</code>.</p>
<p>I'm using GKE, but this doesn't seem to be a platform-related question. Now creating the ingress takes forever, and the following event is retried again and again:</p>
<pre><code>Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.allow-http: "true"
cert-manager.io/issuer: letsencrypt-staging
spec:
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
rules:
- host: "www.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: angular-service
port:
number: 80
- host: "api.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: spring-boot-service
port:
number: 8080
</code></pre>
| <p>@Jun's answer worked mostly for me, but the <code>secretName</code> values have to be different. Otherwise, you'll get this error:</p>
<blockquote>
<p>Warning BadConfig 12m cert-manager-ingress-shim spec.tls[0].secretName: Invalid value: "api-ingress-cert": this secret name must only appear in a single TLS entry but is also used in spec.tls[1].secretName</p>
</blockquote>
<p>After fixing the <code>secretName</code> values, cert-manager generated everything as expected.</p>
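<p>For illustration, a hedged sketch of what the <code>tls</code> section could look like with two entries (the second secret name, <code>api-ssl</code>, is a placeholder I made up):</p>
<pre><code>spec:
  tls:
    - secretName: web-ssl        # certificate secret for the www host
      hosts:
        - www.<myDomain>.com
    - secretName: api-ssl        # must be a different secret name
      hosts:
        - api.<myDomain>.com
</code></pre>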
|
<p>I followed the <a href="https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/" rel="nofollow noreferrer">official instructions</a> for installing Knative Serving on a self-built k8s cluster, but when running the second line</p>
<pre><code>kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml
</code></pre>
<p>I got</p>
<pre><code>unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2"
unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2"
</code></pre>
<p>I searched for similar errors but found little that helped.</p>
<p>My k8s cluster is built on two virtualbox VMs, one as the master node and one as the worker node. Both with:</p>
<ul>
<li>ubuntu 22.04</li>
<li>docker version 20.10</li>
<li>k8s version 1.21.14</li>
</ul>
<p>Here are the custom resources I got after running the first line.</p>
<pre><code>$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created
customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created
customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created
$ kubectl get crd
NAME CREATED AT
certificates.networking.internal.knative.dev 2023-03-03T01:09:49Z
clusterdomainclaims.networking.internal.knative.dev 2023-03-03T01:09:49Z
configurations.serving.knative.dev 2023-03-03T01:09:49Z
domainmappings.serving.knative.dev 2023-03-03T01:09:49Z
images.caching.internal.knative.dev 2023-03-03T01:09:50Z
ingresses.networking.internal.knative.dev 2023-03-03T01:09:49Z
metrics.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z
podautoscalers.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z
revisions.serving.knative.dev 2023-03-03T01:09:50Z
routes.serving.knative.dev 2023-03-03T01:09:50Z
serverlessservices.networking.internal.knative.dev 2023-03-03T01:09:50Z
services.serving.knative.dev 2023-03-03T01:09:50Z
</code></pre>
<p>All I have done are these two lines of commands</p>
<pre><code>$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml
$ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml
</code></pre>
<p>I am new to k8s custom resources and knative. Thanks in advance for any help.</p>
| <p>I believe Knative 1.9 requires Kubernetes 1.23 or newer. One of the changes in the last year or so was the removal of the <code>v2beta2</code> autoscaling API and its replacement with the <code>v2</code> API. I believe the new API is present in Kubernetes 1.22, but Knative 1.9 was tested against the community-supported versions as of January, which include 1.26, 1.25, 1.24 and possibly 1.23.</p>
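<p>You can check what your cluster offers before applying the manifest. On Kubernetes 1.21 the output below should not include <code>autoscaling/v2</code>, which matches the error:</p>
<pre><code>kubectl version --short
kubectl api-versions | grep autoscaling
</code></pre>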
|
<p>I created the following <code>configMap</code> for my NGINX ingress controller:</p>
<pre><code>apiVersion: v1
data:
allow-snippet-annotations: "true"
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"
modsecurity-snippet: |-
SecRuleEngine On
SecRequestBodyAccess On
SecAuditLog /dev/stdout
SecAuditLogFormat JSON
SecAuditEngine RelevantOnly
SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'"
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: ingress-basic
creationTimestamp: "2023-01-20T11:31:53Z"
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.5.1
helm.sh/chart: ingress-nginx-4.4.2
name: nginx-ingress-ingress-nginx-controller
namespace: ingress-basic
resourceVersion: "200257665"
uid: e6ab9121-9a73-47e3-83ec-6c1fa19072ee
</code></pre>
<p>I would expect that following SecRule</p>
<pre><code>SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'"
</code></pre>
<p>would block any request containing the word <code>attack</code> in the URI or in the query string, for example in:</p>
<p><a href="https://secrule.sample.com/api?task=attack" rel="nofollow noreferrer">https://secrule.sample.com/api?task=attack</a></p>
<p>But it doesn't. There is clearly something missing in the definition of the configMap of my NGINX ingress controller, but I don't understand what. Any clue? Thanks!</p>
<p>I'd like to use ModSecurity with an NGINX Ingress Controller to block incoming calls that contain a given word in the querystring.</p>
| <p>I solved the issue by escaping the single and double quotes of the SecRule in the configMap as follows:</p>
<pre><code>SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"
</code></pre>
|
<p>I'm getting the error below when I try to run minikube after downloading its binary:</p>
<pre><code>β Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.22.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
stderr:
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0304 05:40:42.096000 3744 certs.go:489] WARNING: could not validate bounds for certificate apiserver-kubelet-client: the certificate has expired: NotBefore: 2020-06-29 07:35:45 +0000 UTC, NotAfter: 2022-12-10 12:46:24 +0000 UTC
error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate has expired or is not yet valid: current time 2023-03-04T05:40:42Z is after 2022-12-10T12:46:24Z
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>I'm following its official documentation <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">here</a>.</p>
<p>It gives a warning that the kubelet service is not enabled and suggests running 'systemctl enable kubelet.service'. I tried the commands below, but I have no idea how to run minikube on macOS:</p>
<pre><code>(base) -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~ Β» systemctl enable kubelet.service 80 β΅ vinod827@Vinods-MacBook-Pro
zsh: command not found: systemctl
(base) -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
~ Β» launchctl enable kubelet.service 127 β΅ vinod827@Vinods-MacBook-Pro
Unrecognized target specifier.
Usage: launchctl enable <service-target>
<service-target> takes a form of <domain-target>/<service-id>.
Please refer to `man launchctl` for explanation of the <domain-target> specifiers.
(base)
</code></pre>
<p>Any idea what could be the problem here?</p>
| <p>Executing <code>minikube delete</code> followed by <code>minikube start</code> solved the problem for me.</p>
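<p>For reference (the comments are my reading of why this helps, given the expired-certificate error above):</p>
<pre><code>minikube delete   # removes the old cluster together with its expired certificates
minikube start    # creates a fresh cluster with newly generated certificates
</code></pre>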
|
<p>For a GKE cluster configured with Autopilot, does it make sense to also enable autoscaling?</p>
<p>In the document <a href="https://cloud.google.com/kubernetes-engine/docs/resources/autopilot-standard-feature-comparison" rel="nofollow noreferrer">Compare GKE Autopilot and Standard</a>, it says the autoscalers are optional.</p>
<p>Also <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#overview" rel="nofollow noreferrer">node auto-provisioning</a> says:</p>
<blockquote>
<p>With Autopilot clusters, you don't need to manually provision nodes or manage node pools because node pools are automatically provisioned through node auto-provisioning. With node auto-provisioning, nodes are automatically scaled to meet the requirements of your workloads.</p>
</blockquote>
<p>EDIT: I am confused between the concepts of autoscaling and auto node provisioning.</p>
| <blockquote>
<p>Is it necessary to enable Vertical or Horizontal Pod Autoscaling in
GKE Autopilot Clusters?</p>
</blockquote>
<p>Not necessary, but with Autopilot the point is that <strong>nodes</strong> scale without you having to worry about them, so you can just focus on HPA & VPA.</p>
<blockquote>
<p>For a GKE cluster configured with Autopilot, does it make sense to
also enable autoscaling?</p>
</blockquote>
<p>I think it would be <strong>beneficial</strong> to enable autoscaling such as <strong>HPA & VPA</strong> together with <strong>GKE Autopilot</strong> mode. You can also combine HPA with a max replica limit to control scaling.</p>
<p><strong>VPA</strong> is also useful for right-sizing <strong>pods</strong> down when traffic or resource consumption is low, which is a good factor in <strong>reducing</strong> <strong>cost</strong> as well.</p>
<blockquote>
<p>In the document Compare GKE Autopilot and Standard, it says the
autoscalers are optional.</p>
</blockquote>
<p>Yes, it's optional, but it would be good to start with <strong>HPA</strong> scaling so that any sudden traffic spike can be handled.</p>
<blockquote>
<p>Also node auto-provisioning says:</p>
</blockquote>
<p>Yes, with an Autopilot cluster you don't have to worry about the infra part such as node pool setup, sizing, node pool scaling etc. With <strong>Autopilot</strong>, you only worry about your application and its scaling with <strong>HPA & VPA</strong>.</p>
<p><strong>HPA</strong> scales the <strong>pod</strong> replicas according to your settings, while <strong>nodes</strong> are <strong>auto-scaled</strong> by Google and attached to your GKE cluster when required, without you having to set up the <strong>cluster (node) autoscaler</strong> yourself.</p>
<p>With <strong>GKE Standard</strong>, on the other hand, you have to manage <strong>node pool</strong> sizing and scaling yourself.</p>
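<p>As a starting point, here is a hedged sketch of an <code>autoscaling/v2</code> HPA. The target name <code>my-app</code> and the thresholds are placeholders; Autopilot then provisions nodes as these replicas need room:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
</code></pre>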
|
<p>So, here is my current setup.
My experience is mostly on OpenShift, but I'm trying to get familiar with Kubernetes... and I'm a bit of a noob in K8s :)</p>
<p>kubernetes + calico + external storage (nfs) + metallb + ingress-nginx</p>
<pre><code> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready control-plane 3d14h v1.26.2 192.168.50.15 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4
master02 Ready control-plane 2d15h v1.26.2 192.168.50.16 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4
worker-01 Ready worker 2d14h v1.26.2 192.168.50.105 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4
worker-02 Ready worker 2d13h v1.26.2 192.168.50.106 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4
</code></pre>
<p>kubectl get pods -n metallb-system -o wide</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
controller-79d5899cb-hg4lv 1/1 Running 0 23m 10.30.0.27 worker-02 <none> <none>
speaker-lvpbn 1/1 Running 0 21m 192.168.50.106 worker-02 <none> <none>
speaker-rxcvb 1/1 Running 0 21m 192.168.50.105 worker-01 <none> <none>
</code></pre>
<p>MetalLB has been configured with this IP pool:</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
namespace: metallb-system
name: lb-pool
spec:
addresses:
- 192.168.50.115-192.168.50.118
</code></pre>
<p>kubectl get all -n ingress-nginx</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-controller-c69664497-z84b8 1/1 Running 0 12h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.108.69.42 192.168.50.115 80:32481/TCP,443:32137/TCP,8443:30940/TCP 83m
service/ingress-nginx-controller-admission ClusterIP 10.97.240.138 <none> 443/TCP 12h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 12h
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-c69664497 1 1 1 12h
kubectl create deployment httpd24 --image=docker.io/library/httpd:2.4.55
kubectl expose deployment/httpd24 --port 80
</code></pre>
<p>Create the ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: httpd24-ingress
namespace: default
spec:
ingressClassName: nginx
rules:
- host: http24-kube.docker-containers.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: httpd24
port:
number: 80
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
httpd24-ingress nginx http24-kube.docker-containers.local 192.168.50.115 80 58m
</code></pre>
<p>So, from inside the cluster I can execute without errors: curl -s <a href="http://http24-kube.docker-containers.local" rel="nofollow noreferrer">http://http24-kube.docker-containers.local</a></p>
<p>However, from outside the cluster, pinging 192.158.50.115 or http24-kube.docker-containers.local times out.</p>
<p>The computer from which I try to connect to http24-kube.docker-containers.local has access to the subnet 192.168.50.0/25.</p>
<p>I think it may be related to nginx-ingress... but I have zero experience with it.</p>
<p>========</p>
<pre><code>C:\Users\AZ_fjonnas>nslookup http24-kube.docker-containers.local
Name: http24-kube.docker-containers.local
Address: 192.168.50.115
</code></pre>
<p>Now, from the one of the masternodes</p>
<pre><code>root@master01:~# curl -s 'http://http24-kube.docker-containers.local'
<html><body><h1>It works!</h1></body></html>
root@master01:~#
</code></pre>
<p>So, the cluster master nodes resolve the name and can access the httpd24 pod via the ingress IP.</p>
<p>But the Windows machine can't access the ingress IP (192.168.50.111) on port 80 at all.</p>
<p>That's why I think it's something related to how nginx-ingress works... :(</p>
<p>All nodes belong to the same subnet: 192.168.50.0/25.</p>
| <p>Damn,</p>
<p>Solved :)</p>
<pre><code>---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: lb-pool
namespace: metallb-system
spec:
ipAddressPools:
- lb-pool
</code></pre>
<p>It seems the L2Advertisement was missing, according to:
<a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md</a></p>
|
<p>While I'm reading how to test on k8s with the fake client on this <a href="https://itnext.io/testing-kubernetes-go-applications-f1f87502b6ef" rel="nofollow noreferrer">link</a>, I noticed this function, which, IIUC, assigns a faked server version to the faked k8s cluster.</p>
<pre><code>k8s.clientset.Discovery().(*discoveryfake.FakeDiscovery).FakedServerVersion = &version.Info{
Major: expectedMajor,
Minor: expectedMinor,
}
</code></pre>
<p>I'm trying to understand what this <code>*discoveryfake.FakeDiscovery</code> code means.
<code>k8s.clientset.Discovery()</code> returns <code>DiscoveryInterface</code> (<a href="https://pkg.go.dev/k8s.io/[email protected]/discovery#DiscoveryInterface" rel="nofollow noreferrer">defined here</a>), but I cannot find a <code>*discoveryfake.FakeDiscovery</code> field on it.</p>
<p>What does <code>Discovery().(*discoveryfake.FakeDiscovery)</code> mean?</p>
| <p>It's a <a href="https://go.dev/tour/methods/15" rel="nofollow noreferrer">type assertion</a>, as @peter suggested in the comment.</p>
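<p>For illustration, here is a minimal sketch of the same pattern in a test (the package paths follow the usual client-go fake packages; the version values are arbitrary). The assertion converts the <code>DiscoveryInterface</code> value back to its concrete fake type so the test can reach the <code>FakedServerVersion</code> field:</p>
<pre><code>package example

import (
	"testing"

	"k8s.io/apimachinery/pkg/version"
	discoveryfake "k8s.io/client-go/discovery/fake"
	"k8s.io/client-go/kubernetes/fake"
)

func TestFakedServerVersion(t *testing.T) {
	clientset := fake.NewSimpleClientset()

	// Discovery() is typed as discovery.DiscoveryInterface; the concrete value
	// behind it is *discoveryfake.FakeDiscovery. The comma-ok form avoids a
	// panic if the concrete type is ever something else.
	fakeDiscovery, ok := clientset.Discovery().(*discoveryfake.FakeDiscovery)
	if !ok {
		t.Fatal("expected *discoveryfake.FakeDiscovery")
	}
	fakeDiscovery.FakedServerVersion = &version.Info{Major: "1", Minor: "26"}

	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		t.Fatal(err)
	}
	if v.Major != "1" || v.Minor != "26" {
		t.Fatalf("unexpected version: %+v", v)
	}
}
</code></pre>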
|
<p>We want our deployment to have 3/6/9/etc replicas at all times to have an even AZ spread.
Is there a way to achieve this via the HPA config?</p>
| <p>You can use a scaling policy with the <strong>HPA</strong>.</p>
<p><strong>Example</strong></p>
<pre><code>scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Pods
value: 3
periodSeconds: 5
selectPolicy: Max
</code></pre>
<p>It will add <strong>3</strong> pods every <strong>5</strong> seconds until the HPA metrics become steady.</p>
<p>Ref doc : <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md</a></p>
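<p>For context, a hedged sketch of where that <code>scaleUp</code> block sits in a full <code>autoscaling/v2</code> HPA spec; the target name and the CPU threshold are placeholders:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 9
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 3
          periodSeconds: 5
      selectPolicy: Max
</code></pre>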
|
<p>I am getting the following error when installing a custom Docker image as a revision. I am not able to figure out the reason. So far I have tried adding secrets with the password, and it didn't work.</p>
<pre><code>helm upgrade --install airflow apache-airflow/airflow -n airflow -f values.yaml --debug
history.go:56: [debug] getting history for release airflow
upgrade.go:144: [debug] preparing upgrade for airflow
Error: UPGRADE FAILED: execution error at (airflow/charts/postgresql/templates/secrets.yaml:20:15):
PASSWORDS ERROR: The secret "airflow-postgresql" does not contain the key "password"
</code></pre>
| <p>Try this:</p>
<pre><code>kubectl create secret generic airflow-postgresql -n airflow --from-literal=password='postgres' --dry-run=client -o yaml | kubectl apply -f -
</code></pre>
<p>I think the error occurs because helm created the airflow-postgresql secret using "postgres-password" as the key, while the helm upgrade expects "password" as the key.</p>
<p>Or:</p>
<p><a href="https://airflow.apache.org/docs/helm-chart/stable/release_notes.html#airflow-helm-chart-1-8-0-2023-02-06" rel="nofollow noreferrer">https://airflow.apache.org/docs/helm-chart/stable/release_notes.html#airflow-helm-chart-1-8-0-2023-02-06</a></p>
<p>Airflow Helm Chart 1.8.0 (2023-02-06), significant changes:
the bitnami/postgresql subchart was updated to 12.1.9 (#29071); the version of PostgreSQL installed is still version 11.</p>
<p>If you are upgrading an existing helm release with the built-in postgres database, you will either need to delete your release and reinstall fresh, or manually delete these 2 objects:</p>
<pre><code>kubectl delete secret {RELEASE_NAME}-postgresql
kubectl delete statefulset {RELEASE_NAME}-postgresql
</code></pre>
<p>As a reminder, it is recommended to set up an external database in production.</p>
<p>This version of the chart uses different variable names for setting usernames and passwords in the postgres database.</p>
<p>postgresql.auth.enablePostgresUser is used to determine if the βpostgresβ admin account will be created.</p>
<p>postgresql.auth.postgresPassword sets the password for the βpostgresβ user.</p>
<p>postgresql.auth.username and postgresql.auth.password are used to set credentials for a non-admin account if desired.</p>
<p>postgresql.postgresqlUsername and postgresql.postgresqlPassword, which were used in the previous version of the chart, are no longer used.</p>
<p>Users will need to make those changes in their values files if they are changing the Postgres configuration.</p>
<p>Previously the subchart version was 10.5.3.</p>
|
<p>Based on the <a href="https://github.com/benc-uk/kubeview" rel="nofollow noreferrer"><strong>KubeView</strong> README</a>, I tried to run <strong>KubeView</strong> using the container provided <a href="https://github.com/users/benc-uk/packages/container/package/kubeview" rel="nofollow noreferrer">here</a>.</p>
<p>I run:</p>
<pre><code>$ docker run --publish-all --name kubeview ghcr.io/benc-uk/kubeview:0.1.31
</code></pre>
<p>I get the following output:</p>
<pre><code>2023/03/12 18:06:45 ### Kubeview v0.1.31 starting...
2023/03/12 18:06:45 ### Connecting to Kubernetes...
2023/03/12 18:06:45 ### Creating client with config file: /.kube/config
panic: stat /.kube/config: no such file or directory
goroutine 1 [running]:
main.main()
/build/cmd/server/main.go:60 +0x6a5
</code></pre>
<p>I can see that the problem is that the tool is looking for the kubeconfig file in <code>/.kube/config</code>. It can't find it because mine is in my home directory, <code>~/.kube/config/</code></p>
<p>I tried to pass an environment variable like this:</p>
<pre><code>$ docker run --publish-all --name kubeview -e KUBECONFIG=/Users/<MY_USERNAME>/.kube/config ghcr.io/benc-uk/kubeview:latest
</code></pre>
<p>It didn't work. Has anyone been able to run <code>KubeView</code> as a container? I'm on a Mac.</p>
| <p>You can <strong>mount</strong> your local <strong>kubeconfig</strong> file into the <strong>Docker</strong> container you are trying to run; I'm not sure the <strong>env variable</strong> approach you followed is supported by this image.</p>
<p>Try something like :</p>
<pre><code>docker run --publish-all --name kubeview -v ./config:/.kube/config ghcr.io/benc-uk/kubeview:0.1.31
</code></pre>
<p><strong>./config</strong> - the path of your <code>~/.kube/config</code> on your <strong>local system</strong></p>
<p><strong>/.kube/config</strong> - the path where your local file is mounted inside the container, so when the container runs your local file is available at that path</p>
<p>I tried it with <strong>fake</strong> values, which it was not able to parse, but it did pick up the file:</p>
<pre><code>2023/03/12 19:28:04 ### Kubeview v0.0.0 starting...
2023/03/12 19:28:04 ### Connecting to Kubernetes...
2023/03/12 19:28:04 ### Creating client with config file: /.kube/config
panic: error loading config file "/.kube/config": couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
goroutine 1 [running]:
main.main()
/build/cmd/server/main.go:60 +0x6a5
</code></pre>
|
<p>I do have an existing application that used docker and docker compose so far. I want to operate this app in a Kubernetes cluster. Shouldn't be a big deal, right? But so far I failed because of the used secrets.</p>
<p>The application expects a secret to be present in a file at: <code>/run/secrets/webhook_secret</code>, where <code>webhook_secret</code> is the file containing the secret.</p>
<p>I created a secret with kubectl like this:</p>
<pre><code> kubectl create secret
generic webhook-secret \
--from-literal=webhook_secret=123 \
--namespace my-app
</code></pre>
<p>I tried to mount the secret in the manifest with...</p>
<pre><code> ...
volumeMounts:
- name: secrets
mountPath: "/run/secrets"
readOnly: true
volumes:
- name: secrets
secret:
secretName: webhook-secret
...
</code></pre>
<p>But then the pod is not able to start, as Kubernetes also tries to mount the same directory and so issues with run/secrets/kubernetes.io occur...</p>
<p><code>Warning Failed 3m1s (x4 over 3m37s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/e79f634a-2abe-4c47-ae50-c4beb5b66ae6/volumes/kubernetes.io~projected/kube-api-access-grl42" to rootfs at "/var/run/secrets/kubernetes.io/serviceaccount": mkdir /run/containerd/io.containerd.runtime.v2.task/k8s.io/my-app-container/rootfs/run/secrets/kubernetes.io: read-only file system: unknown</code></p>
<p>Any idea how I can provide the secret as required by the application?</p>
<p>Regards and thanks,
Thomas</p>
| <p>It might be perfectly possible to mount your own secrets into the <code>/var/run/secrets</code> or <code>/run/secrets</code> path, even in the scenario where Kubernetes is concurrently using that path for system-related tasks. This commonly happens when, for instance, you are running a cluster with 2+ nodes and Kubernetes starts projecting its system objects like secrets, tokens, certificates, etc. into <code>/var/run/secrets/kubernetes.io/serviceaccount/</code>.</p>
<p>I don't know the exact specifics of your case, but you might want to look into <code>subPath</code>. When your application expects secrets to be available in <code>/run/secrets</code>, you can do something like:</p>
<pre><code>...
containers:
- name: myContainer
image: "someImage:1.0.0"
volumeMounts:
- name: mySecretVolumeName
mountPath: "/run/secrets/webhook_secret"
readOnly: true
subPath: webhook-secret
volumes:
- name: mySecretVolumeName
secret:
secretName: secrets
items:
- key: webhook-secret
path: webhook-secret
...
</code></pre>
<p>People started bumping into the issue you're experiencing (if I understand you correctly) a few years ago: <a href="https://github.com/kubernetes/kubernetes/issues/65835" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/65835</a></p>
<p>Hope this helps you out!</p>
|
<p>I can resolve a DNS name on the internal kubernetes DNS with a value like: <a href="http://serviceNameExampleAbc.default.svc.cluster.local" rel="nofollow noreferrer">http://serviceNameExampleAbc.default.svc.cluster.local</a> where <code>serviceExampleAbc</code> relates to the IP of the service by name <code>serviceExampleAbc</code>...</p>
<p>However, how can I instead resolve this to the service IP only if the service has a specific label?</p>
<p>For example <code>http://serviceNameExampleAbc.servicesWithXyzLabelOnly.default.svc.cluster.local</code> would only resolve to <code>serviceNameExampleAbc</code>'s IP <strong>if</strong> it has a label defined for it like below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: serviceNameExampleAbc
labels:
xyz: abcxyzexmaple
...
</code></pre>
| <p>You can use K8s <strong><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">network policies</a></strong> to block traffic across namespaces or services.</p>
<p>With a network policy you can restrict traffic across namespaces and pods based on labels, selectors etc.</p>
<p>With a <strong>network policy</strong> the service name will still resolve to an <strong>IP</strong> in <strong>DNS</strong>, but the policy will prevent the connection to the <strong>end service</strong> from going through.</p>
<p><strong>Example</strong></p>
<p>Allow traffic from some PODs in another namespace</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: web-allow-all-ns-db
spec:
podSelector:
matchLabels:
app: web
ingress:
- from:
      - namespaceSelector: # namespaces with label team=operations
matchLabels:
team: operations
podSelector: #Chooses pods with app=db
matchLabels:
app: db
</code></pre>
<p>here is the list of policies you can refer : <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes</a></p>
|
<p>I have a k8s deployment (using minikube) for ActiveMQ Artemis. In it I'm exposing ports for both the broker (61616) and the console (8161), pointing the ports to a service. It is working fine,
and I also configured an ingress (via an ingress controller) for the console.</p>
<p>Service.yaml</p>
<pre><code>kind: Service
metadata:
name: artemis-service
spec:
type: ClusterIP
ports:
- port: 8161
name: http-console
protocol: TCP
targetPort: 8161
- port: 61616
name: netty-connector
protocol: TCP
targetPort: 61616
selector:
app: artemis
</code></pre>
<p>Ingress.yaml</p>
<pre><code>kind: Ingress
metadata:
name: broker-ingress
labels:
name: broker-ingress
spec:
ingressClassName: nginx
rules:
- host: artemis.broker.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: artemis-service
port:
number: 8161
</code></pre>
<p>I can access the activemq console when I hit <code>http://artemis.broker.com</code>
And now I want to expose the TCP port (61616) using NGINX Controller through which I can publish/consume messages to ActiveMQ queue using a TCP URL.</p>
<p>I found TransportServer in Nginx Controller to expose TCP.
<a href="https://docs.nginx.com/nginx-ingress-controller/configuration/transportserver-resource/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/transportserver-resource/</a>
Can someone help me configure this TransportServer?</p>
<p>PS - I'm new to kubernetes.</p>
| <p>The <strong>Ingress</strong> resource does not support <strong>TCP</strong> or <strong>UDP</strong> services, so the NGINX ingress controller exposes them through a ConfigMap instead.</p>
<p>You can follow this guide the setup & expose the TCP service : <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p>
<p>Here is the <strong>YAML</strong> config I have used (mine was for <strong>RabbitMQ</strong>; adapted here for your Artemis service):</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
61616: "default/artemis-service:61616"
</code></pre>
<p>Create the <strong>tcp-services</strong> ConfigMap in the <strong>ingress-nginx</strong> namespace, or wherever your controller is installed.</p>
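<p>Depending on how the controller was installed, you may also need to point it at that ConfigMap with the <code>--tcp-services-configmap</code> flag. A hedged sketch of the controller container args (the deployment name and existing flags in your install may differ):</p>
<pre><code>args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=ingress-nginx/tcp-services
  # ...keep the flags your installation already has
</code></pre>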
<p>Then add the port to the <strong>NGINX</strong> controller <strong>service</strong>:</p>
<pre><code> - name: proxied-tcp-61616
port: 61616
targetPort: 61616
protocol: TCP
</code></pre>
<p>You will then be able to connect to your broker from outside via <code>NginxIP:61616</code>.</p>
<p><strong>Note if you are very new to K8s</strong>:</p>
<p>If you are running your application inside of the same Kubernetes cluster you don't need to expose the Port, you can use the <strong>service name</strong> for internal communication & connection.</p>
<p>Use <strong>artemis-service:61616</strong>; Kubernetes will manage DNS resolution and the rest.</p>
|
<p>I would like to know, if there is an optimal approach for setting memory limits for Kubernetes containers, especially for applications running java.</p>
<p>For Java applications, we have to set a heap value in conjunction with Kubernetes resources. it's like we're kinda forced to guess at a memory limit for the Kubernetes microservice.</p>
<p>To be more clear,</p>
<ul>
<li>in java the heap memory can be limited to memory limits defined in the container, but how to arrive at a specific limit value ??</li>
<li>If we don't set up limits for the container, then the java heap considers the underlying node memory limits rather than at container limits, so it can extend the pod limit to max memory, which can stress the other pods running in that node.</li>
<li>If we don't set up enough memory limits at the container, then we can see containers getting killed with OOM errors.</li>
</ul>
<p>The possible solutions, I can think of is</p>
<ol>
<li>Monitoring the microservice for some period of time and based on the utilization, choosing the limits</li>
<li>Implementing some load testing mechanism and based on the observation setting the limits</li>
</ol>
<p>Other than the above, I would like to get some comments if there is any other approach followed by anyone in setting memory limits for Kubernetes containers.
Has anyone encountered this earlier!!</p>
| <p>Yes, I have encountered this issue multiple times. You definitely want to keep a memory limit on the Kubernetes containers to avoid noisy-neighbour problems. The possible solutions you have mentioned are right: monitoring and load testing are a must to arrive at the number.</p>
<p>Along with these, I used profiling of the Java processes to see how GC is triggered and whether memory usage stays flat or grows as the load increases. Profiling is also a very powerful tool for gaining insight into suboptimal usage of data structures.</p>
<p><strong>What to profile</strong></p>
<p>While doing the Java profiling, you need to check</p>
<ul>
<li>What's the Eden and old-gen usage</li>
<li>How often a full GC runs; memory utilisation rises and then drops after a full GC. See the <a href="https://dzone.com/articles/interesting-garbage-collection-patterns" rel="nofollow noreferrer">GC pattern</a></li>
<li>How many objects are getting created</li>
<li>CPU usage (it will increase during a full GC)</li>
</ul>
<p><strong>How to profile Java application</strong></p>
<p>Here are a few good resources</p>
<ul>
<li><a href="https://www.baeldung.com/java-profilers#:%7E:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections" rel="nofollow noreferrer">https://www.baeldung.com/java-profilers#:~:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections</a>.</li>
<li><a href="https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0" rel="nofollow noreferrer">https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0</a></li>
</ul>
<p><strong>How to Profile Kubernetes Application with Java</strong></p>
<ul>
<li><a href="https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852" rel="nofollow noreferrer">https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852</a></li>
<li><a href="https://www.youtube.com/watch?v=vHTWdkCUAoI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=vHTWdkCUAoI</a></li>
</ul>
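<p>As a starting point before monitoring and profiling refine the numbers, here is a hedged sketch of a container spec fragment (the names and values are placeholders) that ties the heap to the container limit instead of hard-coding <code>-Xmx</code>:</p>
<pre><code># Fragment of a pod/deployment spec. The memory limit caps the container,
# and -XX:MaxRAMPercentage (JDK 10+) sizes the heap relative to that limit,
# leaving headroom for metaspace, threads and off-heap memory.
containers:
  - name: my-java-service              # assumed name
    image: my-registry/my-java-service:1.0.0
    resources:
      requests:
        memory: "1Gi"
        cpu: "500m"
      limits:
        memory: "1Gi"
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75.0"
</code></pre>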
|
<p>While installing influxdb2 using k8s manifest from the link <a href="https://docs.influxdata.com/influxdb/v2.6/install/?t=Kubernetes" rel="nofollow noreferrer">influxdb2 installation on k8s</a>
I get below "<code>pod has unbound immediate PersistentVolumeClaims</code>" error.</p>
<p>The instructions are given for minikube, but I am installing it on a normal k8s cluster.
Any idea what the issue is and how to fix it?</p>
<pre><code>/home/ravi#kubectl describe pod influxdb-0 -n influxdb
Name: influxdb-0
Namespace: influxdb
Priority: 0
Node: <none>
Labels: app=influxdb
controller-revision-hash=influxdb-78bc684b99
statefulset.kubernetes.io/pod-name=influxdb-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/influxdb
Containers:
influxdb:
Image: influxdb:2.0.6
Port: 8086/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/lib/influxdb2 from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k9d8t (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-influxdb-0
ReadOnly: false
default-token-k9d8t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k9d8t
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
/home/ravi#
</code></pre>
<p>influx db2 yaml file</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: influxdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: influxdb
name: influxdb
namespace: influxdb
spec:
replicas: 1
selector:
matchLabels:
app: influxdb
serviceName: influxdb
template:
metadata:
labels:
app: influxdb
spec:
containers:
- image: influxdb:2.0.6
name: influxdb
ports:
- containerPort: 8086
name: influxdb
volumeMounts:
- mountPath: /var/lib/influxdb2
name: data
volumeClaimTemplates:
- metadata:
name: data
namespace: influxdb
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10G
---
apiVersion: v1
kind: Service
metadata:
name: influxdb
namespace: influxdb
spec:
ports:
- name: influxdb
port: 8086
targetPort: 8086
selector:
app: influxdb
type: ClusterIP
</code></pre>
<p>k8s version</p>
<pre><code>/home/ravi#kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
/home/ravi>sudo kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
influxdb data-influxdb-0 Pending 4h41m
ricplt pvc-ricplt-alarmmanager Bound pv-ricplt-alarmmanager 100Mi RWO local-storage 5h17m
ricplt pvc-ricplt-e2term-alpha Bound pv-ricplt-e2term-alpha 100Mi RWO local-storage 5h18m
ricplt r4-influxdb-influxdb2 Pending 32m
/home/ravi>
/home/ravi>
/home/ravi>
/home/ravi>sudo kubectl describe pvc data-influxdb-0 -n influxdb
Name: data-influxdb-0
Namespace: influxdb
StorageClass:
Status: Pending
Volume:
Labels: app=influxdb
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: influxdb-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m12s (x1021 over 4h17m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
/home/ravi>
</code></pre>
| <p><strong>It looks like the cluster you are running has no dynamic volume provisioning. On <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">local</a> clusters, PersistentVolumes & StorageClasses have to be created manually by the cluster admin.</strong></p>
<p>Either set up a provisioner and reference it from a StorageClass, or create a PersistentVolume by hand; then reference that StorageClass from the <code>volumeClaimTemplates</code> so the PVC and PV can be bound automatically.</p>
<p>Refer to the official Kubernetes documentation on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a>, which may help to resolve your issue.</p>
<blockquote>
<p><strong>To configure a Pod to use a PersistentVolumeClaim for storage, here is a summary of the process:</strong></p>
<ol>
<li><p>You, as a cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod.</p>
</li>
<li><p>You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable
PersistentVolume.</p>
</li>
<li><p>You create a Pod that uses the above PersistentVolumeClaim for storage.</p>
</li>
</ol>
</blockquote>
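<p>Here is a hedged sketch of the manual objects for a cluster without dynamic provisioning. The storage class name matches the <code>local-storage</code> class your other PVCs already use, while the PV name, path and node name are assumptions you need to adjust; you would also add <code>storageClassName: local-storage</code> to the <code>volumeClaimTemplates</code> in the StatefulSet so the pending claim can bind:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-data-pv
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/influxdb        # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your-node-name>   # placeholder
</code></pre>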
<p>Also, consider changing <code>accessModes</code> to <code>ReadWriteMany</code> if the volume must be accessed by all your pods. A <code>subPath</code> needs to be used if each pod should have its own directory. Refer to the official Kubernetes documentation on <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">Using subPath</a>, like below:</p>
<pre><code>volumeMounts:
- name: data
mountPath: /var/lib/influxdb2
subPath: $(POD_NAME)
</code></pre>
|
<p>I am new to Kubernetes, Istio and so on, so please be gentle :)</p>
<p>I have minikube running, I can deploy services and they run fine.
I have installed istio following this guide:
<a href="https://istio.io/latest/docs/setup/install/istioctl/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/install/istioctl/</a></p>
<p>If I tag the default namespace with</p>
<pre><code>kubectl label namespace default istio-injection=enabled
</code></pre>
<p>the deployment fails. The service is green on the minikube dashboard, but the pod doesn't start up.</p>
<pre><code>Ready: false
Started: false
Reason: PodInitializing
</code></pre>
<p>Here are a couple of print screens from the dashboard:</p>
<p><a href="https://i.stack.imgur.com/o13fz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o13fz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/c7nY7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c7nY7.png" alt="enter image description here" /></a></p>
<p>This is clearly related to istio.
If I remove the istio tag from the namespace, the deployment works and the pod starts.</p>
<p>Any help would be greatly appreciated.</p>
<p><strong>EDIT</strong></p>
<p>Running</p>
<pre><code>kubectl logs mypod-bd48d6bcc-6wcq2 -c istio-init
</code></pre>
<p>prints out</p>
<pre><code>2022-08-24T14:07:15.227238Z info Istio iptables environment:
ENVOY_PORT=
INBOUND_CAPTURE_PORT=
ISTIO_INBOUND_INTERCEPTION_MODE=
ISTIO_INBOUND_TPROXY_ROUTE_TABLE=
ISTIO_INBOUND_PORTS=
ISTIO_OUTBOUND_PORTS=
ISTIO_LOCAL_EXCLUDE_PORTS=
ISTIO_EXCLUDE_INTERFACES=
ISTIO_SERVICE_CIDR=
ISTIO_SERVICE_EXCLUDE_CIDR=
ISTIO_META_DNS_CAPTURE=
INVALID_DROP=
2022-08-24T14:07:15.229791Z info Istio iptables variables:
PROXY_PORT=15001
PROXY_INBOUND_CAPTURE_PORT=15006
PROXY_TUNNEL_PORT=15008
PROXY_UID=1337
PROXY_GID=1337
INBOUND_INTERCEPTION_MODE=REDIRECT
INBOUND_TPROXY_MARK=1337
INBOUND_TPROXY_ROUTE_TABLE=133
INBOUND_PORTS_INCLUDE=*
INBOUND_PORTS_EXCLUDE=15090,15021,15020
OUTBOUND_OWNER_GROUPS_INCLUDE=*
OUTBOUND_OWNER_GROUPS_EXCLUDE=
OUTBOUND_IP_RANGES_INCLUDE=*
OUTBOUND_IP_RANGES_EXCLUDE=
OUTBOUND_PORTS_INCLUDE=
OUTBOUND_PORTS_EXCLUDE=
KUBE_VIRT_INTERFACES=
ENABLE_INBOUND_IPV6=false
DNS_CAPTURE=false
DROP_INVALID=false
CAPTURE_ALL_DNS=false
DNS_SERVERS=[],[]
OUTPUT_PATH=
NETWORK_NAMESPACE=
CNI_MODE=false
EXCLUDE_INTERFACES=
2022-08-24T14:07:15.232249Z info Writing following contents to rules file: /tmp/iptables-rules-1661350035231776045.txt1561657352
* nat
-N ISTIO_INBOUND
-N ISTIO_REDIRECT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A PREROUTING -p tcp -j ISTIO_INBOUND
-A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN
-A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN
-A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT
-A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
COMMIT
2022-08-24T14:07:15.232504Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352
2022-08-24T14:07:15.256253Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat'
Error occurred at line: 1
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
2022-08-24T14:07:15.256845Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352, exit status 2
</code></pre>
| <p>This might help you:</p>
<p>I was having the same error in the injected istio-init container.</p>
<p>My system: k8s 1.26 and Istio 1.17.1, installed on Rocky Linux 8.5 machines.</p>
<p>This solved my problem:</p>
<p>1.</p>
<pre><code>cat <<EOT >> /etc/modules-load.d/k8s.conf
overlay
br_netfilter
nf_nat
xt_REDIRECT
xt_owner
iptable_nat
iptable_mangle
iptable_filter
EOT
</code></pre>
<ol start="2">
<li></li>
</ol>
<pre><code>modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter
</code></pre>
<p>I got this solution from:
<a href="https://github.com/istio/istio/issues/23009" rel="nofollow noreferrer">istio/istio#23009</a></p>
|
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: django-k8-web-deployment
labels:
app: django-k8-web-deployment
spec:
replicas: 3
selector:
matchLabels:
app: django-k8-web-deployment
template:
metadata:
labels:
app: django-k8-web-deployment
spec:
containers:
- name: django-k8s-web
image: registry.digitalocean.com/chrisocean/django-k8s-web:latest
envFrom:
- secretRef:
name: django-k8s-web-prod-env
env:
- name: PORT
value: "8001"
ports:
- containerPort: 8001
imagePullSecrets:
- name: oceandev
</code></pre>
<p>The YAML file above is what I want to apply in Kubernetes. I ran the following command in my terminal:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f k8s/apps/django-k8s-web.yaml
</code></pre>
<p>then I go the following error on the terminal</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f k8s/apps/django-k8s-web.yaml
service/django-k8-web-service unchanged
Error from server (BadRequest): error when creating "k8s/apps/django-k8s-web.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].envFrom[0].name"
</code></pre>
<p>Does anyone know how to resolve the issue?</p>
<p>I wanted it to apply the changes in the YAML file, but it is not working. When I run the following command</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods
</code></pre>
<p>the STATUS of the pod is pending</p>
| <p>This is an indentation issue: the <code>name:</code> field is not nested under <code>secretRef:</code> in your <code>envFrom</code> entry.</p>
<p>Here is the correct <code>deployment</code> definition:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: django-k8-web-deployment
labels:
app: django-k8-web-deployment
spec:
replicas: 3
selector:
matchLabels:
app: django-k8-web-deployment
template:
metadata:
labels:
app: django-k8-web-deployment
spec:
containers:
- name: django-k8s-web
image: registry.digitalocean.com/chrisocean/django-k8s-web:latest
envFrom:
- secretRef:
name: django-k8s-web-prod-env
env:
- name: PORT
value: "8001"
ports:
- containerPort: 8001
imagePullSecrets:
- name: oceandev
</code></pre>
|
<p>I want to execute command on GCP GKE node but I need to do it without manually sshing to the machine.</p>
<p>What I need:</p>
<ol>
<li>adding ntp.keys</li>
<li>configuring ntp.conf</li>
<li>restart ntp daemon (here I encountered a problem)</li>
</ol>
<p>Points 1 and 2 can easily be achieved with a DaemonSet. I mounted the /etc/ntp/ directory into an init container and created the right files. Unfortunately, point 3 seems to be impossible. Is there any way I can run "systemctl restart ntp" on each node of the cluster?</p>
<p>My problem is mainly that the GCP API does not provide a nodes endpoint, so I can't get the node IPs and then run Ansible to perform the NTP configuration.
The goal is to <strong>automatically</strong>:</p>
<ol>
<li>provision cluster (with terraform)</li>
<li>configure NTP client on each node</li>
</ol>
| <p>You are on the right path. If you don't want to SSH into the nodes and do the job manually, you can create a <strong>DaemonSet</strong> and automate the <strong>SSH</strong> process.</p>
<p>A <strong>DaemonSet</strong> runs on all available <strong>nodes</strong> in the <strong>cluster</strong>. You can mount your <strong>shell</strong> <strong>script</strong> into the pod and execute it on the <strong>node</strong> by doing <strong>SSH</strong> from the <strong>image</strong>.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: daemonset-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cron-namespace-admin-rbac
subjects:
- kind: ServiceAccount
name: daemonset-access
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: umalog
labels:
k8s-app: umalog
spec:
selector:
matchLabels:
name: umalog
template:
metadata:
labels:
name: umalog
spec:
serviceAccountName: daemonset-access
securityContext:
fsGroup: 1000
containers:
- command:
- bash
- -c
- |
mkdir -p /work-dir/filter_config
while true;
do
echo "inside while";
/root/reload-uma.sh;
sleep 1800;
done
image: oraclelinux:8
imagePullPolicy: IfNotPresent
name: daemon-ssh
volumeMounts:
- mountPath: /root/reload-uma.sh
name: uma-agent-reload
subPath: reload-uma.sh
- mountPath: /work-dir-ssh
name: do-node-ssh
volumes:
- configMap:
defaultMode: 511
name: uma-agent-reload
name: uma-agent-reload
- hostPath:
path: /home/opc/.ssh
type: DirectoryOrCreate
name: do-node-ssh
</code></pre>
<p><strong>Configmap</strong></p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: uma-agent-reload
data:
reload-uma.sh: |
#!/bin/bash -x
cat ~/.ssh/id_rsa.pub >> /work-dir-ssh/authorized_keys
ssh -o StrictHostKeyChecking=no <User-name-ssh>@`kubectl -n default get pod $HOSTNAME -o yaml | grep hostIP | awk -d: {'print $2'} | tail -1`
sudo systemctl reload <your-service-agent>.service
</code></pre>
|
<p>I'm confused about converting nanocores to CPU cores.
My formula is:</p>
<p><code>nanocore / 1000000000 = CORE (1000000000 is 1 billion)</code></p>
<p>If my Linux OS has 10 cores,
I want to calculate the CPU usage percentage. The formula is:</p>
<p><code>(nanocore / 1000000000) / 10 * 100 = percentage cpu usage ?</code></p>
<p>Is it right?</p>
| <p>Yes, a core is equal to 1e9 (1,000,000,000) nanocores.</p>
<p>You can test this by seeing that, for example in Datadog, a metric like <code>(kubernetes.cpu.usage.total / 1000000000) / kubernetes_state.node.cpu_allocatable * 100</code> gives you a percentage that should be capped at 100%. The 1st metric is measured in nanocores, and the 2nd is measured in cores (<a href="https://docs.datadoghq.com/containers/kubernetes/data_collected/" rel="nofollow noreferrer">see definition here</a>).</p>
<p>Here's another answer that aligns with this: <a href="https://discuss.kubernetes.io/t/metric-server-cpu-and-memory-units/7497/2" rel="nofollow noreferrer">link</a></p>
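<p>As a quick worked example (the numbers are chosen purely for illustration): 2,500,000,000 nanocores / 1,000,000,000 = 2.5 cores, and on a 10-core node that is (2.5 / 10) * 100 = 25% CPU usage, so your formula is correct.</p>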
|
<p>I'm using Jenkins configuration as code (JCASC).</p>
<p>I have a pod template and I want to add a nodeSelector + tolerations.
podTemplate doesn't support tolerations and nodeSelector keys, so I need to add a pod YAML spec...</p>
<pre><code> agent:
enabled: true
podTemplates:
podTemplates:
jenkins-slave-pod: |
- name: jenkins-slave-pod
label: global-slave
serviceAccount: jenkins
idleMinutes: "15"
containers:
- name: main
image: 'xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global'
command: "sleep"
args: "30d"
privileged: true
</code></pre>
<p>I was thinking of adding yaml: and just configuring the spec of the pod...
But when I add yaml: together with yamlStrategy: merge/override, it ignores the YAML and only uses my podTemplate instead.</p>
<p>How can I merge/override my podTemplate and get a pod with tolerations/nodeSelector?</p>
<p>That's the YAML I want to have inside my podTemplate:</p>
<pre><code>
apiVersion: v1
kind: Pod
serviceAccount: jenkins-non-prod
idleMinutes: "15"
containers:
- name: main
image: 'xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global'
command: "sleep"
args: "30d"
privileged: true
spec:
nodeSelector:
karpenter.sh/provisioner-name: jenkins-provisioner
tolerations:
- key: "jenkins"
operator: "Exists"
effect: "NoSchedule"
</code></pre>
<p><a href="https://i.stack.imgur.com/NVjt4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NVjt4.png" alt="enter image description here" /></a></p>
| <p>I try to give you a little suggestion, let me know if it works.</p>
<p>If you have an up-and-running Jenkins instance (with the Kubernetes plugin installed), you can go to "Manage Jenkins" / "Configure Clouds" and prepare your Pod Templates as you see fit.
There you will also find the definition of nodeSelector and Toleration.</p>
<p>Once you have saved the setup you prefer, go to "Manage Jenkins" / "Configuration as Code" and save the JCasC configuration of your Jenkins (click "Download Configuration").</p>
<p>You can replicate this working mode for any new configuration you want to add to your Jenkins.</p>
|
<p>We have a requirement to connect a K8s POD to an Azure VPN Gateway in a secure manner. This is what our network topology is:</p>
<p><a href="https://i.stack.imgur.com/sH8cx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sH8cx.png" alt="enter image description here" /></a></p>
<p>Firstly is this possible to achieve and secondly how would we go about creating this peering? If peering isn't the best option then what would you recommend to solve this problem? TIA</p>
<p>We have created the VPN gateway, VNET, and a local network and confirmed that they can communicate in both directions. The problem is how we bring this into K8s.</p>
| <p>I tried to reproduce the same in my environment I have created a virtual network gateway vnet local network gateway like below:</p>
<p><img src="https://i.imgur.com/FlsYk8w.png" alt="enter image description here" /></p>
<p>In virtual network added gateway subnet like below:</p>
<p><img src="https://i.imgur.com/GFa00tv.png" alt="enter image description here" /></p>
<p>created local network gateway :</p>
<p><img src="https://i.imgur.com/psawSTt.png" alt="enter image description here" /></p>
<p>On-premise try to configure <a href="https://www.mcse.gen.tr/demand-dial-ile-site-to-site-vpn/" rel="nofollow noreferrer">Routing and remote access role</a> in tools -> select custom configuration ->Vpn access, Lan routing ->finish</p>
<p>in network interface select -> New demand-dial interface -> in vpn type select IPEv2 and in the destination address screen provide public IP of virtual network gateway</p>
<p><img src="https://i.imgur.com/r4Qa8OC.png" alt="enter image description here" /></p>
<p>Now, try to create a connection like below:</p>
<p><img src="https://i.imgur.com/z0U9zK6.png" alt="enter image description here" /></p>
<p><img src="https://i.imgur.com/ChSmpUf.png" alt="enter image description here" /></p>
<p>Now, I have created an aks cluster with pod like below:</p>
<p><img src="https://i.imgur.com/DzX6129.png" alt="enter image description here" /></p>
<p>To communicate with the pods, make sure to use <em><strong>Azure Container Networking Interface (CNI)</strong></em>: every pod gets an IP address from the subnet and can be accessed directly, and each pod can communicate directly with other pods and services.
You size AKS nodes based on the maximum number of pods they can support. Advanced network features and scenarios such as Virtual Nodes or Network Policies (either Azure or Calico) are supported with Azure CNI.</p>
<p>When using Azure CNI, Every pod is assigned a VNET route-able private IP from the subnet. So, <em><strong>Gateway should be able reach the pods directly.</strong></em> <a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/configure-kubenet.md#virtual-network-peering-and-expressroute-connections" rel="nofollow noreferrer">Refer</a></p>
<p><img src="https://i.imgur.com/GZVumUM.png" alt="enter image description here" /></p>
<ul>
<li>You can use AKS's advanced features such as virtual nodes or Azure Network Policy, or use <a href="https://docs.projectcalico.org/v3.9/security/calico-network-policy" rel="nofollow noreferrer">Calico network policies</a>. A network policy controls which traffic is allowed between pods within the cluster, for example:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: backend-policy
spec:
podSelector:
matchLabels:
app: backend
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
</code></pre>
<p><img src="https://i.imgur.com/OawP8Dy.png" alt="enter image description here" /></p>
<p>To more in detail <em><strong>refer</strong></em> this link:</p>
<p><a href="https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/configure-kubenet.md" rel="nofollow noreferrer">Azure configure-kubenet - GitHub</a></p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-network" rel="nofollow noreferrer">Network connectivity and secure in Azure Kubernetes Service | Microsoft</a></p>
|
<p>I want to build the <code>secretName</code> dynamically based on the value of the <code>my-label</code> key (through an <code>ENV</code>). Is this possible?</p>
<p>I used a similar approach to use label values as ARGs, which worked.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
metadata:
labels:
my-label: "my-value"
spec:
containers:
- name: my-container
image: my-image
env:
- name: MY_ENV_VAR
valueFrom:
fieldRef:
fieldPath: metadata.labels['my-label']
volumeMounts:
- name: my-secret
mountPath: /path/to/my-secret
volumes:
- name: my-secret
secret:
secretName: my-secret-$(MY_ENV_VAR)
</code></pre>
| <p>The fastest solution is surely to use kustomize.</p>
<p>Following your data, first organize the repository by creating a folder called "base" and one called "dev".</p>
<p>Then move the "my-cronjob" manifest into the "base" folder and add a kustomization.yaml file that invokes the CronJob.</p>
<p>Finally, create a file called kustomization.yaml inside the "dev" folder, calling the files from the "base" folder plus the patch.</p>
<p>Example:</p>
<p><a href="https://i.stack.imgur.com/LnpCF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LnpCF.png" alt="Repo structure" /></a></p>
<p>base/kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./my-cronjob.yaml
</code></pre>
<p>dev/kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
patches:
- target:
kind: CronJob
name: my-cronjob
patch: |-
- op: replace
path: /spec/jobTemplate/spec/template/spec/containers/0/env/0/valueFrom/fieldRef/fieldPath
value: metadata.labels['DEV']
</code></pre>
<p>To replicate to other environments, just copy the "dev" folder and paste it into a "prod" folder (for example) and edit the patch with the correct parameter.</p>
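<p>The same mechanism can also target the secret name itself, which is what the question is ultimately after. A sketch, assuming the CronJob spec from the question (the volume index and the final secret name are illustrative):</p>
<pre><code>patches:
  - target:
      kind: CronJob
      name: my-cronjob
    patch: |-
      - op: replace
        path: /spec/jobTemplate/spec/template/spec/volumes/0/secret/secretName
        value: my-secret-dev
</code></pre>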
|
<p>I am getting into CI/CD and Microservices. I want to try Kubernetes for a project that I am starting. I spent the last two weeks learning about those topics, but I don't have the knowledge to make this decision. I would like to know whether, for a small project, it is better to set up Kubernetes on my own server or pay for a cloud service. What difficulties would I have setting up Kubernetes manually? My impression is that setting up Kubernetes by hand is much like setting up an email server: it is far more viable to pay for an email domain than to run an email server manually.</p>
<p>I would like only to have kubernetes to get my services up automatically and other basic things.</p>
| <p>If you have an <strong>on-prem</strong> setup and you can expose your cluster, you can go with that: create the <strong>K8s</strong> cluster and manage it yourself.</p>
<p>Or are you planning to set it up manually on cloud <strong>VMs</strong>? In that case it is usually better to use a managed offering from <strong>DigitalOcean</strong> or <strong>Oracle OKE</strong>, which may cost around <strong>$20-30</strong> a month for a small project.</p>
<p>You also have the option to use <strong>Kubespray</strong> or <strong>kubeadm</strong> to set up and manage the cluster yourself, which is fairly easy to do, but setting it up on <strong>VMs</strong> and managing it is a lot of work; instead, go with a provider that gives you a free control plane, so you only pay for the <strong>worker nodes</strong>.</p>
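<p>If you do want to try the manual route on a single VM, a minimal <strong>kubeadm</strong> bootstrap looks roughly like this (the CNI choice and the CIDR are assumptions; adjust them to your setup):</p>
<pre><code># on the VM, after installing a container runtime plus kubeadm, kubelet and kubectl
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# make kubectl work for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a CNI plugin (flannel here, as an example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# allow workloads on the control-plane node (single-node cluster only)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</code></pre>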
<p>You can also go for <strong>serverless containers</strong>: <strong>Cloud Run</strong> on <strong>GCP</strong> or <strong>ECS</strong> on <strong>AWS</strong>. You only worry about your container <strong>workload</strong> and pay for the resources your container actually uses, with the option of <strong>scaling</strong>.</p>
|
<p>so I am creating a system composed of different components that are installed via helm charts. Since I needed different customizations for each of the charts, I created my own separated repositories, copied the charts there and added customizations.</p>
<p>The question is: how do I conveniently upgrade the charts versions, while keeping the customizations?</p>
<p>For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one.</p>
<p>The issue is that in some charts I needed to alter the original files directly, for example adding necessary stuff in the middle of a template. So, during the upgrade of such a component I have to go through all the files, compare them, remember which changes are mine and then update the whole repository. So yes, you guessed it - it's a manual, inconvenient and error-prone task.</p>
<p>I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice.</p>
| <p>If your changes are very specific to your project and not something that can be merged into the original repo, or you have legal loopholes to jump through, you can fork the original code and periodically sync up with the upstream.</p>
<p>The process is outlined here:
<a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork" rel="nofollow noreferrer">https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork</a></p>
<p>Obviously you still have to resolve conflicts, but you can set up a CI pipeline to periodically sync and update your helm registry.</p>
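<p>As a sketch, the periodic sync (whether done by hand or from CI) is just the usual fork workflow; the remote URL and branch names are placeholders:</p>
<pre><code># one-time: point at the original charts repository
git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git

# on every sync
git fetch upstream
git checkout main
git merge upstream/main   # resolve conflicts with your customizations here
git push origin main
</code></pre>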
|
<p>I am trying to understand where this warning is coming from. I have disabled PSP support in my cluster and am indeed using a k8s version lower than 1.25. But I want to understand and disable this warning. Is that possible? Which controller is responsible for handling this WARNING?</p>
<pre><code>kubectl get psp -A
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
</code></pre>
| <p>There is a K8s blog post where various aspects of the topic "Warnings" are explained:
<a href="https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings</a></p>
<p>In summary, these warnings have existed since version 1.19 and you can't remove them easily (unless you use the k8s.io/client-go library to customize how your own clients handle them):
<a href="https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling</a></p>
<p>The last resort might be to "throw away" the output:</p>
<pre><code>kubectl get psp -A 2>&1 | grep -vi "warn" | grep -vi "deprecat"
</code></pre>
|
<p>I am using the non HA version of ArgoCD (v2.6.5) installed in a single node k3s cluster.
The goal is to deploy a sample application together with kube-prometheus-stack, loki, tempo & minIO via Helm.</p>
<p>However, when I create an "Application" in GitHub and reference it in ArgoCD, all of them are in an "Out of sync" state. Once they try to re-sync, the status changes to "Unknown".</p>
<p>The installation of ArgoCD was done with the following command (basic install):</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>And, as example, the kube-prometheus-stack Application I create in Github looks this way:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kube-prometheus-stack
namespace: argocd
spec:
project: default
source:
chart: kube-prometheus-stack
repoURL: https://prometheus-community.github.io/helm-charts
targetRevision: 44.4.1
helm:
releaseName: kube-prometheus-stack
destination:
server: "https://kubernetes.default.svc"
namespace: observability
</code></pre>
<p>Any idea what I could be missing?</p>
<p>Thanks!</p>
| <p>This was fixed by deploying the CRDs separately. There still seem to be issues with a few objects and CRDs within the kube-prometheus-stack and loki charts, but this particular problem is solved.</p>
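<p>For reference, one way to express "CRDs handled outside the chart" in the Application spec is the helm <code>skipCrds</code> flag; the CRD manifests then have to be applied out of band (for example from the prometheus-operator release matching the chart version). A sketch:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    chart: kube-prometheus-stack
    repoURL: https://prometheus-community.github.io/helm-charts
    targetRevision: 44.4.1
    helm:
      releaseName: kube-prometheus-stack
      skipCrds: true            # CRDs are installed separately
  destination:
    server: "https://kubernetes.default.svc"
    namespace: observability
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
</code></pre>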
|
<p>I have a question about Kubernetes containers and persistent volumes.</p>
<p><strong>How can I make some of the preexisting folders of a Kubernetes container persistent?</strong></p>
<p>I know the usage of PVCs in Kubernetes but the problem about mounting a PVC to a container is that this operation -naturally- deletes everything in the mount path. eg. Say that we have an image which has a non-empty directory <code>/xyz</code> and we want to make this directory persistent. If we create a PVC and mount it to <code>/xyz</code>, we would lose everything inside <code>/xyz</code> as well (we don't want this to happen). So we want that directory to be persistent from the start with the files inside of it.</p>
<p>I'm not so sure if Docker or any other container technology supports such a feature, so it may not be possible in Kubernetes either. Would be glad if anyone can enlighten me about this. Thanks!</p>
<p>My approaches so far:</p>
<ul>
<li><em>Copying</em>: Creating a PVC for the directory contents and mounting it to an init container or job that copies <code>/xyz</code> to the <code>/mounted/xyz</code>, then mounting PVC to the main container's <code>/xyz</code>. This approach has some drawbacks if the directory is too fat or has some OS/runtime-specific configurations.</li>
<li><em>Hostpath</em>: Populating a directory with the contents of <code>/xyz</code> (eg. <code>/in/host/xyz</code>) before starting the container. Then mounting this path from host to the container. Not a good approach since it's hard to automate.</li>
</ul>
| <p>There is no way to mount a Volume in a certain folder without overwriting its contents.</p>
<p>In my opinion the best approaches could be:</p>
<ol>
<li><p>The first one reported by you (for large content):</p>
<p>a. Create PVC</p>
<p>b. Add an initContainer to your Deployment that mount the Volume in a DIFFERENT path from the directory containing the data to move/copy</p>
<p>c. Add to the initContainer a "command" field with the commands to move/copy the content from the "source" directory to the mounted volume (target)</p>
<p>d. Mount the PVC used in the initContainer to the "main" container at the "source" directory path (see the sketch after this list)</p>
</li>
<li><p>Create a K8s cronjob (or job that works once if the files are never modified) that syncs from one folder to another (similar to point 1, but avoid waiting a long time before the application Pod starts, since the initContainer is no longer needed).
<a href="https://i.stack.imgur.com/EoWMU.png" rel="nofollow noreferrer">Cronjob example</a>
(Pay attention to file owners; you may need to run the job under the same serviceAccount that produced those files)</p>
</li>
<li><p>If they are static files, build the Docker image with all the contents of the folder already inside (Dockerfile -> copy). <a href="https://docs.docker.com/engine/reference/builder/" rel="nofollow noreferrer">https://docs.docker.com/engine/reference/builder/</a></p>
</li>
</ol>
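<p>A minimal sketch of approach 1 (image names, paths and the claim name are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xyz-app
  template:
    metadata:
      labels:
        app: xyz-app
    spec:
      initContainers:
      - name: seed-xyz
        image: my-app-image:latest   # same image that already contains /xyz
        # copy the baked-in content only when the volume is still empty
        command: ["sh", "-c", "[ -z \"$(ls -A /seed)\" ] && cp -a /xyz/. /seed/ || true"]
        volumeMounts:
        - name: xyz-data
          mountPath: /seed           # PVC mounted on a DIFFERENT path here
      containers:
      - name: app
        image: my-app-image:latest
        volumeMounts:
        - name: xyz-data
          mountPath: /xyz            # main container sees the persistent copy
      volumes:
      - name: xyz-data
        persistentVolumeClaim:
          claimName: xyz-pvc
</code></pre>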
<p>I strongly recommend not using hostPath in PRODUCTION environments.
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
|
<p>I'm pretty new to Kubernetes, I have docker for mac, not minikube.</p>
<p>I have the deployment file running with one pod (containing an Express app) in the cluster, then I created a NodePort service.</p>
<p>Inside the Express app I have a GET route at '/posts' and I want to access it from localhost.</p>
<p>I have this service file here:</p>
<p><a href="https://i.stack.imgur.com/nZXWY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nZXWY.png" alt="enter image description here" /></a></p>
<p>However, when I try to access localhost on the port assigned to the NodePort service,
for example localhost:30134/posts,
I get "This page isn't working" in Chrome.</p>
<p>Anyone has an idea why?</p>
| <p>PREMISE:</p>
<blockquote>
<p>Every node in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service).</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
<p>First you should figure out what node the "posts" pod is running on.</p>
<pre><code>kubectl -n NAMESPACE get pods -owide | grep -i posts
</code></pre>
<p>Once you figure out which node it's running on, retrieve its INTERNAL-IP:</p>
<pre><code>kubectl get nodes -owide
</code></pre>
<p>After that you will be able to reach the node via Browser (NODE-INTERNAL-IP:NODEPORT-PORT).</p>
<p>If you absolutely want to reach the service via the "localhost" name, add an entry to the /etc/hosts file (the format is "IP name").</p>
<p>For example:</p>
<pre><code>echo "NODE-INTERNAL-IP localhost" >> /etc/hosts
</code></pre>
|
<p>It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snipets but had no success.</p>
<p>This is my current ingress configuration:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: nginx-ingress
namespace: default
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
uid: adcd75ab-b44b-420c-874e-abcfd1059592
resourceVersion: '259992616'
generation: 7
creationTimestamp: '2020-06-10T12:15:18Z'
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
ingress.kubernetes.io/enable-cors: 'true'
ingress.kubernetes.io/force-ssl-redirect: 'true'
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: 'true'
nginx.ingress.kubernetes.io/cors-allow-credentials: 'true'
nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type'
nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS'
nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com'
nginx.ingress.kubernetes.io/enable-cors: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: 'true'
</code></pre>
<p>I also would like to extend the cors-allow-origin like:</p>
<pre><code>nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com'
</code></pre>
<p>Is it possible to allow multiple domains in other ways?</p>
| <p>For some time now, ingress-nginx has allowed multiple origins. See issue <a href="https://github.com/kubernetes/ingress-nginx/issues/5496" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/5496</a></p>
<p>Example of usage:</p>
<pre><code>nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com, https://another.com, http://localhost:8000"
</code></pre>
|
<p>I'm trying to create a k8s node struct and assign default value to it.</p>
<p>The following code complains "<em>unknown field Name in struct literal</em>"</p>
<pre><code>node = &corev1.Node{Name: pod.Spec.NodeName}
</code></pre>
<p>However, this code is fine:</p>
<pre><code>node = &corev1.Node{}
node.Name = pod.Spec.NodeName
</code></pre>
<p>why?</p>
| <p>If the second code works, I'd assume there is an embedded struct, and looking at the code (assuming it's <a href="https://github.com/kubernetes/api/blob/master/core/v1/types.go#L5558-L5576" rel="nofollow noreferrer">this</a>) that appears to be the case:</p>
<pre><code>type Node struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the behavior of a node.
// https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec NodeSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Most recently observed status of the node.
// Populated by the system.
// Read-only.
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status NodeStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
</code></pre>
<p>So the name field is likely either part of <code>metav1.TypeMeta</code>, or <code>metav1.ObjectMeta</code>, digging <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMeta" rel="nofollow noreferrer">deeper</a> it's on ObjectMeta, which means to define inline, you'll need to do something like:</p>
<pre><code>package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ...
node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: pod.Spec.NodeName}}
</code></pre>
|
<p>I have an issue with my GKE cluster. I am using two node pools: secondary - with a standard set of highmem-n1 nodes, and primary - with preemptible highmem-n1 nodes. The issue is that I have many pods in Error/Completed status which are not cleared by k8s, all of which ran on the preemptible set. THESE PODS ARE NOT JOBS.</p>
<p>GKE documentation says that:
"Preemptible VMs are Compute Engine VM instances that are priced lower than standard VMs and provide no guarantee of availability. Preemptible VMs offer similar functionality to Spot VMs, but only last up to 24 hours after creation."</p>
<p>"When Compute Engine needs to reclaim the resources used by preemptible VMs, a preemption notice is sent to GKE. Preemptible VMs terminate 30 seconds after receiving a termination notice."
Ref: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms</a></p>
<p>And from the kubernetes documentation:
"For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them.</p>
<p>The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time."
Ref: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection</a></p>
<p>So, from my understanding, this set of nodes changes every 24 hours, which kills all the pods running on them, and depending on the graceful shutdown the pods end up in Completed or Error state. Nevertheless, Kubernetes is not clearing or removing them, so I have tons of pods in these statuses in my cluster, which is not expected at all.</p>
<p>I am attaching screenshots for reference.
<a href="https://i.stack.imgur.com/MOnNk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MOnNk.png" alt="Pods in Error/Completed State" /></a></p>
<p>Example <code>kubectl describe pod</code> output:</p>
<pre><code>Status:   Failed
Reason:   Terminated
Message:  Pod was terminated in response to imminent node shutdown.
</code></pre>
<p>Apart from that, no events, logs, etc.</p>
<p>GKE version:
1.24.7-gke.900</p>
<p>Both Node pools versions:
1.24.5-gke.600</p>
<p>Did anyone encounter such an issue or know what's going on there? Is there a solution to clear them in a different way than creating some script and running it periodically?</p>
<p>I tried digging into the GKE logs, but I couldn't find anything. I also tried to look for answers in the docs, but failed.</p>
| <p>The given commands do not work for me.</p>
<p>I have created a few manifests that you can apply in your cluster to automatically delete the Pods matching the criteria with a kubernetes CronJob.</p>
<p><a href="https://github.com/tyriis/i-see-dead-pods" rel="nofollow noreferrer">https://github.com/tyriis/i-see-dead-pods</a></p>
<p>this is working for me</p>
<pre><code>kubectl get pods \
--all-namespaces \
-o go-template \
--template='{{range .items}}{{printf "%s %s %s\n" .metadata.namespace .metadata.name .status.message}}{{end}}' \
| grep "Pod was terminated in response to imminent node shutdown." \
| awk '{print $1, $2}' \
| xargs -r -n2 kubectl delete pod -n
</code></pre>
|
<p>I am trying to make a socket connection from a React application to a Node.js server hosted on GKE. I am getting the connection error "Connection closed before it was established".
The HTTP connection is working fine.
dev.host.com is used for the frontend and dev.host.com/api for the backend, as included in the ingress.yml file below. Could the issue be due to this setup, or did I miss some configuration? I have configured 3 replicas.</p>
<p><strong>GKE Configuration for frontend:</strong>
<em>frontend-deployment.yaml</em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-deploy
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- image: image-frontend:latest
name: vektor-fe
imagePullPolicy: Always
livenessProbe:
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: frontend-service
spec:
selector:
app: frontend
ports:
- port: 3000
targetPort: 80
type: ClusterIP
</code></pre>
<p><em>ingress.yml</em></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
tls:
- hosts:
- dev.host.com
secretName: my-ingress-tls
rules:
- host: dev.host.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: frontend-service
port:
number: 3000
- path: /api
pathType: ImplementationSpecific
backend:
service:
name: backend-service
port:
number: 4001
</code></pre>
<p><strong>Backend Configuration</strong>
<em>backend-deploy.yaml</em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deploy
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- image: my-app:latest
name: vektor-be
imagePullPolicy: Always
ports:
- containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
ports:
- port: 4001
targetPort: 4000
selector:
app: app
type: ClusterIP
</code></pre>
<p>I tried a socket connection to dev.host.com on port 4001 but got the error "Connection closed before it was established". socket.io-client is used in React and socket.io is used in the backend.</p>
| <p>Try an ingress config like this (using the networking.k8s.io/v1 Ingress API):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: core-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: ws-service
    nginx.org/websocket-services: ws-service
  name: core-ingress
spec:
  rules:
  - host: test.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ws-service
            port:
              number: 80
  tls:
  - hosts:
    - test.io
    secretName: core-secret
</code></pre>
|
<p>I have a k8s cluster which uses rancher.io/local-path.
There is a PV there</p>
<pre><code>
$ kubectl describe pv pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f
Name: pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-path
Status: Bound
Claim: gitlab/gitlab-prometheus-server
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [agent-1]
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /var/lib/rancher/k3s/storage/pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f_gitlab_gitlab-prometheus-server
HostPathType: DirectoryOrCreate
Events: <none>
</code></pre>
<p>I would like to move that PV to another node, agent-5. How can I achieve that?
An important point is that the PV in question is <code>rancher.io/local-path</code> provisioned.</p>
| <p>It isn't possible to do this in a single step.</p>
<p>But you can:</p>
<ol>
<li>Access the K8s cluster where PVC exists</li>
<li>Create a Deployment (or a single Pod) that mounts the PVC on the path you prefer (for example /xyz; see the sketch at the end of this answer)</li>
<li>Run</li>
</ol>
<pre><code> kubectl -n NAMESPACE cp POD_NAME:/xyz /tmp/
</code></pre>
<p>to locally copy the contents of the /xyz folder to the /tmp path</p>
<ol start="4">
<li><p>Logout from K8s cluster</p>
</li>
<li><p>Login to the K8s cluster where data will be migrated</p>
</li>
<li><p>Create new PVC</p>
</li>
<li><p>Create a Deployment (or Single Pod) that mounts the PVC on the path you prefer (Example /new-xyz)</p>
</li>
<li><p>Run</p>
</li>
</ol>
<pre><code> kubectl -n NAMESPACE cp /tmp/xyz/ POD_NAME:/new-xyz/
</code></pre>
<p>to copy the local content to the path /new-xyz</p>
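<p>For steps 2 and 7, a minimal helper Pod could look like this (the claim name matches the PVC from the question on the source side; on the target side use the new PVC and mount path):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pvc-helper
  namespace: gitlab
spec:
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sleep", "86400"]
    volumeMounts:
    - name: data
      mountPath: /xyz              # use /new-xyz on the target cluster
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: gitlab-prometheus-server
</code></pre>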
|
<p>ENV: <br />
k8s: v1.20.5 <br />
ingress-nginx: v1.6.4</p>
<p>I created ingress-nginx-controller from the official yaml: <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml</a></p>
<p>and I changed the network type to hostnetwork:</p>
<pre class="lang-yaml prettyprint-override"><code>hostNetwork: true
</code></pre>
<p>Then I created a deployment to create a backend server.Below is the yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: http-svc
spec:
selector:
app: http
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: http-deployment
labels:
app: http
spec:
replicas: 2
selector:
matchLabels:
app: http
template:
metadata:
labels:
app: http
spec:
containers:
- name: http
image: hashicorp/http-echo:alpine
args: ["-text", "hello", "-listen=:80"]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: "test.com"
http:
paths:
- pathType: Prefix
path: "/test"
backend:
service:
name: http-svc
port:
number: 80
</code></pre>
<p>Everything looks like it's running fine, but I still get a "500 Internal Server Error" when I access the web server via ingress-nginx.</p>
<p>Below is the info for the resources:</p>
<pre class="lang-bash prettyprint-override"><code>#kubectl describe ingress
Name: test-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
test.com
/test http-svc:80 (192.168.107.203:80,192.168.122.81:80)
Annotations: <none>
Events: <none>
#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc ClusterIP 10.100.58.107 <none> 80/TCP 48m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 283d
#kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
http-deployment-764c4597c5-rdks7 1/1 Running 0 48m 192.168.122.81 k8s-node4 <none> <none>
http-deployment-764c4597c5-rf99t 1/1 Running 0 48m 192.168.107.203 k8s-node3 <none> <none>
#kubectl get pods -n ingress-nginx -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-kb64z 0/1 Completed 0 44m 192.168.107.204 k8s-node3 <none> <none>
ingress-nginx-admission-patch-xmswb 0/1 Completed 1 44m 192.168.122.82 k8s-node4 <none> <none>
ingress-nginx-controller-69695968f9-7dtxf 1/1 Running 0 44m 10.1.1.12 k8s-node2 <none> <none>
</code></pre>
<p>I can directly access my backend server via the service IP, so I think the problem may be that ingress-nginx cannot forward the request to the backend?</p>
<p>Below is the log in ingress-nginx-controller:</p>
<pre><code>2023/03/19 07:31:54 [error] 25#25: *25199 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET /test HTTP/1.1", host: "test.com"
2023/03/19 07:31:54 [error] 26#26: *25201 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test.com"
10.1.1.11 - - [19/Mar/2023:07:31:54 +0000] "GET /test HTTP/1.1" 500 170 "-" "curl/7.29.0" 76 0.000 [upstream-default-backend] [] 127.0.0.1:8181 : 127.0.0.1:8181 0 : 170 0.000 : 0.000 500 : 500 28acaef695f43cac09e7dfc932511c92
</code></pre>
<p>It looks like it forwarded the request to localhost, but why? Did I miss something?</p>
<p>Below is the different respones from different way to access:</p>
<pre><code>#curl http://10.100.58.107
hello
#curl http://test.com/test
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p>I want to access the backend server via ingress-nginx successfully.</p>
| <p><strong>Debug pointers:</strong></p>
<ul>
<li>Application Level
<ul>
<li>Check whether your service actually has a <code>/test</code> endpoint and what it responds with. Based on your test requests, with the IP you made a request to <code>/</code>, while with the DNS name you made a request to <code>/test</code>.</li>
</ul>
</li>
<li>K8s Service Level
<ul>
<li>Port forward the service and try to run it on your local machine.
<pre><code>kubectl port-forward svc/http-svc 80:80
</code></pre>
</li>
</ul>
</li>
<li>Ingress Level
<ul>
<li>Is the error page served by Nginx? If yes, then it is the nginx ingress controller itself returning the error rather than your app.</li>
<li>Check if you have used the proper ingress class name in your ingress object, based on what you configured while deploying your nginx ingress controller (see the sketch after this list).</li>
<li>As visible in your logs it says <code>[upstream-default-backend]</code>, which suggests that when the controller receives the request it has no server configuration for this host.
<ul>
<li>This happens because the nginx ingress controller only adds server configuration for hosts whose ingress objects reference its ingress class.</li>
</ul>
</li>
</ul>
</li>
</ul>
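<p>A sketch of that check, assuming the controller was installed with the default class name <code>nginx</code> (verify with <code>kubectl get ingressclass</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx   # must match the controller's IngressClass
  rules:
  - host: "test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/test"
        backend:
          service:
            name: http-svc
            port:
              number: 80
</code></pre>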
|
<p>I need to add swap memory in a Kubernetes pod, so that if any pod exceeds the available RAM it can use swap memory from the hard disk. Is that possible in Kubernetes?</p>
| <p>As of <em>Kubernetes 1.22</em>, swap memory is supported (as an alpha feature).</p>
<ol>
<li>Configure the node's host OS with a swap memory device (e.g., with <code>swapon</code> or <code>/etc/fstab</code>);</li>
<li>Configure kubelet on that node (see the configuration sketch after this list) to:
<ul>
<li>still start despite detecting the presence of swap (disable <code>fail-on-swap</code>),</li>
<li>enable the <code>NodeSwap</code> feature gate, and</li>
<li>configure <code>MemorySwap.SwapBehavior=UnlimitedSwap</code> to let kubernetes workloads use swap memory.</li>
</ul>
</li>
</ol>
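<p>A minimal sketch of the corresponding kubelet settings, using the KubeletConfiguration file (flag spellings differ if you configure kubelet via command-line flags):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false              # do not refuse to start when swap is detected
featureGates:
  NodeSwap: true               # alpha feature gate (Kubernetes 1.22+)
memorySwap:
  swapBehavior: UnlimitedSwap  # allow workloads to use swap memory
</code></pre>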
<p>Note, there is currently no support for setting swap limits individually per workload (although this is planned for the beta). Either no containers are permitted to use any swap, or all containers can use unlimited swap memory.</p>
<p>(If workloads are <em>not</em> permitted to use swap then, depending on the Linux kernel cgroups version, they could still get swapped anyway. Prior to cgroups v2, processes were not able to enforce separate limits for swap and physical memory, but only for the combined total.)</p>
<p>See the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory" rel="nofollow noreferrer">docs</a>, and the kubernetes enhancement proposal (KEP) cited therein, for more details.</p>
|