prompt (stringlengths 65–38.7k) | response (stringlengths 41–29.1k)
---|---|
<p>I am trying to configure Strimzi Kafka listener custom cert, following the documentation: <a href="https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#ref-alternative-subjects-certs-for-listeners-str</a>
I want to expose those listener outside of the Azure Kubernetes Service within the private virtual network.</p>
<p>I have provided a custom cert with private key generated by an internal CA and pointed towards that secret in the Kafka configuration:</p>
<p><code>kubectl create secret generic kafka-tls --from-literal=listener.cer=$cert --from-literal=listener.key=$skey -n kafka</code></p>
<pre><code>listeners:
- name: external
port: 9094
type: loadbalancer
tls: true
authentication:
type: tls
#Listener TLS config
configuration:
brokerCertChainAndKey:
secretName: kafka-tls
certificate: listener.cer
key: listener.key
bootstrap:
loadBalancerIP: 10.67.249.253
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
brokers:
- broker: 0
loadBalancerIP: 10.67.249.251
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- broker: 1
loadBalancerIP: 10.67.249.252
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
- broker: 2
loadBalancerIP: 10.67.249.250
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
authorization:
type: simple
</code></pre>
<p>Certificate has following records:</p>
<p>SAN:
*.kafka-datalake-prod-kafka-brokers *.kafka-datalake-prod-kafka-brokers.kafka.svc kafka-datalake-prod-kafka-bootstrap kafka-datalake-prod-kafka-bootstrap.kafka.svc kafka-datalake-prod-kafka-external-bootstrap kafka-datalake-prod-kafka-external-bootstrap.kafka.svc kafka-datalake-prod-azure.custom.domain</p>
<p>CN=kafka-datalake-produkty-prod-azure.custom.domain</p>
<p>I have also created an A record in the custom DNS for the given address: kafka-datalake-produkty-prod-azure.custom.domain 10.67.249.253</p>
<p>Then, I created a KafkaUser object:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
name: customuser
namespace: kafka
labels:
strimzi.io/cluster: kafka-datalake-prod
spec:
authentication:
type: tls
authorization:
type: simple
acls:
- resource:
type: topic
name: notify.somecustomapp.prod.topic_name
patternType: literal
operations:
- Create
- Describe
- Write
# host: "*"
</code></pre>
<p>When I then retrieve the secrets from the Kafka cluster on AKS:</p>
<pre><code>kubectl get secret kafka-datalake-prod-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > broker.crt
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.key}' | base64 -d > customuser.key
kubectl get secret customuser -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > customuser.crt
</code></pre>
<p>Communication fails, when I try to connect and send some messages with a producer using those 3 files to authenticate/authorize, I get a following issue:</p>
<pre><code>INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <connecting> [IPv4 ('10.67.249.253', 9094)]>: connecting to 10.67.249.253:9094 [('10.67.249.253', 9094) IPv4]
INFO:kafka.conn:Probing node bootstrap-0 broker version
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL CA from certs/prod/broker.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Cert from certs/prod/customuser.crt
INFO:kafka.conn:<BrokerConnection node_id=bootstrap-0 host=10.67.249.253:9094 <handshake> [IPv4 ('10.67.249.253', 9094)]>: Loading SSL Key from certs/prod/customuser.key
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
</code></pre>
<p>What am I doing wrong?</p>
<p>The communication worked perfectly fine when I was using the same method of connecting, yet the cluster itself and listeners were using the default certs generated by Strimzi cluster.</p>
<p>All the best,
Krzysztof</p>
| <p>@Turing85 @Jakub</p>
<p>Many thanks for your comments - especially those critical ones</p>
<p>And thanks, Jakub, for pointing me towards using the CA of custom certificate. What needed to be done in order to fix this was:</p>
<ol>
<li>Replace the value obtained from the <code>kafka-datalake-prod-cluster-ca-cert</code> secret with the full chain: the root CA, the intermediate signing certificate, and the listener certificate itself (see the sketch after this list).</li>
<li>Add the LoadBalancer IPs of the brokers to the certificate's SANs - this is stated in the documentation, yet the way it is formulated misled me into thinking that adding hostnames/service names to the SAN is enough (<a href="https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#tls_listener_san_examples</a>, and later <a href="https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples" rel="nofollow noreferrer">https://strimzi.io/docs/operators/latest/full/configuring.html#external_listener_san_examples</a>).</li>
</ol>
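<p>For reference, here is a minimal sketch of those two steps (file names and the exact SAN list are illustrative - adapt them to your own CA and listener setup):</p>
<pre><code># 1. Build the full trust chain for the clients, instead of using only the cluster CA secret
cat listener.cer intermediate-ca.cer root-ca.cer > broker.crt

# 2. When (re)issuing the listener certificate, include the LoadBalancer IPs as IP SANs,
#    e.g. in the openssl extension config:
#    subjectAltName = DNS:kafka-datalake-prod-azure.custom.domain, IP:10.67.249.253, IP:10.67.249.251, IP:10.67.249.252, IP:10.67.249.250
</code></pre>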
<p>After those changes, everything started to work.</p>
<p>Thank you for help.</p>
|
<p>I would like to install a Helm release using Argo CD. I defined a Helm app declaratively as follows:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: moon
namespace: argocd
spec:
project: aerokube
source:
chart: moon2
repoURL: https://charts.aerokube.com/
targetRevision: 2.4.0
helm:
valueFiles:
- values.yml
destination:
server: "https://kubernetes.default.svc"
namespace: moon1
syncPolicy:
syncOptions:
- CreateNamespace=true
</code></pre>
<p>And here is my values.yml:</p>
<pre><code>customIngress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: "letsencrypt"
ingressClassName: nginx
host: moon3.benighil-mohamed.com
tls:
- secretName: moon-tls
hosts:
- moon3.benighil-mohamed.com
configs:
default:
containers:
vnc-server:
repository: quay.io/aerokube/vnc-server
resources:
limits:
cpu: 400m
memory: 512Mi
requests:
cpu: 200m
memory: 512Mi
</code></pre>
<p>However, the app does not take values.yml into consideration, and I get the following error:</p>
<pre><code>rpc error: code = Unknown desc = Manifest generation error (cached): `helm template . --name-template moon --namespace moon1 --kube-version 1.23 --values /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml --api-versions acme.cert-manager.io/v1 --api-versions acme.cert-manager.io/v1/Challenge --api-versions acme.cert-manager.io/v1/Order --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2 --api-versions autoscaling/v2/HorizontalPodAutoscaler --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta2 --api-versions autoscaling/v2beta2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions batch/v1beta1 --api-versions batch/v1beta1/CronJob --api-versions ceph.rook.io/v1 --api-versions ceph.rook.io/v1/CephBlockPool --api-versions ceph.rook.io/v1/CephBlockPoolRadosNamespace --api-versions ceph.rook.io/v1/CephBucketNotification --api-versions ceph.rook.io/v1/CephBucketTopic --api-versions ceph.rook.io/v1/CephClient --api-versions ceph.rook.io/v1/CephCluster --api-versions ceph.rook.io/v1/CephFilesystem --api-versions ceph.rook.io/v1/CephFilesystemMirror --api-versions ceph.rook.io/v1/CephFilesystemSubVolumeGroup --api-versions ceph.rook.io/v1/CephNFS --api-versions ceph.rook.io/v1/CephObjectRealm --api-versions ceph.rook.io/v1/CephObjectStore --api-versions ceph.rook.io/v1/CephObjectStoreUser --api-versions ceph.rook.io/v1/CephObjectZone --api-versions ceph.rook.io/v1/CephObjectZoneGroup --api-versions ceph.rook.io/v1/CephRBDMirror --api-versions cert-manager.io/v1 --api-versions cert-manager.io/v1/Certificate --api-versions cert-manager.io/v1/CertificateRequest --api-versions cert-manager.io/v1/ClusterIssuer --api-versions cert-manager.io/v1/Issuer --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions crd.projectcalico.org/v1 --api-versions crd.projectcalico.org/v1/BGPConfiguration --api-versions crd.projectcalico.org/v1/BGPPeer --api-versions crd.projectcalico.org/v1/BlockAffinity --api-versions crd.projectcalico.org/v1/CalicoNodeStatus --api-versions crd.projectcalico.org/v1/ClusterInformation --api-versions crd.projectcalico.org/v1/FelixConfiguration --api-versions crd.projectcalico.org/v1/GlobalNetworkPolicy --api-versions crd.projectcalico.org/v1/GlobalNetworkSet --api-versions crd.projectcalico.org/v1/HostEndpoint --api-versions crd.projectcalico.org/v1/IPAMBlock --api-versions crd.projectcalico.org/v1/IPAMConfig --api-versions crd.projectcalico.org/v1/IPAMHandle 
--api-versions crd.projectcalico.org/v1/IPPool --api-versions crd.projectcalico.org/v1/IPReservation --api-versions crd.projectcalico.org/v1/KubeControllersConfiguration --api-versions crd.projectcalico.org/v1/NetworkPolicy --api-versions crd.projectcalico.org/v1/NetworkSet --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions discovery.k8s.io/v1beta1 --api-versions discovery.k8s.io/v1beta1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions events.k8s.io/v1beta1 --api-versions events.k8s.io/v1beta1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta1 --api-versions flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration --api-versions flowcontrol.apiserver.k8s.io/v1beta2 --api-versions flowcontrol.apiserver.k8s.io/v1beta2/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta2/PriorityLevelConfiguration --api-versions moon.aerokube.com/v1 --api-versions moon.aerokube.com/v1/BrowserSet --api-versions moon.aerokube.com/v1/Config --api-versions moon.aerokube.com/v1/DeviceSet --api-versions moon.aerokube.com/v1/License --api-versions moon.aerokube.com/v1/Quota --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions node.k8s.io/v1beta1 --api-versions node.k8s.io/v1beta1/RuntimeClass --api-versions objectbucket.io/v1alpha1 --api-versions objectbucket.io/v1alpha1/ObjectBucket --api-versions objectbucket.io/v1alpha1/ObjectBucketClaim --api-versions operator.tigera.io/v1 --api-versions operator.tigera.io/v1/APIServer --api-versions operator.tigera.io/v1/ImageSet --api-versions operator.tigera.io/v1/Installation --api-versions operator.tigera.io/v1/TigeraStatus --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions policy/v1beta1 --api-versions policy/v1beta1/PodDisruptionBudget --api-versions policy/v1beta1/PodSecurityPolicy --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions snapshot.storage.k8s.io/v1 --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --api-versions snapshot.storage.k8s.io/v1beta1 --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotContent --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions storage.k8s.io/v1beta1 --api-versions storage.k8s.io/v1beta1/CSIStorageCapacity --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions v1/PersistentVolume --api-versions v1/PersistentVolumeClaim 
--api-versions v1/Pod --api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --include-crds` failed exit status 1: Error: open /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml: no such file or directory
</code></pre>
<p>Note that both <code>application.yml</code> and <code>values.yml</code> are located in the same directory on my local machine, i.e. the structure of the two files in question looks like:</p>
<pre><code>.
├── application.yml
└── values.yml
</code></pre>
<p>Any help, please?</p>
| <p>Cleanest way to achieve what you want is using the remote chart as dependency:</p>
<p>Chart.yaml</p>
<pre><code>name: mychartname
version: 1.0.0
apiVersion: v2
dependencies:
  - name: moon2
    version: "2.4.0"
    repository: "https://charts.aerokube.com/"
</code></pre>
<p>And overriding its values like this:</p>
<p>values.yaml</p>
<pre><code>moon2:
  customIngress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt"
    ingressClassName: nginx
    host: moon3.benighil-mohamed.com
    tls:
      - secretName: moon-tls
        hosts:
          - moon3.benighil-mohamed.com
  configs:
    default:
      containers:
        vnc-server:
          repository: quay.io/aerokube/vnc-server
          resources:
            limits:
              cpu: 400m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 512Mi
</code></pre>
<p>Pay attention to this file. You need to create a key in your values file with the same name as the dependency (<code>moon2</code> in your case), and indent the values you want to override by one level.</p>
<p>You need to upload both of these files to a repository and point your ArgoCD application URL to this repository.</p>
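<p>For illustration, the Application would then point at your own repository instead of the chart repository - something along these lines (the repo URL, revision and path are placeholders for wherever you push the wrapper chart):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: moon
  namespace: argocd
spec:
  project: aerokube
  source:
    repoURL: https://github.com/your-org/your-deploy-repo.git  # placeholder
    targetRevision: main                                       # placeholder
    path: charts/moon                                          # directory containing Chart.yaml and values.yaml
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: "https://kubernetes.default.svc"
    namespace: moon1
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
</code></pre>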
<p>This has the advantage that whenever the upstream Helm chart gets updated, all you need to do is increase the version in Chart.yaml.</p>
|
<p>Inventory file (inventory/k8s.yaml):</p>
<pre><code>plugin: kubernetes.core.k8s
connections:
- kubeconfig: ~/.kube/config
context: 'cluster-2'
</code></pre>
<p>Task file (roles/common/tasks/main.yaml):</p>
<pre><code># Method 1: Using `kubernetes.core` plugin to list the pod names:
- name: Get a list of all pods from any namespace
kubernetes.core.k8s_info:
kind: Pod
register: pod_list
- name: Print pod names
debug:
msg: "pod_list: {{ pod_list | json_query('resources[*].metadata.name') }} "
# Method 2: Using `shell` command to list the pod names:
- name: Get node names
shell: kubectl get pods
register: pod_list2
- name: Print pod names
debug:
msg: "{{ pod_list2.stdout }}"
</code></pre>
<p>Ansible config (ansible.cfg):</p>
<pre><code>[inventory]
enable_plugins = host_list, auto, yaml, ini, kubernetes.core.k8s
</code></pre>
<p>Main file (main.yaml):</p>
<pre><code>---
- hosts: localhost
gather_facts: false
collections:
- azure.azcollection
- kubernetes.core
roles:
- "common"
</code></pre>
<p>Running command to execute task: <code>ansible-playbook main.yaml -i cluster-2/k8s.yaml -e role=common -e cluster_name=cluster-2</code></p>
<p>Question:
I am running the above configs to get the pods from the remote cluster mentioned in the inventory file. But the problem is, I am still getting the pod names from the local cluster and not from cluster-2, in both Method 1 and Method 2.</p>
<p>The k8s plugin should get the list of pods from cluster-2 as described in the inventory file. How can I connect to the remote Kubernetes cluster?</p>
<p>I also checked output with <code>-vvvv</code>:</p>
<pre><code>ansible-playbook [core 2.14.0]
config file = /Users/test/u/apps/ansible.cfg
configured module search path = ['/Users/test/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/test/Library/Python/3.9/lib/python/site-packages/ansible
ansible collection location = /Users/test/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/test/Library/Python/3.9/bin/ansible-playbook
python version = 3.9.12 (main, Mar 26 2022, 15:52:10) [Clang 13.0.0 (clang-1300.0.29.30)] (/usr/local/opt/[email protected]/bin/python3.9)
jinja version = 3.1.2
libyaml = True
Using /Users/test/u/apps/ansible.cfg as config file
setting up inventory plugins
Loading collection kubernetes.core from /Users/test/.ansible/collections/ansible_collections/kubernetes/core
</code></pre>
| <p>You're trying to use both the kubernetes inventory plugin <strong>and</strong> the <code>k8s_info</code> module, and because of that you're getting conflicting results. The two don't have anything to do with each other.</p>
<h2>The inventory module</h2>
<p>The kubernetes inventory module is -- I think -- a weird beast; it produces an ansible inventory in which the pods in your cluster are presented as Ansible hosts. To see a list of all the pod names in your cluster, you could write a playbook like this:</p>
<pre><code>- hosts: all
gather_facts: false
tasks:
- name: Print pod names
debug:
msg: "{{ inventory_hostname }}"
</code></pre>
<p>This will respect the context you've configured in your kubernetes inventory plugin configuration. For example, if I have in <code>inventory/k8s.yaml</code> the following:</p>
<pre><code>plugin: kubernetes.core.k8s
connections:
- kubeconfig: ./kubeconfig
context: 'kind-cluster2'
</code></pre>
<p>Then the above playbook will list the pod names from <code>kind-cluster2</code>, regardless of the <code>current-context</code> setting in my <code>kubeconfig</code> file. In my test environment, this produces:</p>
<pre><code>PLAY [all] *********************************************************************
TASK [Print pod names] *********************************************************
ok: [kubernetes] => {
"msg": "kubernetes"
}
ok: [coredns-565d847f94-2shl6_coredns] => {
"msg": "coredns-565d847f94-2shl6_coredns"
}
ok: [coredns-565d847f94-md57c_coredns] => {
"msg": "coredns-565d847f94-md57c_coredns"
}
ok: [kube-dns] => {
"msg": "kube-dns"
}
ok: [etcd-cluster2-control-plane_etcd] => {
"msg": "etcd-cluster2-control-plane_etcd"
}
ok: [kube-apiserver-cluster2-control-plane_kube-apiserver] => {
"msg": "kube-apiserver-cluster2-control-plane_kube-apiserver"
}
ok: [kube-controller-manager-cluster2-control-plane_kube-controller-manager] => {
"msg": "kube-controller-manager-cluster2-control-plane_kube-controller-manager"
}
ok: [kube-scheduler-cluster2-control-plane_kube-scheduler] => {
"msg": "kube-scheduler-cluster2-control-plane_kube-scheduler"
}
ok: [kindnet-nc27b_kindnet-cni] => {
"msg": "kindnet-nc27b_kindnet-cni"
}
ok: [kube-proxy-9chgt_kube-proxy] => {
"msg": "kube-proxy-9chgt_kube-proxy"
}
ok: [local-path-provisioner-684f458cdd-925v5_local-path-provisioner] => {
"msg": "local-path-provisioner-684f458cdd-925v5_local-path-provisioner"
}
PLAY RECAP *********************************************************************
coredns-565d847f94-2shl6_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
coredns-565d847f94-md57c_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
etcd-cluster2-control-plane_etcd : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kindnet-nc27b_kindnet-cni : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-apiserver-cluster2-control-plane_kube-apiserver : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-controller-manager-cluster2-control-plane_kube-controller-manager : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-dns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-proxy-9chgt_kube-proxy : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-scheduler-cluster2-control-plane_kube-scheduler : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
local-path-provisioner-684f458cdd-925v5_local-path-provisioner : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
<p>The key point here is that your inventory will consist of a list of pods. I've never found this particularly useful.</p>
<h2>The <code>k8s_info</code> module</h2>
<p>The <code>k8s_info</code> module queries a Kubernetes cluster for a list of objects. It doesn't care about your inventory configuration -- it will run on whichever target host you've defined for your play (probably <code>localhost</code>) and perform the rough equivalent of <code>kubectl get <whatever></code>. If you want to use an explicit context, you need to set that as part of your module parameters. For example, to see a list of pods in <code>kind-cluster2</code>, I could use the following playbook:</p>
<pre><code>- hosts: localhost
gather_facts: false
tasks:
- kubernetes.core.k8s_info:
kind: pod
kubeconfig: ./kubeconfig
context: kind-cluster2
register: pods
- debug:
msg: "{{ pods.resources | json_query('[].metadata.name') }}"
</code></pre>
<p>Which in my test environment produces as output:</p>
<pre><code>PLAY [localhost] ***************************************************************
TASK [kubernetes.core.k8s_info] ************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"coredns-565d847f94-2shl6",
"coredns-565d847f94-md57c",
"etcd-cluster2-control-plane",
"kindnet-nc27b",
"kube-apiserver-cluster2-control-plane",
"kube-controller-manager-cluster2-control-plane",
"kube-proxy-9chgt",
"kube-scheduler-cluster2-control-plane",
"local-path-provisioner-684f458cdd-925v5"
]
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
<hr />
<p>In conclusion: you probably want to use <code>k8s_info</code> rather than the inventory plugin, and you'll need to configure the module properly by setting the <code>context</code> (and possibly the <code>kubeconfig</code>) parameters when you call the module.</p>
<hr />
<blockquote>
<p>Is there any way I can define context and kubeconfig outside of the tasks (globally) if I am using k8s_info module?</p>
</blockquote>
<p>According to <a href="https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_info_module.html" rel="nofollow noreferrer">the documentation</a>, you could set the <code>K8S_AUTH_KUBECONFIG</code> and <code>K8S_AUTH_CONTEXT</code> environment variables if you want to globally configure the settings for the <code>k8s_info</code> module. You could also write your task like this:</p>
<pre><code> - kubernetes.core.k8s_info:
kind: pod
kubeconfig: "{{ k8s_kubeconfig }}"
context: "{{ k8s_context }}"
register: pods
</code></pre>
<p>And then define the <code>k8s_kubeconfig</code> and <code>k8s_context</code> variables somewhere else in your Ansible configuration (e.g., as group vars). This makes it easy to retarget things to a different cluster with only a single change.</p>
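<p>For example, a minimal sketch of such group vars (the file location and values are illustrative):</p>
<pre><code># group_vars/all.yaml
k8s_kubeconfig: ~/.kube/config
k8s_context: cluster-2
</code></pre>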
|
<p>I tried to deploy the Kafka-UI in my local Kubernetes cluster, but ingress-nginx gives a 502 (Bad Gateway) error. I used the following configurations:</p>
<p><strong>Deployment:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-ui-deployment
labels:
app: kafka-ui
spec:
replicas: 1
selector:
matchLabels:
app: kafka-ui
template:
metadata:
labels:
app: kafka-ui
spec:
containers:
- name: kafka-ui
image: provectuslabs/kafka-ui:latest
env:
- name: KAFKA_CLUSTERS_0_NAME
value: "K8 Kafka Cluster"
- name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
value: kafka-svc:9093
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "1000m"
ports:
- containerPort: 8088
protocol: TCP</code></pre>
</div>
</div>
</p>
<p><strong>Service:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: kafka-ui-service
spec:
selector:
app: kafka-ui
ports:
- protocol: TCP
port: 80
targetPort: 8088</code></pre>
</div>
</div>
</p>
<p><strong>Ingress:</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> ingressClassName: public
rules:
- host: "localhost"
http:
paths:
- path: /kafka-ui
pathType: Prefix
backend:
service:
name: kafka-ui-service
port:
number: 80</code></pre>
</div>
</div>
</p>
<p>Port-forwarding the target port gives the following error:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>Forwarding from 127.0.0.1:8088 -> 8088
Forwarding from [::1]:8088 -> 8088
channel 9: open failed: connect failed: Connection refused
Handling connection for 8088
Handling connection for 8088
E0623 09:18:20.768161 33100 portforward.go:406] an error occurred forwarding 8088 -> 8088: error forwarding port 8088 to pod 75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0, uid : failed to execute portforward in network namespace "/var/run/netns/cni-a5ed0994-0456-6b6c-5a79-90e582ef09b3": failed to connect to localhost:8088 inside namespace "75353d54479df5f235c03db1899367dc77e82877986be849761eba6193ca72c0", IPv4: dial tcp4 127.0.0.1:8088: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found
E0623 09:18:20.768994 33100 portforward.go:234] lost connection to pod</code></pre>
</div>
</div>
</p>
<p>Any suggestions will be appreciated.
Thanks for your help!</p>
<p>The main error was the port: the Kafka-UI container listens on 8080, not 8088. This YAML works fine for me.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-ui-deployment
labels:
app: kafka-ui
spec:
replicas: 1
selector:
matchLabels:
app: kafka-ui
template:
metadata:
labels:
app: kafka-ui
spec:
containers:
- name: kafka-ui
image: provectuslabs/kafka-ui:latest
env:
- name: KAFKA_CLUSTERS_0_NAME
value: "K8 Kafka Cluster"
- name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
value: kafka-kafka-bootstrap.kafka:9092 # <- service-name.namespace:9092
imagePullPolicy: Always
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "1000m"
ports:
- containerPort: 8080 # <- Rectify the port
protocol: TCP</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: kafka-ui-service
namespace: kafka
spec:
selector:
app: kafka-ui
ports:
- protocol: TCP
port: 8080
targetPort: 8080 # <- Rectify the port.</code></pre>
</div>
</div>
</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> ingressClassName: public
rules:
- host: "localhost"
http:
paths:
- path: /kafka-ui
pathType: Prefix
backend:
service:
name: kafka-ui-service
port:
number: 8080</code></pre>
</div>
</div>
</p>
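<p>To double-check the port before going through the ingress, you can port-forward the deployment directly (adjust the namespace to wherever the deployment actually runs):</p>
<pre><code>kubectl -n kafka port-forward deploy/kafka-ui-deployment 8080:8080
curl -I http://localhost:8080
</code></pre>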
|
<p>I am wondering if <code>systemd</code> could be used as the <strong>cgroup driver</strong> in a <strong>cgroup v1</strong> environment.</p>
<p>NOTE: As mentioned in the <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">Kubernetes Container Runtimes Doc</a>, <code>cgroupfs</code> is preferred when the OS has <strong>cgroup v1</strong>.</p>
<p>I have tried to set up a Kubernetes cluster using <code>systemd</code> as the <strong>cgroup driver</strong>, and it is working correctly so far.</p>
<p>The test env is:</p>
<ul>
<li>Kubelet: 1.23</li>
<li>OS: Ubuntu 20.04 (Kernel 5.4.0, cgroup v1)</li>
<li>CRI: containerd 1.5.9</li>
<li>Cgroup Driver: systemd</li>
</ul>
<p>Are there any risks by using <code>systemd</code> in <strong>cgroup v1</strong> env?</p>
| <blockquote>
<p>NOTE: As mentioned in Kubernetes Container Runtimes Doc, cgroupfs is preferred when the OS have cgroup v1.</p>
</blockquote>
<p>Can you specify which paragraph this is? If I'm not mistaken, the document doesn't state that cgroupfs is preferred over systemd for distros that use cgroup v1. systemd is widely accepted as the init system, but cgroup v2 is available only if you run a fairly new (>=5.8) kernel.</p>
<blockquote>
<p>Are there any risks by using systemd in cgroup v1 env?</p>
</blockquote>
<p>Cgroup v1 is still what is mostly in use to date, and systemd is designed to work with it. That being said, cgroupfs is the default for the kubelet at the time of writing. As kernels mature over time, systemd may one day become the default, and all the backing CRIs will follow through.</p>
<p>A side note: Docker defaults to cgroupfs on systems that only support cgroup v1 (regardless of whether systemd is present). It uses systemd on systems that use cgroup v2 where systemd is present. However, Kubernetes has dropped Docker as a CRI with the removal of dockershim starting in v1.24. You can continue with dockershim via <a href="https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/" rel="nofollow noreferrer">Mirantis</a>.</p>
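<p>If you do stay on <code>systemd</code>, the main thing to check is that the kubelet and the container runtime agree on the driver. A minimal sketch of the kubelet side is below; for containerd the matching setting is <code>SystemdCgroup = true</code> under the runc runtime options in <code>/etc/containerd/config.toml</code>:</p>
<pre><code># KubeletConfiguration excerpt - must match the runtime's cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
</code></pre>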
|
<p>I'm using Flink Kubernetes Operator 1.3.0 and need to pass some environment variables to a Python job. I have followed the <a href="https://github.com/apache/flink-kubernetes-operator/tree/release-1.3/examples/flink-python-example" rel="nofollow noreferrer">official documentation</a> and the example runs fine. How can I inject environment variables so that I can use them inside the Python file?</p>
<p>EDIT:</p>
<p>Here's the yaml file that I've used. Its straight from the example link above:</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: python-example
spec:
image: localhost:32000/flink-python-example:1.16.0
flinkVersion: v1_16
flinkConfiguration:
taskmanager.numberOfTaskSlots: "1"
serviceAccount: flink
jobManager:
resource:
memory: "2048m"
cpu: 1
taskManager:
resource:
memory: "2048m"
cpu: 1
job:
jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"]
parallelism: 1
upgradeMode: stateless
</code></pre>
<p>As you can see, it's a custom resource of kind FlinkDeployment. And here's the Python code:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import sys
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
def python_demo():
env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
t_env = StreamTableEnvironment.create(stream_execution_environment=env)
t_env.execute_sql("""
CREATE TABLE orders (
order_number BIGINT,
price DECIMAL(32,2),
buyer ROW<first_name STRING, last_name STRING>,
order_time TIMESTAMP(3)
) WITH (
'connector' = 'datagen'
)""")
t_env.execute_sql("""
CREATE TABLE print_table WITH ('connector' = 'print')
LIKE orders""")
t_env.execute_sql("""
INSERT INTO print_table SELECT * FROM orders""")
if __name__ == '__main__':
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
python_demo()
</code></pre>
| <p>Found the solution.</p>
<p>This is not detailed in the reference
<a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference</a></p>
<p>or in the example Flink Deployment
<a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/</a></p>
<p>But here it says: <a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference/#jobmanagerspec" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/reference/#jobmanagerspec</a>
<code>JobManager pod template. It will be merged with FlinkDeploymentSpec.podTemplate</code></p>
<p>So I just added <code>envFrom</code>, following the example that shows how to extend the FlinkDeployment with a pod template:</p>
<p><a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/pod-template/</a></p>
<p>Confirmed this is working, as I had to get it working for my own application:</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: python-example
spec:
image: localhost:32000/flink-python-example:1.16.0
flinkVersion: v1_16
flinkConfiguration:
taskmanager.numberOfTaskSlots: "1"
serviceAccount: flink
jobManager:
resource:
memory: "2048m"
cpu: 1
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: pod-template
spec:
serviceAccount: flink
containers:
# Do not change the main container name
- name: flink-main-container
envFrom:
- secretRef:
name: <SECRET RESOURCE NAME>
taskManager:
resource:
memory: "2048m"
cpu: 1
job:
jarURI: local:///opt/flink/opt/flink-python_2.12-1.16.0.jar # Note, this jarURI is actually a placeholder
entryClass: "org.apache.flink.client.python.PythonDriver"
args: ["-pyclientexec", "/usr/local/bin/python3", "-py", "/opt/flink/usrlib/python_demo.py"]
parallelism: 1
upgradeMode: stateless
</code></pre>
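<p>For completeness, the secret referenced by <code>secretRef</code> can be created up front - the name, key and namespace below are placeholders:</p>
<pre><code>kubectl create secret generic my-flink-job-env \
  --from-literal=MY_ENV_VAR=some-value \
  -n default
</code></pre>
<p>The variables then appear in the environment of <code>flink-main-container</code>, so the Python job can read them with <code>os.environ</code>.</p>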
|
<p>I want to watch changes to Pods continuously using the client-go Kubernetes SDK. I am using the below code to watch the changes:</p>
<pre class="lang-golang prettyprint-override"><code>func (c *Client) watchPods(namespace string, restartLimit int) {
fmt.Println("Watch Kubernetes Pods")
watcher, err := c.Clientset.CoreV1().Pods(namespace).Watch(context.Background(),
metav1.ListOptions{
FieldSelector: "",
})
if err != nil {
fmt.Printf("error create pod watcher: %v\n", err)
return
}
for event := range watcher.ResultChan() {
pod, ok := event.Object.(*corev1.Pod)
if !ok || !checkValidPod(pod) {
continue
}
owner := getOwnerReference(pod)
for _, c := range pod.Status.ContainerStatuses {
if reflect.ValueOf(c.RestartCount).Int() >= int64(restartLimit) {
if c.State.Waiting != nil && c.State.Waiting.Reason == "CrashLoopBackOff" {
doSomething()
}
if c.State.Terminated != nil {
doSomethingElse()
}
}
}
}
}
</code></pre>
<p>The code is watching changes to the Pods, but it exits after some time. I want to run this continuously. I also want to know how much load it puts on the API Server and what the best way is to run a control loop that watches for changes.</p>
| <p>In Watch, a long poll connection is established with the API server. Upon establishing a connection, the API server sends an initial batch of events and any subsequent changes. The connection will be dropped after a timeout occurs.</p>
<p>I would suggest using an Informer instead of setting up a watch, as it is much more optimized and easier to set up. While creating an informer, you can register specific functions which will be invoked when pods get created, updated, and deleted. With informers, too, you can target specific pods using a labelSelector, similar to watch. You can also create shared informers, which are shared across multiple controllers in the cluster. This reduces the load on the API server.</p>
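<p>A minimal sketch of that approach with a shared informer from client-go (the package name, resync period, and handler body are placeholders - wire in the restartLimit / CrashLoopBackOff logic from your question):</p>
<pre class="lang-golang prettyprint-override"><code>package podwatch

import (
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
)

// WatchPodsWithInformer keeps running until stopCh is closed; the informer
// re-lists and re-watches internally, so there is no dropped channel to handle.
func WatchPodsWithInformer(clientset *kubernetes.Clientset, namespace string, stopCh chan struct{}) {
    factory := informers.NewSharedInformerFactoryWithOptions(
        clientset, 10*time.Minute, informers.WithNamespace(namespace))

    podInformer := factory.Core().V1().Pods().Informer()
    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(oldObj, newObj interface{}) {
            pod, ok := newObj.(*corev1.Pod)
            if !ok {
                return
            }
            // Placeholder: put your restartLimit / CrashLoopBackOff checks here.
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
                    fmt.Printf("pod %s/%s container %s is in CrashLoopBackOff\n",
                        pod.Namespace, pod.Name, cs.Name)
                }
            }
        },
    })

    factory.Start(stopCh)            // runs the informer in background goroutines
    factory.WaitForCacheSync(stopCh) // wait for the initial List to finish
    <-stopCh                         // block until the caller stops us
}
</code></pre>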
<p>Below are few links to get you started:</p>
<ol>
<li><a href="https://aly.arriqaaq.com/kubernetes-informers/" rel="nofollow noreferrer">https://aly.arriqaaq.com/kubernetes-informers/</a></li>
<li><a href="https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/" rel="nofollow noreferrer">https://www.cncf.io/blog/2019/10/15/extend-kubernetes-via-a-shared-informer/</a></li>
<li><a href="https://pkg.go.dev/k8s.io/client-go/informers" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/client-go/informers</a></li>
</ol>
|
<p>We've been looking at memory for capacity planning and have a Helm-deployed 8GB-limited QuestDB instance running on one of our k8s clusters.</p>
<p>We recently began scraping metrics off of it. I'm trying to get to the bottom of the <code>questdb_memory_mem_used</code> metric, which occasionally sees excursions way beyond the resource limits.</p>
<p><a href="https://i.stack.imgur.com/mL1dn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mL1dn.png" alt="Chart displaying Memory Used, with peaks of up to almost 25Gb" /></a></p>
<p>Does anyone have a good handle on what contributes to this metric and what we could be seeing?</p>
<p>For reference the <code>NATIVE_*</code> tagged metrics seem much more sane in the same time period:</p>
<p><a href="https://i.stack.imgur.com/fWyEQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fWyEQ.png" alt="List of several QuestDB monitoring metrics, including FAST_MAP at 789MB and FAST_MAP_LONG_LIST at 6.31GB" /></a></p>
| <p>According to the <a href="https://questdb.io/docs/third-party-tools/prometheus/" rel="nofollow noreferrer">documented Prometheus metrics</a> exposed by QuestDB, <code>questdb_memory_mem_used</code> includes all native memory allocations which may include virtual memory if it wasn't touched yet.</p>
<p>This metric includes mmapped files, so that's why its value is that big. You might see that metric grow when you access large tables on your instance. <code>DISTINCT</code> and <code>JOIN</code> queries will also affect this metric.</p>
<p>Please note mmapped memory is elastic and mostly virtual. Only free memory is used for the page cache, so it's fine if this metric has a large value.</p>
|
<p>I have created a pod with the below pod definition, which uses the official mongo Docker image. The expected result here is that the image creates the user and password from the env variables <code>MONGO_INITDB_ROOT_USERNAME</code> and <code>MONGO_INITDB_ROOT_PASSWORD</code> and then uses the <code>/etc/mongo/mongod.conf</code> provided to it from a volume. Instead, what happens is that on the first connection I am unable to connect: it says the user does not exist.</p>
<p>The error disappears if I remove the <code>command</code> section. Any idea how to resolve this issue?</p>
<p>The equivalent docker command works well, but in Kubernetes, auth does not work if I provide a custom configuration file.</p>
<pre><code>docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret --name some-mongo -v /etc/mongo:/etc/mongo -v /etc/ssl/keyfile:/data/db/keyfile mongo:4.2.23 --config /etc/mongo/mongod.conf
</code></pre>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mongodb
labels:
db: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.2.23
command:
- mongod
- "--config"
- "/etc/mongo/mongod.conf"
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongosecret
key: user
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongosecret
key: password
volumeMounts:
- name: mongodb-keyfile
mountPath: /etc/ssl
- name: mongodb-config
mountPath: /etc/mongo
readOnly: true
volumes:
- name: mongodb-keyfile
secret:
secretName: mongodb-keyfile
defaultMode: 0600
- name: mongodb-config
configMap:
name: mongodb-config
</code></pre>
<p>As per this <a href="https://stackoverflow.com/questions/62018646/unable-to-authenticate-mongodb-deployed-in-kubernetes">SO question</a>, and as you said, it works after removing <code>command</code> from the deployment file. This is because when you set the <code>MONGO_INITDB_ROOT_USERNAME</code> and <code>MONGO_INITDB_ROOT_PASSWORD</code> env variables in your manifest, the mongo container enables <code>--auth</code> by itself, so you don't need to specify it explicitly.</p>
<p>Refer to <a href="https://stackoverflow.com/questions/34559557/how-to-enable-authentication-on-mongodb-through-docker">SO1</a> and <a href="https://stackoverflow.com/questions/51815216/authentication-mongo-deployed-on-kubernetes">SO2</a> for more information. You can also pass the username and password as <a href="https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets" rel="nofollow noreferrer">secrets</a>.</p>
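<p>If you still need the custom config file, one option (a sketch, not tested against your exact setup) is to pass it via <code>args</code> instead of <code>command</code>, so the image's entrypoint - which performs the user initialisation - still runs, mirroring how your <code>docker run</code> command appends <code>--config</code> after the image name:</p>
<pre><code>  containers:
    - name: mongodb
      image: mongo:4.2.23
      # args are appended to the default entrypoint instead of replacing it
      args:
        - "--config"
        - "/etc/mongo/mongod.conf"
</code></pre>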
|
<p>I'm moving my project to Kubernetes using Traefik for routing and MetalLB as my load balancer.</p>
<p>I've deployed several apps and I'd like to make use of the official <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Kubernetes-Dashboard</a>. So I deployed the Kubernetes-Dashboard using the recommended config and created an IngressRoute:</p>
<pre class="lang-yaml prettyprint-override"><code># dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
entryPoints:
- websecure
routes:
- match: Host(`k8s.example.com`, `www.k8s.example.com`)
kind: Rule
middlewares:
# - name: https-redirectscheme
# - name: nginx-basic-auth
services:
- kind: Service
name: kubernetes-dashboard
# namespace: kubernetes-dashboard
port: 443
tls:
secretName: k8s.example.com-tls
</code></pre>
<p>It shows up in the Traefik Dashboard, but when I try to access k8s.example.com I get <code>Internal Server Error</code>.</p>
<p>Thank you</p>
| <p>Found the answer here: <a href="https://stackoverflow.com/a/69999245/3883694">https://stackoverflow.com/a/69999245/3883694</a></p>
<p>You can disable SSL certificate verification.</p>
<p><a href="https://doc.traefik.io/traefik/routing/overview/#transport-configuration" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/overview/#transport-configuration</a></p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
name: traefik-dashboard-transport
namespace: traefik
spec:
serverName: traefik-dashboard
insecureSkipVerify: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: dashboard
spec:
entryPoints:
- web
routes:
- match: (PathPrefix(`/dashboard`) || Host(`traefik.example.com`))
kind: Rule
services:
- name: api@internal
kind: TraefikService
serversTransport: traefik-dashboard-transport
</code></pre>
|
<p>Just curious if it is possible to execute a command inside <code>minikube</code> without doing <code>minikube ssh</code> and then executing the command.</p>
<p>Something like:</p>
<p><code>minikube ssh exec -it <command></code></p>
| <p>As of 2023, it is possible. Just run:</p>
<pre><code>minikube ssh '<command>'
</code></pre>
<p>For example:</p>
<pre><code>minikube ssh 'ls -la'
</code></pre>
|
<p>The secret value is returning nil.</p>
<p>When I run the command below to check the values in the secret:</p>
<pre><code>kubectl get secret cred2 -n service-b -o jsonpath='{.data.*}' | base64 -d
</code></pre>
<p>It returns a nil value. How do I get the value that was passed in the values.yaml file?</p>
<pre><code>{"auths":{"%!s(<nil>)":{"username":"%!s(<nil>)","password":"%!s(<nil>)","email":"%!s(<nil>)","auth":"JSFzKDxuaWw+KTolIXMoPG5pbD4p"}}}%`
</code></pre>
<p>Sample code used to generate the secret:</p>
<pre><code>{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
{{ range $.Values.imageCredentials }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .name }}
namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" $ }}
{{ end }}
{{ end }}
</code></pre>
<pre><code>values.yaml
imageCredentials:
- name: cred1
registry: quay.io
username: someone
password: sillyness
email: [email protected]
- name: cred2
registry: quay.io
username: someone
password: sillyness
email: [email protected]
</code></pre>
<pre><code>_helpers
{{- define "imagePullSecret" }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
</code></pre>
| <p>You should think of <code>template</code> (and the Helm-specific <code>include</code>) like function calls that take a single parameter. Here you're passing that parameter as <code>$</code>, a special variable that refers to the top-level object. You probably want <code>.</code>, which within a <code>range</code> loop refers to the current item.</p>
<pre class="lang-yaml prettyprint-override"><code>{{ range $.Values.imageCredentials }}
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
{{/* ^ not `$` */}}
{{ end }}
</code></pre>
<p>You might be confusing this case with the similar <a href="https://stackoverflow.com/questions/74964282/create-kubernetes-docker-registry-secret-from-yaml-file-for-each-lookup-namespac">Create kubernetes docker-registry secret from yaml file for each lookup namespaces?</a>. In that question, the template is trying to refer to <code>.Values.imageCredentials</code>. This expression can be decomposed as: within <code>.</code> (the template parameter), find the field <code>Values</code>, and within that find the field <code>imageCredentials</code>. In that question the template parameter must be the top-level Helm object so that it can dereference <code>.Values</code>. But in your example here, you loop over a list in the top-level template, and need to pass the individual values into the supporting template.</p>
<pre class="lang-yaml prettyprint-override"><code>{{/* Your question: template accepts a sub-item of the values; pass
`.` or another derived expression */}}
{{- define "imagePullSecret" -}}
{{ printf ... .registry .email ... }}
{{- end -}}
{{/* Other question: template accepts the top-level Helm object; pass
`$`, a saved reference to the top-level object, or you can use
`.` outside a `range` or `with` block that redefines it */}}
{{- define "imagePullSecret" -}}
{{- with .Values.imageCredentials }}...{{- end }}
{{- end -}}
</code></pre>
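<p>After that change, re-checking the rendered secret the same way as in the question should show the real registry, username, and auth values instead of <code>%!s(<nil>)</code>:</p>
<pre><code>kubectl get secret cred2 -n service-b -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
</code></pre>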
|
<p>In the Kubernetes documentation for controlling-access to the API server, under the <a href="https://kubernetes.io/docs/concepts/security/controlling-access/#authorization" rel="nofollow noreferrer">Authorization section</a> it says that authorization is controlled through a Policy.</p>
<p>However, a Policy is not found as an API resource:</p>
<pre><code>❯ k api-resources | grep -i policy
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
❯ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3+k3s1
</code></pre>
<p>So what exactly is a Policy? How is it setup?</p>
<p>The docs are not very clear on this point.</p>
| <blockquote>
<p>...under the Authorization section it says that authorization is controlled through a Policy.</p>
</blockquote>
<p>The sample you saw is for a cluster that uses ABAC. From your output, your cluster most likely uses RBAC, where such an access-policy file is not used.</p>
<blockquote>
<p>So what exactly is a Policy? How is it setup?</p>
</blockquote>
<p>Here's the official documentation for <a href="https://kubernetes.io/docs/reference/access-authn-authz/abac/" rel="nofollow noreferrer">Attribute-based access control</a></p>
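<p>For illustration, an ABAC policy is just a file of one-JSON-object-per-line entries passed to the API server via <code>--authorization-mode=ABAC --authorization-policy-file=...</code>; the user and namespace below are examples taken from that documentation:</p>
<pre><code>{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "bob", "namespace": "projectCaribou", "resource": "pods", "readonly": true}}
</code></pre>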
|
<p>I'm not sure how to phrase this so I apologize if the title of the question does not make sense to you.</p>
<p>Due to various historical reasons, I have multiple teams contributing to the same code repo, which serves multiple service endpoints. Currently, all teams' deployments and releases are done together, which creates a lot of churn.</p>
<p>I'm trying to get to this state: teams A and B can still share the same code base, but they can deploy separately using different Kubernetes namespaces. Like:</p>
<ul>
<li>Team A's code is all under <code>com/mycompany/team_a</code>, team B's under <code>com/mycompany/team_b</code></li>
<li>Somewhere in the repo there is a config that does the mapping:
<ul>
<li><code>com/mycompany/team_a/*</code> => <code>config_team_a.yaml</code>, that has a Kubernetes config, maybe with namespace <code>TeamA/ServiceA</code></li>
<li><code>com/mycompany/team_b/*</code> => <code>config_team_b.yaml</code> with namespace <code>TeamB/ServiceB</code></li>
</ul>
</li>
</ul>
<p>So that they can build their image separately and, of course, deploy separately.</p>
| <p>Correct me if I'm wrong, but from the description of your problem it looks like you actually have two problems:</p>
<ul>
<li>The fact that you have separate services code in the same repo (team A and team B);</li>
<li>The fact that you have several environments (development/production, for example)</li>
</ul>
<p>The second issue can be easily solved if you use <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, for example. It allows you to template your builds and pass different configs to them.
The first one can also be partly solved by Helm, since you can separate your teams' builds using templating.</p>
<p>However, a few years ago, I was working on a .NET monorepo and faced yet another problem: every time a PR was merged to our git repo, a build was triggered in Jenkins for every service we had, even those that did not have changes. From the description of your problem, it is not clear to me if you have a Jenkins pipeline configured and/or if you are also facing something similar, but if you are, you can have a look at what I did to work around the issue: <a href="https://github.com/pgeadas/jenkinssharedlibrary/blob/master/src/com/pgeadas/K8sUtilities.groovy" rel="nofollow noreferrer">repo</a>. Feel free to have a look and I hope that helps.</p>
|
<p>I want to add or copy files into a persistent volume and then use it in a container via a volume mount. Any help?</p>
| <p>Once PVC/PV are created (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a>), there are number of possible solutions.</p>
<p>For specific question, options 1 and 2 will suffice. Listing more for reference, however this list does not try to be complete.</p>
<ol>
<li>Simplest and native, <code>kubectl cp</code> (see the sketch after this list): <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp</a></li>
<li><code>rsync</code> - still quite simple, but also robust. Recommended for this task (both of the options below were tested)</li>
</ol>
<ul>
<li>TO pod: <a href="https://serverfault.com/questions/741670/rsync-files-to-a-kubernetes-pod">https://serverfault.com/questions/741670/rsync-files-to-a-kubernetes-pod</a></li>
<li>FROM pod: <a href="https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html" rel="nofollow noreferrer">https://cybercyber.org/using-rsync-to-copy-files-to-and-from-a-kubernetes-pod.html</a></li>
</ul>
<ol start="3">
<li><code>tar</code>, but incremental: <a href="https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod" rel="nofollow noreferrer">https://www.freshleafmedia.co.uk/blog/incrementally-copying-rsyncing-files-from-a-kubernetes-pod</a></li>
<li>Tools for synchronisation, backup, etc</li>
</ol>
<ul>
<li>For example, <a href="https://github.com/backube/volsync" rel="nofollow noreferrer">https://github.com/backube/volsync</a></li>
</ul>
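<p>A minimal sketch of option 1 (all names are placeholders; the destination path is wherever the PVC is mounted in the pod):</p>
<pre><code># copy a local directory into the volume-mounted path of a running pod
kubectl cp ./local-data my-namespace/my-pod:/mnt/data -c my-container
</code></pre>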
|
<p>Is there anyone who uses Argo CD on EKS Fargate? It seems that there is an issue with the Argo setup on Fargate. All pods are in a <code>Pending</code> state.</p>
<p>I've tried installing in the argocd namespace and in existing ones. It still doesn't work.</p>
<p>I tried to install it using the commands below:</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
</code></pre>
<p>Make sure you have created a Fargate profile with the namespace selector set to <code>argocd</code>; pods in a namespace that no Fargate profile matches will stay <code>Pending</code>. That might be one of the issues.</p>
<p>Refer to <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile</a></p>
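<p>For example, with eksctl (the cluster name and region are placeholders):</p>
<pre><code>eksctl create fargateprofile \
  --cluster <your-cluster-name> \
  --region <your-region> \
  --name argocd \
  --namespace argocd
</code></pre>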
|
<p>I have created a new EKS cluster with 1 worker node in a public subnet. I am able to query the node, connect to the cluster, and run the pod creation command; however, when I try to create a pod, it fails with the below error obtained by describing the pod. Please guide.</p>
<pre><code> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 81s default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
Warning FailedScheduling 16m default-scheduler 0/2 nodes are available: 2 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable, 3 Too many pods. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
Warning FailedScheduling 14m (x3 over 22m) default-scheduler 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 Too many pods. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
Warning FailedScheduling 12m default-scheduler 0/2 nodes are available: 1 Too many pods, 2 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 2 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
Warning FailedScheduling 7m14s default-scheduler no nodes available to schedule pods
Warning FailedScheduling 105s (x5 over 35m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
</code></pre>
<p>I am able to get the status of the node and it looks ready:</p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-12-61.ec2.internal Ready <none> 15m v1.24.7-eks-fb459a0
</code></pre>
<p>While troubleshooting, I tried the options below:</p>
<ol>
<li>recreating the complete demo cluster - still the same error</li>
<li>recreating pods with different images - still the same error</li>
<li>changing the instance type to t3.micro - still the same error</li>
<li>reviewing security groups and other cluster parameters - couldn't determine the root cause</li>
</ol>
| <p>It's due to the node's pod limit, which on EKS comes from the <strong>IP</strong> limit of the node's network interfaces.</p>
<p>If we check the <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="nofollow noreferrer">official Amazon doc</a>, a <strong>t3.micro</strong> supports a maximum of 2 network interfaces with 2 private IPs each. With the usual EKS formula <code>ENIs * (IPs per ENI - 1) + 2</code>, that gives roughly <strong>4</strong> schedulable pods, and some of those slots are already used by the DaemonSets and other system pods that run on every node.</p>
<p>Add another node, or upgrade to a larger instance type that can handle more pods.</p>
|
<p>I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than using its array index. For example, if I have 3 containers in a pod, (0, 1, 2) and I want to patch container "1" I would normally do something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>patch: |-
- op: add
path: /spec/containers/1/command
value: ["sh", "-c", "tail -f /dev/null"]
</code></pre>
<p>That is heavily dependent on that container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable.</p>
<p>Is there a way to patch by name, or target a label/annotation, or some other mechanism?</p>
<pre><code>path: /spec/containers/${NAME_OF_CONTAINER}/command
</code></pre>
<p>Any insight is greatly appreciated.</p>
| <p>You may have seen <em>JSONPath</em> syntax like this floating around the internet and hoped that you could select a list item and patch it using Kustomize.</p>
<pre><code>/spec/containers[name=my-app]/command
</code></pre>
<p>As @Rico mentioned in <a href="https://stackoverflow.com/a/63928566/4785629">his answer</a>: This is a limitation with <a href="https://www.rfc-editor.org/rfc/rfc6902.html" rel="nofollow noreferrer">JSON6902</a> - it only accepts paths using <em>JSONPointer</em> syntax, defined by <a href="https://www.rfc-editor.org/rfc/rfc6901.html" rel="nofollow noreferrer">JSON6901</a>.</p>
<p><em><strong>So, no, you cannot currently address a list item using <code>[key=value]</code> syntax when using kustomize's <code>patchesJson6902</code>.</strong></em></p>
<p>However, the problem presented in the original question around dealing with changes to the order of list items does have a solution using <em>JSONPointer</em> syntax (JSON6901) without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied).</p>
<p>Simply add another JSON6902 operation to your patches to <code>test</code> that the item remains at the index you specified.</p>
<pre><code># First, test that the item is still at the list index you expect
- op: test
path: /spec/containers/0/name
value: my-app
# Now that you know your item is still at index-0, it's safe to patch its command
- op: replace
path: /spec/containers/0/command
value: ["sh", "-c", "tail -f /dev/null"]
</code></pre>
<p>The <code>test</code> operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid!</p>
<p>I use this trick especially when dealing with custom resources, since I:</p>
<ul>
<li>A) Don't have to give kustomize a whole new openAPI spec, and</li>
<li>B) Don't have to depend on the CRD authors having added the correct extension annotation (like: <code>"x-kubernetes-patch-merge-key": "name"</code>) to make sure my strategic merge patches on list items work the way I need them to.</li>
</ul>
|
<p>What is the best way to allow CORS requests at this time? (Given that CORS support in the Contour Ingress currently is in the "<a href="https://github.com/projectcontour/contour/issues/437" rel="nofollow noreferrer">parking lot</a>")</p>
<p>My particular use case is hosting a GRPC service, which envoy reverse proxies. Conveniently, contour also supports grpc-web out-of-the-box, which we'd like to use for our web service.</p>
<p>However, given that CORS are not supported, we cannot do cross-domain requests.</p>
<p>Apart from making our web app use the same domain as the GRPC api, is there any other solution that could fill our need at the moment?</p>
<p>Basically, we'd want the envoy to be configured very similarly to the <a href="https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/examples/echo/envoy.yaml" rel="nofollow noreferrer">GRPC web example config</a>.</p>
| <p>For anyone that stumbles upon this question trying to setup Controur, gRPC, and TLS;</p>
<p>you want to use <code>HTTPProxy</code> instead. Working configuration with TLS:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
name: service-proxy
spec:
virtualhost:
fqdn: service.example.com
corsPolicy:
allowCredentials: true
allowOrigin:
- "*"
allowMethods:
- GET
- POST
- OPTIONS
allowHeaders:
- authorization
- cache-control
- x-grpc-web
- User-Agent
- x-accept-content-transfer-encoding
- x-accept-response-streaming
- x-user-agent
- x-grpc-web
- grpc-timeout
- Grpc-Message
- Grpc-Status
- content-type
tls:
secretName: service-secret
routes:
- conditions:
- prefix: /
services:
- name: my-service
port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-app
run: my-service
name: my-service
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
selector:
matchLabels:
app: my-app
run: my-service
template:
metadata:
labels:
app: my-app
run: my-service
spec:
containers:
- image: image:latest
name: my-service
resources: {}
imagePullPolicy: Always
readinessProbe:
initialDelaySeconds: 10
periodSeconds: 2
httpGet:
path: /health-check
port: 3000
---
apiVersion: v1
kind: Service
metadata:
name: my-service
labels:
app: my-app
run: my-service
annotations:
projectcontour.io/upstream-protocol.h2c: "80"
spec:
ports:
- port: 80
targetPort: 50051
protocol: TCP
selector:
run: my-service
</code></pre>
<h2>A Couple notes</h2>
<ol>
<li>My understanding from the documentation is that the <code>projectcontour.io/upstream-protocol.h2c</code> should actually be <code>projectcontour.io/upstream-protocol.h2</code>, but doing that I get a <code>TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER</code> error in the response. This configuration, with <code>h2c</code>, seems to be working and actually employing TLS to transport request/response data.</li>
<li>I haven't gone through and groomed the <code>allowHeaders</code>; this is just a set that's working for me right now using <code>grpcurl</code> and a web application frontend built with React using the awesome <a href="https://github.com/deeplay-io/nice-grpc/tree/master/packages/nice-grpc-web" rel="nofollow noreferrer">nice-grpc-web</a> library.</li>
<li>Obligatory - <strong>you should not use "*" as an allow origin in production</strong> because it is a security concern - warning (<em>really...</em> don't do it).</li>
<li>The TLS secret <code>service-secret</code> was actually generated manually; I haven't tested the cert-manager integration yet.</li>
</ol>
|
<p>There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod:</p>
<pre><code>ports:
- containerPort: 8000
name: app-port
protocol: TCP
</code></pre>
<p>Here's a config part of the service:</p>
<pre><code> ports:
- name: app-port
nodePort: 32000
port: 8000
protocol: TCP
targetPort: app-port
type: NodePort
</code></pre>
<p>How is the name <code>app-port</code> resolved?</p>
| <p>When you create the Service, it is associated with Pods selected by the label selector defined in the Service <code>spec.selector</code>.</p>
<p>When a request is made to the Service, the Control Plane retrieves its <code>spec.ports[*].targetPort</code> value:</p>
<ul>
<li><p>If it does not exist, the value of <code>spec.ports[*].port</code> field is used instead to forward traffic to the Pods.</p>
</li>
<li><p>If it does exist and it is a number, the <code>targetPort</code> value is used to forward traffic to the Pods.</p>
</li>
<li><p>If it is a string, the Control Plane looks up the port by name in <code>spec.ports[*].name</code> of the Pod, and uses the port with the matching name as the target port.</p>
</li>
</ul>
<p>Here is the relevant piece of code in Kubernetes:</p>
<pre><code>// FindPort locates the container port for the given pod and portName. If the
// targetPort is a number, use that. If the targetPort is a string, look that
// string up in all named ports in all containers in the target pod. If no
// match is found, fail.
func FindPort(pod *v1.Pod, svcPort *v1.ServicePort) (int, error) {
portName := svcPort.TargetPort
switch portName.Type {
case intstr.String:
name := portName.StrVal
for _, container := range pod.Spec.Containers {
for _, port := range container.Ports {
if port.Name == name && port.Protocol == svcPort.Protocol {
return int(port.ContainerPort), nil
}
}
}
case intstr.Int:
return portName.IntValue(), nil
}
return 0, fmt.Errorf("no suitable port for manifest: %s", pod.UID)
}
</code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/blob/v1.26.0/pkg/api/v1/pod/util.go#L32" rel="nofollow noreferrer">source</a></p>
|
<p>I am trying to create two ingress service, one which will expose the frontend to internet and 2nd which will connect backend to frontend. This is in <code>Minikube</code> and I have the <code>ingress addon</code> enabled.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: angular-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: angular.test
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: angular-service
port:
number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: spring-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: springboot.test
- http:
paths:
- path: /exact/form-config/applicant/field?section=Additional%20Details&page-index=1&enabled-on=
pathType: Exact
backend:
service:
name: angular-service
port:
number: 8080
</code></pre>
<p>I want to use the name of the backend host url in my angular application for springboot and I am want them to connect without using IP since IP are dynamic. Is this correct?</p>
| <p>Given you have a service for your Angular app like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: your-angular-app
namespace: dev
spec:
selector:
app: your-angular-app
ports:
- protocol: TCP
port: 80
targetPort: 80
name: http
</code></pre>
<p>And a service for your Spring Boot app in the same namespace like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: your-spring-app
namespace: dev
spec:
selector:
app: your-spring-app
ports:
- protocol: TCP
port: 80
targetPort: 80
name: http
</code></pre>
<p>Make sure that your deployments (or whatever you use to actually create your app instances in the cluster) have matching labels. Deployment example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: your-spring-app
namespace: dev
spec:
replicas: 1
selector:
matchLabels:
app: your-spring-app # <- this must match the spec.selector.app in the service
</code></pre>
<p>Assuming that your Spring app offers all API endpoints with the <code>/api</code> prefix, you could use an Ingress like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: your-ingress
namespace: dev
spec:
rules:
- host: your-site.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: your-angular-app
port:
name: http
- path: /api
pathType: Prefix
backend:
service:
name: your-spring-app
port:
name: http
</code></pre>
<p>In a cloud environment you would most likely need additional annotations on your Ingress like the Ingress class, but these information can be found in the Cloud provider's documentation.</p>
|
<p>We have a EKS cluster running the 1.21 version. We want to give admin access to worker nodes. We modified the aws-auth config map and added <code>"system:masters"</code> for eks worker nodes role. Below is the code snipped for the modified configmap.</p>
<pre><code>data:
mapRoles: |
- groups:
- system:nodes
- system:bootstrappers
- system:masters
rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
username: system:node:{{EC2PrivateDNSName}}
</code></pre>
<p>After adding this section, the EKS worker nodes successfully got admin access to the cluster. But in the EKS dashboard, the nodegroups are in a degraded state. It shows the below error in the Health issues section. Not able to update cluster due to this error. Please help.</p>
<p><code>Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.</code></p>
| <p>During an issue such as this one, a quick way to get more details is by looking at the "Health issues" section on the EKS service page. As can be seen in the attached screenshot below, which has the same error in the description, there is an access permissions issue with the specific role <code>eks-quickstart-test-ManagedNodeInstance</code>.</p>
<p><a href="https://i.stack.imgur.com/dG1le.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dG1le.png" alt="enter image description here" /></a></p>
<p>The aforementioned role lacks permissions to the cluster and the same can be updated in the <code>aws-auth.yaml</code> configuration as described below:</p>
<ol>
<li>Run the following command from the role/user which created the EKS cluster:</li>
</ol>
<blockquote>
<p>kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml</p>
</blockquote>
<ol start="2">
<li>Add the role along with the required permissions such as <code>system:masters</code> in the <code>mapRoles:</code> section as shown below:</li>
</ol>
<pre><code>mapRoles: |
- rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- system:masters
</code></pre>
<ol start="3">
<li>Apply the updates to the cluster with the command:</li>
</ol>
<blockquote>
<p>kubectl apply -f aws-auth.yaml</p>
</blockquote>
<p>This should resolve the permission issues and your cluster nodes should be visible as healthy and ready for pods to be scheduled.</p>
|
<p>There's a working k8s configuration which uses the same port name in a pod and in a service. Here's a config part of the pod:</p>
<pre><code>ports:
- containerPort: 8000
name: app-port
protocol: TCP
</code></pre>
<p>Here's a config part of the service:</p>
<pre><code> ports:
- name: app-port
nodePort: 32000
port: 8000
protocol: TCP
targetPort: app-port
type: NodePort
</code></pre>
<p>How is the name <code>app-port</code> resolved?</p>
| <p>Basically you can refer to the target port by its <strong>port number</strong>. But you can also give a name to each port in the pod configuration and use this name in the <strong>service</strong> to refer to the specified port. That way your <strong>service</strong> knows which port on the pod it needs to send traffic to.</p>
<p>To answer your question, the <strong>port name</strong> which you created in the <strong>pod belongs to the pod configuration</strong>: when the service's <code>targetPort</code> is a string, that name is looked up among the pod's named container ports to get the actual port number. The <strong>name</strong> attribute on the service's port is a separate name that belongs only to the service (it is what other objects, such as an Ingress, can reference).</p>
<p>Likewise, Kubernetes determines the <code>target port number</code> for the <strong>service</strong> from the named port defined on the <strong>pod</strong>, not through DNS; DNS only resolves the service name to its cluster IP.</p>
<p>For more information you can refer to these links: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">link1</a> <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">link2</a></p>
|
<p>From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that.</p>
| <p>Use the methods below to check the reason for a pod restart:</p>
<p>Use <code>kubectl describe deployment <deployment_name></code> and <code>kubectl describe pod <pod_name></code>, which contain the relevant events.</p>
<pre><code># Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Warning BackOff 40m kubelet, gke-xx Back-off restarting failed container
# ..
</code></pre>
<p>In this example you can see that the pod was restarted due to an image pull backoff, so that is the particular issue to troubleshoot.</p>
<p>Check the logs using: <code>kubectl logs <pod_name></code></p>
<p>To get previous logs of your container (the restarted one), you may use --previous key on pod, like this:</p>
<pre><code>kubectl logs your_pod_name --previous
</code></pre>
<p>You can also write a final message to /dev/termination-log, and this will show up as described in <a href="https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/#:%7E:text=Termination%20messages%20provide%20a%20way,to%20the%20general%20Kubernetes%20logs." rel="nofollow noreferrer">docs</a>.</p>
<p>Attaching a <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting" rel="nofollow noreferrer">troubleshooting</a> doc for reference.</p>
|
<p>We have elasticsearch cluster at <code>${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}</code>
and filebeat pod at k8s cluster that exports other pods' logs</p>
<p>There is <code>filebeat.yml</code>:</p>
<pre><code>filebeat.autodiscover:
providers:
- type: kubernetes
templates:
- condition:
equals:
kubernetes.namespace: develop
config:
- type: container
paths:
- /var/log/containers/*-${data.kubernetes.container.id}.log
exclude_lines: ["^\\s+[\\-`('.|_]"]
hints.enabled: true
hints.default_config:
type: container
multiline.type: pattern
multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
http:
enabled: true
host: localhost
port: 5066
output.elasticsearch:
hosts: '${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}'
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
indices:
- index: "develop"
when:
equals:
kubernetes.namespace: "develop"
- index: "kubernetes-dev"
when:
not:
and:
- equals:
kubernetes.namespace: "develop"
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- decode_json_fields:
fields: ["message"]
add_error_key: true
process_array: true
overwrite_keys: false
max_depth: 10
target: json_message
</code></pre>
<p>I've checked: filebeat has access to <code>/var/log/containers/</code> on kuber but elastic cluster still doesn't get any <code>develop</code> or <code>kubernetes-dev</code> indices. (Cluster has relative index templates for this indices)</p>
<p><code>http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cluster/health?pretty</code>:</p>
<pre><code>{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 14,
"active_shards" : 28,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
</code></pre>
<p>Filebeat log:</p>
<pre><code>{
"log.level": "info",
"@timestamp": "2022-11-25T08:35:18.084Z",
"log.logger": "monitoring",
"log.origin": {
"file.name": "log/log.go",
"file.line": 184
},
"message": "Non-zero metrics in the last 30s",
"service.name": "filebeat",
"monitoring": {
"metrics": {
"beat": {
"cgroup": {
"cpu": {
"stats": {
"periods": 38
}
},
"cpuacct": {
"total": {
"ns": 1576170001
}
},
"memory": {
"mem": {
"usage": {
"bytes": 4096
}
}
}
},
"cpu": {
"system": {
"ticks": 13570,
"time": {
"ms": 56
}
},
"total": {
"ticks": 23320,
"time": {
"ms": 90
},
"value": 23320
},
"user": {
"ticks": 9750,
"time": {
"ms": 34
}
}
},
"handles": {
"limit": {
"hard": 1048576,
"soft": 1048576
},
"open": 11
},
"info": {
"ephemeral_id": "a88f461e-1fd3-48d3-a1c7-3bfea1124ab9",
"uptime": {
"ms": 7530198
},
"version": "8.1.0"
},
"memstats": {
"gc_next": 17767824,
"memory_alloc": 16385608,
"memory_total": 709250600,
"rss": 104206336
},
"runtime": {
"goroutines": 23
}
},
"filebeat": {
"harvester": {
"open_files": 0,
"running": 0
}
},
"libbeat": {
"config": {
"module": {
"running": 0
}
},
"output": {
"events": {
"active": 0
}
},
"pipeline": {
"clients": 0,
"events": {
"active": 0
}
}
},
"registrar": {
"states": {
"current": 0
}
},
"system": {
"load": {
"1": 3.99,
"15": 3.91,
"5": 4.08,
"norm": {
"1": 0.4988,
"15": 0.4888,
"5": 0.51
}
}
}
},
"ecs.version": "1.6.0"
}
}
</code></pre>
<p>Where can be problem? Im absolutely new to elk so working with it not from the start is a bit hard</p>
| <p>In the end I found out that the Elasticsearch cluster's version used by the company is much older than Filebeat's (7.9 against Filebeat's 8.1). So temporarily allowing the use of older versions solved the issue for the time being.</p>
<pre><code>output.elasticsearch:
allow_older_versions: true
</code></pre>
<p>BUT @Paulo also had a point: the <code>not + and</code> construction was excessive in the end.</p>
|
<p>I am using an Ingress using Google-managed SSL certs mostly similar to what is described here:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate</a></p>
<p>However my backend service is a grpc service that is using HTTP2. According to the same documentation if I am using HTTP2 my backend needs to be <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2#creating_the_deployment" rel="nofollow noreferrer">"configured with SSL".</a></p>
<p>This sounds like I need a separate set of certificates for my backend service to configure it with SSL.</p>
<p>Is there a way to use the same Google managed certs here as well?</p>
<p>What are my other options here? I am using, Google managed certs for the Ingress not to manage any certs on my own, if I then use self signed certificates for my service, that kind of defeats the purpose.</p>
| <p>I don't think it's required to create SSL certs for the backend services if you are terminating HTTPS at the <strong>LB</strong> level. You can attach your certs at the LB level and the traffic to the backend will be HTTPS > HTTP.</p>
<p>You might need to create a new <strong>SSL/TLS</strong> cert if there is a difference in the protocol versions (<code>ssl-protocols: TLSv1.2 TLSv1.3</code>) or the cipher set in the ConfigMap of the ingress controller you are using (<strong>Nginx ingress controller</strong>, <strong>Kong</strong>, etc.).</p>
<p>If you are looking for end-to-end <strong>HTTPS</strong> traffic, then you definitely <strong>need to create a cert</strong> for the <strong>backend service</strong>.</p>
<p>You can also create/manage a <strong>Managed certificate</strong> or a <strong>custom cert</strong> with <a href="https://cert-manager.io/" rel="nofollow noreferrer">cert-manager</a> as a <strong>K8s secret</strong> and <strong>mount</strong> it into the deployment, where it will be used by the service; in that case there is no need to manage or create the certs yourself. The Ingress will <strong>pass through</strong> the HTTPS request to the service directly.</p>
<p>In this case, it will be an end-to-end <strong>HTTPS</strong> setup.</p>
<p><strong>Update</strong>:</p>
<blockquote>
<p>Note: To ensure the load balancer can make a correct HTTP2 request to
your backend, your backend must be configured with SSL. For more
information on what types of certificates are accepted, see Encryption
from the load balancer to the backends." End-to-end TLS seems to be a
requirement for HTTP2.</p>
</blockquote>
<p>This is my site <a href="https://findmeip.com" rel="nofollow noreferrer">https://findmeip.com</a>; it's running on <strong>HTTP2</strong> and terminating <strong>SSL/TLS</strong> at the Nginx level only.</p>
<p>Definitely, it's good to go with the suggested practice, so you can use the <strong>ESP</strong> option from Google, setting up a GKE ingress + ESP + gRPC stack.</p>
<p><a href="https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr" rel="nofollow noreferrer">https://cloud.google.com/endpoints/docs/openapi/specify-proxy-startup-options?hl=tr</a></p>
<p>If you don't want to use <strong>ESP</strong>, check the suggestion above:</p>
<blockquote>
<p>You can mount the managed certificate into the
deployment, where it will be used by the service; in that case there is no
need to manage or create the certs yourself. In other words, cert-manager will create/manage/renew the SSL/TLS cert on your behalf in a K8s secret, which is then used by the service.</p>
</blockquote>
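<p>As a very rough sketch of that cert-manager approach (every name below is a placeholder and it assumes an Issuer already exists), the Certificate writes the key pair into a Secret, which the Deployment then mounts so the gRPC backend can terminate TLS itself:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpc-backend-cert          # hypothetical name
  namespace: default
spec:
  secretName: grpc-backend-tls     # cert-manager writes tls.crt/tls.key into this Secret
  dnsNames:
    - grpc-backend.default.svc     # hypothetical service DNS name
  issuerRef:
    name: my-issuer                # assumes this Issuer/ClusterIssuer already exists
    kind: Issuer
---
# Deployment fragment: mount the generated Secret for the gRPC server to use
#   volumes:
#     - name: tls
#       secret:
#         secretName: grpc-backend-tls
#   containers:
#     - name: grpc-backend
#       volumeMounts:
#         - name: tls
#           mountPath: /etc/tls
#           readOnly: true
</code></pre>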
|
<p>I'm trying to install the ingress-nginx controller via helm. When I run the install command on the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#environment-specific-instructions" rel="nofollow noreferrer">Kubernetes website</a>, I get the error "module 'glib' has no attribute 'get_user_config_dir".</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
</code></pre>
<p>Here is the traceback:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main
config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs()
AttributeError: module 'glib' has no attribute 'get_user_config_dir'
</code></pre>
<p>I get a similar error when I just run helm on its own.</p>
<pre><code>$ helm
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 42, in <module>
main()
File "/Library/Frameworks/Python.framework/Versions/3.8/bin/helm", line 28, in main
config_dirs = (glib.get_user_config_dir(),) + glib.get_system_config_dirs()
AttributeError: module 'glib' has no attribute 'get_user_config_dir'
</code></pre>
<p>How can I set the attribute get_user_config_dir so that helm installs the ingress-nginx controller?</p>
<hr>
<p>Edit:</p>
<p>I'm updating this question to show the version of glib I am using.</p>
<pre><code>$ pip3 show glib
Name: glib
Version: 1.0.0
Summary: A python graphics library that piggy-backs off of pyqt5
Home-page: UNKNOWN
Author: its_me
Author-email: [email protected]
License: MIT
Location: /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages
Requires:
Required-by:
</code></pre>
| <p>I had exactly the same problem when running helm via ansible.</p>
<p>I checked the helm module I installed. It turned out I installed the wrong <a href="https://pypi.org/project/helm/" rel="nofollow noreferrer">helm</a> module:
<strong>Helm is a system monitor released under GNU GPLv3.</strong></p>
<p>Instead I had to install <a href="https://pypi.org/project/helm-python/" rel="nofollow noreferrer">helm-python</a>:</p>
<pre><code>pip install helm-python
</code></pre>
<p>Hope this helps</p>
<p>Regards!</p>
|
<p>I need to consume messages from Kafka. The application is deployed in a k8s pod. If I create a readiness probe to check whether the pre-initialization steps are done or not, will k8s be able to delay the consumer so it only starts consuming after that?<br />
I know a readiness probe can help control whether outside requests are routed to the pod before it is up, but I'm not sure whether it can control requests initiated from the pod itself.</p>
| <p>Kubernetes uses liveness and readiness probes to find out if your pods are healthy. If the liveness probe fails, Kubernetes will kill the container and automatically restart it if the restart policy is set accordingly. If the readiness probe fails, Kubernetes will remove the pod from serving requests through a Service. However, that does not gate traffic the pod initiates itself, and it doesn't necessarily mean Kafka is ready to consume messages; a consumer cannot consume messages until the broker is <code>unfenced</code>. I would suggest handling this manually in the application.</p>
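<p>For illustration only (the image name and file path below are made up), a readiness probe that waits for your own pre-initialization marker could look like this; note it only gates traffic coming in through a Service, so the consumer code itself still has to wait for initialization before it starts polling Kafka:</p>
<pre class="lang-yaml prettyprint-override"><code># Pod spec fragment (sketch)
containers:
  - name: consumer
    image: my-consumer:1.0                       # placeholder image
    readinessProbe:
      exec:
        # the app is expected to create this file once pre-initialization is done
        command: ["test", "-f", "/tmp/init-done"]
      initialDelaySeconds: 5
      periodSeconds: 10
</code></pre>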
|
<p>I'm working on an application that launches K8S Jobs (dockerised computer science batch applications) and I want to prioritize their launches.</p>
<p>I don't want to use preemption because all jobs have to be done and I want to be sure that the scheduling order is maintained.</p>
<p>When I read this doc: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#non-preempting-priority-class</a>
It seems that, in non preempting cases, high priority pods can be scheduled after low priority ones if K8S doesn't have the necessary resources at the time.
If the high-priority Jobs are also the most resource-demanding, those pods may never get scheduled.</p>
<p>How can I have a control of that decisions?</p>
<p>Thanks!</p>
| <p>Since you need to use only non-preempting priorities, refer to this <a href="https://stackoverflow.com/a/62135156/19230181">SO answer</a> and this <a href="https://tutorialwing.com/preemptive-or-non-preemptive-priority-scheduling/" rel="nofollow noreferrer">doc</a>, which help in understanding the usage of a non-preempting priority class.</p>
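<p>A minimal sketch of what that looks like (the class name, value and image are arbitrary): a non-preempting PriorityClass and a Job that uses it. High-priority pods are placed ahead of lower-priority ones in the scheduling queue, but they never evict running pods.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # arbitrary name
value: 100000
preemptionPolicy: Never               # queue ahead of lower priorities, but never preempt
globalDefault: false
description: "High priority without preemption"
---
apiVersion: batch/v1
kind: Job
metadata:
  name: important-batch               # placeholder
spec:
  template:
    spec:
      priorityClassName: high-priority-nonpreempting
      restartPolicy: Never
      containers:
        - name: worker
          image: my-batch:1.0         # placeholder image
</code></pre>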
|
<p>I have a workflow template which outputs an artifact, and this artifact has to be passed to another workflow template as an input. How can we do that? I'm following the approach below, which is not working.</p>
<p>Here is <code>WorkflowTemplate1.yaml</code></p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: arfile
spec:
entrypoint: main
templates:
- name: main
volumes:
- name: vol
emptyDir: {}
inputs:
parameters:
script:
image: "ubuntu"
volumeMounts:
- name: vol
mountPath: "{{inputs.parameters.Odir}}"
command: ["bash"]
source: |
#!/usr/bin/env bash
echo "This is artifact testing" > /tmp/arfile
outputs:
parameters:
- name: arfile
path: "{{inputs.parameters.Odir}}/arfile"
</code></pre>
<p>Here is the <code>WorkflowTemplate2.yaml</code></p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: bfile
spec:
entrypoint: main
templates:
- name: main
volumes:
- name: vol
emptyDir: {}
inputs:
parameters:
- name: image
value: "ubuntu"
- name: Odir
value: "/tmp"
artifacts:
- name: arfile
path: /tmp/arfile
container:
image: "ubuntu"
command: ["cat"]
args:
- /tmp/arfile
</code></pre>
<p>Here is the workflow which is calling the above two workflow templates. I'm unable to pass artifacts of WorkflowTemplate1 to WorkflowTemplate2 from this workflow.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: apr-
spec:
entrypoint: main
templates:
- name: main
outputs:
artifacts:
- name: arfile
from: "tasks['dfile'].outputs.artifacts.arfile"
dag:
tasks:
- name: dfile
templateRef:
name: arfile
template: main
arguments:
parameters:
- name: bimg
value: "ubuntu"
- name: bci
depends: dfile
templateRef:
name: bfile
template: main
arguments:
parameters:
- name: img
value: "ubuntu"
artifacts:
- name: arfile
from: "{{tasks.dfile.outputs.artifacts.arfile}}"
</code></pre>
<p>What's wrong I'm doing here?</p>
| <p>I think I found the issue. I need to use <code>artifacts</code> instead of <code>parameters</code> in the <code>outputs</code> block of <code>WorkflowTemplate1.yaml</code>.</p>
<p>Here's the fix:</p>
<pre><code>outputs:
artifacts:
- name: arfile
path: "{{inputs.parameters.Odir}}/arfile"
</code></pre>
|
<p>I have an old k8s cluster with 1 master and 2 worker nodes. It was shut down for a long time. Now I have started it again. It had many running pods and deployments.
After restarting the VMs, all k8s commands return</p>
<pre><code>The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port?
</code></pre>
<p><strong>What have I done so far?</strong>
I saw many Stack Overflow questions about fixing this error, and also posts on GitHub and other sites. All of them require <code>kubeadm reset</code>.
If I reset, I will lose all running pods. I don't know how to start those pods again, as they were not deployed by me.</p>
<p><strong>What do I want?</strong>
Is there a way I can get all the pods and nodes up and running without a reset?
Or, even if I do reset, how can I get all the pods back into their running state? This cluster was designed and set up by someone else and I have no idea about its deployments.</p>
<p><strong>Update Question</strong></p>
<p>When I run <code>docker ps -a | grep api</code> I see this</p>
<pre><code>1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5
</code></pre>
<p>There are many containers like this. Any advice on how to start them, please?</p>
<p>I am new to K8s, that's why I like to be sure before I do anything.</p>
| <p>The error you are getting usually comes when the <code>KUBECONFIG</code> environment variable is not exported. Run the following commands as a regular user or run the last command as root.</p>
<pre><code>sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
</code></pre>
<p>Refer to my SO answer <a href="https://stackoverflow.com/a/48827221/3537880">here</a>.</p>
<p>Now that you are able to run <code>kubectl</code> commands, you should see any pods that are created as a control plane component or as a workload. Use following command to see the nodes as part of your cluster.</p>
<pre><code>kubectl get nodes
</code></pre>
<p>Make sure to verify that all the control plane components are running fine as well</p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
|
<p>I would like to create an argoCD application right from the git repository, ie the gitOps way. I already created a CRD file for the application which looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-service
namespace: argocd
spec:
destination:
namespace: default
server: https://kubernetes.default.svc
syncPolicy:
syncOptions:
- CreateNamespace=true
project: default
source:
path: clusters/helm-chart
repoURL: https://github.com/user/my-repo.git
targetRevision: HEAD
helm:
values: |
image:
repository: user/my-image
pullPolicy: Always
tag: xxx
</code></pre>
<p>My current workflow is to apply this CRD to my cluster with <code>k apply -f application.yaml</code>.</p>
<p><strong>Question:</strong> how can I instruct ArgoCD to go and sync/create the application I have defined at <code>https://github.com/user/my-repo.git</code> without first creating that application "manually"?</p>
| <p>At some point you have to manually apply a manifest to your ArgoCD instance.</p>
<p>You can limit that to a single manifest if you utilize the <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/" rel="nofollow noreferrer">app-of-apps</a> pattern, in which you have a repository that contains all your ArgoCD application manifests.</p>
<p>You can also create <a href="https://argocd-applicationset.readthedocs.io/en/stable/" rel="nofollow noreferrer">ApplicationSets</a> to automatically generate ArgoCD applications from templates based on the content of a git repository, the names of clusters registered with ArgoCD, and other data.</p>
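<p>A minimal sketch of such a root "app of apps" Application (the <code>path</code> is a placeholder directory that would contain Application manifests like the one in the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-apps                 # hypothetical "app of apps"
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/user/my-repo.git   # repo holding the Application manifests
    targetRevision: HEAD
    path: apps                    # placeholder directory with application.yaml files
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
</code></pre>
<p>Once this single manifest is applied with <code>kubectl apply -f</code>, Argo CD syncs it and creates every Application it finds in that directory.</p>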
|
<p>I have an old k8s cluster with 1 master and 2 worker nodes. It was shut down for a long time. Now I have started it again. It had many running pods and deployments.
After restarting the VMs, all k8s commands return</p>
<pre><code>The connection to the server 123.70.70.70:6443 was refused - did you specify the right host or port?
</code></pre>
<p><strong>What have I done so far?</strong>
I saw many Stack Overflow questions about fixing this error, and also posts on GitHub and other sites. All of them require <code>kubeadm reset</code>.
If I reset, I will lose all running pods. I don't know how to start those pods again, as they were not deployed by me.</p>
<p><strong>What do I want?</strong>
Is there a way I can get all the pods and nodes up and running without a reset?
Or, even if I do reset, how can I get all the pods back into their running state? This cluster was designed and set up by someone else and I have no idea about its deployments.</p>
<p><strong>Update Question</strong></p>
<p>When I run <code>docker ps -a | grep api</code> I see this</p>
<pre><code>1a6ba468af3a 123.70.70.70:444/demo/webapikl "dotnet UserProfileA…" 6 months ago Exited (255) 22 hours ago k8s_webapikl-image_webapikl-deployment-74648888d5-bjpcj_fwd_13a76dd9-8ce5
</code></pre>
<p>There are many containers like this. Any advice on how to start them, please?</p>
<p>I am new to K8s, that's why I like to be sure before I do anything.</p>
| <p>I'm really thankful for your time and effort. What worked for me is <a href="https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired/56334732#56334732">this</a> Stack Overflow answer, along with some changes.</p>
<p>In my case, when I ran <code>systemctl status kubelet</code> I saw this error</p>
<pre><code>devops@kubemaster:/$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-01-11 12:51:04 EET; 9s ago
Docs: https://kubernetes.io/docs/home/
Process: 188116 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXT>
Main PID: 188116 (code=exited, status=1/FAILURE)
</code></pre>
<p>Kubelet was stuck at activating.</p>
<p>I followed these steps from the answer mentioned above.</p>
<pre><code>$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot
</code></pre>
<p>I also had to delete my <code>etcd .crt</code> and <code>.key</code> files from <code>/etc/kubernetes/pki/etcd/</code> as mentioned in one comment.</p>
<p>This put kubelet into the active state. Then I generated a new join command and joined all the worker nodes to the master node one by one. Once all nodes were ready, I deleted the terminating and crash-loop-backoff pods, and they were recreated on different worker nodes. Now all pods are working without any issue.</p>
|
<p>Following the example on <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">kubernetes.io</a> I'm trying to connect to an external IP from within the cluster (and i need some port proxy, so not ExternalName service). However it is not working. This is the response I'm expecting</p>
<pre class="lang-bash prettyprint-override"><code>ubuntu:/opt$ curl http://216.58.208.110:80
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
</code></pre>
<p>if I use the following config</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: my-service-1
labels:
kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http
appProtocol: http
protocol: TCP
port: 80
endpoints:
- addresses:
- "216.58.208.110"
---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 8888
targetPort: 80
</code></pre>
<p>I expect the following command to get same result:</p>
<pre class="lang-bash prettyprint-override"><code>minikube kubectl -- run -it --rm --restart=Never curl --image=curlimages/curl curl -- my-service:8888
</code></pre>
<p>but I get nothing.
if I start an debian image with</p>
<pre class="lang-bash prettyprint-override"><code>minikube kubectl -- run -it --rm --restart=Never debian --image=debian:latest
</code></pre>
<p>then</p>
<pre class="lang-bash prettyprint-override"><code>apt update && apt install dnsutils curl -y && nslookup my-service && curl my-service:8888
</code></pre>
<p>gives</p>
<pre class="lang-bash prettyprint-override"><code>Server: 10.96.0.10
Address: 10.96.0.10#53
Name: my-service.default.svc.cluster.local
Address: 10.111.116.160
curl: (28) Failed to connect to my-service port 8888: Connection timed out
</code></pre>
<p>Am i missing something? or is it not supposed to work this way?</p>
| <p>After some trial and error, it seems that if <code>ports[0].name = http</code> is set for the <code>endpointslice</code>, it stops working.</p>
<p>It also stops working when the <code>service</code>'s <code>spec.ports[0].targetPort</code> is set to <code>80</code> or <code>http</code>.</p>
<p>(It does work when <code>ports[0].name = ''</code>.)</p>
<p>Further investigating shows that it works if:</p>
<p>for <code>service</code></p>
<pre class="lang-yaml prettyprint-override"><code>spec:
ports:
- port: 8888
name: http
targetPort: http
</code></pre>
<p>for <code>endpointslice</code></p>
<pre class="lang-yaml prettyprint-override"><code>ports:
- port: 80
name: http
</code></pre>
<p>I guess if you want to name them, both the <code>service</code> and the <code>endpointslice</code> have to have corresponding <code>.name</code> values.</p>
|
<p>How to create a notification filter in pub/sub subscription to select a particular message section in a JSON log.</p>
<p>For example:</p>
<ul>
<li>You have a JSON log:</li>
</ul>
<pre><code>
{
"incident_type": Bla bla",
"incident_state": "Open",
"message": GKE Cluster upgradinging, Can't perform update operation",
"run": "Unsuccessful"
}
</code></pre>
<hr />
<p>Tried this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-notifications" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-notifications</a></p>
<hr />
| <p>Maybe you can use the subscription <strong>filter</strong></p>
<blockquote>
<p>If a filter syntax is provided, subscribers will only receive messages
that match the filter.</p>
</blockquote>
<p>With <strong>Golang</strong></p>
<pre><code>import (
    "context"
    "fmt"
    "io"

    "cloud.google.com/go/pubsub"
)

// createWithFilter creates a subscription on the given topic that only
// receives messages matching the filter, e.g. attributes.author="unknown".
func createWithFilter(w io.Writer, projectID, subID, filter string, topic *pubsub.Topic) error {
    ctx := context.Background()
    client, err := pubsub.NewClient(ctx, projectID)
    if err != nil {
        return fmt.Errorf("pubsub.NewClient: %v", err)
    }
    defer client.Close()
    sub, err := client.CreateSubscription(ctx, subID, pubsub.SubscriptionConfig{
        Topic:  topic,
        Filter: filter,
    })
    if err != nil {
        return fmt.Errorf("CreateSubscription: %v", err)
    }
    fmt.Fprintf(w, "Created subscription with filter: %v\n", sub)
    return nil
}
</code></pre>
<p>Ref : <a href="https://cloud.google.com/pubsub/docs/subscription-message-filter" rel="nofollow noreferrer">Google doc</a></p>
<p><a href="https://i.stack.imgur.com/LfD72.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LfD72.png" alt="enter image description here" /></a></p>
|
<p>I need to know a way to scale down all the deployments on a kubernetes namespace except for one with a specific string inside the name since it has dependencies. This on an AzureCLI task inside of an azure pipeline. Any ideas?</p>
<p>Something like:
If name contains "randomname" then do not scale up/down the service.</p>
<p>I did try some exceptions but still not working.</p>
| <p>You can add a <em>label</em> on the one you want to exclude, and then use queries using <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api" rel="nofollow noreferrer">labels and selectors</a> to apply operations on the selected set of resources.</p>
|
<p>I have a PersistentVolumeClaim defined by</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
storageClassName: "standard"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>And the containers section of the deployment yaml looks like this</p>
<pre><code>spec:
containers:
- name: my-container
image: abc/xyz:1.2.3
volumeMounts:
- mountPath: /var/store
name: mystore
volumes:
- name: mystore
persistentVolumeClaim:
claimName: my-pvc
</code></pre>
<p>I have a few questions about this set up.</p>
<ol>
<li>Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)?</li>
<li>How would this behave if the pod replicas are on different kubernetes nodes?</li>
</ol>
<p><strong>Edit</strong></p>
<p>I would like each replica of my pod to have its own storage (not a shared one). Is there a way to achieve this without creating a RWM volume?</p>
| <blockquote>
<p>Do each replica of my pod get 1GB storage space (Assuming the PersistentVolume has enough space)?</p>
</blockquote>
<p>No. Since you use one <code>PersistentVolumeClaim</code>, you will get one <code>PersistentVolume</code>.</p>
<blockquote>
<p>How would this behave if the pod replicas are on different kubernetes nodes?</p>
</blockquote>
<p>It will not work, unless you use a volume type that can be used from multiple nodes at once, with <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteMany</code> or <code>ReadOnlyMany</code>. But you have declared <code>ReadWriteOnce</code> in your PersistentVolumeClaim, so it will likely not work.</p>
<blockquote>
<p>I would like each replica of my pod to have its own storage (not a shared one). Is there a way to achieve this without creating a RWM volume?</p>
</blockquote>
<p>Yes, you can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> instead of <code>Deployment</code>, and use the <code>volumeClaimTemplates:</code>-field.</p>
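<p>A rough sketch of that (reusing the names from the question); each replica gets its own PersistentVolumeClaim generated from the template, e.g. <code>mystore-my-app-0</code>, <code>mystore-my-app-1</code>, and so on:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: abc/xyz:1.2.3
          volumeMounts:
            - mountPath: /var/store
              name: mystore
  volumeClaimTemplates:           # one PVC per replica instead of a shared claim
    - metadata:
        name: mystore
      spec:
        storageClassName: "standard"
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
</code></pre>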
|
<p>We are running our end-to-end tests against a single Cassandra node running on k8s. This node gets quite a lot of reads and writes. Note that the node is deleted once the tests have finished, so there is no need to consider long-term maintenance of data, etc. What optimisations would you recommend configuring in this use case to reduce overhead?
Disabling auto compaction had come to mind... anything else?</p>
| <p>So there are always a few things that I do when building up a single node for development or testing. My goals are more about creating something which matches the conditions in production, as opposed to reducing overhead. Here's my list:</p>
<ul>
<li>Rename the cluster to something other than "Test Cluster."</li>
<li>Set the snitch to <code>GossipingPropertyFileSnitch</code>.</li>
<li>Enable both the <code>PasswordAuthenticator</code> and the <code>CassandraAuthorizer</code>.</li>
<li>If you use client or node to node SSL, you'll want to enable that, too.</li>
<li>Provide non-default values for the <code>dc</code> and <code>rack</code> names in the <code>cassandra-rackdc.properties</code> file.</li>
<li>Create all keyspaces using <code>NetworkTopologyStrategy</code> and the <code>dc</code> name from the previous step.</li>
</ul>
<p>Again, I wouldn't build an unsecured node with <code>SimpleStrategy</code> keyspaces in production. So I don't test that way, either.</p>
<p>With building a new single node cluster each time, I can't imagine much overhead getting in your way. I don't think that you can fully disable compaction, but you <em>can</em> reduce the compaction throughput (YAML) down to the point where it will consume almost no resources:</p>
<pre><code>compaction_throughput: 1MiB/s
</code></pre>
<p>It might be easiest to set that in the YAML, but you can also do this from the command line:</p>
<pre><code>nodetool setcompactionthroughput 1
</code></pre>
<p>I'd also have a look at the GC settings, and try to match what you have in production as well. But for the least amount of overhead with the least config, I'd go with G1GC.</p>
|
<p>I am testing automation by applying Gitlab CI/CD to a GKE cluster. The app is successfully deployed, but the source code changes are not applied (eg renaming the html title).</p>
<p>I have confirmed that the code has been changed in the gitlab repository master branch. No other branch.</p>
<p>CI/CD simply goes through the process below.</p>
<ol>
<li>push code to master branch</li>
<li>builds the NextJS code</li>
<li>builds the docker image and pushes it to GCR</li>
<li>pulls the docker image and deploys it.</li>
</ol>
<p>The contents of the manifest files are as follows.</p>
<p>.gitlab-ci.yml</p>
<pre><code>stages:
- build-push
- deploy
image: docker:19.03.12
variables:
GCP_PROJECT_ID: PROJECT_ID..
GKE_CLUSTER_NAME: cicd-micro-cluster
GKE_CLUSTER_ZONE: asia-northeast1-b
DOCKER_HOST: tcp://docker:2375/
DOCKER_TLS_CERTDIR: ""
REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
DOCKER_IMAGE_TAG: latest
services:
- docker:19.03.12-dind
build-push:
stage: build-push
before_script:
- docker info
- echo "$GKE_ACCESS_KEY" > key.json
- docker login -u _json_key --password-stdin https://gcr.io < key.json
script:
- docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
- docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG
deploy:
stage: deploy
image: google/cloud-sdk
script:
- export USE_GKE_GCLOUD_AUTH_PLUGIN=True
- echo "$GKE_ACCESS_KEY" > key.json
- gcloud auth activate-service-account --key-file=key.json
- gcloud config set project $GCP_PROJECT_ID
- gcloud config set container/cluster $GKE_CLUSTER_NAME
- gcloud config set compute/zone $GKE_CLUSTER_ZONE
- gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
- kubectl apply -f deployment.yaml
- gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags
</code></pre>
<p>Dockerfile</p>
<pre><code># Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
elif [ -f package-lock.json ]; then npm ci; \
elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
else echo "Lockfile not found." && exit 1; \
fi
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT 3000
CMD ["node", "server.js"]
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontweb-lesson-prod
labels:
app: frontweb-lesson
spec:
selector:
matchLabels:
app: frontweb-lesson
template:
metadata:
labels:
app: frontweb-lesson
spec:
containers:
- name: frontweb-lesson-prod-app
image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
ports:
- containerPort: 3000
resources:
requests:
cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
name: frontweb-lesson-prod-svc
labels:
app: frontweb-lesson
spec:
selector:
app: frontweb-lesson
ports:
- name: http
port: 80
protocol: TCP
targetPort: 3000
type: LoadBalancer
loadBalancerIP: "EXTERNAL_IP.."
</code></pre>
<p>Is there something I'm missing?</p>
| <p>By default, <strong>imagePullPolicy</strong> will be <strong>Always</strong>, but if there is no change in the <strong>deployment</strong> file, applying it might not update the <strong>deployment</strong>, since you are using the same tag (<strong>latest</strong>) each time.</p>
<p>There is also a difference between the <code>kubectl apply</code> and <code>kubectl patch</code> commands.</p>
<p>What you can do is make a minor <strong>label</strong> or <strong>annotation</strong> change in the <strong>deployment</strong> and check that the image gets updated with the <code>kubectl apply</code> command too; otherwise, with an <strong>unchanged</strong> manifest, <code>kubectl apply</code> will mostly report no change.</p>
<p>Ref : <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">imagepullpolicy</a></p>
<blockquote>
<p>You should avoid using the :latest tag when deploying containers in
production as it is harder to track which version of the image is
running and more difficult to roll back properly.</p>
</blockquote>
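<p>A common way around this, sketched below against the pipeline from the question (treat it as an illustration, not a drop-in change), is to tag the image with the commit SHA instead of <code>latest</code> and substitute that tag into the manifest before applying it, so every commit changes the pod template and triggers a rollout:</p>
<pre class="lang-yaml prettyprint-override"><code># .gitlab-ci.yml fragment (sketch)
variables:
  DOCKER_IMAGE_TAG: $CI_COMMIT_SHORT_SHA   # unique tag per commit instead of "latest"

deploy:
  stage: deploy
  script:
    # deployment.yaml would reference a placeholder tag, e.g. image: gcr.io/.../app:__IMAGE_TAG__
    - sed -i "s|__IMAGE_TAG__|$CI_COMMIT_SHORT_SHA|g" deployment.yaml
    - kubectl apply -f deployment.yaml
</code></pre>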
|
<p>I am trying to evaluate whether I should use Spot VMs and the measure for availability is given as a percentage by Azure.
<a href="https://www.linkedin.com/pulse/viewing-eviction-rates-spot-virtual-machines-azure-portal-dave-callan/" rel="nofollow noreferrer">https://www.linkedin.com/pulse/viewing-eviction-rates-spot-virtual-machines-azure-portal-dave-callan/</a></p>
<p>Does anyone know where I can find the average time a VM is available?</p>
<p>Optimally I would like to have a Bell curve to estimate the probability of my jobs finishing.</p>
| <p>The Azure spot VMs are mostly used when your workload can handle sudden interruptions without damaging the process too much.</p>
<p>When it comes to the <strong>percentage</strong>, the formula looks like this:</p>
<pre><code>(total_evictions * time_lost) / total_runtime = eviction rate
</code></pre>
<p>So for example, let's say that you want to run a VM for 100 hours, and the eviction rate is 5%, the number of evictions is hard to tell, but could be something like the following examples:</p>
<pre><code>// 50 evictions, each removed 0.1h of computation time.
(50 * 0.1) / 100 = 0.05 = 5%
// 10 evictions, each removed 1h of computation time.
(10 * 1) / 100 = 0.10 = 10%
// 2 evictions, each removed 5h of computation time.
(2 * 5) / 100 = 0.10 = 10%
</code></pre>
<p>So there is no exact answer to your question, and also no average, only speculation, and mathematics :)</p>
|
<p>I have a service "A" deployed in "X" namespace. "Z" service from "P" namespace, calls it on<br />
<code>svc-a.x.svc.cluster.local</code><br />
I have to deploy staging of service "A" in Y namespace as well and I want to register these IPs under<br />
<code>svc-a.x.svc.cluster.local</code><br />
Is there any way to do it? I want the main service to select pods from different namespaces.</p>
| <p>You can try using a Service without selectors with an EndPointSlice which refers to a Service from each namespace.</p>
<p>Create <code>svc-a</code> in namespace <code>X</code> which selects / points to pods in namespace X. The Service will be available at <code>svc-a.x.svc.cluster.local</code>.</p>
<p>Create <code>svc-a</code> in namespace <code>Y</code> which selects / points to pods in namespace Y. The Service will be available at <code>svc-a.y.svc.cluster.local</code>.</p>
<p>Create a <code>svc-a</code> in namespace <code>Z</code> without selectors.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-a
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>The Service will be available at <code>svc-a.z.svc.cluster.local</code>.</p>
<p>Create an <a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/" rel="nofollow noreferrer">EndpointSlice</a> in namespace <code>Z</code> with <code>svc-a.x.svc.cluster.local</code> and <code>svc-a.y.svc.cluster.local</code> as endpoints and attach it to <code>svc-a</code>:</p>
<pre><code>apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: svc-a
labels:
kubernetes.io/service-name: svc-a
addressType: FQDN
ports:
- name: http
protocol: TCP
port: 80
endpoints:
- addresses:
- "svc-a.x.svc.cluster.local"
- "svc-a.y.svc.cluster.local"
</code></pre>
<p>So now you'll have <code>svc-a.z.svc.cluster.local</code> available in any namespace pointing to backends in both the <code>X</code> and <code>Y</code> namespaces.</p>
|
<p>I am novice to k8s, so this might be very simple issue for someone with expertise in the k8s.</p>
<p>I am working with two nodes </p>
<ol>
<li>master - 2cpu, 2 GB memory</li>
<li>worker - 1 cpu, 1 GB memory</li>
<li>OS - ubuntu - hashicorp/bionic64</li>
</ol>
<p>I did setup the master node successfully and i can see it is up and running </p>
<pre><code>vagrant@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 29m v1.18.2
</code></pre>
<p>Here is token which i have generated </p>
<pre><code>vagrant@master:~$ kubeadm token create --print-join-command
W0419 13:45:52.513532 16403 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521
</code></pre>
<p><strong>Issue</strong> - But when i try to join it from the worker node i get</p>
<pre><code>vagrant@worker:~$ sudo kubeadm join 10.0.2.15:6443 --token xuz63z.todnwgijqb3z1vhz --discovery-token-ca-cert-hash sha256:d4dadda6fa90c94eca1c8dcd3a441af24bb0727ffc45c0c27161ee8f7e883521
W0419 13:46:17.651819 15987 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 10.0.2.15:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>Here are the ports which are occupied </p>
<pre><code>10.0.2.15:2379
10.0.2.15:2380
10.0.2.15:68
</code></pre>
<p>Note i am using CNI from - </p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
| <ol>
<li><p>Run the command 'kubectl config view' or 'kubectl cluster-info' to check the IP address of Kubernetes control plane. In my case it is 10.0.0.2.</p>
<p>$ kubectl config view</p>
<p>apiVersion: v1</p>
<p>clusters:</p>
<ul>
<li><p>cluster:</p>
<p>certificate-authority-data: DATA+OMITTED</p>
<p>server: <a href="https://10.0.0.2:6443" rel="nofollow noreferrer">https://10.0.0.2:6443</a></p>
</li>
</ul>
<p>Or</p>
<p>$ kubectl cluster-info</p>
<p>Kubernetes control plane is running at <a href="https://10.0.0.2:6443" rel="nofollow noreferrer">https://10.0.0.2:6443</a></p>
</li>
<li><p>Try to telnet the Kubernetes control plane:</p>
<p>telnet 10.0.0.2 6443</p>
<p>Trying 10.0.0.2...</p>
</li>
<li><p>Press Control + C in your keyboard to terminate the telnet command.</p>
</li>
<li><p>Go to your Firewall Rules and add port 6443 and make sure to allow all instances in the network.</p>
</li>
<li><p>Then try to telnet the Kubernetes control plane once again and you should be able to connect now:</p>
<p>$ telnet 10.0.0.2 6443</p>
<p>Trying 10.0.0.2...</p>
<p>Connected to 10.0.0.2.</p>
<p>Escape character is '^]'.</p>
</li>
<li><p>Try to join the worker nodes now. You can run the command 'kubeadm token create --print-join-command' to create new token just in case you forgot to save the old one.</p>
</li>
<li><p>Run 'kubectl get nodes' on the control-plane to see this node join the cluster</p>
<p>$ kubectl get nodes</p>
<p>NAME STATUS ROLES AGE VERSION</p>
<p>k8s Ready control-plane 57m v1.25.0</p>
<p>wk8s-node-0 Ready 36m v1.25.0</p>
<p>wk8s-node-1 Ready 35m v1.25.0</p>
</li>
</ol>
|
<p>I'm trying to exec kubernetes pod using the Websocket, as per the kubernetes document it can be achieved through passing the <strong>Bearer THETOKEN</strong></p>
<p>When using bearer token authentication from an http client, the API server expects an Authorization header with a value of Bearer THETOKEN</p>
<p>Here is the sample for <code>wscat</code> passing Header Value <code>--header "Authorization: Bearer $TOKEN"</code> to establish exec to pod and the connection went successfully</p>
<pre><code>/ # wscat --header "Authorization: Bearer $TOKEN" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg
-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
</code></pre>
<p>But when it comes to <a href="https://developer.mozilla.org/en/docs/Web/API/WebSocket" rel="noreferrer">Websocket API</a> connection from web browser </p>
<blockquote>
<p>How to pass this Beaer Token in the web Socket as per the doc there is no standard way to pass custom header </p>
</blockquote>
<p>Tried URI Query Parameter <strong>access_token= Bearer TOKEN</strong> in the API query it doesn't work and the Authentication denied with 403 </p>
<pre><code>wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-%206f69577c74-jxbwn/exec?stdout=1&stdin=1&stderr=1&tty=1&command=%2Fbin%2Fsh&command=-i&access_token=$TOKEN
</code></pre>
| <p>I never used websocket with kubernetes before, but here is the documentation about the token authentication method for websocket browser clients <a href="https://github.com/kubernetes/kubernetes/pull/47740" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/47740</a></p>
<p>You must send the token in the subprotocol parameter, with the token encoded in base64.</p>
<p>So it should be:</p>
<pre><code>wscat -s "base64url.bearer.authorization.k8s.io.$TOKEN_IN_BASE64","base64.binary.k8s.io" -c "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg
-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh"
</code></pre>
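<p>In a browser, the same idea is to pass those subprotocols as the second argument of the <code>WebSocket</code> constructor. A rough sketch (assuming <code>rawToken</code> holds the bearer token; it must be base64url-encoded with padding stripped, and the subprotocol names mirror the wscat example above):</p>
<pre><code>// convert the token to base64url without padding
const token = btoa(rawToken).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
const ws = new WebSocket(
  "wss://api.0cloud0.com/api/v1/namespaces/ba410a7474380169a5ae230d8e784535/pods/txaclqhshg-6f69577c74-jxbwn/exec?stdin=1&stdout=1&stderr=1&tty=1&command=sh",
  ["base64url.bearer.authorization.k8s.io." + token, "base64.binary.k8s.io"]
);
ws.onmessage = (event) => console.log(event.data);
</code></pre>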
|
<p>I have a k8s cluster where I deploy some containers.</p>
<p>The cluster is accessible at microk8s.hostname.internal.</p>
<p>At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.</p>
<p>And this works great.</p>
<p>Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.</p>
<p>How do I do this?</p>
<p>Currently installed addons in microk8s:</p>
<pre><code>aasa@bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
</code></pre>
<p>Update 1:
If I portforward to my service it works.
I have tried this ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
namespace: jupyter-notebook
annotations:
kubernetes.io/ingress.class: public
spec:
rules:
- host: jupyter.microk8s.hostname.internal
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: jupyter-service
port:
number: 7070
</code></pre>
<p>But I cant access it nor ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.</p>
<p>My service looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jupyter-service
namespace: jupyter-notebook
spec:
ports:
- name: 7070-8888
port: 7070
protocol: TCP
targetPort: 8888
selector:
app: jupyternotebook
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>I can of course ping microk8s.hostname.internal.</p>
<p>Update 2:</p>
<p>The ingress that is working today that has a context path: microk8s.boliden.internal/myapplication looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/rewrite-target: /$1
name: jupyter-ingress
namespace: jupyter-notebook
spec:
rules:
- http:
paths:
- path: "/jupyter-notebook/?(.*)"
pathType: Prefix
backend:
service:
name: jupyter-service
port:
number: 7070
</code></pre>
<p>This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.</p>
| <p>To do this you would have to configure a Kubernetes Service, an Ingress, and then configure your DNS.</p>
<p>Adding an entry to the <code>hosts</code> file would allow DNS resolution of <code>otherapplication.microk8s.hostname.internal</code>.</p>
<p>You could use <code>dnsmasq</code> to allow for wildcard resolution, e.g. <code>*.microk8s.hostname.internal</code>.</p>
<p>You can test the DNS resolution using <code>nslookup</code> or <code>dig</code>, as sketched below.</p>
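<p>A rough sketch of the DNS part (the IP address and file paths are assumptions for illustration): either a one-off <code>hosts</code> entry, or a dnsmasq wildcard so every subdomain resolves to the same ingress IP:</p>
<pre><code># /etc/hosts entry (single host)
192.168.1.50  otherapplication.microk8s.hostname.internal

# /etc/dnsmasq.d/microk8s.conf (wildcard for all subdomains)
address=/.microk8s.hostname.internal/192.168.1.50

# verify resolution
nslookup otherapplication.microk8s.hostname.internal
dig +short jupyter.microk8s.hostname.internal
</code></pre>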
|
<p>I'm trying to exec a command into a running pod. I'm using go K8sclient to achieve this but facing a issue. I also don't know if solution is correct or not. Can anyone please check and provide correct solution?</p>
<p>This is my code.</p>
<pre><code> namespace := getNamespace()
podName := "maxscale-0"
config, err := rest.InClusterConfig()
if err != nil {
log.Fatal(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
req := clientset.CoreV1().Pods(namespace).Exec(podName, &corev1.PodExecOptions{
Command: []string{"sh", "-c", "grep -oP '\"name\": \"\\K[^\"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json"},
})
// Set up a stream to capture the output
execStream, err := req.Stream()
if err != nil {
fmt.Println(err)
os.Exit(1)
}
// Print the output
buf := new(bytes.Buffer)
buf.ReadFrom(execStream)
fmt.Println(buf.String())
</code></pre>
<p>The error I got is</p>
<pre><code>clientset.CoreV1().Pods(namespace).Exec undefined (type "k8s.io/client-go/kubernetes/typed/core/v1".PodInterface has no field or method Exec)
</code></pre>
| <p>As @David Maze shared, to use the Kubernetes Go client to exec a command in a pod, you can follow the code below:</p>
<pre><code>import (
"io"
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/remotecommand"
)
// ExecCmd exec command on specific pod and wait the command's output.
func ExecCmdExample(client kubernetes.Interface, config *restclient.Config, podName string,
command string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error {
cmd := []string{
"sh",
"-c",
command,
}
req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName).
Namespace("default").SubResource("exec")
option := &v1.PodExecOptions{
Command: cmd,
Stdin: true,
Stdout: true,
Stderr: true,
TTY: true,
}
if stdin == nil {
option.Stdin = false
}
req.VersionedParams(
option,
scheme.ParameterCodec,
)
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
return err
}
err = exec.Stream(remotecommand.StreamOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
})
if err != nil {
return err
}
return nil
}
</code></pre>
<p>Also refer to this <a href="https://github.com/kubernetes/client-go/issues/912" rel="nofollow noreferrer">link</a> for more information</p>
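<p>A hypothetical usage sketch, wiring the helper up with the in-cluster config from the question (note that the helper above hardcodes the <code>default</code> namespace, so adjust it to your namespace; the imports from the question and answer are assumed):</p>
<pre><code>config, err := rest.InClusterConfig()
if err != nil {
    log.Fatal(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    log.Fatal(err)
}
var stdout, stderr bytes.Buffer
// the command from the question, passed through to "sh -c" by the helper
cmd := `grep -oP '"name": "\K[^"]*' /var/lib/maxscale/MariaDB-Monitor_journal.json`
if err := ExecCmdExample(clientset, config, "maxscale-0", cmd, nil, &stdout, &stderr); err != nil {
    log.Fatal(err)
}
fmt.Println(stdout.String())
</code></pre>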
|
<p>I am just wondering to know how should I create a <code>docker</code> file for a Flutter app then deploy it on a <code>Kubernetes</code> cluster?</p>
<p>I found the following <code>Dockerfile</code> and <code>server.sh</code> script from <a href="https://blog.logrocket.com/containerizing-flutter-web-apps-with-docker/" rel="nofollow noreferrer">this</a> website but I am not sure if this a correct way of doing it?</p>
<pre><code># Install Operating system and dependencies
FROM ubuntu:22.04
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3
RUN apt-get clean
# download Flutter SDK from Flutter Github repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter environment path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Run flutter doctor
RUN flutter doctor
# Enable flutter web
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN flutter build web
# Record the exposed port
EXPOSE 5000
# make server startup script executable and start the web server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
</code></pre>
<p>And:</p>
<pre><code>#!/bin/bash
# Set the port
PORT=5000
# Stop any program currently running on the set port
echo 'preparing port' $PORT '...'
fuser -k 5000/tcp
# switch directories
cd build/web/
# Start the server
echo 'Server starting on port' $PORT '...'
python3 -m http.server $PORT
</code></pre>
<p>I did all the steps and it seems it works fine but as long as I use <code>skaffold</code> I don't know how/where to put the following command to automate this step as well (I have already ran this command manually):</p>
<pre><code>docker run -i -p 8080:5000 -td flutter_docker
</code></pre>
<p><em><strong>I still like to know was the above files, proper/official way to doing that or there is a better way of it?</strong></em></p>
<p><code>EDIT</code>: I created the following <code>deployment & service</code> file to put the deploy the created image on <code>Kubernetes</code> local <code>Kind</code> cluster but when I run <code>kubectl get pods</code> I can not find this image but I find it by doing <code>docker images</code>. Why this happens and how can I put in on a <code>Kubernetes</code> pod instead of <code>docker</code> images?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: front
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
| <p>The question (title) is misleading.
There are 2 parts.</p>
<ol>
<li>How to containerize the app (in this case flutter app).</li>
<li>How to deploy the app on the k8s cluster.</li>
</ol>
<p>To deal with the first part, You have Dockerfile. There is room for improvement but I think this Dockerfile should work. Then you need to build a container image. Please refer to the <a href="https://docs.docker.com/engine/reference/commandline/build/" rel="nofollow noreferrer">official documentation</a>. Finally, you need to push this created container image to some repository. (We may skip this pushing stage but to make things simple I am suggesting pushing the image)</p>
<p>For the second part, you should be familiar with basic Kubernetes concepts. You can run the container from a previously built container image with the help of the <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">k8s Pod object</a>. To access the application, you need one more k8s object and that is the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> (Load balancer or Node port type).</p>
<p>I know things are a bit complex (at initial levels), but please follow a good course/book. I have gone through the blog post you shared, and it talks only about the first part, not the second part. You will have a container image at the end of that blog post.</p>
<p>I suggest going through the free playground offered by <a href="https://killercoda.com/killer-shell-ckad/" rel="nofollow noreferrer">killer shell</a> if you don't want to set up a k8s cluster on your own, which is again another learning curve. Skip the first tile on that page (it is just a playground); from the second tile on, they have enough material.</p>
<p><strong>Improvements for Edited Question:</strong></p>
<ul>
<li>server.sh: maintaining a startup script is quite standard practice if you have complex logic to start the process. We can skip this file but in that case, a few steps will be added to Dockerfile.</li>
<li><code>kubectl get pods</code> does not show you images; it shows the running pods in the cluster (in the default namespace). It is not clear how you ran and connected to the cluster, so try to add the output of that command.</li>
<li>A few pointers to improve the Dockerfile (see the sketch after this list):
<ul>
<li>Use a base image with a small footprint. <code>ubuntu:22.04</code> has many packages pre-installed that you may not need; Ubuntu also has slim images, or try to find a Flutter image.</li>
<li>Try to reduce RUN statements. You can club 2-3 commands into one, which reduces the number of layers in the image.</li>
<li>Instead of <code>RUN git clone</code>, you should clone the code before <code>docker build</code> and copy/add the code into the container image. This way you can control which files are added to the image, and you don't need the git tool installed in the container image.</li>
<li><code>RUN ["chmod", "+x", "/app/server/server.sh"]</code> and <code>RUN mkdir</code> are both not needed at all if you write the Dockerfile smartly.</li>
<li>Dockerfiles should be clean, crisp, and precise.</li>
</ul>
</li>
</ul>
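<p>Putting those pointers together, a condensed (untested) multi-stage sketch; the <code>cirrusci/flutter</code> image name is just one example of a prebuilt Flutter SDK image, and paths assume the app code sits next to the Dockerfile:</p>
<pre><code># build stage: use an image that already ships the Flutter SDK
FROM cirrusci/flutter:stable AS build
WORKDIR /app
COPY . .
RUN flutter config --enable-web && flutter build web

# runtime stage: serve the static web build with a small Python image
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /app/build/web ./build/web
EXPOSE 5000
ENTRYPOINT ["python3", "-m", "http.server", "5000", "--directory", "build/web"]
</code></pre>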
<p>PS: Sorry, but this is not a classroom section. I know this is a bit complex for beginners, but please try to learn from some good sources/books.</p>
|
<p>I'm running colima with kubernetes like:
<code>colima start --kuberenetes</code></p>
<p>I created a few running pods, and I want to see access the through the browsers.
But I don't know what is the colima IP (or kubernetes node IP).</p>
<p>help appreciated</p>
| <p>You can get the node IP like this:</p>
<pre><code>kubectl get node
NAME STATUS ROLES AGE VERSION
nodeName Ready <none> 15h v1.26.0
</code></pre>
<p>Then with the nodeName:</p>
<pre><code>kubectl describe node nodeName
</code></pre>
<p>That gives you a description of the node; look for this section:</p>
<pre><code>Addresses:
InternalIP: 10.165.39.165
Hostname: master
</code></pre>
<p>Ping it to verify the network.</p>
<p>Find your hosts file on the Mac and make an entry like:</p>
<pre><code>10.165.39.165 test.local
</code></pre>
<p>This lets you access the cluster with a domain name.</p>
<p>Ping it to verify.</p>
<p>You can not access a ClusterIP service from outside the cluster.
To access your pod you have several possibilities.</p>
<ol>
<li>If your service is of type ClusterIP, you can create a temporary connection from your host with a port forward.</li>
</ol>
<pre><code>kubectl port-forward svc/yourservicename localport:podport
</code></pre>
<ol start="2">
<li>(I would recommend this) create a Service of type NodePort; see the sketch right after this list.</li>
</ol>
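<p>A minimal NodePort Service sketch (the name, selector, and ports are placeholders to adapt to your pod):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: yourservicename
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
  - port: 80          # service port
    targetPort: 8080  # container port
    # nodePort: 30080 # optional; otherwise one is assigned from 30000-32767
</code></pre>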
<p>Then</p>
<pre><code>kubectl get svc -o wide
</code></pre>
<p>Shows you the NodePort: between(30000-32000).</p>
<p>You can access now the Pod by: test.local:nodePort or Ipaddress:NodePort.</p>
<p>Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.</p>
<p>Update:</p>
<p>if you want to start colima with an ipAddress, first find one of your local network which is available.</p>
<p>Your network setting you can get with:</p>
<pre><code>ifconfig
</code></pre>
<p>Find the network. It should be the same as that of your Internet router.</p>
<p>Look for the subnet. Most likely 255.255.255.0.</p>
<p>The value to pass then:</p>
<pre><code> --network-address xxx.xxx.xxx.xxx/24
</code></pre>
<p>In case the subnet is 255.255.0.0, use /16 instead. That is unlikely if you are connecting from home; inside a company, however, this is possible.</p>
<p>Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.</p>
|
<p>Is it possible to specify or change the service account to be used when accessing the kube api from within the cluster using rest.InClusterConfig in golang?
It seems to use the default service account (or the service account the pod running is under) but i want to use another service account.
I am aware that i can use BuildConfigFromFlags and use the configs from a config file that may be tied to a service account, but i wanted to see if it is possible to override the service account with rest.InClusterConfig</p>
| <p>In Kubernetes, a Pod (or multiple for the same service) has a ServiceAccount. That is the way it is designed.</p>
<p>This ServiceAccount can be a specific one that you create; you don't have to use the default ServiceAccount of a Namespace. A minimal example of assigning one to a Pod is sketched below.</p>
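<p>A minimal sketch (the ServiceAccount and image names are placeholders): create the ServiceAccount and reference it from the Pod spec, and <code>rest.InClusterConfig()</code> will then use that account's mounted token automatically:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-custom-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-custom-sa   # hypothetical ServiceAccount created beforehand
  containers:
  - name: app
    image: my-app:latest
</code></pre>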
|
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Node IP</th>
<th>Role</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>192.x.x.11</code></td>
<td>Master 1</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.12</code></td>
<td>Master 2</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.13</code></td>
<td>Master 3</td>
<td>RHEL8</td>
</tr>
<tr>
<td><code>192.x.x.16</code></td>
<td>VIP</td>
<td></td>
</tr>
</tbody>
</table>
</div><h1>Use-Cases</h1>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>No of Masters Ready or Running</th>
<th>Expected</th>
<th>Actual</th>
</tr>
</thead>
<tbody>
<tr>
<td>3 Masters</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is working</td>
</tr>
<tr>
<td>2 Masters</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is working</td>
</tr>
<tr>
<td>1 Master</td>
<td>Ingress Created with VIP IP and ping to VIP should work</td>
<td>VIP is not working, Kubectl is not responding</td>
</tr>
</tbody>
</table>
</div>
<p>I have Created a RKE2 HA Cluster with <strong>kube-vip</strong> and the cluster is working fine only when at least 2 masters are in Running, but I want to test a use case where only 1 master is available the VIP should be able to ping and any ingress created with VIP address should work.</p>
<p>In my case when 2 masters are down I'm facing an issue with kube-vip-ds pod, when i check the logs using crictl command I'm getting the below error can someone suggest to me how to reslove this issue.</p>
<pre><code>
E0412 12:32:20.733320 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: etcdserver: request timed out
E0412 12:32:20.733715 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: etcdserver: request timed out
E0412 12:32:25.812202 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
E0412 12:32:25.830219 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field
E0412 12:33:27.204128 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock)
E0412 12:33:27.504957 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock)
E0412 12:34:29.346104 1 leaderelection.go:322] error retrieving resource lock kube-system/plndr-cp-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-cp-lock)
E0412 12:34:29.354454 1 leaderelection.go:325] error retrieving resource lock kube-system/plndr-svcs-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io plndr-svcs-lock)
</code></pre>
<p>Thanks.</p>
| <p>Kindly check whether you are running a stacked etcd datastore as part of your k8s cluster.
etcd requires a quorum of (n/2)+1 members to be running; with 3 nodes the quorum is 2, so only one failure can be tolerated. In your case, with 2 masters down, etcd loses quorum and the cluster becomes non-operational.</p>
|
<p>I have a docker container python app deployed on a kubernetes cluster on Azure (I also tried on a container app). I'm trying to connect this app to Azure key vault to fetch some secrets. I created a managed identity and assigned it to both but the python app always fails to find the managed identity to even attempt connecting to the key vault.</p>
<p>The Managed Identity role assignments:</p>
<p>Key Vault Contributor -> on the key vault</p>
<p>Managed Identity Operator -> Managed Identity</p>
<p>Azure Kubernetes Service Contributor Role,
Azure Kubernetes Service Cluster User Role,
Managed Identity Operator -> on the resource group that includes the cluster</p>
<p>Also on the key vault Access policies I added the Managed Identity and gave it access to all key, secrets, and certs permissions (for now)</p>
<p>Python code:</p>
<pre><code> credential = ManagedIdentityCredential()
vault_client = SecretClient(vault_url=key_vault_uri, credential=credential)
retrieved_secret = vault_client.get_secret(secret_name)
</code></pre>
<p>I keep getting the error:</p>
<pre><code>azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: no azure identity found for request clientID
</code></pre>
<p>So at some point I attempted to add the managed identity clientID in the cluster secrets and load it from there and still got the same error:</p>
<p>Python code:</p>
<pre><code> def get_kube_secret(self, secret_name):
kube_config.load_incluster_config()
v1_secrets = kube_client.CoreV1Api()
string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"")
json_secret = json.loads(string_secret)
return json_secret
def decode_base64_string(self, encoded_string):
decoded_secret = base64.b64decode(encoded_string.strip())
decoded_secret = decoded_secret.decode('UTF-8')
return decoded_secret
managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId']
managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret)
</code></pre>
<p><strong>Update:</strong></p>
<p>I also attempted to use the secret store CSI driver, but I have a feeling I'm missing a step there. Should the python code be updated to be able to use the secret store CSI driver?</p>
<pre><code># This is a SecretProviderClass using user-assigned identity to access the key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: azure-kvname-user-msi
spec:
provider: azure
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true" # Set to true for using managed identity
userAssignedIdentityID: "$CLIENT_ID" # Set the clientID of the user-assigned managed identity to use
vmmanagedidentityclientid: "$CLIENT_ID"
keyvaultName: "$KEYVAULT_NAME" # Set to the name of your key vault
cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
objects: ""
tenantId: "$AZURE_TENANT_ID"
</code></pre>
<p>Deployment Yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: redacted_namespace
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- name: backend
image: redacted_image
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
imagePullPolicy: Always
resources:
# You must specify requests for CPU to autoscale
# based on CPU utilization
requests:
cpu: "250m"
env:
- name: test-secrets
valueFrom:
secretKeyRef:
name: test-secrets
key: test-secrets
volumeMounts:
- name: test-secrets
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: test-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-kvname-user-msi"
dnsPolicy: ClusterFirst
</code></pre>
<p><strong>Update 16/01/2023</strong></p>
<p>I followed the steps in the answers and the linked docs to the letter, even contacted Azure support and followed it step by step with them on the phone and the result is still the following error:</p>
<p><code>"failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net"</code></p>
| <p>Using the <a href="https://secrets-store-csi-driver.sigs.k8s.io/topics/set-as-env-var.html" rel="nofollow noreferrer">Secrets Store CSI Driver</a>, you can configure the <code>SecretProviderClass</code> to use a <a href="https://learn.microsoft.com/azure/aks/workload-identity-overview" rel="nofollow noreferrer">workload identity</a> by setting the <code>clientID</code> in the <code>SecretProviderClass</code>. You'll need to use the client ID of your <a href="https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview?source=recommendations#managed-identity-types" rel="nofollow noreferrer">user assigned managed identity</a> and change the <code>usePodIdentity</code> and <code>useVMManagedIdentity</code> setting to <code>false</code>.</p>
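<p>As a sketch, the relevant part of the <code>SecretProviderClass</code> would then look roughly like this; the client ID, vault name, tenant ID, and secret name are placeholders, and the field names follow the Azure provider's documented format:</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-workload-identity
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "<CLIENT_ID of the user-assigned managed identity>"
    keyvaultName: "<KEYVAULT_NAME>"
    tenantId: "<AZURE_TENANT_ID>"
    objects: |
      array:
        - |
          objectName: MongoUsername
          objectType: secret
</code></pre>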
<p>With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using CSI driver) as a volume mount in your pod and have secrets loaded as environment variables which is documented <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/sync-with-k8s-secrets/" rel="nofollow noreferrer">here</a>.</p>
<p>This <a href="https://learn.microsoft.com/azure/aks/csi-secrets-store-identity-access#configure-workload-identity" rel="nofollow noreferrer">doc</a> will walk you through setting it up on Azure, but at a high-level here is what you need to do:</p>
<ol>
<li>Register the <code>EnableWorkloadIdentityPreview</code> feature using Azure CLI</li>
<li>Create an AKS cluster using Azure CLI with the <code>azure-keyvault-secrets-provider</code> add-on enabled and <code>--enable-oidc-issuer</code> and <code>--enable-workload-identity</code> flags set</li>
<li>Create an Azure Key Vault and set your secrets</li>
<li>Create an Azure User Assigned Managed Identity and set an access policy on the key vault for the the managed identity' client ID</li>
<li>Connect to the AKS cluster and create a Kubernetes <code>ServiceAccount</code> with annotations and labels that enable this for Azure workload identity</li>
<li>Create an Azure identity federated credential for the managed identity using the AKS cluster's OIDC issuer URL and Kubernetes ServiceAccount as the subject</li>
<li>Create a Kubernetes <code>SecretProviderClass</code> using <code>clientID</code> to use workload identity and adding a <code>secretObjects</code> block to enable syncing objects as environment variables using Kubernetes secret store.</li>
<li>Create a Kubernetes <code>Deployment</code> with a <code>label</code> to use workload identity, the <code>serviceAccountName</code> set to the service account you created above, volume using CSI and the secret provider class you created above, volumeMount, and finally environment variables in your container using <code>valueFrom</code> and <code>secretKeyRef</code> syntax to mount from your secret object store.</li>
</ol>
<p>Hope that helps.</p>
|
<p>I use MetalLB and Nginx-ingress controller to provide internet access to my apps.
I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there.</p>
<p>My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP?</p>
<p>If so, is this how it is supposed to be, or is there a better way?</p>
| <p>Metallb doesn't receive and forward any traffic, so</p>
<blockquote>
<p>from MetalLB to my ingress</p>
</blockquote>
<p>doesn't really make sense. Metallb just configures kubernetes services with an external ip and tells your surrounding infrastructure where to find it. Still with your setup there will be double load-balancing:</p>
<p>Traffic reaches your cluster and is load-balanced between your nginx pods. Nginx handles the request and forwards it to the application, which will result in a second load-balancing.</p>
<p>But this makes total sense, because if you're using an ingress-controller, you don't want all incoming traffic to go through the same pod.</p>
<p>Using an ingress-controller with metallb can be done and can improve stability while performing updates on your application, but it's not required.</p>
<p>Metallb is a solution to implement kubernetes services of type <code>LoadBalancer</code> when there is no cloud provider to do that for you.</p>
<p>So if you don't need a layer 7 load-balancing mechanism, then instead of using a service of type <code>ClusterIP</code> with an ingress-controller you can just use a service of type <code>LoadBalancer</code>. Metallb will give that service an external IP from your pool and announce it to its peers.</p>
<p>In that case, when traffic reaches the cluster it will only be load-balanced once.</p>
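<p>For completeness, a minimal sketch of that direct approach (the name, selector, and ports are placeholders); MetalLB will assign the external IP from its address pool:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer   # MetalLB assigns the external IP
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
</code></pre>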
|
<p>We have our web api service running in OpenShift for the past few months</p>
<p>When we deployed this to OpenShift, initially we have given basic request and limits for memory and CPU.</p>
<p>Sometime when the resource limit crossed its threshold we had to increase the limit</p>
<p>We have several services deployed and we have given some random request and limits for the Pod.
we are trying to figure out a way to provide resource limits and request based on the past few months that it is running on OpenShift</p>
<p>My idea is to look at the last few months on requests what is POD is receiving and come up with a value to requests and limits</p>
<p>I am thinking PROMQL can help me to provide this value, can someone help me with a query to determine average resource and limits based on past 4 to 5 weeks of requests on the POD ?</p>
| <p>Try the below queries, which are helpful in your case:</p>
<pre><code>avg (
avg_over_time(container_cpu_usage_seconds_total:rate5m[30d])
) by (pod_name)
</code></pre>
<p>The above query is used to determine the average CPU usage of a certain pod for the past 30 days.</p>
<pre><code>avg (
  avg_over_time(container_memory_working_set_bytes[30d])
) by (pod_name)
</code></pre>
<p>The above query is used to determine the average memory usage of a certain pod for the past 30 days.</p>
<p>In the queries, <code>avg</code> calculates the average of the sample values in the input series, grouped by <code>[pod_name]</code>. <code>avg_over_time</code> returns the average value of all points in the specified interval, so we can get metrics like CPU and memory usage for that interval with the respective queries.</p>
<p>For more info follow this <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="nofollow noreferrer">doc</a>.</p>
|
<p>I am trying to set up a Kubernetes master node. Every time I try to start kubelet I am getting the error message:</p>
<pre><code>command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set
</code></pre>
<p>I tries to set up the container runtime endpoint with the following command:</p>
<pre><code>sudo kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
</code></pre>
<p>But when I do, I get the following log with a failing command:</p>
<pre><code>I0116 09:43:02.562785 4142 server.go:412] "Kubelet version" kubeletVersion="v1.26.0"
I0116 09:43:02.563067 4142 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0116 09:43:02.563458 4142 server.go:575] "Standalone mode, no API client"
I0116 09:43:02.566523 4142 server.go:463] "No api server defined - no events will be sent to API server"
I0116 09:43:02.566664 4142 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0116 09:43:02.567002 4142 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0116 09:43:02.567130 4142 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
I0116 09:43:02.567232 4142 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0116 09:43:02.567305 4142 container_manager_linux.go:308] "Creating device plugin manager"
I0116 09:43:02.567449 4142 state_mem.go:36] "Initialized new in-memory state store"
E0116 09:43:02.570133 4142 run.go:74] "command failed" err="failed to run Kubelet: validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
</code></pre>
<p>Can anyone help me with this?</p>
<p>I have already tried resetting kubeadm because I read in a forum that could cause this problem.</p>
<p>This did not solve my problem. Outside of that, I read that going back to an older version of Kubernetes will work.
(But I did not try that. I want to use version 1.26 which is the newest).</p>
<p>And I cannot imagine Kubernetes making a version where containerd doesn't work. So where is the fix?</p>
| <p>Looks like you are encountering the problem with <a href="https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal" rel="noreferrer">Removal of the CRI v1alpha2 API and containerd 1.5 support</a> in K8s 1.26.</p>
<p><strong>Possible workarounds:</strong></p>
<p>You can solve this by removing the containerd package that came with Ubuntu and installing the containerd.io package from the Docker repositories, which gives you a 1.6 release. Then re-generate containerd's config (including the cgroup changes) and restart it while still on 1.25.5; after that you are able to complete the upgrade to 1.26.</p>
<p>From there it was a simple do-release-upgrade to the latest ubuntu.</p>
<p><strong>1)</strong> In the Docker repos, there are packages for containerd 1.6 and above. So you can also add the Docker repos, and install containerd.io from there:</p>
<pre><code>sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install containerd.io
</code></pre>
<p><strong>2)</strong> Manually upgrading containerd to 1.6 or above, by downloading and replacing the binaries</p>
<pre><code>wget https://github.com/containerd/containerd/releases/download/v1.6.12/containerd-1.6.12-linux-amd64.tar.gz
tar xvf containerd-1.6.12-linux-amd64.tar.gz
systemctl stop containerd
cd bin
cp * /usr/bin/
systemctl start containerd
</code></pre>
<p><strong>3)</strong> The one listed in the link above - running an older version of the kubelet (1.25)</p>
<pre><code>apt remove --purge kubelet
apt install -y kubeadm kubelet=1.25.5-00
</code></pre>
<p>Please go through the similar <a href="https://serverfault.com/questions/1118051/failed-to-run-kubelet-validate-service-connection-cri-v1-runtime-api-is-not-im">ServerFault Answers</a> for detailed step-by-step information.</p>
<p><strong>EDIT :</strong></p>
<p><strong>4)</strong> Third-party replacement, <a href="https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd/#what-is-cri-dockerd" rel="noreferrer">cri-dockerd</a>, is available. The cri-dockerd adapter lets you use Docker Engine through the <strong>Container Runtime Interface</strong>.</p>
<p>If you already use cri-dockerd, you aren't affected by the dockershim removal. Before you begin, <a href="https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/" rel="noreferrer">Check whether your nodes use the dockershim</a>.</p>
|
<p>I'm trying to trigger cronjob manually(not scheduled) using fabric8 library
but getting the following error:</p>
<pre><code>Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.20.0.1:443/apis/batch/v1/
namespaces/engineering/jobs. Message: Job.batch "app-chat-manual-947171" is invalid: spec.template.spec.containers[0].name: Re
quired value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.template.spec.co
ntainers[0].name, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=batch, kind=Job, name=ap
p-chat-manual-947171, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Job.batch "app-chat-man
ual-947171" is invalid: spec.template.spec.containers[0].name: Required value, metadata=ListMeta(_continue=null, remainingItemCount=
null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
</code></pre>
<p>my code is running at the cluster:</p>
<p>maven dependency:</p>
<pre><code><dependency>
<groupId>io.fabric8</groupId>
<artifactId>kubernetes-client</artifactId>
<version>6.3.1</version>
</dependency>
</code></pre>
<p>java code:</p>
<pre><code>public static void triggerCronjob(String cronjobName, String applicableNamespace) {
KubernetesClient kubernetesClient = new KubernetesClientBuilder().build();
final String podName = String.format("%s-manual-%s", cronjobName.length() > 38 ? cronjobName.substring(0, 38) : cronjobName,
new Random().nextInt(999999));
System.out.println("triggerCronjob method invoked, applicableNamespace: " + applicableNamespace
+ ", cronjobName: " + cronjobName + ", podName: " + podName);
Job job = new JobBuilder()
.withApiVersion("batch/v1")
.withNewMetadata()
.withName(podName)
.endMetadata()
.withNewSpec()
.withBackoffLimit(4)
.withNewTemplate()
.withNewSpec()
.addNewContainer()
.withName(podName)
.withImage("perl")
.withCommand("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)")
.endContainer()
.withRestartPolicy("Never")
.endSpec()
.endTemplate()
.endSpec().build();
kubernetesClient.batch().v1().jobs().inNamespace(applicableNamespace).createOrReplace(job);
kubernetesClient.close();
System.out.println("CronJob triggered: applicableNamespace: " + applicableNamespace + ", cronjob name: " + cronjobName);
}
</code></pre>
<p>the code executed at the kubernetes cluster, but not form the application, it's an external program that's running in the cluster.</p>
<p>my goal is to trigger given job in a given namespace.</p>
| <p>If you want to trigger an already existing CronJob, you need to provide ownerReference for the existing CronJob in Job:</p>
<pre class="lang-java prettyprint-override"><code>// Get already existing CronJob
CronJob cronJob = kubernetesClient.batch().v1()
.cronjobs()
.inNamespace(namespace)
.withName(cronJobName)
.get();
// Create new Job object referencing CronJob
Job newJobToCreate = new JobBuilder()
.withNewMetadata()
.withName(jobName)
.addNewOwnerReference()
.withApiVersion("batch/v1")
.withKind("CronJob")
.withName(cronJob.getMetadata().getName())
.withUid(cronJob.getMetadata().getUid())
.endOwnerReference()
.addToAnnotations("cronjob.kubernetes.io/instantiate", "manual")
.endMetadata()
.withSpec(cronJob.getSpec().getJobTemplate().getSpec())
.build();
// Apply job object to Kubernetes Cluster
kubernetesClient.batch().v1()
.jobs()
.inNamespace(namespace)
.resource(newJobToCreate)
.create();
</code></pre>
|
<p>I am getting below error when running <code>mlflow app</code></p>
<blockquote>
<p>raise AttributeError("module {!r} has no attribute " AttributeError:
module 'numpy' has no attribute 'object'</p>
</blockquote>
<p>Can someone help me with this</p>
| <p>As of numpy 1.24, <code>np.object</code> (deprecated since 1.20) has been removed and needs to be replaced with plain <code>object</code> (cf. <a href="https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations" rel="noreferrer">numpy release notes</a>).</p>
<p>You either need to update this in your code, or another package you're using needs to be updated (not possible to answer without more information).</p>
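<p>If it is your own code, the change is mechanical; a minimal sketch (the <code>data</code> value here is just an example):</p>
<pre><code>import numpy as np

data = [1, "two", [3]]
# arr = np.array(data, dtype=np.object)   # fails with AttributeError on numpy >= 1.24
arr = np.array(data, dtype=object)        # works on all numpy versions
</code></pre>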
<p>One (dirty) workaround for now would be to fix your numpy version to the last version still supporting <code>np.object</code> with <code>pip install numpy==1.23.4</code></p>
|
<p>I have a terraform code as given below</p>
<pre><code> locals {
file_path = format("%s-%s", var.test1, var.test2)
test_decode = yamldecode((data.github_repository_file.test.content))
}
data "github_repository_file" "test" {
repository = "test-repo"
branch = "develop"
file = "${local.file_path}/local/test.yaml"
}
test_encode = ${yamlencode(local.test_decode.spec.names)}
</code></pre>
<p>This is working fine when a "<strong>.spec.names</strong>" attribute present in the test.yaml file. Since we are selecting the test.yaml based on local.file_path some times attribute .spec.names might not present in the test.yaml and the plan failing with <strong>"Error: Unsupported attribute"</strong>. How to check "<strong>.spec.names</strong>" attribute present in the test.yaml?</p>
<p>Updating the question to add yaml example</p>
<p><strong>Yaml with names attribute</strong></p>
<pre><code>apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
name: "test"
namespace: "test-system"
spec:
chart:
spec:
chart: "test-environment"
version: "0.1.10"
names:
key1: "value1"
key2: "value2"
key3: "value3"
key4: "value4"
</code></pre>
<p><strong>YAML without names attribute</strong></p>
<pre><code>apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
name: "test"
namespace: "test-system"
spec:
chart:
spec:
chart: "test-environment"
version: "0.1.10"
</code></pre>
| <p>You can use <a href="https://developer.hashicorp.com/terraform/language/functions/try" rel="nofollow noreferrer">try</a>:</p>
<pre><code> test_encode = yamlencode(try(local.test_decode.spec.names, "some_default_value"))
</code></pre>
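<p>Alternatively, if you prefer an explicit presence check rather than a fallback value, Terraform's <code>can()</code> evaluates to a boolean; mirroring the pseudo-style of the question, and with the empty-map fallback being just an assumption about what suits your module:</p>
<pre><code>locals {
  names_present = can(local.test_decode.spec.names)
}

test_encode = local.names_present ? yamlencode(local.test_decode.spec.names) : yamlencode({})
</code></pre>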
|
<p>I am new to Kubernetes and if I am not wrong, a service can be exposed inside the cluster using <code>ClusterIP</code>, and to the outside world using <code>NodePort</code> or <code>LoadBalancer</code> types. But my requirement is that I have a single container that has few REST APIs. I want that one API (the basic health check API) should be exposed to the outside, and the rest of the APIs should be available only within the cluster (accessible by other nodes). How can I achieve this?</p>
| <p>You can keep your service as <strong>ClusterIP</strong> and use the <strong>ingress</strong>.</p>
<p>With an <strong>ingress</strong> and an <strong>ingress controller</strong>, you can set up and expose only the desired path outside of the cluster.</p>
<p>For <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>, you can install the <strong>Nginx ingress controller</strong> and create the <strong>ingress</strong> resource.</p>
<p>Read more about the <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">nginx ingress controller</a> setup and the <strong>ingress</strong> resource setup. A minimal example follows below.</p>
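<p>A minimal sketch of such an ingress, exposing only the health-check path (the service name, port, path, and ingress class are placeholders to adapt to your setup):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: health-only
spec:
  ingressClassName: nginx   # depends on your controller installation
  rules:
  - http:
      paths:
      - path: /healthz
        pathType: Prefix
        backend:
          service:
            name: my-api-svc
            port:
              number: 80
</code></pre>
<p>The other REST APIs stay reachable inside the cluster via the ClusterIP service DNS name, since they are not listed in any ingress rule.</p>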
|
<p>I am just setting two simple services on Mac using minikube</p>
<p>I have the service set up and I can access it via ingress / minikube tunnel . So i know the service works</p>
<p>I am using Spring Boot 3, so I need to specify the <code>spring-cloud-starter-kubernetes-all</code> package. This means I need to specify a url for <code>spring.cloud.kubernetes.discovery.discovery-server-url </code></p>
<p>When i try to do the simple call to</p>
<p><code>discoveryClient.getServices()</code></p>
<p>I get the error "Connection refused <a href="https://kubernetes.docker.internal:6443/apps%22" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443/apps"</a></p>
<p>"apps" is my second service</p>
<p>It is refusing connection to the value of <code>spring.cloud.kubernetes.discovery.discovery-server-url</code></p>
<p>At the moment i have this set to <code>spring.cloud.kubernetes.discovery.discovery-server-url=https://kubernetes.docker.internal:6443</code></p>
<p>I am assuming this is incorrect and I need some help as to what is the correct url to set this to / or the correct place to find this. I thought this would be the internal url.</p>
| <p>You are trying to configure your discovery client with the Kubernetes API server URL, which is incorrect.
Your client application needs to be connected to <strong>Spring Cloud Kubernetes Discovery Server</strong>.
It's an independent application that will work like a <strong>proxy</strong> between your client SpringBoot apps and Kubernetes.
You can find its images here: <a href="https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags" rel="nofollow noreferrer">https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags</a>
And it should be deployed to Kubernetes via yaml file.</p>
<p>Then you can configure
<code>spring.cloud.kubernetes.discovery.discovery-server-url</code>
with this discovery server URL. That URL will most likely come from a Kubernetes service that you will create for the discovery server application.</p>
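<p>For example (hypothetical values; the service name, namespace, and port depend entirely on how you deploy the discovery server and its Service):</p>
<pre><code>spring.cloud.kubernetes.discovery.discovery-server-url=http://spring-cloud-kubernetes-discoveryserver.default.svc.cluster.local:80
</code></pre>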
<p>Please, find the full deployment YAML and the related documentation here:
<a href="https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3" rel="nofollow noreferrer">https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3</a></p>
<p>Please, let us know how that goes</p>
|
<p>I have a kubernetes cluster with a node pool. I enabled autoscaling. I defined min & max values of nodes in node pool options.</p>
<p>I observed that I need those nodes during working hours only. During non working hours nodes are mostly idle. I want to save cost by shutting down nodes if they are idle for say - 30 mins.</p>
<p>Is there any option on node pool to configure node based on time period.</p>
<p>I know I can set minimum node to 0 but in that case I have to wait for new node to spin.</p>
<p>Atul Sureka</p>
| <p>There is no default option, however you can write a <strong>Cloud Function</strong> and trigger it with <strong>Cloud Scheduler</strong> to scale down the <strong>GKE nodes</strong> on a <strong>time</strong> basis. I had the same requirement, so I wrote a <strong>Cloud Function</strong> script.</p>
<p>I tried two methods:</p>
<p><strong>Method 1</strong>: Cloud Scheduler publishes a message to <strong>Pub/Sub</strong> --> which is processed by the <strong>Cloud Function</strong> --> the Cloud Function scales the <strong>GKE nodes</strong></p>
<p><strong>Method 2</strong>: Cloud Scheduler sends an HTTP request to the <strong>Cloud Function</strong> --> the Cloud Function scales the <strong>GKE nodes based on the payload</strong> (an example payload is shown after the code)</p>
<pre><code>/* HTTP function get data from event and resize the GKE cluster pools based on data */
const container = require("@google-cloud/container");
const client = new container.v1.ClusterManagerClient();
exports.helloHTTP = async (req, res) => {
console.log(`Request Body`, req.body);
const request = {
projectId: req.body.projectId,
zone: req.body.zone,
clusterId: req.body.cluster_id,
nodePoolId: req.body.node_pool_id,
nodeCount: req.body.node_count,
};
const result = await client.setNodePoolSize(request);
const operation = result[0];
console.log(operation);
res.status(200);
};
</code></pre>
<p><a href="https://github.com/harsh4870/cloud-function-scale-gke" rel="nofollow noreferrer">GitHub repo</a> & follow my article for more details <a href="https://medium.com/@harsh.manvar111/scale-gke-with-cloud-function-d76f20f9416e" rel="nofollow noreferrer">Medium article</a></p>
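<p>For method 2, the Cloud Scheduler job would send a JSON body matching the fields the function reads from <code>req.body</code>; a hypothetical payload scaling a pool down to zero looks like this (all values are placeholders):</p>
<pre><code>{
  "projectId": "my-project",
  "zone": "us-central1-a",
  "cluster_id": "my-cluster",
  "node_pool_id": "default-pool",
  "node_count": 0
}
</code></pre>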
|
<p>I am running airflow using postgres.</p>
<p>There was a phenomenon that the web server was slow during operation.</p>
<p>It was a problem caused by data continuing to accumulate in dag_run and log of the db table (it became faster by accessing postgres and deleting data directly).</p>
<p>Are there any airflow options to clean the db periodically?</p>
<p>If there is no such option, we will try to delete the data directly using the dag script.</p>
<p>And I think it's strange that the web server slows down because there is a lot of data. Does the web server get all the data when opening another window?</p>
| <p>You can purge old records by running:</p>
<pre class="lang-bash prettyprint-override"><code>airflow db clean [-h] --clean-before-timestamp CLEAN_BEFORE_TIMESTAMP [--dry-run] [--skip-archive] [-t TABLES] [-v] [-y]
</code></pre>
<p>(<a href="https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#clean" rel="nofollow noreferrer">cli reference</a>)</p>
<p>It is quite a common setup to include this command in a DAG that runs periodically, as in the sketch below.</p>
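<p>A minimal sketch of such a maintenance DAG (the weekly schedule and the 30-day retention window are assumptions to adapt; the date command assumes GNU date):</p>
<pre><code>from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="airflow_db_cleanup",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    # purge metadata rows (dag_run, log, ...) older than 30 days
    BashOperator(
        task_id="db_clean",
        bash_command=(
            "airflow db clean "
            "--clean-before-timestamp \"$(date -d '-30 days' '+%Y-%m-%d')\" -y"
        ),
    )
</code></pre>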
|
<p>I have deployment an application, But pod always in pending state.</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
server1 Ready control-plane 8d v1.24.9
server2 Ready worker1 8d v1.24.9
server3 Ready worker2 8d v1.24.9
server4 Ready worker3 8d v1.24.9
</code></pre>
<pre><code>$ kubectl get all -n jenkins
NAME READY STATUS RESTARTS AGE
pod/jenkins-6dc9f97c7-ttp64 0/1 Pending 0 7m42s
</code></pre>
<pre><code>$ kubectl describe pods jenkins-6dc9f97c7-ttp64 -n jenkins
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m42s default-scheduler 0/4 nodes are available: 3 node(s) had volume node affinity conflict, 4 node(s) didn't match Pod's node affinity/selector. preemption: 0/4 nodes are available: 4 Preemption is not helpful for scheduling.
</code></pre>
<p>The event history confirms a <code>FailedScheduling</code> error is the reason.</p>
<p>my <code>deployment.yml</code> has forced to assign the pod into master node.</p>
<pre><code> spec:
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
</code></pre>
<p>Since from Kubernetes version 1.20+ onwards <code>node-role.kubernetes.io/master</code> is deprecated in favor of
<code>node-role.kubernetes.io/control-plane</code> i have updated like below. However still pod showing as <code>pending</code>.</p>
<pre><code> spec:
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: node-role.kubernetes.io/control-plane
</code></pre>
<p><code>PersistentVolume.yml</code> side i have below content.</p>
<pre><code>...
.....
..........
local:
path: /ksdata/apps/nodejs/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- server1
</code></pre>
<p><strong>Label details:-</strong></p>
<pre><code>$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
server1 Ready control-plane 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
server2 Ready worker1 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server2,kubernetes.io/os=linux,node-role.kubernetes.io/worker1=worker
server3 Ready worker2 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server3,kubernetes.io/os=linux,node-role.kubernetes.io/worker2=worker
server4 Ready worker3 9d v1.24.9 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=server4,kubernetes.io/os=linux,node-role.kubernetes.io/worker3=worker
</code></pre>
<pre><code>$ kubectl describe node | egrep -i taint
Taints: key=value:NoSchedule
Taints: <none>
Taints: <none>
Taints: <none>
</code></pre>
| <p>You have <strong>4</strong> nodes in the cluster; one is the <strong>master</strong> (control-plane) node, where application pods normally don't get scheduled, so <strong>3</strong> worker <strong>nodes</strong> are left.</p>
<p>On those worker nodes, the PersistentVolume's <strong>node affinity</strong> only allows <code>server1</code>, so the pod can't be scheduled there either and stays stuck in a <strong>pending</strong> state.</p>
<p>Also check the <strong>PVC</strong>; most likely it cannot be bound.</p>
<p><strong>Update</strong></p>
<p>Remove the taint from the <strong>master</strong> or <strong>control-plane</strong> node:</p>
<pre><code>kubectl taint node server1 key=value:NoSchedule-
</code></pre>
<p><strong>nodeSelector to pin the pod to the control-plane node</strong></p>
<pre><code> spec:
nodeSelector:
kubernetes.io/hostname: "server1"
</code></pre>
<p>If the taint is still present and you don't want to remove it, add a toleration that matches it (the node's taint is <code>key=value:NoSchedule</code>); otherwise the nodeSelector alone is enough.</p>
<pre><code>tolerations:
- key: key
  value: value
  effect: NoSchedule
</code></pre>
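<p>After removing the taint (or adding a matching toleration), you can verify that the pod gets scheduled:</p>
<pre class="lang-bash prettyprint-override"><code># Confirm the taint is gone from the control-plane node
kubectl describe node server1 | grep -i taint

# Watch whether the pending Jenkins pod now lands on server1
kubectl get pods -n jenkins -o wide -w
</code></pre>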
|
<p>I am trying to use VSCode Cloud Studio plugin to deploy and debug a project in Kubernetes. When I use intellij and Cloud Studio plugin there, everything works perfect. My MongoDB is persistent with each deployment. When I use VSCode and Cloud Studio there, MongoDB is not persistent anymore. I would appreciate any tips to make it work in VSCode too.</p>
<p>When I deploy via intellij it uses the same persistent volume claim. When I deploy via VSCode it creates a new persistent volume claim everytime.</p>
<p>Here is the launch.json for VSCode:</p>
<pre><code> {
"configurations": [
{
"name": "Kubernetes: Run/Debug",
"type": "cloudcode.kubernetes",
"request": "launch",
"skaffoldConfig": "${workspaceFolder}\\skaffold.yaml",
"watch": false,
"cleanUp": false,
"portForward": true,
"imageRegistry": "XYZ",
"debug": [
{
"image": "XYZ",
"containerName": "XYZ",
"sourceFileMap": {
"${workspaceFolder}": "/root/"
}
}
]
}
]
}
</code></pre>
<p>Here is the workspace.xml from intellij:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ChangeListManager">
<list default="true" id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" />
<option name="SHOW_DIALOG" value="false" />
<option name="HIGHLIGHT_CONFLICTS" value="true" />
<option name="HIGHLIGHT_NON_ACTIVE_CHANGELIST" value="false" />
<option name="LAST_RESOLUTION" value="IGNORE" />
</component>
<component name="Git.Settings">
<option name="RECENT_GIT_ROOT_PATH" value="$PROJECT_DIR$" />
</component>
<component name="MarkdownSettingsMigration">
<option name="stateVersion" value="1" />
</component>
<component name="ProjectId" id="2KV2OUqPUEf43q5Aj0UCGkKKm10" />
<component name="ProjectViewState">
<option name="hideEmptyMiddlePackages" value="true" />
<option name="showLibraryContents" value="true" />
</component>
<component name="PropertiesComponent">
<property name="RunOnceActivity.OpenProjectViewOnStart" value="true" />
<property name="RunOnceActivity.ShowReadmeOnStart" value="true" />
<property name="WebServerToolWindowFactoryState" value="false" />
<property name="com.google.cloudcode.ide_session_index" value="20230118_0001" />
<property name="last_opened_file_path" value="$PROJECT_DIR$" />
<property name="nodejs_package_manager_path" value="npm" />
<property name="settings.editor.selected.configurable" value="preferences.pluginManager" />
<property name="ts.external.directory.path" value="C:\Program Files\JetBrains\IntelliJ IDEA 2021.3.2\plugins\JavaScriptLanguage\jsLanguageServicesImpl\external" />
</component>
<component name="RunDashboard">
<option name="excludedTypes">
<set>
<option value="gcp-app-engine-local-run" />
</set>
</option>
</component>
<component name="RunManager">
<configuration name="Develop on Kubernetes" type="google-container-tools-skaffold-run-config" factoryName="google-container-tools-skaffold-run-config-dev" show_console_on_std_err="false" show_console_on_std_out="false">
<option name="allowRunningInParallel" value="false" />
<option name="buildEnvironment" value="Local" />
<option name="cleanupDeployments" value="false" />
<option name="deployToCurrentContext" value="true" />
<option name="deployToMinikube" value="false" />
<option name="envVariables" />
<option name="imageRepositoryOverride" />
<option name="kubernetesContext" />
<option name="mappings">
<list />
</option>
<option name="moduleDeploymentType" value="DEPLOY_MODULE_SUBSET" />
<option name="projectPathOnTarget" />
<option name="resourceDeletionTimeoutMins" value="2" />
<option name="selectedOptions">
<list />
</option>
<option name="skaffoldConfigurationFilePath" value="$PROJECT_DIR$/skaffold.yaml" />
<option name="skaffoldModules">
<list>
<option value="XYZ" />
</list>
</option>
<option name="skaffoldNamespace" />
<option name="skaffoldProfile" />
<option name="skaffoldWatchMode" value="ON_DEMAND" />
<option name="statusCheck" value="true" />
<option name="verbosity" value="WARN" />
<method v="2" />
</configuration>
</component>
<component name="SpellCheckerSettings" RuntimeDictionaries="0" Folders="0" CustomDictionaries="0" DefaultDictionary="application-level" UseSingleDictionary="true" transferred="true" />
<component name="TaskManager">
<task active="true" id="Default" summary="Default task">
<changelist id="b5a077d4-323a-4042-8c4a-3bdd2d997e47" name="Changes" comment="" />
<created>1674045398429</created>
<option name="number" value="Default" />
<option name="presentableId" value="Default" />
<updated>1674045398429</updated>
<workItem from="1674045401219" duration="2543000" />
</task>
<servers />
</component>
<component name="TypeScriptGeneratedFilesManager">
<option name="version" value="3" />
</component>
</project>
</code></pre>
<p>All other files are the same for the project of course.</p>
| <p>It looks like the Cloud Code for IntelliJ run configuration restricts the deployment to the XYZ module (the <code>skaffoldModules</code> option in the workspace.xml above), while the Cloud Code for VS Code configuration has no such restriction, so it deploys the full skaffold config and re-creates the persistent volume claim each time.</p>
|
<p>I am trying to add labels to the nodes using helm chart, however getting error while deploying.</p>
<p>yaml template</p>
<pre><code>
apiVersion: v1
kind: Node
metadata:
name: {{ index (lookup "v1" "Node" "" "").items 0 "metadata" "name" }}
labels:
content-strange: "true"
name: {{ index (lookup "v1" "Node" "" "").items 1 "metadata" "name" }}
labels:
content-strange: "true"
name: {{ index (lookup "v1" "Node" "" "").items 2 "metadata" "name" }}
labels:
content-strange: "true"
</code></pre>
<p>Error</p>
<pre><code>helm install famous famous.1.1.tgz -n famous-ns1
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: Node "10.x.x.x" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "famous"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "famous-ns1"
</code></pre>
| <p>You can't use Helm to modify existing objects. Helm works by running its templating engine to construct complete Kubernetes manifests, and then submits them to the cluster. This process assumes that an object is wholly owned by Helm, that these objects don't already exist, and that nothing other than <code>helm upgrade</code> will modify them.</p>
<p>The error you're getting here is in fact because the Node objects already exist; Kubernetes creates them when the actual nodes (physical machines, cloud instances, VMs) get created and are joined to the cluster. You can't modify these using Helm.</p>
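<p>If the goal is simply to have the label on the nodes, the usual route is to apply it directly with kubectl (the node names below are placeholders) rather than templating Node objects in a chart:</p>
<pre class="lang-bash prettyprint-override"><code># Label the nodes directly; list as many node names as you need
kubectl label nodes node-1 node-2 node-3 content-strange=true

# Verify the label was applied
kubectl get nodes -L content-strange
</code></pre>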
|
<p>OAUTH2 is used for authentication and the OAUTH2 proxy is deployed in Kubernetes. When a request is received by the NGINX Ingress controller, it always routes the traffic to OAUTH proxy. The requirement is when the request contains a specific header (For example: abc) then those requests should be routed directly to the backend. Those shouldn't be routed to OAUTH proxy. Can this be done using some sort of an annotation in NGINX Ingress controller? Can we by pass those traffic going to OAUTH2?</p>
| <p>You may want to have a look at <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary</a></p>
<p>Let's say you have a normal Ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-backend
spec:
ingressClassName: nginx
rules:
- host: XXX
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
</code></pre>
<p>Set the header name and value for your desired backend on a second Ingress, with canary enabled.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-backend-header
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: sample-header
nginx.ingress.kubernetes.io/canary-by-header-value: abc
spec:
ingressClassName: nginx
rules:
- host: XXX
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend-with-header
port:
number: 80
</code></pre>
<p>Now, every request with <code>sample-header: abc</code> routes to the second ingress/service. Any other value, e.g. <code>sample-header: test</code>, will route to the first ingress/service.</p>
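<p>You can verify the routing with curl (using the placeholder host <code>XXX</code> from the examples above):</p>
<pre class="lang-bash prettyprint-override"><code># Matches the canary header value, so it is routed to backend-with-header
curl -H "sample-header: abc" http://XXX/

# Any other value, or no header at all, is routed to the default backend
curl -H "sample-header: test" http://XXX/
curl http://XXX/
</code></pre>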
|
<p>Actually, I use kubernetes service accounts mostly with NodeJS, and this works fine, but I have this one service made in Go and I can't seem to make it work with service accounts (I know that the service account is correctly configured because I tested it with a pod).</p>
<p>I'm using this lib <a href="https://github.com/aws/aws-sdk-go" rel="nofollow noreferrer">https://github.com/aws/aws-sdk-go</a></p>
<p>Up till now I tried this:</p>
<pre class="lang-golang prettyprint-override"><code> sess := session.Must(session.NewSession())
creds := stscreds.NewCredentials(sess, os.Getenv("AWS_ROLE_ARN"))
svc := s3.New(sess, &aws.Config{Credentials: creds})
</code></pre>
<p>And also this (just in case):</p>
<pre class="lang-golang prettyprint-override"><code> region := os.Getenv("AMAZON_REGION")
sess := session.Must(session.NewSession(&aws.Config{Region: &region}))
svc := s3.New(sess)
</code></pre>
<p>for the first case I got the following error:</p>
<pre><code>AccessDenied: User: arn:aws:sts::xxxxxxxx:assumed-role/staging-worker-node/i-0xxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxx:role/EKSServiceAccount-app
</code></pre>
<p>and for the second case, I got a generic permission error.</p>
<p>I read the docs and tried a few things more (that may not be relevant here), but I can't see to make it work, maybe because I don't have much experience with golang.</p>
| <p>There are a few things you can try to get your Go service to work with service accounts on Kubernetes:</p>
<p>Verify that your Go service is properly configured to use the Kubernetes service account. This can be done by checking that the service account is correctly mounted as a volume in the pod definition and that the service is able to read the credentials from the volume.</p>
<p>Make sure that the AWS SDK for Go you are using (<a href="https://github.com/aws/aws-sdk-go" rel="nofollow noreferrer">https://github.com/aws/aws-sdk-go</a>) is configured to use the correct credentials. The SDK supports several methods for providing credentials, including environment variables, shared credentials file, and IAM roles.</p>
<p>You can try using the <code>k8s.io/client-go</code> library instead of the <code>AWS SDK</code> for Go, this will help you to use the Kubernetes service account to authenticate with the Kubernetes API and obtain the required credentials for the AWS SDK.</p>
<p>If you are using the Kubernetes service account to authenticate with an external service such as AWS, you may also need to configure an IAM role that allows the service account to access the necessary resources.</p>
<p>Double check that your Go service is correctly using the Kubernetes service account token and is passing it along as an authentication token to the AWS SDK.</p>
<p>You can also try to use the <code>k8s.io/client-go</code> library to get the secret and use it in your go code.</p>
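<p>If this is EKS with IAM Roles for Service Accounts (IRSA), a quick sanity check is to confirm that the pod actually receives web-identity credentials instead of falling back to the node role (which is what the <code>AssumeRole</code> error above suggests). The service account, namespace and deployment names below are placeholders:</p>
<pre class="lang-bash prettyprint-override"><code># The service account should carry the IRSA role annotation
kubectl get serviceaccount my-app-sa -n my-namespace -o yaml | grep eks.amazonaws.com/role-arn

# The identity webhook should have injected these variables (and the token file) into the pod
kubectl exec -n my-namespace deploy/my-app -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'
</code></pre>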
|
<p>Why is there a pod limit in Kubernetes?</p>
<p>It makes intuitive sense to me that there'll be some limitation, but I'm curious to know the specific botteleneck which warrants the limit.</p>
| <p>The default limit of 110 pods per node is merely a compromise of Kubernetes, I think, not a technical limit.</p>
<p>Some vendors have additional limitations.</p>
<p>For example, on Azure, there's a limit on the number of IP addresses you can assign to a node. So if your Kubernetes cluster is configured to assign an IP address from an Azure VNet to each pod, the limit is 30 (See <a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node</a>).</p>
<p>On IBM Cloud, if you use IBM Block Storage for persistent volumes, they will be mounted as 'Secondary volumes' on your node, and you can only have 12 of those per node, so that's limit of 12 pods with persistent volumes. It sucks when you hit that limit when scaling up the first time :-(
On other vendors or with other storage classes, this limit is larger: <a href="https://kubernetes.io/docs/concepts/storage/storage-limits/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-limits/</a></p>
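<p>You can check the limit that actually applies to your own nodes by looking at the allocatable pod count, for example:</p>
<pre class="lang-bash prettyprint-override"><code># Print each node's allocatable pod count (110 by default with a vanilla kubelet)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.pods}{"\n"}{end}'
</code></pre>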
|
<p>when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? The example I am using is dagster. When installing with:</p>
<p><code>helm install dagster dagster/dagster --namespace dagster --create-namespace</code></p>
<p>everything starts up fine and secrets are created. When updating the image and tag and performing an upgrade with:</p>
<p><code>helm upgrade -f charts/dagster-user-deployments/values.yaml dagster ./charts/dagster-user-deployments -n dagster</code></p>
<p>the image is upgraded, but all secrets are deleted. Why would/ could this happen?</p>
<p>After running the upgrade command, I expect secrets to still be in place, and the new image to be pulled and run.</p>
| <p><code>when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this?</code></p>
<p>This is currently how Helm works; here's the <a href="https://github.com/helm/helm-www/issues/1259" rel="nofollow noreferrer">issue opened</a> for discussion, where several workarounds are provided as well.</p>
|
<p>I am new to GKE Can please help to understand What does this statement mean .</p>
<blockquote>
<p>You cannot manually configure LoadBalancer features using the Google Cloud SDK or the Google Cloud console. You must use BackendConfig or FrontendConfig Kubernetes resources.</p>
</blockquote>
| <p>In simple words, if you are running <strong>GKE</strong> with the <strong>GCE</strong> <strong>ingress controller</strong> or a <strong>serviceType</strong> <strong>LoadBalancer</strong> YAML, you have to use <strong>BackendConfig</strong> or <strong>FrontendConfig</strong> to configure the Load Balancer.</p>
<p><strong>BackendConfig</strong> or <strong>FrontendConfig</strong> is used to configure the <strong>Load Balancer</strong>. Suppose you want an <strong>HTTP > HTTPS</strong> <strong>redirect</strong> or an <strong>SSL policy</strong>: it's not suggested to <strong>set/configure</strong> those from the <strong>GCP console</strong>; instead, use the <strong>YAML</strong> config.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#associating_backendconfig_with_your_ingress" rel="nofollow noreferrer">Backend Doc ref</a> & <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#configuring_ingress_features_through_frontendconfig_parameters" rel="nofollow noreferrer">Frontend Doc ref</a></p>
|
<p>I know why use StatefulSet for stateful applications. (e.g. DB or something)
In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!"
However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever.</p>
<p>Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard.</p>
<p>The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app.
Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps?</p>
<p>I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong.</p>
<p>I'm really curious about this question.</p>
<p>P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs.</p>
<p>I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions.</p>
<p>I also see the k8s docs about StatefulSet either.</p>
| <p>In contrast to a Kubernetes Deployment, where pods are easily replaceable, each pod in a StatefulSet is given a name and treated individually. Pods with distinct identities are necessary for stateful applications.</p>
<p>This implies that if any pod perishes, it will be apparent right away. StatefulSets act as controllers but do not generate ReplicaSets; rather, they generate pods with distinctive names that follow a predefined pattern. The ordinal index appears in the DNS name of a pod. A distinct persistent volume claim (PVC) is created for each pod, and each replica in a StatefulSet has its own state.</p>
<p>For instance, a StatefulSet with four replicas generates four pods, each of which has its own volume, or four PVCs. StatefulSets require a headless service to return the IPs of the associated pods and enable direct interaction with them. The headless service has a DNS name but no cluster IP and has to be created separately. The major components of a StatefulSet are the set itself, the persistent volume and the headless service.</p>
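<p>A minimal sketch of such a headless service (the name, selector and port are placeholders; the key part is <code>clusterIP: None</code>):</p>
<pre class="lang-bash prettyprint-override"><code>cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-statefulset-headless
spec:
  clusterIP: None          # headless: no virtual IP, DNS returns the individual pod IPs
  selector:
    app: my-stateful-app
  ports:
  - port: 3306
EOF
</code></pre>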
<p>That all being said, people do deploy stateful applications with Deployments; usually they mount a RWX PV into the pods so all "frontends" share the same backend. This is quite common in CNCF projects.</p>
|
<p>Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration.</p>
<p>My deployment of single instance MySQL DB instance is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-db-testing-service
namespace: testing
spec:
type: ClusterIP
ports:
- port: 3306
#targetPort: 3306
selector:
app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-db-testing
namespace: testing
spec:
selector:
matchLabels:
app: mysql-db-testing
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-db-testing
spec:
containers:
- name: mysql-db-container-testing
image: mysql:8.0.31
env:
- name: MYSQL_ROOT_PASSWORD
value: test12345
ports:
- containerPort: 3306
name: mysql-port
volumeMounts:
- mountPath: "/var/lib/mysql"
name: mysql-persistent-storage
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: azure-managed-disk-pvc-mysql-testing
nodeSelector:
env: preprod
</code></pre>
<p>As I've mentioned I can get to the container via CLI:</p>
<p><a href="https://i.stack.imgur.com/TDQtX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TDQtX.png" alt="enter image description here" /></a></p>
<p>Console output regarding the working pod with db looks like:
<a href="https://i.stack.imgur.com/Cz7vj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cz7vj.png" alt="enter image description here" /></a></p>
<p>Console output regarding the service:</p>
<p><a href="https://i.stack.imgur.com/BIBad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIBad.png" alt="v" /></a></p>
<p>Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help.</p>
| <p>You are using a ClusterIP service (line 7 of your manifest). A Kubernetes ClusterIP service is not meant to let you access a pod from outside the cluster; it just provides a stable, unchanging IP for other internal services to reach your pod.
You should instead use type LoadBalancer.
Cf <a href="https://stackoverflow.com/a/48281728/8398523">https://stackoverflow.com/a/48281728/8398523</a> for the differences</p>
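<p>For a quick test you could also just switch the existing service to a LoadBalancer and wait for an external IP (on AKS you can additionally use the internal load balancer annotation if it should only get a private address); a sketch:</p>
<pre class="lang-bash prettyprint-override"><code># Change the existing ClusterIP service into a LoadBalancer and watch for the external IP
kubectl patch service mysql-db-testing-service -n testing -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get service mysql-db-testing-service -n testing -w
</code></pre>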
|
<p>I know why use StatefulSet for stateful applications. (e.g. DB or something)
In most cases, I can see like "You want to deploy stateful app to k8s? Use StatefulSet!"
However, I couldn't see like "You want to deploy stateless app to k8s? Then, DO NOT USE StatefulSet" ever.</p>
<p>Even nobody says "I don't recommend to use StatefulSet for stateless app", many stateless apps is deployed through Deployment, like it is the standard.</p>
<p>The StatefulSet has clear pros for stateful app, but I think Deployment doesn't for stateless app.
Is there any pros in Deployment for stateless apps? Or is there any clear cons in StatefulSet for stateless apps?</p>
<p>I supposed that StatefulSet cannot use LoadBalancer Service or StatefulSet has penalty to use HPA, but all these are wrong.</p>
<p>I'm really curious about this question.</p>
<p>P.S. Precondition is the stateless app also uses the PV, but not persists stateful data, for example logs.</p>
<p>I googled "When not to use StatefulSet", "when Deployment is better than StatefulSet", "Why Deployment is used for stateless apps", or something more questions.</p>
<p>I also see the k8s docs about StatefulSet either.</p>
| <h2>Different Priorities</h2>
<p>What happens when a Node becomes unreachable in a cluster?</p>
<h2>Deployment - Stateless apps</h2>
<p>You want to maximize availability. As soon as Kubernetes detects that there are fewer than the desired number of replicas running in your cluster, the controllers spawn new replicas of it. Since these apps are stateless, it is very easy to do for the Kubernetes controllers.</p>
<h2>StatefulSet - Stateful apps</h2>
<p>You want to maximize availability - but you must also ensure <strong>data consistency</strong> (the state). To ensure <em>data consistency</em>, each replica has its own unique ID, and there are never multiple replicas with that ID. This means that you cannot spin up a new replica unless you are sure that the replica on the unreachable Node is terminated (e.g. has stopped using the Persistent Volume).</p>
<h3>Conclusion</h3>
<p>Both Deployment and StatefulSet try to maximize the availability - but StatefulSet cannot sacrifice <strong>data consistency</strong> (e.g. your state), so it cannot act as fast as Deployment (stateless) apps can.</p>
<p>These priorities do not only come into play when a Node becomes unreachable, but at all times, e.g. also during upgrades and deployments.</p>
|
<p>I have a production cluster is currently running on K8s version <code>1.19.9</code>, where the kube-scheduler and kube-controller-manager failed to have leader elections. The leader is able to acquire the first lease, however it then cannot renew/reacquire the lease, this has caused other pods to constantly in the loop of electing leaders as none of them could stay on long enough to process anything/stay on long enough to do anything meaningful and they time out, where another pod will take the new lease; this happens from node to node. Here are the logs:</p>
<pre><code>E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded
E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty
I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost
</code></pre>
<p>Detailed Docker logs:</p>
<pre><code>Flag --port has been deprecated, see --secure-port instead.
I1201 22:14:10.374271 1 serving.go:331] Generated self-signed cert in-memory
I1201 22:14:10.735495 1 controllermanager.go:175] Version: v1.19.9+vmware.1
I1201 22:14:10.736289 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I1201 22:14:10.736302 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1201 22:14:10.736684 1 secure_serving.go:197] Serving securely on 0.0.0.0:10257
I1201 22:14:10.736747 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
I1201 22:14:10.736868 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1201 22:14:20.737137 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded
E1201 22:14:32.803658 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded
E1201 22:14:44.842075 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: Get "https://[IP address]:[Port]/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s": context deadline exceeded
E1201 22:15:13.386932 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: context deadline exceeded
I1201 22:15:44.818571 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I1201 22:15:44.818755 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader"
I1201 22:15:44.818790 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="master001_1d360610-1111-xxxx-aaaa-9999 became leader"
E1201 22:15:54.818902 1 request.go:1001] Unexpected error when reading response body: context deadline exceeded
E1201 22:15:54.819079 1 leaderelection.go:361] Failed to update lock: resource name may not be empty
I1201 22:15:54.819137 1 leaderelection.go:278] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
F1201 22:15:54.819176 1 controllermanager.go:293] leaderelection lost
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000fb20d0, 0x4c, 0xc6)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x6a57fa0, 0xc000000003, 0x0, 0x0, 0xc000472070, 0x68d5705, 0x14, 0x125, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
</code></pre>
<p>My duct tape recovery method was to shutdown the other candidates and disable leader elections <code>--leader-elect=false</code>. We manually set a leader and let it stay on for a while, then reactivated leader elections after. This has seemed to work as intended again, the leases are renewing normally after.</p>
<p>Could it be possible that the api-server may be too overwhelmed to expend any resources(?), because the elections have failed due to timeout? Was wondering if anyone has ever encountered such an issue.</p>
| <p>@<strong>janeosaka</strong>, you are right. This problem occurs when you have a <code>1) resource crunch</code> or a <code>2) network issue</code>.</p>
<p>It seems the leader election API call is timing out because the Kube API Server has a resource crunch, which has increased the latency of API calls.</p>
<p><strong>1) Resource Crunch :</strong> (<strong>increase the CPU and memory of the nodes</strong>)</p>
<p>It seems that it is the expected behavior. When the leader election fails the controller is not able to renew the lease and per design the controller is restarted to ensure that a single controller is active at a time.</p>
<p>LeaseDuration and RenewDeadline (RenewDeadline is the duration that the acting master will retry), are <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/manager/manager.go#L177-L186" rel="nofollow noreferrer">configurable in controller-runtime</a>.</p>
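<p>For the kube-controller-manager and kube-scheduler themselves, the equivalent knobs are command-line flags. As a sketch (the values are only examples, and the manifest path assumes a kubeadm-style control plane running static pods):</p>
<pre class="lang-bash prettyprint-override"><code># Relevant flags in /etc/kubernetes/manifests/kube-controller-manager.yaml (and kube-scheduler.yaml):
#   --leader-elect=true
#   --leader-elect-lease-duration=60s   # default 15s
#   --leader-elect-renew-deadline=40s   # default 10s
#   --leader-elect-retry-period=5s      # default 2s
grep leader-elect /etc/kubernetes/manifests/kube-controller-manager.yaml
</code></pre>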
<p>Another approach you may consider is to leverage <a href="https://kubernetes.io/docs/concepts/cluster-administration/flow-control/" rel="nofollow noreferrer">API Priority & Fairness</a> to increase the chances of success of the calls made to the API by your controller if it is not at the origin of the API overload.</p>
<p><strong>2) Network Issue :</strong>
If it is a network issue: a lost leader election is a symptom that the host has network problems, not the cause.</p>
<pre><code>Check the issue may resolve after restarting the SDN pod
</code></pre>
<p><code>"sdn-controller"</code> and <code>"sdn"</code> are very different things. If restarting an <em>sdn</em> pod fixed things, then the <em>sdn-controller</em> the error you noticed was not the actual problem.</p>
|
<p>I am using kubernetes with docker desktop on MacOS Monterey.
I have problem with starting kubernetes, because 1 year passed and my kubernetes certificates are invalid.</p>
<p>How can I renew them ?</p>
<p>Error message:</p>
<pre><code>Error: Kubernetes cluster unreachable: Get "https://kubernetes.docker.internal:6443/version": EOF
</code></pre>
<p>I tried to install <code>kubeadm</code> but I think it is only suitable if I use <code>minikube</code>.</p>
<p>Edit:
I am using Mac with M1 chip.</p>
| <p>You will need to create a new set of <a href="https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates" rel="nofollow noreferrer">certificates</a> and keys in order to update the certificates used by Docker Desktop for MacOS. After that, you will need to add the new certificates and keys to the Kubernetes configuration file. Create a certificate signing request (CSR) first, then use the CSR to create new certificates and keys. The Kubernetes configuration file needs to be updated to point to the new certificates and keys after they have been obtained in the appropriate directory structure. Finally, in order for the brand-new certificates and keys to take effect, you will need to restart your Kubernetes cluster.</p>
<p>If you are using the minikube command-line tool: the first step in updating the certificates is to remove the existing cluster with the <code>minikube delete</code> command. After the cluster has been deleted, the <code>minikube start</code> command can be used to create a new cluster with freshly generated <a href="https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/" rel="nofollow noreferrer">certificates</a>, and you can then save the updated cluster configuration.</p>
<p>Also check the Kubernetes version: if you are using an older version, upgrade it to the latest. The Kubernetes version can be upgraded after a Docker Desktop update; however, when a new Kubernetes version is added to Docker Desktop, you need to reset the current cluster in order to use the newest version.</p>
|
<p>I have a requirement to store files in a PVC and attach that PVC to a Nginx ingress controller deployment so that application can access the files which are present in PVC but at the same application should not write back logs and configurations to PVC.</p>
<p>Can you please let me know how can i achieve it?</p>
<p>I Created a PVC and attached it a deployment but it is writing logs and configurations to it.</p>
| <p>You can use <a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">GCS Fuse</a> and <strong>store</strong> the files in a bucket directly; that way it would be a little easier to manage, if that works for you.</p>
<p>However, if you want to go with your idea you have to use a <strong>ReadWriteMany</strong> volume (<a href="https://stackoverflow.com/a/57798369/5525824">read more</a>), so <strong>two</strong> <strong>PODs</strong> can attach to one <strong>PVC</strong>: one writes and the other one reads.</p>
<p>You can also use EFS or NFS file systems, <a href="https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning" rel="nofollow noreferrer">GKE with Filestore</a> from GCP, or <a href="https://min.io/" rel="nofollow noreferrer">MinIO</a> / <a href="https://www.gluster.org/" rel="nofollow noreferrer">GlusterFS</a>.</p>
<p><a href="https://stackoverflow.com/a/68586922/5525824">Ref answer glusterfs</a></p>
<blockquote>
<p>I Created a PVC and attached it a deployment but it is writing logs
and configurations to it.</p>
</blockquote>
<pre><code>volumeMounts:
- name: file
mountPath: /var/data
readOnly: true
</code></pre>
<p>You can set the <strong>mode</strong> when mounting the file or directory; setting <code>readOnly: true</code> on the volume mount makes it read-only, so the application cannot write logs or configuration back to the PVC.</p>
|
<p>I have setup the Kubernetes cluster with <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Kubespray</a>
<a href="https://i.stack.imgur.com/bLcYE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bLcYE.png" alt="enter image description here" /></a></p>
<p>Once I restart the node and check the status of the node I am getting as below</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
</code></pre>
<p><a href="https://i.stack.imgur.com/adOC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/adOC2.png" alt="enter image description here" /></a></p>
<hr />
<p>Environment:</p>
<pre class="lang-bash prettyprint-override"><code>OS : CentOS 7
Kubespray
kubelet version: 1.22.3
</code></pre>
<p>Need your help on this.</p>
<p>Regards,
Zain</p>
| <p>This worked for me. I'm using minikube.</p>
<p>When checking the minikube status by running the command <code>minikube status</code> you'll probably get something like this:</p>
<pre><code> E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got:
127.0.0.1:55900, want: 127.0.0.1:49736
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
</code></pre>
<p>To fix it, I just followed the next steps:</p>
<ol>
<li><code>minikube update-context</code></li>
<li><code>minikube start</code></li>
</ol>
|
<h3>Context</h3>
<p>I've installed minikube in github codespaces, and that works fantastic! With this setup I'm able to port forward any application running in minikube and reach it with the url generated by github codespaces.</p>
<h3>Problem</h3>
<p>I'd like to use github actions to deploy an app into the minikube cluster that runs in github codespaces.</p>
<h3>Question</h3>
<p>Is it possible, if so the how to do it?</p>
| <p>It turned out that it is possible. There are 2 ways you could solve this problem.</p>
<h3>Push based</h3>
<ol>
<li>Start GitHub codespace with minikube installed in it</li>
<li>Install and configure GitHub's <a href="https://docs.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners" rel="nofollow noreferrer">self hosted runner</a> in GitHub Codespaces</li>
<li>Configure and start GitHub's self hosted runner in GitHub Codespace</li>
<li>Run your GitHub's Actions on self hosted runners
<pre class="lang-yaml prettyprint-override"><code>jobs:
build:
runs-on:
labels:
- self-hosted
- self-hosted-runner-label
</code></pre>
</li>
</ol>
<p>I've successful implement above flow in this <a href="https://github.com/ldynia/codespace-minikube" rel="nofollow noreferrer">repository</a>.</p>
<h3>Pull based</h3>
<ol>
<li>Start GitHub Codespace with minikube installed in it</li>
<li>Install <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/" rel="nofollow noreferrer">ArgoCD</a> in minikube (see the sketch after this list)</li>
<li>Point ArgoCD towards your GitHub repository</li>
<li>Use GitHub Actions to generate new k8s manifests files</li>
</ol>
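<p>A minimal sketch of steps 2 and 3, using the install manifest from the ArgoCD docs linked above (the repository URL and path are placeholders, and the <code>argocd</code> CLI is assumed to be installed and logged in):</p>
<pre class="lang-bash prettyprint-override"><code># Install ArgoCD into the minikube cluster running inside the Codespace
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Point an ArgoCD Application at your GitHub repository
argocd app create my-app \
  --repo https://github.com/your-org/your-repo.git \
  --path k8s \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
</code></pre>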
|
<p>From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in google cloud where I can get that information? or a kubectl command to run? It happens every couple of months or so. maybe less frequently than that.</p>
| <p>It's also a good thing to check your cluster and node-pool operations.</p>
<ol>
<li>Check the cluster operation in cloud shell and run the command:</li>
</ol>
<pre><code>gcloud container operations list
</code></pre>
<ol start="2">
<li>Check the age of the nodes with the command</li>
</ol>
<pre><code>kubectl get nodes
</code></pre>
<ol start="2">
<li>Check and analyze your deployment on how it reacts to operations such as cluster upgrade, node-pool upgrade & node-pool auto-repair. You can check the cloud logging if your cluster upgrade or node-pool upgrades using queries below:</li>
</ol>
<p>Please note you have to add your cluster and node-pool name in the queries.</p>
<p>Control plane (master) upgraded:</p>
<pre><code>resource.type="gke_cluster"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal")
(protoPayload.metadata.operationType="UPGRADE_MASTER"
OR protoPayload.response.operationType="UPGRADE_MASTER")
resource.labels.cluster_name=""
</code></pre>
<p>Node-pool upgraded</p>
<pre><code>resource.type="gke_nodepool"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal")
protoPayload.metadata.operationType="UPGRADE_NODES"
resource.labels.cluster_name=""
resource.labels.nodepool_name=""
</code></pre>
|
<p>MacOS Big Sur 11.6.8
minikube version: v1.28.0</p>
<p>Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual <code>minikube service foo</code> works, <code>kubectl get ingress</code> shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results.</p>
<p>I boiled it down to the simplest replication from the tutorial at <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">kubernetes.io</a> :</p>
<pre><code># kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
# kubectl expose deployment web --type=NodePort --port=8080
# kubectl get service web (ensure it's a node port)
# minikube service web --url (test url)
# kubectl apply -f ingress_hello_world.yaml
# curl localkube.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: localkube.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
</code></pre>
<p>Manual service works:</p>
<pre><code>>minikube service web --url
http://127.0.0.1:50111
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
>curl http://127.0.0.1:50111
Hello, world!
Version: 1.0.0
Hostname: web-84fb9498c7-hnphb
</code></pre>
<p>Ingress looks good:</p>
<pre><code>>minikube addons list | grep ingress
| ingress | minikube | enabled ✅ | Kubernetes |
>kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx localkube.com 192.168.49.2 80 15m
</code></pre>
<p>ping resolves the address mapped in /etc/hosts:</p>
<pre><code>>ping localkube.com
PING localkube.com (192.168.49.2): 56 data bytes
</code></pre>
<p>I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working.</p>
| <p>The behavior you are describing is most probably because the <code>ingress and ingress-dns addons</code> are currently only supported on Linux Systems when using the Docker driver, as mentioned in the <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">Known Issues</a> section of the minikube documentation.</p>
<p>Minikube supports ingress differently on the Mac and Linux.</p>
<p>On Linux the ingress is fully supported and therefore does not need the use of minikube tunnel.</p>
<p>On Mac there is an open issue due to a networking limitation. The documentation states that the minikube ingress addon is not supported, which seems misleading if not incorrect; it's just supported differently (and not as well).</p>
<p>Please go through <a href="https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/" rel="nofollow noreferrer">Ingress DNS</a> and similar <a href="https://stackoverflow.com/questions/70961901/ingress-with-minikube-working-differently-on-mac-vs-ubuntu-when-to-set-etc-host">SO</a> for more information.</p>
|
<p>MacOS Big Sur 11.6.8
minikube version: v1.28.0</p>
<p>Following several tutorials on ingress and attempting to get it working locally. Everything appears to work: manual <code>minikube service foo</code> works, <code>kubectl get ingress</code> shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutes with the same results.</p>
<p>I boiled it down to the simplest replication from the tutorial at <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">kubernetes.io</a> :</p>
<pre><code># kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
# kubectl expose deployment web --type=NodePort --port=8080
# kubectl get service web (ensure it's a node port)
# minikube service web --url (test url)
# kubectl apply -f ingress_hello_world.yaml
# curl localkube.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: localkube.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
</code></pre>
<p>Manual service works:</p>
<pre><code>>minikube service web --url
http://127.0.0.1:50111
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
>curl http://127.0.0.1:50111
Hello, world!
Version: 1.0.0
Hostname: web-84fb9498c7-hnphb
</code></pre>
<p>Ingress looks good:</p>
<pre><code>>minikube addons list | grep ingress
| ingress | minikube | enabled ✅ | Kubernetes |
>kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx localkube.com 192.168.49.2 80 15m
</code></pre>
<p>ping resolves the address mapped in /etc/hosts:</p>
<pre><code>>ping localkube.com
PING localkube.com (192.168.49.2): 56 data bytes
</code></pre>
<p>I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working.</p>
| <p>Based on Veera's answer, I looked into the ingress issue with macOS and <code>minikube tunnel</code>. To save others the hassle, here is how I resolved the issue:</p>
<ol>
<li>ingress doesn't seem to work on macOS (the different pages say "with docker" but I had the same outcome with other drivers like hyperkit.</li>
<li>the issue seems to be IP / networking related. You can not get to the minikube IP from your local workstation. If you first run <code>minikube ssh</code> you can ping and curl the minikube IP and the domain name you mapped to that IP in /etc/hosts. However, this does not help trying to access the service from a browser.</li>
<li>the solution is to map the domain names to 127.0.0.1 in /etc/hosts (instead of the ingress assigned IP) and use ingress components to control the domain-name -> service mappings as before...</li>
<li>then starting a tunnel with <code>sudo minikube tunnel</code> will keep a base tunnel open, and create tunneling for any existing or new ingress components. This combined with the ingress rules will mimic host header style connecting to any domain resolving to the local host.</li>
</ol>
<p>Here is a full example of a working solution on mac. Dump this to a file named ingress_hello_world.yaml and follow the commented instructions to achieve a simple ingress solution that routes 2 domains to 2 different services (note this will work with pretty much any internal service, and can be a ClusterIP instead of NodePort):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
ingressClassName: nginx
rules:
- host: test1.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
- host: test2.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web2
port:
number: 8080
# Instructions:
# start minikube if not already
# >minikube start --vm-driver=docker
#
# enable ingress if not already
# >minikube addons enable ingress
# >minikube addons list | grep "ingress "
# | ingress | minikube | enabled ✅ | Kubernetes |
#
# >kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
# deployment.apps/web created
#
# >kubectl expose deployment web --type=NodePort --port=8080
# service/web exposed
#
# >kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0
# deployment.apps/web2 created
#
# >kubectl expose deployment web2 --port=8080 --type=NodePort
# service/web2 exposed
#
# >kubectl get service | grep web
# web NodePort 10.101.19.188 <none> 8080:31631/TCP 21m
# web2 NodePort 10.102.52.139 <none> 8080:30590/TCP 40s
#
# >minikube service web --url
# http://127.0.0.1:51813
# ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
#
# ------ in another console ------
# >curl http://127.0.0.1:51813
# ^---- this must match the port from the output above
# Hello, world!
# Version: 1.0.0 <---- will show version 2.0.0 for web2
# Hostname: web-84fb9498c7-7bjtg
# --------------------------------
# ctrl+c to kill tunnel in original tab, repeat with web2 if desired
#
# ------ In another console ------
# >sudo minikube tunnel
# ✅ Tunnel successfully started
#
# (leave open, will show the following when you start an ingress component)
# Starting tunnel for service example-ingress.
# --------------------------------
#
# >kubectl apply -f ingress_hello_world.yaml
# ingress.networking.k8s.io/example-ingress created
#
# >kubectl get ingress example-ingress --watch
# NAME CLASS HOSTS ADDRESS PORTS AGE
# example-ingress nginx test1.com,test2.com 80 15s
# example-ingress nginx test1.com,test2.com 192.168.49.2 80 29s
# wait for this to be populated ----^
#
# >cat /etc/hosts | grep test
# 127.0.0.1 test1.com
# 127.0.0.1 test2.com
# ^---- set this to localhost ip
#
# >ping test1.com
# PING test1.com (127.0.0.1): 56 data bytes
#
# >curl test1.com
# Hello, world!
# Version: 1.0.0
# Hostname: web-84fb9498c7-w6bkc
#
# >curl test2.com
# Hello, world!
# Version: 2.0.0
# Hostname: web2-7df4dcf77b-66g5b
# ------- Cleanup:
# stop tunnel
#
# >kubectl delete -f ingress_hello_world.yaml
# ingress.networking.k8s.io "example-ingress" deleted
#
# >kubectl delete service web
# service "web" deleted
#
# >kubectl delete service web2
# service "web2" deleted
#
# >kubectl delete deployment web
# deployment.apps "web" deleted
#
# >kubectl delete deployment web2
# deployment.apps "web2" deleted
</code></pre>
|
<p>Pod A is on ClusterIP service type, so incoming requests from external resources are not allowed.
Pod A executes outgoing requests to 3rd party services (Such as Google APIs).
And I want to specify the IP address that this request is coming from on google for security reasons.</p>
<p>Is there a way to find the IP address this pod uses for outgoing HTTP requests?</p>
| <p>If it is a public cluster, where each node has a public IP address, the outgoing IP will be the address of the node the pod is running on.
If it is a private cluster, you can deploy a NAT gateway for all the nodes and specify static IP addresses.</p>
<p>you can use this terraform module for a private cluster:
<a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster" rel="nofollow noreferrer">https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster</a></p>
<p>Plus a nat gateway from here
<a href="https://cloud.google.com/nat/docs/gke-example#terraform" rel="nofollow noreferrer">https://cloud.google.com/nat/docs/gke-example#terraform</a></p>
|
<p>I got problem with connecting my k3s cluster to GitLab Docker Registry.</p>
<p>On cluster I got created secret in default namespace like this</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=https://gitlab.domain.tld:5050 --docker-username=USERNAME --docker-email=EMAIL --docker-password=TOKEN
</code></pre>
<p>Then in Deployment config I got this secret included, my config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app.kubernetes.io/name: "app"
app.kubernetes.io/version: "1.0"
namespace: default
spec:
template:
metadata:
labels:
app: app
spec:
imagePullSecrets:
- name: regcred
containers:
- image: gitlab.domain.tld:5050/group/appproject:1.0
name: app
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>But the created pod is still unable to pull this image.
There is still error message of:</p>
<pre><code>failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
</code></pre>
<p>Can you help me, where the error may be?
If I try connect to this GitLab registry via secrets above on local docker, it working fine, docker login is right, also a pulling of this image.</p>
<p>Thanks</p>
| <p>To pull from a private container registry on Gitlab you must first create a <code>Deploy Token</code> similar to how the pipeline or similar "service" would access it. Go to the repository then go to <code>Settings</code> -> <code>Repository</code> -> <code>Deploy Tokens</code></p>
<p>Give the deploy token a <code>name</code> and a <code>username</code> (it says optional, but we'll be able to use this custom username with the token) and make sure it has <code>read_registry</code> access. That is all it needs to pull from the registry. If you later need to push, you would need <code>write_registry</code>. Once you click <code>create deploy token</code> it will show you the token; be sure to copy it as you won't see it again.</p>
<p>Now just recreate your secret in your k8s cluster.</p>
<pre><code> kubectl create secret docker-registry regcred --docker-server=<private gitlab registry> --docker-username=<deploy token username> --docker-password=<deploy token>
</code></pre>
<p>Make sure to apply the secret to the same namespace as your deployment that is pulling the image.</p>
<p>[See Docs] <a href="https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/deploy_tokens/#gitlab-deploy-token</a></p>
|
<p>Hi I've deployed single MySQL db instance in Azure via the YAML file in Azure Kubernetes service. I can get into the container via CLI when I'm inside my cluster. I would like to connect with db instance via external client like MySQL Workbench or Sqlelectron or others, outside the cluster. As I found out it's possible via correctly exposing DB instance by Service configuration.</p>
<p>My deployment of single instance MySQL DB instance is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-db-testing-service
namespace: testing
spec:
type: ClusterIP
ports:
- port: 3306
#targetPort: 3306
selector:
app: mysql-db-testing
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-db-testing
namespace: testing
spec:
selector:
matchLabels:
app: mysql-db-testing
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-db-testing
spec:
containers:
- name: mysql-db-container-testing
image: mysql:8.0.31
env:
- name: MYSQL_ROOT_PASSWORD
value: test12345
ports:
- containerPort: 3306
name: mysql-port
volumeMounts:
- mountPath: "/var/lib/mysql"
name: mysql-persistent-storage
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: azure-managed-disk-pvc-mysql-testing
nodeSelector:
env: preprod
</code></pre>
<p>As I've mentioned I can get to the container via CLI:</p>
<p><a href="https://i.stack.imgur.com/TDQtX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TDQtX.png" alt="enter image description here" /></a></p>
<p>Console output regarding the working pod with db looks like:
<a href="https://i.stack.imgur.com/Cz7vj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cz7vj.png" alt="enter image description here" /></a></p>
<p>Console output regarding the service:</p>
<p><a href="https://i.stack.imgur.com/BIBad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIBad.png" alt="v" /></a></p>
<p>Is there something missing in my deployment YAML file or maybe there are missing some fields? How can I expose db to the outside world? I would be grateful for help.</p>
| <p>You have used <strong>type: ClusterIP</strong>, so it won't expose MySQL outside the cluster; your microservices running in the cluster will be able to access it, but you cannot use it externally.</p>
<p>To expose the service we generally have to use <strong>type: LoadBalancer</strong>. It will directly expose your <strong>MySQL</strong> service to the internet, and from your <strong>local</strong> <strong>workbench</strong> you can connect to the <strong>DB</strong> running on K8s.</p>
<p>If you really don't want to expose the MySQL service directly to the internet, you can deploy <strong>adminer</strong> instead.</p>
<p>So traffic will flow like</p>
<pre><code>internet > adminer > internal communication > MySQL service > MySQL POD
</code></pre>
<p>YAML file to deploy adminer and get the UI directly in the browser; it will ask for the <strong>MySQL</strong> DB <strong>username</strong>, <strong>password</strong> and host (mysql-db-testing-service.testing.svc.cluster.local) to connect.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: adminer
labels:
app: adminer
spec:
selector:
matchLabels:
app: adminer
template:
metadata:
labels:
app: adminer
spec:
containers:
- name: adminer
image: adminer:4.6.3
ports:
- containerPort: 8080
env:
- name: ADMINER_DESIGN
value: "pappu687"
---
apiVersion: v1
kind: Service
metadata:
name: adminer-svc
spec:
  type: LoadBalancer   # or ClusterIP to keep it internal to the cluster only
selector:
app: adminer
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>Port-forward for local access or use service <code>type: LoadBalancer</code></p>
<pre><code>kubectl port-forward svc/adminer-svc 8080:8080
</code></pre>
<p>Open <code>localhost:8080</code> in browser</p>
|
<p>It may seem a silly question, but how do I expand the "+ 1 more..." from the endpoints list?</p>
<p>In particular, I want to see all the endpoints of a service, but the following command hides some of them behind the "+ 1 more...":</p>
<pre><code> kubectl describe service <name>
</code></pre>
<p>I tried this:</p>
<pre><code> kubectl get endpoints -o wide
</code></pre>
<p>It still doesn't show the hidden one.</p>
<p>Thanks</p>
| <p>An interesting approach is to output the JSON and extract the relevant fields. This is useful if you want to analyse the output afterwards. You may want to add the namespace in case you have multiple.</p>
<p>Single command:</p>
<pre><code>kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}"
</code></pre>
<p>Bash script</p>
<pre><code>#!/usr/bin/env bash
# each output line looks like "name,ip1 ip2 ip3"
kubectl get endpoints -o jsonpath="{range .items[*]}{.metadata.name},{.subsets[*].addresses[*].ip}{'\n'}{end}" |
while IFS=, read -r service endpoints; do
  echo "$service"
  for endpoint in $endpoints; do
    echo "$endpoint"
  done
done
</code></pre>
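<p>If you only need to eyeball the addresses of a single service, dumping the object also shows every endpoint without truncation:</p>
<pre><code>kubectl get endpoints <name> -o yaml
</code></pre>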
|
<p>When deploying the app, certain environment-specific settings need to be applied to the server.xml, which cannot be applied when the container is built. Has anyone tried using a volume-mounted config file, and where would I tell Tomcat the location of this custom config?</p>
| <p>To illustrate <a href="https://stackoverflow.com/users/19246531/nataraj-medayhal">Nataraj Medayhal</a>, you can find an example based on configMap on <a href="https://github.com/devlinx9/k8s_tomcat_custer" rel="nofollow noreferrer"><code>devlinx9/k8s_tomcat_custer</code></a></p>
<blockquote>
<p>The configMap is used to control the configuration of tomcat, in this we added the cluster configuration, save the following text in a file <code>configmap-tomcat.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: testconfig
data:
server.xml: |
<?xml version="1.0" encoding="UTF-8"?>
<Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
...
</Server>
</code></pre>
<p>Create the configMap:</p>
<pre><code>kubectl apply -f configmap-tomcat.yaml -n {namespace}
</code></pre>
</blockquote>
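<p>To answer the second part of the question (telling Tomcat where the file is): the ConfigMap can be mounted over the default configuration path. A minimal sketch, assuming the official <code>tomcat</code> image, which reads its configuration from <code>/usr/local/tomcat/conf</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9.0
          volumeMounts:
            - name: server-xml
              mountPath: /usr/local/tomcat/conf/server.xml  # overrides only this file
              subPath: server.xml
      volumes:
        - name: server-xml
          configMap:
            name: testconfig   # the ConfigMap shown above
</code></pre>
<p>With a <code>subPath</code> mount only <code>server.xml</code> is replaced, so the rest of the default configuration directory stays intact.</p>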
|
<p>I have a devops pipeline divided in three steps:</p>
<ul>
<li><code>kubectl apply -f configmap.yml</code></li>
<li><code>kubectl apply -f deployment.yml</code></li>
<li><code>kubectl rollout restart deployment/test-service</code></li>
</ul>
<p>I think that when the <code>configmap.yml</code> changes, the <code>rollout restart</code> step is useful. But when only the <code>deployment.yml</code> changes, I'm worried that the "extra" <code>rollout restart</code> step is not useful and should be avoided.</p>
<p>Should I execute <code>rollout restart</code> only when the <code>configmap.yml</code> changes, or should I not worry about it?</p>
| <p>This isn't a direct answer, but it ended up being too long for a comment and I think it's relevant. If you were to apply your manifests using <a href="https://kustomize.io/" rel="nofollow noreferrer"><code>kustomize</code></a> (<em>aka</em> <code>kubectl apply -k</code>), then you get the following behavior:</p>
<ul>
<li><code>ConfigMaps</code> are generated with a content-based hash appended to their name</li>
<li>Kustomize substitutes the generated name into your <code>Deployment</code></li>
<li>This means the <code>Deployment</code> is only modified when the content of the <code>ConfigMap</code> changes, causing an implicit re-deploy of the pods managed by the <code>Deployment</code>.</li>
</ul>
<p>This largely gets you the behavior you want, but it would require some changes to your deployment pipeline.</p>
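<p>As an illustration, a minimal <code>kustomization.yaml</code> with a <code>configMapGenerator</code> could look like this (file and resource names are hypothetical):</p>
<pre><code># kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
configMapGenerator:
  - name: test-service-config      # referenced by this name in deployment.yml
    files:
      - application.properties     # hypothetical config file
</code></pre>
<p>Kustomize renders the ConfigMap as <code>test-service-config-<hash></code> and rewrites the reference in the Deployment, so a single <code>kubectl apply -k .</code> replaces the separate apply and <code>rollout restart</code> steps whenever the config content changes.</p>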
|
<p>I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs <a href="https://v3.helm.sh/docs/faq/changes_since_helm2/" rel="nofollow noreferrer">here</a> (search for 'Upgrades where live state has changed') we can see that helm v2 would purge changes that were introduced to a template via external actors, whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they don't conflict.</p>
<p>So how do we, in helm v3, run an upgrade that purges any manual template changes that may have been introduced?</p>
| <p>Based on the description, the <code>--force</code> flag should do the trick.</p>
<p><code>--force force resource updates through a replacement strategy</code></p>
<p>However, there are some issues with it as mentioned in this <a href="https://github.com/helm/helm/issues/9433" rel="nofollow noreferrer">GitHub issue</a>.</p>
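<p>For example, assuming a release called <code>my-release</code> installed from <code>./my-chart</code>:</p>
<pre><code>helm upgrade my-release ./my-chart --force
</code></pre>
<p>Since the replacement strategy can recreate resources, the linked issue is worth reading before relying on it.</p>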
|
<p>I just installed prometheus operator as indicated here: <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>:</p>
<pre><code>kubectl apply --server-side -f manifests/setup
kubectl wait \
--for condition=Established \
--all CustomResourceDefinition \
--namespace=monitoring
kubectl apply -f manifests/
</code></pre>
<p>After that I just tried to setup my own service monitor for grafana as follows:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: in1-grafana-service-monitor
namespace: monitoring
spec:
selector:
matchLabels:
app.kubernetes.io/name: grafana
endpoints:
- port: http
interval: 10s
</code></pre>
<p>This monitor works just fine and I can see it in the Prometheus /targets and /service-discovery.</p>
<p>The fact is that when I create this same service monitor outside the "monitoring" namespace, it just does not appear in either /targets or /service-discovery. My setup for this service monitor is as follows:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: out1-grafana-service-monitor
namespace: other-namespace
spec:
selector:
matchLabels:
app.kubernetes.io/name: grafana
namespaceSelector:
any: true
endpoints:
- port: http
interval: 10s
</code></pre>
<p>How can I make the Prometheus operator scrape service monitors (and services) outside the monitoring namespace?</p>
<p>I checked the output of <code>kubectl get prom -Ao yaml</code> and it just displays an empty list:</p>
<pre><code>[...]
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector: {}
[...]
</code></pre>
<p>Any help will be appreciated.</p>
<p>Thank you.</p>
<p>I expect the service monitor outside the monitoring namespace to work, as I need it for another service (not for Grafana).</p>
| <p>After looking at the yaml files I realized that Prometheus doesn't have the permissions to read all namespaces. And after looking at the repository customization examples I found the solution: <a href="https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/monitoring-additional-namespaces.md</a></p>
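<p>As an illustration, the missing read access can be granted per target namespace with a Role and RoleBinding along these lines (a sketch, assuming the default <code>prometheus-k8s</code> ServiceAccount that kube-prometheus creates in the <code>monitoring</code> namespace):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: other-namespace
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: other-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring
</code></pre>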
<p>Hope this helps someone else in the future.</p>
|
<p>I'm setting up an on-premise kubernetes cluster with kubeadm.</p>
<p>Here is the Kubernestes version</p>
<pre><code>clientVersion:
buildDate: "2022-10-12T10:57:26Z"
compiler: gc
gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506
gitTreeState: clean
gitVersion: v1.25.3
goVersion: go1.19.2
major: "1"
minor: "25"
platform: linux/amd64
kustomizeVersion: v4.5.7
serverVersion:
buildDate: "2022-10-12T10:49:09Z"
compiler: gc
gitCommit: 434bfd82814af038ad94d62ebe59b133fcb50506
gitTreeState: clean
gitVersion: v1.25.3
goVersion: go1.19.2
major: "1"
minor: "25"
platform: linux/amd64
</code></pre>
<p>I have installed metallb version 0.13.7</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
</code></pre>
<p>Everything is running</p>
<pre><code>$ kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-84d6d4db45-l2r55 1/1 Running 0 35s
pod/speaker-48qn4 1/1 Running 0 35s
pod/speaker-ds8hh 1/1 Running 0 35s
pod/speaker-pfbcp 1/1 Running 0 35s
pod/speaker-st7n2 1/1 Running 0 35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 35s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 4 4 4 4 4 kubernetes.io/os=linux 35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 35s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-84d6d4db45 1 1 1 35s
</code></pre>
<p>But when I try to apply an IPAddressPool CRD I get an error</p>
<pre><code>kubectl apply -f ipaddresspool.yaml
</code></pre>
<p>ipaddresspool.yaml file content</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 192.168.2.100-192.168.2.199
</code></pre>
<p>The error is a failure to call the validation webhook: no route to host</p>
<pre><code>Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>Here is the same error with line breaks</p>
<pre><code>Error from server (InternalError):
error when creating "ipaddresspool.yaml":
Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io":
failed to call webhook:
Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s":
dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>The IP address is correct</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webhook-service ClusterIP 10.104.14.119 <none> 443/TCP 18m
</code></pre>
<p>I have also tried installing metallb v 0.13.7 using helm but with the same result</p>
<p>Does someone know why the webhook cannot be called?</p>
<p><strong>EDIT</strong></p>
<p>As an answer to Thomas' question, here is the description for webhook-service. NOTE that this is from <strong>another cluster</strong> with the <strong>same problem</strong>, because I deleted the last cluster, so the IP is not the same as last time</p>
<pre><code>$ kubectl describe svc webhook-service -n metallb-system
Name: webhook-service
Namespace: metallb-system
Labels: <none>
Annotations: <none>
Selector: component=controller
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.157.72
IPs: 10.105.157.72
Port: <unset> 443/TCP
TargetPort: 9443/TCP
Endpoints: 172.17.0.3:9443
Session Affinity: None
Events: <none>
</code></pre>
| <p>Once understood the issue is fairly simple.</p>
<p>The metallb setup described above works as it is supposed to.
However, the Kubernetes setup does not, most likely due to a bad network configuration.</p>
<hr />
<h3>Understanding the error</h3>
<p>The key to understanding what is going on is the following error:</p>
<pre><code>Error from server (InternalError): error when creating "ipaddresspool.yaml": Internal error occurred: failed calling webhook "ipaddresspoolvalidationwebhook.metallb.io": failed to call webhook: Post "https://webhook-service.metallb-system.svc:443/validate-metallb-io-v1beta1-ipaddresspool?timeout=10s": dial tcp 10.104.14.119:443: connect: no route to host
</code></pre>
<p>Part of the applied metallb manifest is going to deploy a so-called <code>ValidatingWebhookConfiguration</code>.</p>
<p><a href="https://i.stack.imgur.com/gWdSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gWdSM.png" alt="enter image description here" /></a></p>
<p>In the case of metallb this validating webhook will force the kube-apiserver to:</p>
<ol>
<li>send metallb-related objects like <code>IPAddressPool</code> to the webhook whenever someone creates or updates such an object</li>
<li>wait for the webhook to perform some checks on the object (e.g. validate that CIDRs and IPs are valid and not something like <code>481.9.141.12.27</code>)</li>
<li>and finally receive an answer from the webhook whether or not that object satisfies metallb's requirements and is allowed to be created (persisted to etcd)</li>
</ol>
<p>The error above pretty clearly suggests that the first out of the three outlined steps is failing.</p>
<hr />
<h3>Debugging</h3>
<p>To fix this error one has to debug the current setup, particularly the connection from the kube-apiserver to <code>webhook-service.metallb-system.svc:443</code>.</p>
<p>There is a wide range of possible network misconfigurations that could lead to the error. However, with the information available to us it is most likely going to be an error with the configured CNI.</p>
<p>Knowing that here is some help and a bit of guidance regarding the further debugging process:</p>
<p>Since the kube-apiserver is hardened by default it won't be possible to execute a shell into it.
For that reason one should deploy a debug application with the same network configuration as the kube-apiserver onto one of the control-plane nodes.
This can be achieved by executing the following command:</p>
<pre><code>kubectl debug -n kube-system node/<control-plane-node> -it --image=nicolaka/netshoot
</code></pre>
<p>Using common tools one can now reproduce the error inside the interactive shell. The following command is expected to fail (in a similar fashion to the kube-apiserver):</p>
<pre><code>curl -m 10 -k https://<webhook-service-ip>:443/
</code></pre>
<p>Given the above error message, it should fail due to bad routing on the node.
To check the routing table, execute the following command:</p>
<pre><code>routel
</code></pre>
<p>The output should show multiple configured CIDR ranges, one of which is supposed to include the IP queried earlier.
Most likely the CIDR range in question will either be missing or have a bad gateway configured, which leads to the <code>no route to host</code> error.
It is the CNI's job to update routing tables on all nodes and ensure that nodes can reach these addresses, so manually adding or editing Kubernetes-related entries in the routing table is not recommended.
Further debugging is dependent on the exact setup.
Depending on the setup and CNI of choice kube-proxy may or may not be involved in the issue as well.
However, inspecting the CNI configuration and logs is a good next step.</p>
<hr />
<h3>Some bonus information</h3>
<p>Some CNIs require the user to pay more attention to certain features and configuration as there can be issues involved otherwise.
Here are some popular CNIs that fall into this category:</p>
<ul>
<li>Calico (see <a href="https://metallb.universe.tf/configuration/calico/" rel="nofollow noreferrer">here</a>)</li>
<li>Weave (see <a href="https://metallb.universe.tf/configuration/weave/" rel="nofollow noreferrer">here</a>)</li>
<li>Kube-Router (see <a href="https://metallb.universe.tf/configuration/kube-router/" rel="nofollow noreferrer">here</a>)</li>
</ul>
|
<p>Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser.</p>
<p>For that, I have created a deployment and service for NGINX as shown below, <a href="https://i.stack.imgur.com/2Gk93.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Gk93.png" alt="enter image description here" /></a></p>
<p>As per my search, I found that we have the options below to expose a service to the outside world:</p>
<ul>
<li>MetalLb</li>
<li>Ingress NGINX</li>
<li>Some HELM resources</li>
</ul>
<p>I would like to know about all these 3 (or any more) approaches, in such a way that it helps me learn new things.</p>
<p><code>GOAL</code></p>
<ul>
<li>Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world.</li>
<li>How Can I make my service has its own public IP to access from the outside cluster?</li>
</ul>
| <p>You need to deploy an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">ingress controller</a> in your cluster so that it gives you an entrypoint where your applications can be accessed. Traditionally, in a cloud native environment it would automatically provision a LoadBalancer for you that will read the rules you define inside your <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> object and route your request to the appropriate service.</p>
<p>One of the most commonly used ingress controllers is the <a href="https://docs.nginx.com/nginx-ingress-controller/" rel="nofollow noreferrer">Nginx Ingress Controller</a>. There are multiple ways to deploy it (manifests, Helm, operators). In the case of bare-metal clusters, there are multiple considerations which you can read <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md" rel="nofollow noreferrer">here</a>.</p>
<p>MetalLB is still in beta stage, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md#over-a-nodeport-service" rel="nofollow noreferrer">expose it as a NodePort Service</a> that will be accessible across all the nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.</p>
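<p>Once a controller is in place, the actual routing is described with an Ingress object. A minimal sketch for the NGINX deployment from the question (the host name and Service name are assumptions):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.example.com     # assumption: DNS record pointing at your node/LoadBalancer IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx        # assumption: the Service created for your NGINX deployment
                port:
                  number: 80
</code></pre>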
|
<p>I have created an AKS cluster using the following Terraform code</p>
<pre><code>resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
subnet {
name = "appgwsubnet"
address_prefix = var.app_gateway_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
oms_agent {
log_analytics_workspace_id = module.log_analytics_workspace[0].id
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
resource "azurerm_role_assignment" "ra1" {
scope = data.azurerm_subnet.kubesubnet.id
role_definition_name = "Network Contributor"
principal_id = local.client_objectid
depends_on = [data.azurerm_subnet.kubesubnet]
}
</code></pre>
<p>and followed the steps below to install Istio as per the <a href="https://istio.io/latest/docs/setup/install/helm/" rel="nofollow noreferrer">Istio documentation</a></p>
<pre><code>#Prerequisites
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
#create namespace
kubectl create namespace istio-system
# helm install istio-base and istiod
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait
# Check the installation status
helm status istiod -n istio-system
#create namespace and enable istio-injection for envoy proxy containers
kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled
## helm install istio-ingress for traffic management
helm install istio-ingress istio/gateway -n istio-ingress --wait
## Mark the default namespace as istio-injection=enabled
kubectl label namespace default istio-injection=enabled
## Install the App and Gateway
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml
# Check the Services, Pods and Gateway
kubectl get services
kubectl get pods
kubectl get gateway
# Ensure the app is running
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
</code></pre>
<p>and it is responding as shown below</p>
<p><a href="https://i.stack.imgur.com/5EHbb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EHbb.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/MfKvS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MfKvS.png" alt="enter image description here" /></a></p>
<pre><code># Check the
$INGRESS_NAME="istio-ingress"
$INGRESS_NS="istio-ingress"
kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS"
</code></pre>
<p>it returns the external IP as shown below</p>
<p><a href="https://i.stack.imgur.com/AMU9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMU9t.png" alt="enter image description here" /></a></p>
<p>However, I am not able to access the application</p>
<p><a href="https://i.stack.imgur.com/97RHe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97RHe.png" alt="enter image description here" /></a></p>
<p>Also I am getting an error while trying to run the following commands to find the ports</p>
<pre><code>kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'
</code></pre>
| <p>This is because the ingress gateway selector when installed with Helm is <code>istio: ingress</code>, instead of <code>istio: ingressgateway</code> when installed with istioctl.</p>
<p>If you modify the Gateway to reflect this, then it should work:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: bookinfo-gateway
namespace: default
spec:
selector:
istio: ingress
...
</code></pre>
<p>One way to show this (without knowing this issue previously) is with <code>istioctl analyze</code>:</p>
<pre><code>$ istioctl analyze
Error [IST0101] (Gateway default/bookinfo-gateway) Referenced selector not found: "istio=ingressgateway"
Error: Analyzers found issues when analyzing namespace: default.
See https://istio.io/v1.16/docs/reference/config/analysis for more information about causes and resolutions.
</code></pre>
|
<p>In Kubernetes, <em>is it possible to enforce virtual memory (physical page swapping to disk) on a pod/container with memory requests and limits set?</em></p>
<p>For instance, as per the Kubernetes documentation, “if you set a memory limit of 4GiB for a container, the kubelet (and container runtime) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.”</p>
<p><em>Hence, is it possible to configure the pod (and hence the Linux kernel) to enforce virtual memory (that is, paging and memory swapping) on the specified physical memory limits of the pod (4GiB) instead of an OOM error? Am I missing something?</em></p>
| <p>Reading the <a href="https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#memory-interface-files" rel="nofollow noreferrer">kernel documentation</a> on this leads me to believe this is <strong>not</strong> possible. And I don't think this is a desirable behavior. Let's just think about the following scenario: You have a machine with 64GB of physical memory with 10GB of those used. Then you start a process with a "physical" memory limit of 500MB. If this memory limit is reached the kernel would start swapping and the process would stall even though there is enough memory available to service the memory requests of the process.</p>
<p>The memory limit you specify on the container is actually not a physical memory limit, but a virtual memory limit with overcommit allowed. This means your process can allocate as much memory as it wants (until you reach the overcommit limit), but it gets killed as soon as it tries to use too much memory.</p>
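<p>If you want to verify what the kubelet actually configured for a container, you can read the cgroup limit from inside it; a quick sketch (the first path is for cgroup v2, the second for cgroup v1):</p>
<pre><code># cgroup v2
kubectl exec <pod> -- cat /sys/fs/cgroup/memory.max
# cgroup v1
kubectl exec <pod> -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
</code></pre>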
|
<p>I have created an AKS cluster using the following Terraform code</p>
<pre><code>resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
subnet {
name = "appgwsubnet"
address_prefix = var.app_gateway_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
oms_agent {
log_analytics_workspace_id = module.log_analytics_workspace[0].id
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
resource "azurerm_role_assignment" "ra1" {
scope = data.azurerm_subnet.kubesubnet.id
role_definition_name = "Network Contributor"
principal_id = local.client_objectid
depends_on = [data.azurerm_subnet.kubesubnet]
}
</code></pre>
<p>and followed the steps below to install Istio as per the <a href="https://istio.io/latest/docs/setup/install/helm/" rel="nofollow noreferrer">Istio documentation</a></p>
<pre><code>#Prerequisites
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
#create namespace
kubectl create namespace istio-system
# helm install istio-base and istiod
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod -n istio-system --wait
# Check the installation status
helm status istiod -n istio-system
#create namespace and enable istio-injection for envoy proxy containers
kubectl create namespace istio-ingress
kubectl label namespace istio-ingress istio-injection=enabled
## helm install istio-ingress for traffic management
helm install istio-ingress istio/gateway -n istio-ingress --wait
## Mark the default namespace as istio-injection=enabled
kubectl label namespace default istio-injection=enabled
## Install the App and Gateway
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml
# Check the Services, Pods and Gateway
kubectl get services
kubectl get pods
kubectl get gateway
# Ensure the app is running
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
</code></pre>
<p>and it is responding as shown below</p>
<p><a href="https://i.stack.imgur.com/5EHbb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EHbb.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/MfKvS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MfKvS.png" alt="enter image description here" /></a></p>
<pre><code># Check the
$INGRESS_NAME="istio-ingress"
$INGRESS_NS="istio-ingress"
kubectl get svc "$INGRESS_NAME" -n "$INGRESS_NS"
</code></pre>
<p>it returns the external IP as shown below</p>
<p><a href="https://i.stack.imgur.com/AMU9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMU9t.png" alt="enter image description here" /></a></p>
<p>However, I am not able to access the application</p>
<p><a href="https://i.stack.imgur.com/97RHe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97RHe.png" alt="enter image description here" /></a></p>
<p>Also I am getting an error while trying to run the following commands to find the ports</p>
<pre><code>kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="http2")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
kubectl -n "$INGRESS_NS" get service "$INGRESS_NAME" -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}'
</code></pre>
| <p>This is because you have hit the <a href="https://artifacthub.io/packages/helm/istio-official/gateway#general-concerns" rel="nofollow noreferrer">general concerns</a> of the <code>istio-</code> prefix being stripped from the release name: installing the gateway chart as <code>istio-ingress</code> yields pods labelled <code>istio: ingress</code>, not <code>istio: ingressgateway</code>, so the sample Gateway's selector matches nothing. Either install the gateway under a name whose stripped form matches the selector, or change the Gateway's selector to match the label that is actually applied.</p>
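<p>One way out, sketched here under the assumption that the chart derives the <code>istio</code> label from the release name with the <code>istio-</code> prefix removed, is to reinstall the gateway under a matching name:</p>
<pre><code>helm uninstall istio-ingress -n istio-ingress
helm install istio-ingressgateway istio/gateway -n istio-ingress --wait
</code></pre>
<p>Alternatively, keep the existing installation and point the Gateway's <code>selector</code> at <code>istio: ingress</code> instead.</p>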
|
<p>I am getting the error below after installing KEDA in my k8s cluster and creating some scaled objects...</p>
<p>Whatever command I run, e.g. <code>kubectl get pods</code>, I get the response with the error message below.</p>
<p>How do I get rid of this error message?</p>
<p>E0125 11:45:32.766448 316 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1</p>
| <p>This error comes from <code>client-go</code> when there are no resources available in <code>external.metrics.k8s.io/v1beta1</code>; during discovery, client-go fetches all ServerGroups.
When KEDA is not installed, <code>external.metrics.k8s.io/v1beta1</code> is not part of the ServerGroups, hence it is not called and there is no issue.</p>
<p>But when KEDA is installed, it creates an APIService:</p>
<pre><code>$ kubectl get apiservice | grep keda-metrics
v1beta1.external.metrics.k8s.io keda/keda-metrics-apiserver True 20m
</code></pre>
<p>But it doesn't create any external.metrics.k8s.io resources</p>
<pre><code>$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": []
}
</code></pre>
<p>Since there are no resources, client-go throws an error.</p>
<p>The workaround is registering a dummy resource in the empty resource group.</p>
<p>Refer to this <a href="https://github.com/kubeshop/botkube/issues/829" rel="noreferrer">Github</a> link for more detailed information.</p>
|
<p>Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser.</p>
<p>For that, I have created a deployment and service for NGINX as shown below, <a href="https://i.stack.imgur.com/2Gk93.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Gk93.png" alt="enter image description here" /></a></p>
<p>As per my search, I found that we have the options below to expose a service to the outside world:</p>
<ul>
<li>MetalLb</li>
<li>Ingress NGINX</li>
<li>Some HELM resources</li>
</ul>
<p>I would like to know about all these 3 (or any more) approaches, in such a way that it helps me learn new things.</p>
<p><code>GOAL</code></p>
<ul>
<li>Exposing Service from a BareMetal(Kubeadm) Built Kubernetes Cluster to the outside world.</li>
<li>How Can I make my service has its own public IP to access from the outside cluster?</li>
</ul>
| <p>You need to set up <code>MetalLB</code> to get an external IP address for the <code>LoadBalancer</code> type services. It will give a local network IP address to the service.</p>
<p>Then you can do port mapping (configured in your router) of incoming traffic on ports <code>80</code> and <code>443</code> to your external service IP address.</p>
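<p>For reference, the address pool and L2 advertisement for MetalLB 0.13+ look roughly like this (the address range is an assumption for a typical home LAN):</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # assumption: a free range on your local network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool
</code></pre>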
<p>I have done a similar setup; you can check it here in detail:
<a href="https://developerdiary.me/lets-build-low-budget-aws-at-home/" rel="nofollow noreferrer">https://developerdiary.me/lets-build-low-budget-aws-at-home/</a></p>
|
<p>I am studying "kubectl describe" sourcecodes at <a href="https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/blob/master/pkg/describe/describe.go</a></p>
<p>However, I still could not figure out how "kubectl decsribe [CRD]" works (as in which function/functions are called).</p>
<p>I am a Go newbie, so would like to get some pointers please. Thanks.</p>
<p>I have read the describePod function and more or less understand how it works, but I still could not figure out how "kubectl describe [CRD]" works.</p>
| <p>The "kubectl describe " function can be found in the command-line interface (CLI) of Kubernetes, specifically in the "kubectl" tool. "kubectl" is used to manage and interact with a Kubernetes cluster and its resources.
<a href="https://i.stack.imgur.com/mZ6Tz.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>I have an ingress for my application:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapi-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
rules:
- host: mysite.com
http:
paths:
- path: "/posts"
pathType: Prefix
backend:
service:
name: myservice
port:
number: 80
</code></pre>
<p>When I run <code>kubectl describe ing myapi-ingress</code>, I can see that the ingress is stuck in sync state:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 26m (x2 over 27m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>PS. Before this happened, I tried to install another ingress for internal usage under another namespace and ingressClassName.</p>
<p>I'm getting 404 when I try to hit this endpoint. Nothing in the logs.</p>
<p>What is the problem?</p>
| <p>The problem was the host name set on the Ingress.</p>
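<p>In other words, the rules only match requests whose Host header equals the configured host. A quick way to verify this against the controller's external IP (sketch):</p>
<pre><code>curl -H "Host: mysite.com" http://<ingress-controller-ip>/posts
</code></pre>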
|
<p>Using Kubernetes, specifically the <code>kubectl apply -f ./auth.yaml</code> command, I'm trying to run an Authorization Server in a pod, but when I check the logs, it shows me the following error:</p>
<pre><code> . ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.6.13)
2022-12-07 01:33:30.099 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : Starting MsvcAuthApplication v1.0-SNAPSHOT using Java 18.0.2.1 on msvc-auth-7d696f776d-hpk99 with PID 1 (/app/msvc-auth-1.0-SNAPSHOT.jar started by root in /app)
2022-12-07 01:33:30.203 INFO 1 --- [ main] o.v.s.msvc.auth.MsvcAuthApplication : The following 1 profile is active: "kubernetes"
2022-12-07 01:33:48.711 INFO 1 --- [ main] o.s.c.k.client.KubernetesClientUtils : Created API client in the cluster.
2022-12-07 01:33:48.913 INFO 1 --- [ main] o.s.c.a.ConfigurationClassPostProcessor : Cannot enhance @Configuration bean definition 'org.springframework.cloud.kubernetes.client.KubernetesClientAutoConfiguration' since its singleton instance has been created too early. The typical cause is a non-static @Bean method with a BeanDefinitionRegistryPostProcessor return type: Consider declaring such methods as 'static'.
2022-12-07 01:33:49.794 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=9e09a67e-4528-373e-99ad-3031c15d14ab
2022-12-07 01:33:50.922 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.113 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration' of type [org.springframework.cloud.commons.config.CommonsConfigAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.184 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.187 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'loadBalancerClientsDefaultsMappingsProvider' of type [org.springframework.cloud.client.loadbalancer.LoadBalancerDefaultMappingsProviderAutoConfiguration$$Lambda$420/0x0000000800f30898] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.205 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'defaultsBindHandlerAdvisor' of type [org.springframework.cloud.commons.config.DefaultsBindHandlerAdvisor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.311 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'kubernetes.manifests-io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties' of type [io.kubernetes.client.spring.extended.manifests.config.KubernetesManifestsProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.412 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.419 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig' of type [org.springframework.cloud.client.loadbalancer.reactive.LoadBalancerBeanPostProcessorAutoConfiguration$ReactorDeferringLoadBalancerFilterConfig] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:51.489 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'reactorDeferringLoadBalancerExchangeFilterFunction' of type [org.springframework.cloud.client.loadbalancer.reactive.DeferringLoadBalancerExchangeFilterFunction] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-12-07 01:33:58.301 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9000 (http)
2022-12-07 01:33:58.393 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-12-07 01:33:58.393 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.68]
2022-12-07 01:33:58.795 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-12-07 01:33:58.796 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 26917 ms
2022-12-07 01:34:01.099 WARN 1 --- [ main] o.s.security.core.userdetails.User : User.withDefaultPasswordEncoder() is considered unsafe for production and is only intended for sample applications.
2022-12-07 01:34:02.385 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest
2022-12-07 01:34:02.413 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-12-07 01:34:02.677 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-12-07 01:34:02.991 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizationServerSecurityFilterChain' defined in class path resource [org/villamzr/springcloud/msvc/auth/SecurityConfig.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:658) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:638) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1352) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1195) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.23.jar!/:5.3.23]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.23.jar!/:5.3.23]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:745) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:420) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1317) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.6.13.jar!/:2.6.13]
at org.villamzr.springcloud.msvc.auth.MsvcAuthApplication.main(MsvcAuthApplication.java:12) ~[classes!/:1.0-SNAPSHOT]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.security.web.SecurityFilterChain]: Factory method 'authorizationServerSecurityFilterChain' threw exception; nested exception is java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.23.jar!/:5.3.23]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ~[spring-beans-5.3.23.jar!/:5.3.23]
... 25 common frames omitted
Caused by: java.lang.NoClassDefFoundError: jakarta/servlet/http/HttpServletRequest
at org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer.getEndpointsMatcher(OAuth2AuthorizationServerConfigurer.java:235) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0]
at org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(OAuth2AuthorizationServerConfiguration.java:63) ~[spring-security-oauth2-authorization-server-1.0.0.jar!/:1.0.0]
at org.villamzr.springcloud.msvc.auth.SecurityConfig.authorizationServerSecurityFilterChain(SecurityConfig.java:51) ~[classes!/:1.0-SNAPSHOT]
at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.CGLIB$authorizationServerSecurityFilterChain$1(<generated>) ~[classes!/:1.0-SNAPSHOT]
at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf$$FastClassBySpringCGLIB$$a983a242.invoke(<generated>) ~[classes!/:1.0-SNAPSHOT]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.3.23.jar!/:5.3.23]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.3.23.jar!/:5.3.23]
at org.villamzr.springcloud.msvc.auth.SecurityConfig$$EnhancerBySpringCGLIB$$477933bf.authorizationServerSecurityFilterChain(<generated>) ~[classes!/:1.0-SNAPSHOT]
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:577) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.3.23.jar!/:5.3.23]
... 26 common frames omitted
Caused by: java.lang.ClassNotFoundException: jakarta.servlet.http.HttpServletRequest
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588) ~[na:na]
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[msvc-auth-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521) ~[na:na]
... 37 common frames omitted
</code></pre>
<p>This is the auth.yaml configuration.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: msvc-auth
spec:
replicas: 1
selector:
matchLabels:
app: msvc-auth
template:
metadata:
labels:
app: msvc-auth
spec:
containers:
- image: villamzr/auth:latest
name: msvc-auth
ports:
- containerPort: 9000
env:
- name: LB_USUARIOS_URI
valueFrom:
configMapKeyRef:
name: msvc-usuarios
key: lb_usuarios_uri
---
apiVersion: v1
kind: Service
metadata:
name: msvc-auth
spec:
type: LoadBalancer
ports:
- port: 9000
protocol: TCP
targetPort: 9000
selector:
app: msvc-auth
</code></pre>
<p>This one is the pom.xml of the microservice:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.villamzr.springcloud.msvc</groupId>
<artifactId>curso-kubernetes</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<groupId>org.villamzr.springcloud.msvc.auth</groupId>
<artifactId>msvc-auth</artifactId>
<name>msvc-auth</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>18</java.version>
<spring-cloud.version>2021.0.5</spring-cloud.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-oauth2-client</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-oauth2-authorization-server</artifactId>
<version>1.0.0</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-client</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-client-loadbalancer</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.projectreactor</groupId>
<artifactId>reactor-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
</code></pre>
<p>And this one is the SecurityConfig:</p>
<pre><code>package org.villamzr.springcloud.msvc.auth;
import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import com.nimbusds.jose.jwk.source.ImmutableJWKSet;
import com.nimbusds.jose.jwk.source.JWKSource;
import com.nimbusds.jose.proc.SecurityContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.core.env.Environment;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configurers.oauth2.server.resource.OAuth2ResourceServerConfigurer;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import org.springframework.security.oauth2.core.ClientAuthenticationMethod;
import org.springframework.security.oauth2.core.oidc.OidcScopes;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.server.authorization.client.InMemoryRegisteredClientRepository;
import org.springframework.security.oauth2.server.authorization.client.RegisteredClient;
import org.springframework.security.oauth2.server.authorization.client.RegisteredClientRepository;
import org.springframework.security.oauth2.server.authorization.config.annotation.web.configuration.OAuth2AuthorizationServerConfiguration;
import org.springframework.security.oauth2.server.authorization.config.annotation.web.configurers.OAuth2AuthorizationServerConfigurer;
import org.springframework.security.oauth2.server.authorization.settings.AuthorizationServerSettings;
import org.springframework.security.oauth2.server.authorization.settings.ClientSettings;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;
import java.util.UUID;
@Configuration
public class SecurityConfig {
@Autowired
private Environment env;
@Bean
@Order(1)
public SecurityFilterChain authorizationServerSecurityFilterChain(HttpSecurity http)
throws Exception {
OAuth2AuthorizationServerConfiguration.applyDefaultSecurity(http);
http.getConfigurer(OAuth2AuthorizationServerConfigurer.class)
.oidc(Customizer.withDefaults()); // Enable OpenID Connect 1.0
http
// Redirect to the login page when not authenticated from the
// authorization endpoint
.exceptionHandling((exceptions) -> exceptions
.authenticationEntryPoint(
new LoginUrlAuthenticationEntryPoint("/login"))
)
// Accept access tokens for User Info and/or Client Registration
.oauth2ResourceServer(OAuth2ResourceServerConfigurer::jwt);
return http.build();
}
@Bean
@Order(2)
public SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http)
throws Exception {
http
.authorizeHttpRequests((authorize) -> authorize
.anyRequest().authenticated()
)
// Form login handles the redirect to the login page from the
// authorization server filter chain
.formLogin(Customizer.withDefaults());
return http.build();
}
@Bean
public UserDetailsService userDetailsService() {
UserDetails userDetails = User.withDefaultPasswordEncoder()
.username("admin")
.password("12345")
.roles("USER")
.build();
return new InMemoryUserDetailsManager(userDetails);
}
@Bean
public RegisteredClientRepository registeredClientRepository() {
RegisteredClient registeredClient = RegisteredClient.withId(UUID.randomUUID().toString())
.clientId("usuarios-client")
.clientSecret("{noop}12345")
.clientAuthenticationMethod(ClientAuthenticationMethod.CLIENT_SECRET_BASIC)
.authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
.authorizationGrantType(AuthorizationGrantType.REFRESH_TOKEN)
.authorizationGrantType(AuthorizationGrantType.CLIENT_CREDENTIALS)
.redirectUri(env.getProperty("LB_USUARIOS_URI")+"/login/oauth2/code/msvc-usuarios-client")
.redirectUri(env.getProperty("LB_USUARIOS_URI")+"/authorized")
.scope(OidcScopes.OPENID)
.scope(OidcScopes.PROFILE)
.scope("read")
.scope("write")
.clientSettings(ClientSettings.builder().requireAuthorizationConsent(true).build())
.build();
return new InMemoryRegisteredClientRepository(registeredClient);
}
@Bean
public JWKSource<SecurityContext> jwkSource() {
KeyPair keyPair = generateRsaKey();
RSAPublicKey publicKey = (RSAPublicKey) keyPair.getPublic();
RSAPrivateKey privateKey = (RSAPrivateKey) keyPair.getPrivate();
RSAKey rsaKey = new RSAKey.Builder(publicKey)
.privateKey(privateKey)
.keyID(UUID.randomUUID().toString())
.build();
JWKSet jwkSet = new JWKSet(rsaKey);
return new ImmutableJWKSet<>(jwkSet);
}
private static KeyPair generateRsaKey() {
KeyPair keyPair;
try {
KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
keyPairGenerator.initialize(2048);
keyPair = keyPairGenerator.generateKeyPair();
}
catch (Exception ex) {
throw new IllegalStateException(ex);
}
return keyPair;
}
@Bean
public JwtDecoder jwtDecoder(JWKSource<SecurityContext> jwkSource) {
return OAuth2AuthorizationServerConfiguration.jwtDecoder(jwkSource);
}
@Bean
public AuthorizationServerSettings authorizationServerSettings() {
return AuthorizationServerSettings.builder().build();
}
}
</code></pre>
<p><strong>SOLUTIONS I TESTED THAT DID NOT WORK</strong></p>
<ol>
<li>I changed the Tomcat server version to 10.x</li>
<li>I added the jakarta-api dependency to the microservice's pom.xml, with versions 3.x, 5.x and 6.x</li>
<li>I added the <code>@EnableWebSecurity</code> annotation</li>
</ol>
<p><strong>NOTES</strong></p>
<ol>
<li>I'm using Java 18</li>
<li>I'm using OAuth 2.1 and Authorization Server 1.0.0</li>
</ol>
| <p>I was using Spring Boot 3 but was missing this dependency:</p>
<pre><code> <dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</code></pre>
|
<p>I have the following CronJob to run a backup of my database, and I'd like the backup files to be appended with the date:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.postgresqlBackup.enabled }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: {{ .Values.postgresqlBackup.cron | quote }}
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres
image: postgres:latest
imagePullPolicy: IfNotPresent
command:
- pg_dump
- --username=postgres
- --no-password
- --format=custom
- --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
- --host={{ include "dragalia-api.fullname" . }}-postgresql
- --verbose
volumeMounts:
- name: data
mountPath: /backups
env:
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ include "dragalia-api.fullname" . }}-postgresql
key: postgres-password
optional: false
restartPolicy: Never
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ include "dragalia-api.fullname" . }}-db-backup
{{- end }}
</code></pre>
<p>The job executes successfully but I am left with files like:</p>
<pre><code>docker@minikube:/dragalia-api/db-backup$ ls
'dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak'
</code></pre>
<p>The entire filename is quoted and the string is not evaluated. How can I make it so that the string is evaluated by the shell?</p>
<p>Things I've tried:</p>
<ul>
<li>using backticks: <code>--file=/backups/dragalia-api-1`date +"%Y-%m-%d_%H-%M-%S".bak` </code>: still rendered literally</li>
<li>defining a DATE env var and putting ${DATE} in the string: rendered literally</li>
<li>escaping the % signs e.g. <code>\%Y</code>: rendered literally</li>
<li>passing a multi-line string to <code>sh -c</code>: this caused the job to fail because it could not connect to the db, so I guess <code>--host</code> was not passed properly</li>
</ul>
<p>The only other thing I can think of is passing in a shell script, but I'd rather not create another resource if possible.</p>
<p>Alternatively, since the date information is stored by the filesystem anyway, some other way to pass a unique string into the filename would also work — anything so that it keeps rolling backups instead of just the one.</p>
| <p>If you want to use shell substitution, then you need to execute your command <strong>with a shell</strong>. For example:</p>
<pre><code>containers:
- name: postgres
image: postgres:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- >
pg_dump
--username=postgres
--no-password
--format=custom
--file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
--host={{ include "dragalia-api.fullname" . }}-postgresql
--verbose
</code></pre>
<p>Also, unrelated to your question, you should pin your <code>postgres</code> image to a specific version (<code>postgres:14</code>) or you'll be in for a rude surprise when <code>:latest</code> is unexpectedly a new major version.</p>
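<p>Since the command now runs in a shell, you can also handle the rolling-backup part in the same place. A minimal sketch, assuming a seven-day retention window (the window and the file pattern are my additions, not part of the original chart):</p>
<pre><code>command:
  - /bin/sh
  - -c
  - >
    pg_dump
    --username=postgres
    --no-password
    --format=custom
    --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
    --host={{ include "dragalia-api.fullname" . }}-postgresql
    --verbose
    && find /backups -name 'dragalia-api-*.bak' -mtime +7 -delete
</code></pre>
<p>The folded block (<code>></code>) joins everything into a single line, so <code>sh -c</code> receives one command string and the cleanup only runs if the dump succeeds.</p>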
|
<p><strong>Goal:</strong></p>
<p>I want to use the bitnami/kafka Helm chart with SASL enabled, using the PLAIN mechanism for the external client only (client-broker, broker-broker, and broker-zookeeper connections can use the PLAINTEXT mechanism).</p>
<p><strong>What I have Done:</strong></p>
<p>I've set the following configuration parameters in the values.yaml file:</p>
<pre><code>superUsers: User:adminuser
auth.externalClientProtocol: sasl
auth.sasl.jaas.clientUsers:
- adminuser
- otheruser
auth.sasl.jaas.clientPasswords:
- adminuserpass
- otheruserpass
auth.sasl.jaas.interBrokerUser: adminuser
</code></pre>
<p>I left the other parameters as they are, but this doesn't seem to be enough: the broker container goes into a BackOff state when I try to install the chart.</p>
<p><strong>Question#1:</strong> Aren't these configuration parameters enough for setting up what I'm trying to achieve? Won't these create a JAAS config file for me?</p>
<p>From the Kafka documentation (<a href="https://kafka.apache.org/documentation/#security_sasl" rel="nofollow noreferrer">Kafka_SASL</a>), I have to pass a JAAS config for the broker. This can be done with the <code>sasl.jaas.config</code> configuration parameter. For me it should look something like this:</p>
<pre><code>listener.name.EXTERNAL.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="adminuser" \
password="adminuserpass" \
user_adminuser="adminuserpass" \
user_otheruser="otheruserpass";
</code></pre>
<p>But there doesn't seem to be any <code>sasl.jaas.config</code> parameter available in the bitnami/kafka values.yaml.</p>
<p><strong>Question#2:</strong> How can I pass these JAAS config values if the answer to question#1 is <strong>NO</strong>? Should I use <code>config</code> or <code>extraEnvVars</code> for this?</p>
<p><strong>Thanks!</strong></p>
| <p>This works for me:</p>
<pre><code>...
authorizerClassName: "kafka.security.authorizer.AclAuthorizer"
auth:
clientProtocol: sasl
externalClientProtocol: sasl
interBrokerProtocol: plaintext
sasl:
mechanisms: plain,scram-sha-256,scram-sha-512
interBrokerMechanism: plain
jaas:
clientUsers:
- yourusername
clientPasswords:
- yourpassword
....
</code></pre>
<p>I don't know why, but I had to set <code>clientProtocol: sasl</code> as well;
otherwise I get <code>java.io.IOException: /opt/bitnami/kafka/conf/kafka_jaas.conf (No such file or directory)</code>.
Also, I didn't see the error until I set <code>image.debug: true</code>.</p>
<p>Note: as you can probably see, this also forces authentication between the clients inside the cluster.</p>
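<p>For completeness, an external client would then connect with SASL/PLAIN. A minimal <code>client.properties</code> sketch, reusing the placeholder credentials from the values above (swap in your real username/password; this also assumes the external listener stays on plaintext rather than TLS):</p>
<pre><code>security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="yourusername" \
    password="yourpassword";
</code></pre>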
|
<p>By default, the nginx ingress controller listens on two ports, 80 and 443. How can I add listening on port 9898?
I tried to change it in the DaemonSet, but nothing came of it,
and I don't even know where else to dig.</p>
| <p>I'm not sure exactly what will work for you, but here are a few things you can try (read carefully, because nginx is confusing):</p>
<ol>
<li><p>Define a <code>service</code> for your deployment, and make sure it covers the port routes you want and that your deployment supports them:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: web-app
namespace: web
labels:
app: web-app
spec:
ports:
- port: 80
targetPort: 1337
protocol: TCP
selector:
app: web-app
</code></pre>
</li>
<li><p>Refer to it in the nginx ingress:</p>
<pre><code> rules:
- host: mycoolwebapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-app
port:
number: 80
</code></pre>
</li>
</ol>
<p>The catch here is that you can route <strong>ALL</strong> services via port 80 but use any target port you want, so you can, say, add 50 ingress hosts/routes in a morning, all routed to port 80, and the only difference between them will be the target port in the <code>service</code>.<br />
3. If you are specifically unhappy with ports 80 and 443, you are welcome to edit <code>ingress-nginx-controller</code> (the <code>service</code>, because as I said nginx is confusing).<br />
4. Alternatively, you can find an example of the <code>ingress-nginx-controller</code> <em>service</em> on the web, customize it, apply it, and then connect your <code>ingress</code> to it... but I advise against this, because if nginx doesn't like anything in your custom service, it's easier to just reinstall the whole helm release and try again.</p>
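<p>5. If what you actually need on 9898 is a raw TCP port rather than another HTTP host/path, the ingress-nginx helm chart can also expose extra TCP ports through its TCP-services ConfigMap. A minimal values sketch, reusing the <code>web-app</code> service from item 1 (the mapping format is <code>"external port": "namespace/service:port"</code> — check the chart docs for your chart version):</p>
<pre><code># values.yaml for the ingress-nginx helm chart
tcp:
  "9898": "web/web-app:80"
</code></pre>
<p>With that in place, the controller's service listens on 9898 and forwards the traffic to port 80 of <code>web-app</code> in the <code>web</code> namespace.</p>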
|