prompt | response |
---|---|
<p>I have a requirement where I want to publish the same Kafka message to 2 replicas of Kubernetes pods so that I can keep both replicas in sync.
As our nodes get repaved frequently, I don't want us to lose any data.
On the other hand, we need our pods to be highly available and scalable.
Any help on the above would be appreciated.</p>
| <p>You should be able to achieve resilience by having only one of the pods consume each message. To achieve this, set up your Kafka library so that your consumers are in the same consumer group.</p>
<p>If you really want to consume each message twice or more, you can have 2 or more consumer groups and assign a number of pods to each group. Each consumer group will consume the message only once.</p>
<p>For Spring, this <a href="https://docs.spring.io/spring-kafka/reference/html/" rel="nofollow noreferrer">link</a> could be helpful; look at the consumer props.</p>
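<p>For illustration, a minimal sketch of the relevant Spring Boot consumer properties (the group id below is a placeholder): pods that share the same <code>group-id</code> form one consumer group and split the partitions between them, so each message is handled by only one of those pods.</p>
<pre><code>spring:
  kafka:
    consumer:
      group-id: my-sync-service      # pods sharing this id form one consumer group
      auto-offset-reset: earliest    # start from the beginning when no committed offset exists
</code></pre>
<p>If you instead want every replica to receive every message, give each replica its own distinct <code>group-id</code> (for example derived from the pod name).</p>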
|
<p>When I am running the below command to record the change cause -</p>
<pre><code>kubectl set image deployment.v1.apps/deploy1 nginx=nginx:1.16.0 --record
</code></pre>
<p>Then it is recording the change cause but showing that --record has been deprecated.</p>
<pre><code>Flag --record has been deprecated, --record will be removed in the future
deployment.apps/deploy1 image updated
</code></pre>
<p>And when I run the command <code>kubectl set image deployment.v1.apps/deploy1 nginx=nginx:1.16.0</code> without --record then it is not recording the change cause.</p>
<p>So my question is: if <code>--record</code> has been deprecated and will be removed in the future, what is an easy alternative for recording the change cause? (I mean, recording the command with which I performed the rollout.)</p>
<p>Thanks in advance.</p>
| <p>I'm not sure when it will be removed entirely, but it is deprecated now.</p>
<p>You can use <strong><a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_annotate/" rel="noreferrer">annotate</a></strong> to manage the history in the same way.</p>
<ol>
<li><p><strong>Create the deployment</strong></p>
<p>kubectl create deployment nginx --image=nginx:1.16.0 --replicas 1</p>
</li>
<li><p><strong>Check the history</strong></p>
<p>kubectl rollout history deployment nginx</p>
</li>
<li><p><strong>Update the image on deployment</strong></p>
<p>kubectl set image deployment nginx nginx=nginx:latest</p>
</li>
<li><p><strong>Annotate the deployment now and create the history</strong></p>
<p>kubectl annotate deployment nginx kubernetes.io/change-cause="version change to 1.16.0 to latest" --overwrite=true</p>
</li>
<li><p><strong>Check the history</strong></p>
<p>kubectl rollout history deployment nginx</p>
</li>
</ol>
|
<p>While testing SSH from one container to another in a K8s environment, I'm getting a strange issue: sshd logs "matching key found", but the attempt still ends with "Failed publickey..".</p>
<p>I have tried with the security capability "SYS_CHROOT" and with privileged set to true on the pod and container.</p>
<p>sshd config is below,</p>
<pre><code>PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
</code></pre>
<p>ssh command output:</p>
<pre><code>[jboss@home]$ ssh -i key.txt root@10.128.2.190 -p 2025 -v
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: Connecting to 10.128.2.190 [10.128.2.190] port 2025.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file key.txt type -1
debug1: key_load_public: No such file or directory
debug1: identity file key.txt-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 10.128.2.190:2025 as 'root'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:j5XrSrnXj/IuqIbvYOu234KT/OhQm/8qBiazCtD2G5E
debug1: Host '[10.128.2.190]:2025' is known and matches the ECDSA host key.
debug1: Found key in /opt/jboss/.ssh/known_hosts:2
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: key.txt
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).
</code></pre>
<p>sshd debug output:</p>
<pre><code>/usr/sbin/sshd -ddd -D -p 2025
debug2: load_server_config: filename /etc/ssh/sshd_config
debug2: load_server_config: done config len = 127
debug2: parse_server_config: config /etc/ssh/sshd_config len 127
debug3: /etc/ssh/sshd_config:2 setting Port 2022
debug3: /etc/ssh/sshd_config:7 setting PasswordAuthentication no
debug3: /etc/ssh/sshd_config:8 setting ChallengeResponseAuthentication no
debug3: /etc/ssh/sshd_config:9 setting UsePAM yes
debug3: /etc/ssh/sshd_config:10 setting SyslogFacility DAEMON
debug3: /etc/ssh/sshd_config:11 setting LogLevel DEBUG3
debug1: sshd version OpenSSH_7.4, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: private host key #0: ssh-rsa SHA256:bZPN1dSnLtGHMOgf5VJAMYYionA5GJo5fuKS0r4JtuA
debug1: private host key #1: ssh-dss SHA256:IFYQSI7Fn9WCcfIOiSdUvKR5hvJzhQd4u+3l+dNKfnc
debug1: private host key #2: ecdsa-sha2-nistp256 SHA256:j5XrSrnXj/IuqIbvYOu234KT/OhQm/8qBiazCtD2G5E
debug1: private host key #3: ssh-ed25519 SHA256:rO/wKAQObCmbaGu1F2vJMYLTDYr61+TWMsHDVBKJa1Q
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-ddd'
debug1: rexec_argv[2]='-D'
debug1: rexec_argv[3]='-p'
debug1: rexec_argv[4]='2025'
debug3: oom_adjust_setup
debug1: Set /proc/self/oom_score_adj from 1000 to -1000
debug2: fd 3 setting O_NONBLOCK
debug1: Bind to port 2025 on 0.0.0.0.
Server listening on 0.0.0.0 port 2025.
debug2: fd 4 setting O_NONBLOCK
debug3: sock_set_v6only: set socket 4 IPV6_V6ONLY
debug1: Bind to port 2025 on ::.
Server listening on :: port 2025.
debug3: fd 5 is not O_NONBLOCK
debug1: Server will not fork when running in debugging mode.
debug3: send_rexec_state: entering fd = 8 config len 127
debug3: ssh_msg_send: type 0
debug3: send_rexec_state: done
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
debug1: inetd sockets after dupping: 3, 3
Connection from 10.131.1.10 port 41462 on 10.128.2.190 port 2025
debug1: Client protocol version 2.0; client software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Enabling compatibility mode for protocol 2.0
debug2: fd 3 setting O_NONBLOCK
debug3: ssh_sandbox_init: preparing seccomp filter sandbox
debug2: Network child is on pid 1186
debug3: preauth child monitor started
debug1: SELinux support disabled [preauth]
debug3: privsep user:group 74:74 [preauth]
debug1: permanently_set_uid: 74/74 [preauth]
debug3: ssh_sandbox_child: setting PR_SET_NO_NEW_PRIVS [preauth]
debug3: ssh_sandbox_child: attaching seccomp filter program [preauth]
debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
debug3: send packet: type 20 [preauth]
debug1: SSH2_MSG_KEXINIT sent [preauth]
debug3: receive packet: type 20 [preauth]
debug1: SSH2_MSG_KEXINIT received [preauth]
debug2: local server KEXINIT proposal [preauth]
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 [preauth]
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc [preauth]
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc [preauth]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: compression ctos: none,[email protected] [preauth]
debug2: compression stoc: none,[email protected] [preauth]
debug2: languages ctos: [preauth]
debug2: languages stoc: [preauth]
debug2: first_kex_follows 0 [preauth]
debug2: reserved 0 [preauth]
debug2: peer client KEXINIT proposal [preauth]
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c [preauth]
debug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ssh-dss [preauth]
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc [preauth]
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc [preauth]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: compression ctos: none,[email protected],zlib [preauth]
debug2: compression stoc: none,[email protected],zlib [preauth]
debug2: languages ctos: [preauth]
debug2: languages stoc: [preauth]
debug2: first_kex_follows 0 [preauth]
debug2: reserved 0 [preauth]
debug1: kex: algorithm: curve25519-sha256 [preauth]
debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none [preauth]
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none [preauth]
debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
debug3: mm_request_send entering: type 120 [preauth]
debug3: mm_request_receive_expect entering: type 121 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 120
debug3: mm_request_send entering: type 121
debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
debug3: mm_request_send entering: type 120 [preauth]
debug3: mm_request_receive_expect entering: type 121 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 120
debug3: mm_request_send entering: type 121
debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
debug3: receive packet: type 30 [preauth]
debug3: mm_key_sign entering [preauth]
debug3: mm_request_send entering: type 6 [preauth]
debug3: mm_key_sign: waiting for MONITOR_ANS_SIGN [preauth]
debug3: mm_request_receive_expect entering: type 7 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 6
debug3: mm_answer_sign
debug3: mm_answer_sign: hostkey proof signature 0x557cd5190710(101)
debug3: mm_request_send entering: type 7
debug2: monitor_read: 6 used once, disabling now
debug3: send packet: type 31 [preauth]
debug3: send packet: type 21 [preauth]
debug2: set_newkeys: mode 1 [preauth]
debug1: rekey after 134217728 blocks [preauth]
debug1: SSH2_MSG_NEWKEYS sent [preauth]
debug1: expecting SSH2_MSG_NEWKEYS [preauth]
debug3: send packet: type 7 [preauth]
debug3: receive packet: type 21 [preauth]
debug1: SSH2_MSG_NEWKEYS received [preauth]
debug2: set_newkeys: mode 0 [preauth]
debug1: rekey after 134217728 blocks [preauth]
debug1: KEX done [preauth]
debug3: receive packet: type 5 [preauth]
debug3: send packet: type 6 [preauth]
debug3: receive packet: type 50 [preauth]
debug1: userauth-request for user root service ssh-connection method none [preauth]
debug1: attempt 0 failures 0 [preauth]
debug3: mm_getpwnamallow entering [preauth]
debug3: mm_request_send entering: type 8 [preauth]
debug3: mm_getpwnamallow: waiting for MONITOR_ANS_PWNAM [preauth]
debug3: mm_request_receive_expect entering: type 9 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 8
debug3: mm_answer_pwnamallow
debug3: Trying to reverse map address 10.131.1.10.
debug2: parse_server_config: config reprocess config len 127
debug3: mm_answer_pwnamallow: sending MONITOR_ANS_PWNAM: 1
debug3: mm_request_send entering: type 9
debug2: monitor_read: 8 used once, disabling now
debug2: input_userauth_request: setting up authctxt for root [preauth]
debug3: mm_start_pam entering [preauth]
debug3: mm_request_send entering: type 100 [preauth]
debug3: mm_inform_authserv entering [preauth]
debug3: mm_request_send entering: type 4 [preauth]
debug3: mm_inform_authrole entering [preauth]
debug3: mm_request_send entering: type 80 [preauth]
debug2: input_userauth_request: try method none [preauth]
debug3: userauth_finish: failure partial=0 next methods="publickey" [preauth]
debug3: send packet: type 51 [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 100
debug1: PAM: initializing for "root"
debug1: PAM: setting PAM_RHOST to "ip-10-131-1-10.ap-south-1.compute.internal"
debug1: PAM: setting PAM_TTY to "ssh"
debug2: monitor_read: 100 used once, disabling now
debug3: mm_request_receive entering
debug3: monitor_read: checking request 4
debug3: mm_answer_authserv: service=ssh-connection, style=
debug2: monitor_read: 4 used once, disabling now
debug3: mm_request_receive entering
debug3: monitor_read: checking request 80
debug3: mm_answer_authrole: role=
debug2: monitor_read: 80 used once, disabling now
debug3: receive packet: type 50 [preauth]
debug1: userauth-request for user root service ssh-connection method publickey [preauth]
debug1: attempt 1 failures 0 [preauth]
debug2: input_userauth_request: try method publickey [preauth]
debug3: userauth_pubkey: have signature for RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc [preauth]
debug3: mm_key_allowed entering [preauth]
debug3: mm_request_send entering: type 22 [preauth]
debug3: mm_key_allowed: waiting for MONITOR_ANS_KEYALLOWED [preauth]
debug3: mm_request_receive_expect entering: type 23 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 22
debug3: mm_answer_keyallowed entering
debug3: mm_answer_keyallowed: key_from_blob: 0x557cd51913e0
debug1: temporarily_use_uid: 0/0 (e=0/0)
debug1: trying public key file /root/.ssh/authorized_keys
debug1: fd 4 clearing O_NONBLOCK
debug1: matching key found: file /root/.ssh/authorized_keys, line 1 RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc
debug1: restore_uid: 0/0
debug3: mm_answer_keyallowed: key 0x557cd51913e0 is allowed
debug3: mm_request_send entering: type 23
debug3: mm_key_verify entering [preauth]
debug3: mm_request_send entering: type 24 [preauth]
debug3: mm_key_verify: waiting for MONITOR_ANS_KEYVERIFY [preauth]
debug3: mm_request_receive_expect entering: type 25 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 24
debug3: mm_answer_keyverify: key 0x557cd51912c0 signature unverified
debug3: mm_request_send entering: type 25
Failed publickey for root from 10.131.1.10 port 41462 ssh2: RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc
linux_audit_write_entry failed: Operation not permitted
debug1: do_cleanup
debug1: PAM: cleanup
debug3: PAM: sshpam_thread_cleanup entering
debug1: Killing privsep child 1186
linux_audit_write_entry failed: Operation not permitted
</code></pre>
| <p>After adding the AUDIT_WRITE capability to the container, it started working. Apparently both SYS_CHROOT and AUDIT_WRITE are required for a container running sshd to work.</p>
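<p>For reference, a sketch of the corresponding container <code>securityContext</code>:</p>
<pre><code>securityContext:
  capabilities:
    add:
      - SYS_CHROOT
      - AUDIT_WRITE
</code></pre>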
|
<p>I am using a containerized Spring boot application in Kubernetes. But the application automatically exits and restarts with exit code 143 and error message "Error".</p>
<p>I am not sure how to identify the reason for this error.</p>
<p>My first idea was that Kubernetes stopped the container due to too high resource usage, as described <a href="https://komodor.com/learn/exit-codes-in-containers-and-kubernetes-the-complete-guide/" rel="noreferrer">here</a>, but I can't see the corresponding kubelet logs.</p>
<p>Is there any way to identify the cause/origin of the <code>SIGTERM</code>? Maybe from spring-boot itself, or from the JVM?</p>
| <blockquote>
<p>Exit Code 143</p>
</blockquote>
<ol>
<li><p>It denotes that the process was terminated by an <code>external signal</code>.</p>
</li>
<li><p>The number 143 is the sum of two numbers: 128 + x, where x is the number of the signal that caused the process to terminate.</p>
</li>
<li><p>In this case x equals 15, which is the number of the <code>SIGTERM</code> signal. <code>SIGTERM</code> is a request to shut down gracefully (unlike <code>SIGKILL</code>, which produces exit code 137 and is forced); in Kubernetes it is typically sent by the kubelet when it stops the container, for example during eviction, a rolling update, or after a failed liveness probe.</p>
</li>
</ol>
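<p>To see which reason and exit code Kubernetes recorded for the last termination, you can inspect the container status, for example:</p>
<pre><code>kubectl describe pod <pod-name>            # check "Last State", "Reason" and "Exit Code"
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
</code></pre>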
<p>Hope this helps better.</p>
|
<p>I am trying to install Kubernetes on my CentOS machine, when I intialize the cluster, I have the following error.</p>
<p>I specify that I am behind a corporate proxy. I have already configured it for Docker in the directory: /etc/systemd/system/docker.service.d/http-proxy.conf
Docker work fine.</p>
<p>No matter how hard I look, I can't find a solution to this problem.</p>
<p>Thank you for your help.</p>
<pre><code># kubeadm init
W1006 14:29:38.432071 7560 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 14:29:38.432147 7560 version.go:103] falling back to the local client version: v1.19.2
W1006 14:29:38.432367 7560 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING HTTPProxy]: Connection to "https://192.168.XXX.XXX" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
</code></pre>
<pre><code># kubeadm config images pull
W1006 17:33:41.362395 80605 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 17:33:41.362454 80605 version.go:103] falling back to the local client version: v1.19.2
W1006 17:33:41.362685 80605 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
| <p>I had this issue on version <code>version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2"</code> when I tried joining a second control plane node.</p>
<pre><code>error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.9.3: output: E0923 04:47:51.763983 1598 remote_image.go:242] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"k8s.gcr.io/coredns:v1.9.3\": failed to resolve reference \"k8s.gcr.io/coredns:v1.9.3\": k8s.gcr.io/coredns:v1.9.3: not found" image="k8s.gcr.io/coredns:v1.9.3"
time="2022-09-23T04:47:51Z"...
</code></pre>
<blockquote>
<p>See #99321 it's now k8s.gcr.io/coredns/coredns:v1.9.3 instead of
k8s.gcr.io/coredns:v1.9.3 and i don't now why</p>
<p>by kluevandrew,
reference: <a href="https://github.com/kubernetes/kubernetes/issues/112131" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/112131</a></p>
</blockquote>
<p>This worked for me; I am using containerd:</p>
<pre><code>crictl pull k8s.gcr.io/coredns/coredns:v1.9.3
ctr --namespace=k8s.io image tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
</code></pre>
<p>docker solution:</p>
<pre><code>docker pull k8s.gcr.io/coredns/coredns:v1.9.3
docker tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
</code></pre>
|
<pre><code>{{- $root := . -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ include "ingress.name" . }}
namespace: demo
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: "*.{{ .Values.host }}"
http:
paths:
{{- range $index, $service := .Values.deployments }}
- path: {{ $service.pathPrefix }}(/|$)(.*)
backend:
serviceName: {{ $service.deploymentName }}
servicePort: 80
{{- end }}
---
</code></pre>
<p>This ingress configuration is successfully working.</p>
<p>Additionally it is required that when anyone hit URL: <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a>, it should redirect <a href="http://example.com/login" rel="nofollow noreferrer">http://example.com/login</a> permanently, so how can i achieve it?</p>
<p>Thanks.</p>
| <p>You can add a server-snippet annotation like:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
location ~ / {
rewrite / https://<example.com to use $host>/login permanent;
}
</code></pre>
<p>If you want to use the configuration snippet instead:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'https://example.com/') {
return 301 https://example.com/login;
}
</code></pre>
<p>or</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite / https://example.com/login permanent;
</code></pre>
<p>You can also use nginx variables instead of a fixed value; <code>$request_uri</code>, <code>$uri</code> and <code>$host</code> are a few of the variables you can leverage.</p>
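<p>If you only want to redirect the exact root path and leave every other path alone, an untested sketch using the server-snippet annotation could look like this:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
  location = / {
    return 301 https://$host/login;
  }
</code></pre>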
|
<p>I have a ClusterIssuer that is expecting <code>secretName</code>, I see in the <code>ClusterIssuer</code> <code>spec</code>, I can specify the <code>secretName</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: postgres-operator-ca-certificate-cluster-issuer
spec:
ca:
secretName: postgres-operator-ca-certificate # <---- Here
</code></pre>
<p>but how to provide the reference to the secret namespace? This secret is created using <code>Certificate</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: postgres-operator-self-signed-ca-certificate
namespace: postgres # <---- This namespace can not be changed to cert-manager
spec:
isCA: true
commonName: postgres-operator-ca-certificate
secretName: postgres-operator-ca-certificate
issuerRef:
name: postgres-operator-selfsigned-clusterissuer
kind: ClusterIssuer
</code></pre>
<p>As this is <code>namespaced</code> is the suggestion is to use <code>Issuer</code> instead of <code>ClusterIssuer</code>? Does <code>ClusterIssuer</code> by default look in the <code>cert-manager</code> namespace?</p>
| <p>Typically it will look for the secret in the namespace <code>cert-manager</code> by default. Which namespace it looks in can be changed by your cert-manager installation by using the <code>--cluster-resource-namespace</code> argument, but not by individual ClusterIssuer.</p>
<p>From the documentation:</p>
<blockquote>
<p>If the referent is a cluster-scoped resource (e.g. a ClusterIssuer),
the reference instead refers to the resource with the given name in
the configured ‘cluster resource namespace’, which is set as a flag on
the controller component (and defaults to the namespace that
cert-manager runs in).</p>
</blockquote>
<p><a href="https://cert-manager.io/docs/reference/api-docs/#meta.cert-manager.io/v1.LocalObjectReference" rel="nofollow noreferrer">https://cert-manager.io/docs/reference/api-docs/#meta.cert-manager.io/v1.LocalObjectReference</a></p>
|
<p>We're trying to set up a spot node group in EKS with lower and higher capacity instance types, (e.g. <code>instance_types = ["t3.xlarge", "c5.4xlarge"]</code>), but ... only the t3 is used, even if we specify more CPU than it has to offer. Pods still try to use it and just hang.</p>
<p>How do we get the larger instances to come into play?</p>
| <p>An AWS AutoScalingGroup has the ability to put weights on the instance types, but that functionality isn't built into EKS. So what's happening is that the ASG is designed to create the first instance type if possible; it isn't influenced by your K8s workload requests, and will therefore always use the first type that is available.</p>
<p>You probably want to <strong>create two different node groups</strong> (one for the <code>t3.xlarge</code> and another for the <code>c5.4xlarge</code>). And depending on the workloads, maybe allow the min-size to be 0.</p>
<p>Alternatively, if you want to explicitly change the existing node group and not have two, then maybe these instructions would be useful: <a href="https://blog.porter.run/updating-eks-instance-type/" rel="nofollow noreferrer">https://blog.porter.run/updating-eks-instance-type/</a></p>
|
<p>I am trying to apply ingress rule in minikube but I am getting this error</p>
<pre><code>error: resource mapping not found for name: "dashboard-ingress" namespace: "kubernetes-dashboard" from "Desktop/minikube/dashboard-ingress.yaml": no matches for kind "Ingress" in version "networking.k8.io/v1"
</code></pre>
<p>dashboard-ingress.yaml</p>
<pre><code>apiVersion: networking.k8.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 80
</code></pre>
| <p>I have found the solution. The <code>apiVersion</code> had a typo (<code>networking.k8.io</code> instead of <code>networking.k8s.io</code>), and in <code>networking.k8s.io/v1</code> the backend is specified with <code>service.name</code> and <code>service.port.number</code> instead of <code>serviceName</code>/<code>servicePort</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: kubernetes-dashboard
port:
number: 80
</code></pre>
|
<p>I have a multistage pipeline with the following</p>
<p>Stage build:</p>
<ol>
<li>build docker image</li>
<li>push image to ACR</li>
<li>package helm chart</li>
<li>push helm chart to ACR</li>
</ol>
<p>Stage deployment:</p>
<ol>
<li>helm upgrade</li>
</ol>
<p><strong>Push helm chart to ACR:</strong></p>
<pre><code> task: HelmDeploy@0
displayName: 'helm publish'
inputs:
azureSubscriptionForACR: '$(azureSubscription)'
azureResourceGroupForACR: '$(resourceGroup)'
azureContainerRegistry: '$(containerRegistry)'
command: 'save'
arguments: '--app-version $(Version)'
chartNameForACR: 'charts/$(imageRepository):$(Version)'
chartPathForACR: $(chartPath)
</code></pre>
<p><strong>Deploy helm chart to AKS:</strong></p>
<pre><code> task: HelmDeploy@0
inputs:
connectionType: 'Kubernetes Service Connection'
kubernetesServiceConnection: '$(kubernetesServiceConnection)'
command: 'upgrade'
chartType: 'Name'
chartName: '$(containerRegistry)/charts/$(imageRepository):$(Version)'
chartVersion: '$(Version)'
azureSubscriptionForACR: '$(azureSubscription)'
azureResourceGroupForACR: '$(resourceGroup)'
azureContainerRegistry: '$(containerRegistry)'
install: true
releaseName: $(Version)
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>failed to download "<ACR>/charts/<repository>:0.9.26" at version "0.9.26" (hint: running `helm repo update` may help)
</code></pre>
<p><strong>ACR:</strong>
<code>az acr repository show-manifests --name <org> --repository helm/charts/<repository> --detail</code></p>
<pre><code> {
"changeableAttributes": {
"deleteEnabled": true,
"listEnabled": true,
"readEnabled": true,
"writeEnabled": true
},
"configMediaType": "application/vnd.cncf.helm.config.v1+json",
"createdTime": "2021-02-02T11:54:54.1623765Z",
"digest": "sha256:fe7924415c4e76df370630bbb0248c9296f27186742e9272eeb87b2322095c83",
"imageSize": 3296,
"lastUpdateTime": "2021-02-02T11:54:54.1623765Z",
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"tags": [
"0.9.26"
]
}
</code></pre>
<p>What am I doing wrong? Do I have to <code>export</code> the helm chart from ACR before I can deploy it?</p>
| <p>The answer from @sshepel actually helped somewhat: you need to log in to the registry before being able to pull. However, a simple Azure CLI login is sufficient.</p>
<pre><code> - task: AzureCLI@2
displayName: Login to Azure Container Registry
inputs:
azureSubscription: <Azure Resource Manager service connection to your subscription and resource group>
scriptType: bash
scriptLocation: inlineScript
inlineScript: |
az acr login --name <container registry name>.azurecr.io
</code></pre>
<p>After that it worked perfectly with the undocumented HelmDeploy task.</p>
|
<p>A common requirement when deploying Kubernetes manifests to a cluster is to prefix the container image names with a trusted registry prefix that mirrors the allowed images. Usually used along with an admission controller.</p>
<p>Is there a sensible way to do this using Kustomize without having to list every single image by name in the <code>kustomization.yaml</code> <code>images:</code> transformer stanza?</p>
<p>Given this <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "https://github.com/prometheus-operator/kube-prometheus"
</code></pre>
<p>if I want to prefix all the images it references with <code>mytrusted.registry/</code> I need to append this to my <code>kustomization.yaml</code>:</p>
<pre><code>images:
- name: grafana/grafana
newName: mytrusted.registry/grafana/grafana
- name: jimmidyson/configmap-reload
newName: mytrusted.registry/jimmidyson/configmap-reload
- name: k8s.gcr.io/kube-state-metrics/kube-state-metrics
newName: mytrusted.registry/k8s.gcr.io/kube-state-metrics/kube-state-metrics
- name: k8s.gcr.io/prometheus-adapter/prometheus-adapter
newName: mytrusted.registry/k8s.gcr.io/prometheus-adapter/prometheus-adapter
- name: quay.io/brancz/kube-rbac-proxy
newName: mytrusted.registry/quay.io/brancz/kube-rbac-proxy
- name: quay.io/prometheus/alertmanager
newName: mytrusted.registry/quay.io/prometheus/alertmanager
- name: quay.io/prometheus/blackbox-exporter
newName: mytrusted.registry/quay.io/prometheus/blackbox-exporter
- name: quay.io/prometheus/node-exporter
newName: mytrusted.registry/quay.io/prometheus/node-exporter
- name: quay.io/prometheus-operator/prometheus-operator
newName: mytrusted.registry/quay.io/prometheus-operator/prometheus-operator
- name: quay.io/prometheus/prometheus
newName: mytrusted.registry/quay.io/prometheus/prometheus
</code></pre>
<p>which I generated with this putrid, fragile monstrosity (which WILL break if your containers are specified by hash, or you have a port in your registry prefix):</p>
<pre><code>kustomize build | \
grep 'image:' | \
awk '$2 != "" { print $2}' | \
sort -u | \
cut -d : -f 1 | \
jq --raw-input '{ name: ., newName: ("mytrusted.registry/" + .) }' | \
faq -s -fjson -oyaml '{ images: .}'
</code></pre>
<p>(Note that the above will also NOT WORK completely, because Kustomize doesn't recognise images outside <code>PodTemplate</code>s, such as those in the <code>kind: Alertmanager</code> <code>spec.image</code> or the <code>kind: Prometheus</code> <code>spec.image</code>; it'd still be better than the current situation).</p>
<p>What I want instead is to be able to express this in the image transformer without generating and maintaining lists of images, with something like the <strong>imaginary, does not work example</strong>:</p>
<pre><code>images:
- name: "(*)"
newName: "mytrusted.registry/$1"
</code></pre>
<p>i.e. use a capture group. Or something functionally similar, like an image transformer "prependName" option or similar.</p>
<p>This must be such a common problem to have, but I can't for the life of me find a well established way this is done by convention in the k8s world. Just lots of DIY fragile hacks.</p>
| <p>This answer is probably too late to help the original asker, but maybe it will help others who stumble upon this question through Google, like I did.</p>
<p>Kustomize has a built-in <code>PrefixTransformer</code> that can add a prefix to all your images, or indeed to any arbitrary field in your specs.</p>
<p>Create a file named <code>image-prefix.yaml</code> with the following contents:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: builtin
kind: PrefixTransformer
metadata:
name: image-prefix
prefix: mytrusted.registry/
fieldSpecs:
- path: spec/template/spec/containers/image
- path: spec/template/spec/initContainers/image
- path: spec/image # for kind Prometheus and Alertmanager
</code></pre>
<p>Then add this transformer to your <code>kustomization.yaml</code> as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "https://github.com/prometheus-operator/kube-prometheus"
transformers:
- image-prefix.yaml
</code></pre>
<p>That should do it.</p>
<p>When you build this, you should see your prefix automatically added to all the images:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl kustomize | grep image:
...
image: mytrusted.registry/quay.io/prometheus/blackbox-exporter:v0.22.0
image: mytrusted.registry/jimmidyson/configmap-reload:v0.5.0
image: mytrusted.registry/quay.io/brancz/kube-rbac-proxy:v0.13.0
image: mytrusted.registry/grafana/grafana:my-tag
image: mytrusted.registry/k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.6.0
...
</code></pre>
<p>I tested this with <code>kubectl</code> 1.25 and the version of Kustomize that comes bundled with it:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl version --short --client
...
Client Version: v1.25.0
Kustomize Version: v4.5.7
</code></pre>
<p>You can further restrict the <code>PrefixTransformer</code> by using GVK (group/version/kind) triplets. For example, if for some reason you wanted to apply your image prefix only to Deployments, but not to DaemonSets, StatefulSets, or others, you would put something like this in your <code>image-prefix.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>fieldSpecs:
- kind: Deployment
path: spec/template/spec/containers/image
- kind: Deployment
path: spec/template/spec/initContainers/image
</code></pre>
<p>Also note that the <code>ImageTransformer</code> runs before the <code>PrefixTransformer</code>, so if you wanted to override the tag of a particular image in your <code>kustomization.yaml</code>, you should use the original image name without the prefix:</p>
<pre class="lang-yaml prettyprint-override"><code>images:
- name: grafana/grafana
newTag: my-tag
</code></pre>
<p>Unfortunately there is no clear documentation for <code>PrefixTransformer</code> that I could find, or I would have linked it here. I discovered all this by digging through Kustomize source code.</p>
<p>There are quite a few other built-in transformers that might be of interest, you can glean their usage by looking at the <code>*_test.go</code> files in each of the subfolders here:</p>
<p><a href="https://github.com/kubernetes-sigs/kustomize/tree/master/plugin/builtin" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/tree/master/plugin/builtin</a></p>
|
<p>database-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgres-db
spec:
replicas:
selector:
matchLabels:
app: postgres-db
template:
metadata:
labels:
app: postgres-db
spec:
containers:
- name: postgres-db
image: postgres:latest
ports:
- protocol: TCP
containerPort: 1234
env:
- name: POSTGRES_DB
value: "classroom"
- name: POSTGRES_USER
value: temp
- name: POSTGRES_PASSWORD
value: temp
</code></pre>
<p>database-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database-service
spec:
selector:
app: postgres-db
ports:
- protocol: TCP
port: 1234
targetPort: 1234
</code></pre>
<p>I want to use this database-service URL in another deployment, so I tried to add it in a ConfigMap.</p>
<p>my-configMap.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: classroom-configmap
data:
database_url: database-service
</code></pre>
<p>[Not Working] Expected - database_url : database-service (will be replaced with corresponding service URL)</p>
<p><code>ERROR - Driver org.postgresql.Driver claims to not accept jdbcUrl, database-service</code></p>
<pre><code>$ kubectl describe configmaps classroom-configmap
</code></pre>
<p>Output :</p>
<pre><code>Name: classroom-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
database_url:
----
database-service
BinaryData
====
Events: <none>
</code></pre>
| <p>Updated my-configMap.yaml: <code>database_url</code> must be a full JDBC URL, not just the service name.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: classroom-configmap
data:
database_url: jdbc:postgresql://database-service.default.svc.cluster.local:5432/classroom
</code></pre>
<p>expected URL - jdbc:{DATABASE}://{DATABASE_SERVICE with NAMESPACE}:{DATABASE_PORT}/{DATABASE_NAME}</p>
<p>DATABASE_SERVICE - <code>database-service</code></p>
<p>NAMESPACE - <code>default</code></p>
<p>DATABASE_SERVICE with NAMESPACE - <code>database-service.default.svc.cluster.local</code></p>
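<p>For completeness, a sketch of how the application deployment could consume this value as an environment variable (the variable name <code>SPRING_DATASOURCE_URL</code> is an assumption based on a typical Spring Boot setup):</p>
<pre><code>env:
  - name: SPRING_DATASOURCE_URL   # assumed Spring Boot datasource variable; adjust to your app
    valueFrom:
      configMapKeyRef:
        name: classroom-configmap
        key: database_url
</code></pre>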
|
<p>I'm using <a href="https://www.conftest.dev/" rel="nofollow noreferrer">conftest</a> for validating policies on Kubernetes manifests.</p>
<p>The policy below validates that images in StatefulSet manifests come from a specific registry, <code>reg_url</code>:</p>
<pre><code>package main
deny[msg] {
input.kind == "StatefulSet"
not regex.match("[reg_url]/.+", input.spec.template.spec.initContainers[0].image)
msg := "images come from artifactory"
}
</code></pre>
<p>Is there a way to enforce such a policy for all Kubernetes resources that have an image field somewhere in their spec? This may be useful for policy validation on all <code>helm</code> chart manifests, for instance.</p>
<p>I'm looking for something like:</p>
<pre><code>package main
deny[msg] {
input.kind == "*" // all resources
not regex.match("[reg_url]/.+", input.*.image) // any nested image field
msg := "images come from artifactory"
}
</code></pre>
| <p>You <em>could</em> do this using something like the <a href="https://www.openpolicyagent.org/docs/latest/policy-reference/#builtin-graph-walk" rel="nofollow noreferrer">walk</a> built-in function. However, I would recommend against it, because:</p>
<ul>
<li>You'd need to scan every attribute of every request/resource (expensive).</li>
<li>You can't know for sure that e.g. "image" means the same thing across all current and future resource manifests, including CRDs.</li>
</ul>
<p>I'd probably just stick with checking for a match of resource kind here, and include any resource type known to have an image attribute with a shared meaning.</p>
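<p>That said, if you do want to experiment with <code>walk</code>, a rough, untested sketch of a rule that checks every nested <code>image</code> field could look like this (the registry pattern and message are placeholders):</p>
<pre><code>package main

deny[msg] {
    # walk enumerates every nested [path, value] pair in the input document
    walk(input, [path, value])
    idx := count(path) - 1
    path[idx] == "image"
    is_string(value)
    not regex.match(`^my-registry\.example\.com/.+`, value)
    msg := sprintf("image %v does not come from the trusted registry", [value])
}
</code></pre>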
|
<p>I am getting this error in the logs:
<code>[error] 117#117: *16706 upstream timed out (110: Operation timed out) while reading response header from upstream</code>. I have tried every possible way to check where this exact 60s timeout is coming from.</p>
<p>I can add more detail on how I am producing this error if needed. I don't see any timeout when I run the dotnet API (dockerized) locally:
that API runs for more than 5 minutes, but here in the AKS cluster it times out at exactly 60s.</p>
<p>So I am using these settings in my ConfigMap (for the NGINX ingress controller). I have checked by removing and adding these settings one by one, but no change in that timeout.</p>
<pre><code> client-header-timeout: "7200"
keep-alive: "300"
keep-alive-requests: "100000"
keepalive-timeout: "300"
proxy-connect-timeout: "7200"
proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
upstream-keepalive-requests: "100000"
upstream-keepalive-timeout: "7200"
</code></pre>
<p>And I have also tried adding these annotations on my ingress resource/rule for that microservice.</p>
<pre><code>nginx.ingress.kubernetes.io/client-body-timeout: "7200"
nginx.ingress.kubernetes.io/client-header-timeout: "7200"
nginx.ingress.kubernetes.io/client-max-body-size: 5000m
nginx.ingress.kubernetes.io/keep-alive: "300"
nginx.ingress.kubernetes.io/keepalive-timeout: "300"
nginx.ingress.kubernetes.io/large-client-header-buffers: 64 128k
nginx.ingress.kubernetes.io/proxy-body-size: 5000m
nginx.ingress.kubernetes.io/proxy-buffer-size: 16k
nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/send_timeout: "7200"
</code></pre>
<p>Nginx ingress controller version:</p>
<pre><code> Release: v1.0.5
Build: 7ce96cbcf668f94a0d1ee0a674e96002948bff6f
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
</code></pre>
<p>We are using an Azure Kubernetes cluster, and CoreDNS is resolving all the API URLs from an Active Directory that is deployed on an Azure Windows virtual machine.</p>
<p>The backend API is in dotnet core <code>(sdk:6.0.400 and ASP.NET core Runtime 6.0.8)</code> (all the keepalive and request-timeout settings defined in the code have already been tested).</p>
| <p>Found the problem. Maybe I have missed something, but it seems these</p>
<pre><code>proxy-read-timeout: "7200"
proxy-send-timeout: "7200"
</code></pre>
<p>settings don't affect the timeouts for the backend gRPC communication.
I had to add a "server-snippet" annotation with these directives:</p>
<pre><code>grpc_read_timeout 120s; grpc_send_timeout 120s; client_body_timeout 120s;
</code></pre>
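<p>For anyone hitting the same issue, the full annotation would look something like this (using the timeout values from above):</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
  grpc_read_timeout 120s;
  grpc_send_timeout 120s;
  client_body_timeout 120s;
</code></pre>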
|
<p>I have tried to list pods based on labels</p>
<pre><code> // Kubernetes client - package kubernetes
clientset := kubernetes.NewForConfigOrDie(config)
// create a temp list for storage
var podslice []string
// Get pods -- package metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pods, _ := clientset.CoreV1().Pods("").List(metav1.ListOptions{})
for _, p := range pods.Items {
fmt.Println(p.GetName())
}
</code></pre>
<p>this is equivalent of</p>
<pre><code>kubectl get po
</code></pre>
<p>Is there a way to do the following in Golang?</p>
<pre><code>kubectl get po -l app=foo
</code></pre>
<p>thanks in advance</p>
| <p>You may just be able to set it using the <code>ListOptions</code> parameter.</p>
<pre><code>listOptions := metav1.ListOptions{
LabelSelector: "app=foo",
}
pods, _ := clientset.CoreV1().Pods("").List(listOptions)
</code></pre>
<p>If you have multiple labels, you may be able to perform this via the <code>labels</code> library, like below untested code:</p>
<pre><code>import "k8s.io/apimachinery/pkg/labels"
labelSelector := metav1.LabelSelector{MatchLabels: map[string]string{"app": "foo"}}
listOptions := metav1.ListOptions{
LabelSelector: labels.Set(labelSelector.MatchLabels).String(),
}
pods, _ := clientset.CoreV1().Pods("").List(listOptions)
</code></pre>
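<p>Note that with newer client-go versions (roughly v0.18 and later) <code>List</code> also takes a <code>context.Context</code> as its first argument, e.g. <code>clientset.CoreV1().Pods("").List(context.TODO(), listOptions)</code>.</p>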
|
<p>In Kubernetes container repository I have my permission set to Private:
<a href="https://i.stack.imgur.com/K1iNY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K1iNY.png" alt="enter image description here" /></a></p>
<p>When I create a pod on my cluster I get the the pod status ending in <code>ImagePullBackOff</code> and when I describe the pod I see:</p>
<pre><code>Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
</code></pre>
<p>I am certainly logged in.</p>
<pre><code>docker login
Authenticating with existing credentials...
Login Succeeded
</code></pre>
<p>Now if I enable public access (top image) on my Container Repository things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy. I'm pretty sure this used to work a couple weeks ago unless I messed up something with my service account although I don't know how to find out which service account is being used for these permissions.</p>
| <p>If your GKE version is > 1.15, and the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA) it should work out of the box.</p>
<p>If you are running the registry in another project, or using a different service account, you should give the SA the right permissions (e.g., <code>roles/artifactregistry.reader</code>).</p>
<p>A step by step tutorial, with all the different cases, it is present in the official documentation: <a href="https://cloud.google.com/artifact-registry/docs/access-control#gcp" rel="nofollow noreferrer">https://cloud.google.com/artifact-registry/docs/access-control#gcp</a></p>
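<p>For example, if GKE runs as the default Compute Engine service account, granting it read access could look roughly like this (project ID and project number are placeholders):</p>
<pre><code>gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
</code></pre>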
|
<p>I've set up an ingress to route traffic to my http server, however I would like to leave some routes inaccessible from outside of the cluster.</p>
<p>Example routes:</p>
<pre><code>/status -> end point to determine service status
/users/names -> returns users
/users/ages -> returns ages
</code></pre>
<p>current ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: my-namespace
name: my-app-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: localhost
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: my-service
port:
number: 8080
</code></pre>
<p>this works currently but leaves all routes accessible. What I want to do is only have routes that fall under the <code>/users</code> path open, so that would be both <code>/users/names</code> and <code>/users/ages</code>. That would leave <code>/status</code> inaccessible from outside of the cluster. Is this achievable from changing the ingress configuration? Any help would be appreciated.</p>
| <p>Just specify the path that you want to expose via the ingress like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: my-namespace
name: my-app-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: localhost
http:
paths:
- pathType: Prefix
path: /users # <- add the path here
backend:
service:
name: my-service
port:
number: 8080
</code></pre>
|
<p>I tried to install MongoDB in an EKS cluster by following the given links
(using mongodb-kubernetes-operator).</p>
<p><a href="https://www.youtube.com/watch?v=VqeTT0NvRR4&t=1s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=VqeTT0NvRR4&t=1s</a></p>
<p><a href="https://github.com/mongodb/mongodb-kubernetes-operator" rel="nofollow noreferrer">https://github.com/mongodb/mongodb-kubernetes-operator</a></p>
<pre><code>kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml
kubectl get crd/mongodbcommunity.mongodbcommunity.mongodb.com
kubectl create ns mongo
kubectl apply -k config/rbac/ --namespace mongo
kubectl get role mongodb-kubernetes-operator --namespace mongo
kubectl get rolebinding mongodb-kubernetes-operator --namespace mongo
kubectl get serviceaccount mongodb-kubernetes-operator --namespace mongo
kubectl create -f config/manager/manager.yaml --namespace mongo
kubectl get pods --namespace mongo
</code></pre>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml --namespace mongo
kubectl get pods -n mongo
</code></pre>
<p>When checked</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -n mongo
</code></pre>
<p>The <code>example-mongodb-0</code> pod stays in Pending state for a very long time.</p>
<p>Upon describing the pod, I got the following error:</p>
<blockquote>
<p>"running PreBind plugin "VolumeBinding": binding volumes: timed out
waiting for the condition".</p>
</blockquote>
| <p>When I contacted the AWS support team, I got the following response:</p>
<blockquote>
<p>From your correspondence, I understand that you are facing issues
while creating the mongodb pods in your EKS cluster, and after
creating the pod, your pod is going to pending status.</p>
<p>Please let me know if I misunderstood your query. Thanks for sharing
the GitHub repository URL using the same. I put some effort into
replicating the same issue on my side, and thankfully I was able to
replicate the issue.</p>
<p>Further investigation into my pending pod problem I ran the following
describe command on my cluster,</p>
<p>"kubectl describe pod <pending_pod_name>"</p>
<p>After several minutes, I found the following line in the "event"
part of my output.</p>
<p>"running PreBind plugin "VolumeBinding": binding volumes: timed out
waiting for the condition".</p>
<p>On further investigation, I found that the mongodb pod module that you
are trying to deploy on your cluster is trying to create an EBS volume
as a persistent volume, which is why I got the aforementioned error.
We need the EBS CSI driver add-on installed in your cluster to create
an EBS volume using EKS, and the above error usually occurs if the EBS
CSI driver add-on is not present. Since this add-on is not installed
by default while creating the cluster you need to install it via EKS
console add-on tab.</p>
<p>Or another possibility is that, even though the add-on is present, it
won't have the required permission to create the EBS volume. So,
before we even install the EBS CSI driver add-on to the cluster, we
need to make sure that we have created the IAM role for attaching to
the add-on. The same is referred to over here[1].</p>
<p>In your case, you can check whether the EBS CSI driver is present by
running the following command:</p>
<p>"kubectl get pods -n kube-system"</p>
<p>And look for pods with names like "ebs-csi-controller-xxxxxxx." If
you find one, it means you've already installed the EBS CSI driver,
and the problem could be with permissions.</p>
<p>For that, you need to run the following command.</p>
<p>"kubectl describe pod ebs-csi-controller-xxxxxxx -c csi-provisioner
-n kube-system"</p>
<p>This will give an output of the configuration of the driver pod. In
that output, you need to check for an environment called
"AWS_ROLE_ARN:" If that wasn't present in your output, this implies
that you haven't provided the IAM OIDC provider role for the add-on.
So you need to create that role in the IAM console, then remove the
existing EBS CSI driver add-on from the EKS cluster console, and then
again add the EBS CSI driver add-on with that role as "Service
account role". More details for adding the EBS CSI driver add-on to
the cluster are referred to here[3].</p>
<p>If you already have the value for "AWS_ROLE_ARN" then you need to
check for the configuration of the role by using this
documentation[2].</p>
<p>So, keeping the above things in mind, I have created the IAM OIDC
provider role for the add-on. For that, you need to follow all the
steps regarding how to create an IAM role for the add-on as referred
to here[2].</p>
<p>After creating the IAM OIDC provider role, I have installed the add-on
via console by following the steps in this documentation[3] and for
the service account role, I have selected the OIDC provider role that
was created in the above step.</p>
<p>After installing the add-on, I tried to delete the mogodb database pod
by running the following command.</p>
<p>"kubectl delete -f
config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml"</p>
<p>Then run the following apply command to redeploy the pods.</p>
<p>"kubectl apply -f
config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml"</p>
<p>After I checked the pods, I could see that the mongodb database pod
had come to running status.</p>
<p>The above is the most common issue that might happen, if none of the
above is your problem then please share a convenient time along with
the timezone you're working in as well as contact number with country
code so that we can connect over a call and have a screen sharing
troubleshooting session.</p>
</blockquote>
<h1>reference links:</h1>
<p>[1] Amazon EBS CSI driver add-on : <a href="https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html</a></p>
<p>[2] How to create IAM OIDC provider for EBS CSI driver add-on : <a href="https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html</a></p>
<p>[3] Managing the EBS CSI driver add-on : <a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html</a></p>
<h1>Working commands/steps</h1>
<p>(Steps mentioned by support team)</p>
<ol>
<li>Creation of EKS cluster</li>
<li>Go to the newly created EKS cluster in AWS console. In the <strong>Overview</strong> tab, copy the value of <strong>OpenID Connect provider URL</strong> and save the value in some place for future reference.</li>
<li>Go to <em>IAM -> Identity providers -> Add Provider</em>. Select <em>OpenID Connect</em> as the <em>provider type.</em></li>
<li>Paste the copied url from step 2, in the <em>Provider URL</em> textbox and click <em>‘Get thumbprint’</em>. Set <em>Audience - sts.amazonaws.com</em> in the corresponding text box.</li>
<li>Click the <em>‘Add Provider’</em> button.</li>
<li>Create the required iam role. <em>IAM -> Roles -> Create Role</em>. In the <em>‘Select trusted entity’</em> section, choose <em>‘Web Identity’</em> . In <em>Identity provider</em> drop down, select the OIDC option that is created in step 5. Choose <strong>Audience - sts.amazonaws.com</strong> in the drop down. Click <strong>‘Next’</strong></li>
<li>Search for the <em>AmazonEBSCSIDriverPolicy</em> policy in the next window, click ‘Next’, give a name, description and tags for the role, and click ‘Create role’.</li>
<li>In the <em>Roles</em> section, search for the newly created role in step 7 and go inside that role. <em>Trust relationships -> Edit trust policy.</em></li>
</ol>
<blockquote>
<p>"oidc.eks.eu-west-1.amazonaws.com/id/385AA11111111116116:sub":
"system:serviceaccount:kube-system:ebs-csi-controller-sa"</p>
</blockquote>
<ol start="9">
<li><p>Update the above text with the current OIDC id and add it as a new key-value pair under <em>Statement[0] -> Condition -> StringEquals</em>. Refer to the full JSON structure of this trust relationship at the end of this answer.</p>
</li>
<li><p>After updating the text, click ‘Update Policy’.
Go to <em>EKS -> Clusters -> Newly created cluster in step 1</em>. Click the <em>Add-ons</em> tab, then <em>Add new.</em></p>
</li>
<li><p>In the pop up choose Name as <strong>Amazon EBS CSI Driver</strong>. <em>Version</em> as latest. Choose Role as the <em>role created in step 7</em>. If the above role is not listed in drop down, reload the section using the reload button and click <strong>Add</strong>.</p>
</li>
<li><p>After some time, the new <strong>Add on</strong> will become active. Then run this <code>kubectl get pods -n kube-system</code> command and we should see csi pods as shown.</p>
</li>
</ol>
<pre><code> ebs-csi-controller-68d49f84c8-sl7w6 6/6 Running 0 109s
ebs-csi-controller-68d49f84c8-w2k6r 6/6 Running 0 2m19s
ebs-csi-node-ldmsm 3/3 Running 0 2m20s
</code></pre>
<p>Then run the commands given in the question.</p>
<p>The following JSON is the <strong>Trusted relationships</strong> policy for the role:</p>
<pre class="lang-json prettyprint-override"><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::112345678900:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa",
"oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx:aud": "sts.amazonaws.com"
}
}
}
]
}
</code></pre>
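<p>For reference, the console steps above can also be scripted with <code>eksctl</code>. A minimal sketch (the cluster name, region and account id below are placeholders you must replace; double-check the flags against the eksctl/AWS docs for your versions):</p>
<pre><code># associate the IAM OIDC provider with the cluster
eksctl utils associate-iam-oidc-provider --cluster my-cluster --region eu-west-1 --approve

# create the IAM role for the EBS CSI controller service account (role only)
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --region eu-west-1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole

# install the EBS CSI driver add-on using that role
eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --region eu-west-1 \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole --force
</code></pre>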
|
<p>In Kubernetes container repository I have my permission set to Private:
<a href="https://i.stack.imgur.com/K1iNY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K1iNY.png" alt="enter image description here" /></a></p>
<p>When I create a pod on my cluster I get the the pod status ending in <code>ImagePullBackOff</code> and when I describe the pod I see:</p>
<pre><code>Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
</code></pre>
<p>I am certainly logged in.</p>
<pre><code>docker login
Authenticating with existing credentials...
Login Succeeded
</code></pre>
<p>Now if I enable public access (top image) on my Container Repository things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy. I'm pretty sure this used to work a couple weeks ago unless I messed up something with my service account although I don't know how to find out which service account is being used for these permissions.</p>
| <p>GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure that the OAuth scope for your cluster's node pools includes <code>https://www.googleapis.com/auth/devstorage.read_only</code>.</p>
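<p>To verify this, you can inspect the node pool's scopes. A minimal sketch, assuming the cluster, node pool and zone names are placeholders:</p>
<pre><code>gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format="value(config.oauthScopes)"
</code></pre>
<p>If the <code>devstorage.read_only</code> scope (or a broader one such as <code>cloud-platform</code>) is missing, note that scopes cannot be changed on an existing node pool; you would need to create a new node pool with the proper scopes and migrate the workloads.</p>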
|
<p>It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snippets but had no success.</p>
<p>This is my current ingress configuration:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: nginx-ingress
namespace: default
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
uid: adcd75ab-b44b-420c-874e-abcfd1059592
resourceVersion: '259992616'
generation: 7
creationTimestamp: '2020-06-10T12:15:18Z'
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
ingress.kubernetes.io/enable-cors: 'true'
ingress.kubernetes.io/force-ssl-redirect: 'true'
kubernetes.io/ingress.class: nginx
kubernetes.io/tls-acme: 'true'
nginx.ingress.kubernetes.io/cors-allow-credentials: 'true'
nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type'
nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS'
nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com'
nginx.ingress.kubernetes.io/enable-cors: 'true'
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: 'true'
</code></pre>
<p>I also would like to extend the cors-allow-origin like:</p>
<pre><code>nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com'
</code></pre>
<p>Is it possible to allow multiple domains in other ways?</p>
| <p>Ingress-nginx doesn’t support CORS with multiple origins.</p>
<p>However, you can try to use annotation: <strong>nginx.ingress.kubernetes.io/configuration-snippet</strong></p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
if ($http_origin ~* "^https?://((?:exactmatch\.com)|(?:regexmatch\.com))$") {
add_header "Access-Control-Allow-Origin" "$http_origin" always;
add_header "Access-Control-Allow-Methods" "GET, PUT, POST, OPTIONS" always;
add_header "Access-Control-Allow-Headers" "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization" always;
add_header "Access-Control-Expose-Headers" "Content-Length,Content-Range" always;
}
</code></pre>
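<p>To verify the snippet, you can send a request with an <code>Origin</code> header and inspect the response headers (host and path below are placeholders):</p>
<pre><code>curl -s -o /dev/null -D - -H "Origin: https://exactmatch.com" https://your-host/your-path
# the response should now contain: Access-Control-Allow-Origin: https://exactmatch.com
</code></pre>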
<p>More information can be found in this <a href="https://github.com/kubernetes/ingress-nginx/issues/5496" rel="nofollow noreferrer">ingress-nginx issue</a>.</p>
|
<p>My deployment had a readinessProbe configured like:</p>
<pre><code> readinessProbe:
port: 8080
path: /ready
initialDelaySeconds: 30
failureThreshold: 60
periodSeconds: 10
timeoutSeconds: 15
</code></pre>
<p>I want to remove the probe for some reason. However, after removing it from my YML file my deployment is not successful because it looks like the pod is never considered ready. Checking in GCP I discover that the resulting YML file has a readiness probe that points to some "default values" that I haven't set anywhere:</p>
<pre><code> readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 80
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
</code></pre>
<p>Is there a way to actually remove a ReadinessProbe for good?</p>
| <p>You need to set readinessProbe to null value like that:</p>
<pre><code>readinessProbe: null
</code></pre>
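<p>Alternatively, a JSON patch can remove the probe from the live object without editing the manifest. A minimal sketch, assuming the deployment name is a placeholder and the container is the first one in the pod spec:</p>
<pre><code>kubectl patch deployment my-deployment --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}]'
</code></pre>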
|
<p>I have Ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
labels:
app.kubernetes.io/managed-by: Helm
annotations:
kubernetes.io/ingress.class: nginx
meta.helm.sh/release-name: ingress
nginx.ingress.kubernetes.io/configuration-snippet: |
location ~ favicon.ico {
log_not_found off;
}
nginx.ingress.kubernetes.io/cors-allow-headers: content-type, x-att-timezone
nginx.ingress.kubernetes.io/cors-allow-methods: GET, POST, PUT, DELETE, OPTIONS
nginx.ingress.kubernetes.io/cors-allow-origin: '*'
nginx.ingress.kubernetes.io/cors-expose-headers: 'x-att-userrole, x-att-userdetails, x-att-userid, xatt-location '
nginx.ingress.kubernetes.io/enable-cors: 'true'
nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
nginx.ingress.kubernetes.io/proxy-body-size: 10000m
nginx.ingress.kubernetes.io/proxy-connect-timeout: '6000000'
nginx.ingress.kubernetes.io/proxy-read-timeout: '6000000'
nginx.ingress.kubernetes.io/proxy-send-timeout: '6000000'
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
tls:
- hosts:
- st-my-doamin.com
secretName: ingress
rules:
- host: st-my-doamin.com
http:
paths:
- path: /rootpath/.*
pathType: Prefix
backend:
service:
name: someService
port:
number: 80
</code></pre>
<p>And i want to create redirection like this :</p>
<p>if i go to st-my-doamin.com/rootpath i will be <strong>redirect</strong> to st-my-doamin.com/rootpath/login</p>
<p>i tried to create this redirection and got error :</p>
<pre><code>This page isn’t working st-my-doamin.com redirected you too many times.
Try clearing your cookies.
ERR_TOO_MANY_REDIRECTS
</code></pre>
| <p>Since you are getting the error "ERR_TOO_MANY_REDIRECTS", follow this <a href="https://kinsta.com/blog/err_too_many_redirects/" rel="nofollow noreferrer">link</a>, which helps in clearing that error. Follow this <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Link</a> for redirecting the path.</p>
<p>Add the below annotation in the YAML:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>And add Path as below:</p>
<pre><code> - path: /rootpath(/|$)(.*)
</code></pre>
|
<p>I have currently deployed a small java application to my local Kubernetes cluster. I'm currently trying to test my application by port-forwarding the pod and then using postman to test my Controllers.</p>
<p>However, when I am testing I am getting a read timeout exception. No matter how long I set my timeout to be it will wait the entirety of the time and throw the exception.</p>
<p>This is strange because this only happens when it is running from my Kubernetes cluster and not when running the application locally. I can see this exception is thrown from a HttpClient I am using to retrieve some data from an External third-party API:</p>
<pre><code> @Client(value = "${rawg.api.url}")
public interface RawgClient {
@Get(value = "/{gameSlug}/${rawg.api.key}", produces = APPLICATION_JSON)
HttpResponse<RawgClientGameInfoResponse> retrieveGameInfo(@PathVariable("gameSlug") String gameSlug);
@Get(value = "${rawg.api.key}&search={searchTerm}",produces = APPLICATION_JSON)
HttpResponse<RawgClientSearchResponse> retrieveGameSearchByName(@PathVariable("searchTerm") String searchTerm);
}
</code></pre>
<p>However, When I check the logs after the exception is thrown I can see that the information was retrieved from the client:</p>
<pre><code> k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:15.220 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.PropertySourcePropertyResolver - Resolved value [?key=****] for property: rawg.api.key
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.242 [default-nioEventLoopGroup-1-2] ERROR i.m.r.intercept.RecoveryInterceptor - Type [com.agl.client.RawgClient$Intercepted] executed with error: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter io.micronaut.http.client.exceptions.ReadTimeoutException: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.exceptions.ReadTimeoutException.<clinit>(ReadTimeoutException.java:26)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.netty.DefaultHttpClient.lambda$exchangeImpl$45(DefaultHttpClient.java:1380)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onError(ReactorSubscriber.java:64)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:295)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:280)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:419)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.propagateDelay(MonoDelay.java:271)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:286)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$init$0(ReactorInstrumentation.java:62)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.lang.Thread.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.257 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: RawgClient
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.259 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class com.agl.client.RawgClient$Intercepted null Definition: com.agl.client.RawgClient$Intercepted
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.259 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: com.agl.client.RawgClient$Intercepted]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.260 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class com.agl.client.RawgClient$Intercepted null Definition: com.agl.client.RawgClient$Intercepted
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.260 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [RawgClient] for qualifier: @Fallback
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.263 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - No qualifying beans of type [RawgClient] found for qualifier: @Fallback
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.276 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finding candidate beans for type: ExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ContentLengthExceededHandler null Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.JsonExceptionHandler null Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.HttpStatusHandler null Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ConversionErrorHandler null Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.DuplicateRouteHandler null Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.validation.exceptions.ConstraintExceptionHandler null Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.290 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.URISyntaxHandler null Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.291 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Finalized bean definitions candidates: [Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler, Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler, Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler, Definition: io.micronaut.http.server.exceptions.HttpStatusHandler, Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler, Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler, Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler, Definition: io.micronaut.http.server.exceptions.URISyntaxHandler, Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ContentLengthExceededHandler null Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.292 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.JsonExceptionHandler null Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.HttpStatusHandler null Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.ConversionErrorHandler null Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.DuplicateRouteHandler null Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.validation.exceptions.ConstraintExceptionHandler null Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.URISyntaxHandler null Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.293 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - class io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler null Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.294 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - Qualifying bean [ExceptionHandler] for qualifier: <ReadTimeoutException,Object>
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.http.exceptions.ContentLengthExceededException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.ContentLengthExceededHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class com.fasterxml.jackson.core.JsonProcessingException,class java.lang.Object] of candidate Definition: io.micronaut.http.server.exceptions.JsonExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.306 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.core.bind.exceptions.UnsatisfiedArgumentException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.UnsatisfiedArgumentHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.http.exceptions.HttpStatusException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.HttpStatusHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.core.convert.exceptions.ConversionErrorException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.ConversionErrorHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.web.router.exceptions.DuplicateRouteException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.DuplicateRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class javax.validation.ConstraintViolationException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.validation.exceptions.ConstraintExceptionHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.307 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class java.net.URISyntaxException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.URISyntaxHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.308 [default-nioEventLoopGroup-1-2] TRACE i.m.i.q.ClosestTypeArgumentQualifier - Bean type interface io.micronaut.http.server.exceptions.ExceptionHandler is not compatible with candidate generic types [class io.micronaut.web.router.exceptions.UnsatisfiedRouteException,interface io.micronaut.http.HttpResponse] of candidate Definition: io.micronaut.http.server.exceptions.UnsatisfiedRouteHandler
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.309 [default-nioEventLoopGroup-1-2] DEBUG i.m.context.DefaultBeanContext - No qualifying beans of type [ExceptionHandler] found for qualifier: <ReadTimeoutException,Object>
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.311 [default-nioEventLoopGroup-1-2] ERROR i.m.http.server.RouteExecutor - Unexpected error occurred: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter io.micronaut.http.client.exceptions.ReadTimeoutException: Read Timeout
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.exceptions.ReadTimeoutException.<clinit>(ReadTimeoutException.java:26)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.http.client.netty.DefaultHttpClient.lambda$exchangeImpl$45(DefaultHttpClient.java:1380)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onError(ReactorSubscriber.java:64)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.handleTimeout(FluxTimeout.java:295)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.doTimeout(FluxTimeout.java:280)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxTimeout$TimeoutTimeoutSubscriber.onNext(FluxTimeout.java:419)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorSubscriber.onNext(ReactorSubscriber.java:57)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.propagateDelay(MonoDelay.java:271)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.publisher.MonoDelay$MonoDelayRunnable.run(MonoDelay.java:286)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at io.micronaut.reactive.reactor.instrument.ReactorInstrumentation.lambda$init$0(ReactorInstrumentation.java:62)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:68)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at reactor.core.scheduler.SchedulerTask.call(SchedulerTask.java:28)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter at java.base/java.lang.Thread.run(Unknown Source)
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.323 [default-nioEventLoopGroup-1-2] TRACE i.m.h.s.netty.RoutingInBoundHandler - Encoding emitted response object [Internal Server Error] using codec: io.micronaut.json.codec.JsonMediaTypeCodec@6399551e
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.325 [default-nioEventLoopGroup-1-2] TRACE i.m.context.DefaultBeanContext - Looking up existing bean for key: T
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.368 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.http.hateoas.JsonError,
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.372 [default-nioEventLoopGroup-1-2] DEBUG i.m.j.m.BeanIntrospectionModule - Updating 5 properties with BeanIntrospection data for type: class io.micronaut.http.hateoas.JsonError
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.394 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.beans.DefaultBeanIntrospector - Found BeanIntrospection for type: class io.micronaut.http.hateoas.DefaultLink,
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.397 [default-nioEventLoopGroup-1-2] DEBUG i.m.j.m.BeanIntrospectionModule - Updating 8 properties with BeanIntrospection data for type: class io.micronaut.http.hateoas.DefaultLink
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.414 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.s.netty.RoutingInBoundHandler - Response 500 - PUT /api/games/search
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.418 [default-nioEventLoopGroup-1-2] DEBUG i.m.c.e.ApplicationEventPublisher - Publishing event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.418 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.ApplicationEventPublisher - Established event listeners [io.micronaut.runtime.http.scope.RequestCustomScope@4f5af8bf] for event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.419 [default-nioEventLoopGroup-1-2] TRACE i.m.c.e.ApplicationEventPublisher - Invoking event listener [io.micronaut.runtime.http.scope.RequestCustomScope@4f5af8bf] for event: io.micronaut.http.context.event.HttpRequestTerminatedEvent[source=PUT /api/games/search]
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.451 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.client.netty.DefaultHttpClient - Sending HTTP GET to https://rawg.io/api/games/?key=b0f66f77c214441d9864062ee5580ca4&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - Accept: application/json
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - host: rawg.io
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.454 [default-nioEventLoopGroup-1-2] TRACE i.m.h.client.netty.DefaultHttpClient - connection: close
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.747 [default-nioEventLoopGroup-1-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0xfdc82b86, L:/10.1.0.167:44770 - R:rawg.io/172.67.75.230:443] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.826 [default-nioEventLoopGroup-1-2] DEBUG i.m.h.client.netty.DefaultHttpClient - Received response 301 from https://rawg.io/api/games/?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] DEBUG i.m.h.client.netty.DefaultHttpClient - Sending HTTP GET to https://rawg.io/api/games?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - Accept: application/json
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - host: rawg.io
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.900 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - connection: close
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:46.955 [default-nioEventLoopGroup-1-3] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x7b660180, L:/10.1.0.167:44776 - R:rawg.io/172.67.75.230:443] HANDSHAKEN: protocol:TLSv1.3 cipher suite:TLS_AES_128_GCM_SHA256
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:47.302 [default-nioEventLoopGroup-1-3] DEBUG i.m.h.client.netty.DefaultHttpClient - Received response 200 from https://rawg.io/api/games?key=****&search=Horizon
k8-agl-rawg-adapter-6db4f4ccd8-9srrf agl-rawg-adapter 21:42:47.302 [default-nioEventLoopGroup-1-3] TRACE i.m.h.client.netty.DefaultHttpClient - HTTP Client Response Received (200 OK) for Request: GET https://rawg.io/api/games?key=****&search=Horizon
</code></pre>
<p>Has anyone got any thoughts what could be causing this read timeout when the client is clearly returning a response?</p>
| <p>Some k8s deployments allow k8s pods to have only 1 CPU. In this circumstance it is not possible for the client and server to share an event loop (which is the default since it is more efficient to share an event loop if possible). The result is you get read timeouts. So you should configure a separate event loop for the client and the server to avoid this problem.</p>
<p>See <a href="https://docs.micronaut.io/latest/guide/#clientConfiguration" rel="nofollow noreferrer">https://docs.micronaut.io/latest/guide/#clientConfiguration</a> and the section "Configuring Event Loop Groups"</p>
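<p>A minimal sketch of such a configuration in <code>application.yml</code>, based on the "Configuring Event Loop Groups" section linked above (the group name and thread count are arbitrary choices):</p>
<pre><code>micronaut:
  netty:
    event-loops:
      other:
        num-threads: 10
  http:
    client:
      event-loop-group: other
</code></pre>
<p>With this in place the HTTP client runs on its own event loop instead of sharing the server's, so a busy or single-threaded server loop no longer starves the client.</p>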
|
<p>I want to create a cron job in the cluster which should have access to all of the namespaces; I don't want to configure that job in each and every namespace as I have multiple namespaces. Is this possible?
Edit: I want to run the same cronjob in all namespaces.</p>
| <p>You can use Helm to achieve this, with either a <strong>static</strong> list of namespaces or <strong>dynamically</strong> for all namespaces.</p>
<pre><code>{{- range $namespaces := .Values.namespaces }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
namespace: {{ $namespaces }}
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
---
{{- end }}
</code></pre>
<p>and values file</p>
<pre><code>namespaces:
- namesapce-1
- namesapce-2
</code></pre>
<p>Dynamic namespaces:</p>
<pre><code>helm install my-cronjob helm-chart --set "namespaces={$(kubectl get ns | awk 'BEGIN{ORS=","} { if (NR>1) print $1}')}"
</code></pre>
<p>A complete working example <a href="https://github.com/adiii717/multi-namespace-deployment" rel="nofollow noreferrer">multi-namespace-deployment</a></p>
|
<p>I'm trying to set up a Cluster Autoscaler for my Kubernetes cluster, and when I'm looking at the autoscaler logs I'm seeing these error messages:</p>
<pre><code>1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0922 10:14:33.794709 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
I0922 10:14:35.491641 1 reflector.go:255] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134
E0922 10:14:36.196200 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:cluster-autoscaler" cannot list resource "namespaces" in API group "" at the cluster scope
</code></pre>
<p>Anyone might have a clue what might be the issue?</p>
<p>Thanks</p>
| <p>Just make sure <strong>cluster-autoscaler</strong> has permission to use the resource <strong>csidrivers</strong>. You can edit the RBAC and add access for <code>storage.k8s.io</code>.</p>
<p>Edit the cluster role to include this:</p>
<pre><code>- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- csinodes
- csidrivers
- csistoragecapacities
verbs:
- watch
- list
- get
</code></pre>
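<p>The log also shows the same failure for <code>namespaces</code> in the core API group (<code>""</code>), so a matching rule is needed for it as well:</p>
<pre><code>- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - watch
  - list
  - get
</code></pre>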
|
<p>Looks like there is no support to delete HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0.</p>
<p>However, it is straightforward to create a HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0.</p>
<p>E.g.</p>
<pre><code> HorizontalPodAutoscalerStatus hpaStatus = k8sClient.resource(createHPA())
.inNamespace(namespace)
.createOrReplace().getStatus();
</code></pre>
<pre><code>public HorizontalPodAutoscaler createHPA(){
return new HorizontalPodAutoscalerBuilder()
.withNewMetadata()
.withName(applicationName)
.addToLabels("name", applicationName)
.endMetadata()
.withNewSpec()
.withNewScaleTargetRef()
.withApiVersion(hpaApiVersion)
.withKind("Deployment")
.withName(applicationName)
.endScaleTargetRef()
.withMinReplicas(minReplica)
.withMaxReplicas(maxReplica)
.addNewMetric()
.withType("Resource")
.withNewResource()
.withName("cpu")
.withNewTarget()
.withType("Utilization")
.withAverageUtilization(cpuAverageUtilization)
.endTarget()
.endResource()
.endMetric()
.addNewMetric()
.withType("Resource")
.withNewResource()
.withName("memory")
.withNewTarget()
.withType("AverageValue")
.withAverageValue(new Quantity(memoryAverageValue))
.endTarget()
.endResource()
.endMetric()
.withNewBehavior()
.withNewScaleDown()
.addNewPolicy()
.withType("Pods")
.withValue(podScaleDownValue)
.withPeriodSeconds(podScaleDownPeriod)
.endPolicy()
.withStabilizationWindowSeconds(podScaledStabaliztionWindow)
.endScaleDown()
.endBehavior()
.endSpec().build();
}
</code></pre>
<p>Any solution to delete a HorizontalPodAutoscaler using fabric8's K8S Java client ver:6.0.0 will be appreciated.</p>
| <p>First, you need to identify which API group <code>(v1, v2beta1, v2beta2)</code> was used when the HPA was created; the autoscaling function needs to be called with that same API group to get the HPA instance, and then you can go ahead and perform any action on it.</p>
<p>In my case the deployment was created with the v2beta2 API group, and the code snippet below deleted the HorizontalPodAutoscaler object from the provided namespace.</p>
<pre><code>k8sClient.autoscaling().v2beta2().horizontalPodAutoscalers().inNamespace("test").withName("myhpa").delete()
</code></pre>
|
<p>I am trying to set up Kubernetes on my instance, an Ubuntu 22.04 virtual machine. During the installation process I am trying to add a gpg key for intra-cluster communication. After running the command:</p>
<p><code>sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add</code></p>
<p>I got a message that <code>apt-key add</code> was <em>deprecated</em> and I should read the <code>apt-key(8)</code> man page. Can anyone tell me what exactly I should type in my terminal instead of this?</p>
| <p>I share this link.</p>
<p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management" rel="nofollow noreferrer">Install using native package management</a></p>
<ol>
<li>Update the apt package index and install packages needed to use the Kubernetes apt repository:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt-get install -y ca-certificates curl
</code></pre>
<ol start="2">
<li>Download the Google Cloud public signing key:</li>
</ol>
<pre><code>sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
</code></pre>
<ol start="3">
<li>Add the Kubernetes apt repository:</li>
</ol>
<pre><code>echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
</code></pre>
<ol start="4">
<li>Update apt package index with the new repository and install kubectl:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt-get install -y kubectl
</code></pre>
|
<p>Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "prometheus-kube-state-metrics" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "monitoring": current value is "monitoring-kogito-poc"</p>
<p>How do I resolve this? I have deleted the namespace, created a new namespace and tried to install the Helm charts for Grafana and Prometheus again, but it still doesn't let me install. I have deleted the clusterrolebinding shown in the screenshot below:</p>
<pre><code>kubectl delete clusterrolebinding prometheus-kube-state-metrics
</code></pre>
<p>but it still shows up and I am not able to install Prometheus (screenshot: <a href="https://i.stack.imgur.com/QkNaS.png" rel="nofollow noreferrer">https://i.stack.imgur.com/QkNaS.png</a>).</p>
| <p>It was not enough just to delete the role binding. I had to also delete the role itself.</p>
<pre><code>kubectl delete clusterrole prometheus-kube-state-metrics
</code></pre>
<p>Furthermore I deleted all other prometheus-relevant role bindings and roles, too:</p>
<ul>
<li>prometheus-alertmanager</li>
<li>prometheus-pushgateway</li>
<li>prometheus-server</li>
<li>system:prometheus</li>
</ul>
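<p>A quick way to find and remove the leftovers (a sketch; the exact resource names in your cluster may differ):</p>
<pre><code>kubectl get clusterrole,clusterrolebinding | grep prometheus
kubectl delete clusterrole prometheus-alertmanager prometheus-pushgateway prometheus-server
kubectl delete clusterrolebinding prometheus-alertmanager prometheus-pushgateway prometheus-server
</code></pre>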
<p>Finally I re-ran the installation and it worked.</p>
|
<p>On this document.</p>
<p><a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/kafka</a></p>
<p>it mentioned following:</p>
<pre><code>Note: the deployed ingress must contain the following block:
tcp:
9094: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-0-external:9094"
9095: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-1-external:9094"
9096: "{{ .Release.Namespace }}/{{ include "kafka.fullname" . }}-2-external:9094"
</code></pre>
<p>What does this mean? What is this configuration? Is it Helm chart configuration or Kubernetes configuration?</p>
| <p>I resolved this by referring to this guide.</p>
<p><a href="https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/</a></p>
<p>I was missing this step</p>
<pre><code>kubectl patch deployment ingress-nginx-controller --patch "$(cat ingress-nginx-controller-patch.yaml)" -n ingress-nginx
</code></pre>
<p>ingress-nginx-controller-patch.yaml</p>
<pre><code>spec:
template:
spec:
containers:
- name: controller
ports:
- containerPort: 6379
hostPort: 6379
</code></pre>
|
<p>I am using the bitnami/rabbitmq image in a Kubernetes pod. It seems the image doesn't have a root user. May I know how to add the root user in Kubernetes?</p>
<p><a href="https://i.stack.imgur.com/ne7Q4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ne7Q4.png" alt="enter image description here" /></a></p>
| <p>From <code>3.7.7-r19</code> the RabbitMQ container has been migrated to a <strong>non-root user approach</strong>. Previously the container ran as the root user and the RabbitMQ daemon was started as the rabbitmq user. From now on, both the container and the RabbitMQ daemon run as user 1001. As a consequence, the data directory must be writable by that user. You can revert this behavior by changing USER 1001 to USER root in the Dockerfile.</p>
<p>So either you need to build <strong>custom docker image</strong> or use <strong>old docker image</strong>.</p>
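<p>A minimal sketch of such a custom image (the tag below is a placeholder; pin it to the version you actually need):</p>
<pre><code>FROM bitnami/rabbitmq:3.11
USER root
</code></pre>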
<p><strong>Reference:</strong></p>
<p><a href="https://hub.docker.com/r/bitnami/rabbitmq" rel="nofollow noreferrer">https://hub.docker.com/r/bitnami/rabbitmq</a></p>
<p><a href="https://github.com/bitnami/containers/blob/main/bitnami/rabbitmq/README.md" rel="nofollow noreferrer">https://github.com/bitnami/containers/blob/main/bitnami/rabbitmq/README.md</a></p>
|
<p>I have created a secret.yaml file as follows:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
truststore.jks: {{ (.Files.Glob "../trust.jks").AsSecrets | b64enc }}
</code></pre>
<p>I am calling this as part of template .yaml file in HELM.</p>
<p>.yaml</p>
<pre><code>apiVersion: v1
kind: DeploymentConfig
spec:
...
template:
spec:
...
container:
- name: "my-container"
...
volumeMounts:
- name: secrets
mountPath: /mnt/secrets
readOnly: true
volumes:
- name: secrets
secret:
secretName: "my-secret"
</code></pre>
<p>When I run helm install command the pod gets created successfully, and the volume is also mounted, but if I check the <code>truststore.jks</code> in /mnt/secrets using cat command below is the output:</p>
<pre><code>cat /mnt/secrets/truststore.jks
{}
</code></pre>
<p>I ran the dry run command to check the generated .yaml file, the secret is populted as below:</p>
<pre><code># Source: ag-saas/templates/tsSecret.yaml
apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
truststore.jks: e30=
</code></pre>
<p>How do I get the file into my secret?</p>
| <p>There's a couple of things going on here:</p>
<ul>
<li><code>.Files.Glob</code> is intended to retrieve multiple files, <em>e.g.</em> <code>.Files.Glob "credentials/*.jks"</code>. For a single file <code>.File.Get</code> will retrieve its contents directly.</li>
<li>You can only access files inside the chart directory; referencing <code>.Files.Get "../trust.jks"</code> won't work.</li>
<li><code>.Files.Glob.AsSecret</code> renders a list of files to the entire contents of the <code>data:</code> block; you just directly need the file content.</li>
</ul>
<p>So your Secret should look like</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
truststore.jks: {{ .Files.Get "trust.jks" | b64enc }}
</code></pre>
<p>where in the last line I've used <code>.Files.Get</code>, I've not tried to refer to a <code>"../..."</code> path outside the chart, and I don't render it to <code>...AsSecret</code>.</p>
<p>You also will need to move or copy (not symlink) the keyset file into the chart directory for this to work.</p>
<p>(In the current form, <code>.Files.Glob</code> won't match anything outside the chart directory, so you get an empty list of files. Then rendering that to <code>.AsSecrets</code> gets you an empty JSON object. You're using that string <code>{}</code> as the secret value, which gets correctly base64-encoded, but that's why <code>{}</code> comes out at the end.)</p>
|
<p>Why does the strimzi kafka operator have supported kafka versions; why do I care about this, if the version of kafka is being managed by the operator?</p>
<p>Is this only mentioned for client support?</p>
| <p>The Apache Kafka versions supported by the different Strimzi versions are listed on the <a href="https://strimzi.io/downloads/" rel="nofollow noreferrer">Strimzi website</a>. Supported in this case means <em>the versions for which we ship container images and which were tested</em>. There are several reasons why we don't support more versions:</p>
<ul>
<li>While you might not <code>care about this, if the version of kafka is being managed by the operator</code>, the operator still cares because it needs to understand what it operates because it encodes the operational knowledge.</li>
<li>As any other software, also Apache Kafka evolves, APIs (for example around the Admin APIs) and configurations (e.g. new options are added in different versions and the operator needs to understand them to validate them or update them) are changing etc. So supporting old versions is not always easy without code complexity.</li>
<li>We have limited resources to build and test the software. Both in terms of contributors but also as CI resources to run the build and test pipelines.</li>
</ul>
<p>The current Strimzi commitment to what Kafka versions does it support is listed <a href="https://github.com/strimzi/strimzi-kafka-operator/blob/main/KAFKA_VERSION_SUPPORT.md" rel="nofollow noreferrer">here</a>. If you are interested, you can always join the project and help to make things better. Sicne Strimzi is open source, you can also always try to add another Kafka versions yourself and build and test it.</p>
<p>The Kafka consumers and producers have normally very good backwards / forwards compatibility. So you do not necessarily need to always use the same version of the clients as the brokers.</p>
|
<p><strong>CONTEXT:</strong> I am trying to setup fluent bit for logging activities in pods in a number of node groups included in a cluster. And so it requires that each node group have an IAM role assigned to it with all the required policies so, fluent bit's daemonset could record and save logs into log groups in cloud watch. <a href="https://github.com/gretelai/FluentBitLogging/blob/master/terraform/applications/main.tf" rel="nofollow noreferrer">Here's</a> the repo of the solution I am following.</p>
<p><strong>WHAT HAVE I TRIED:</strong></p>
<ol>
<li>create individual node group roles and attach policies by passing inputs into relevant variables of the modules. Like so:</li>
</ol>
<pre><code>module "eks" {
...
eks_managed_node_groups = {
one = {
create_iam_role = true
iam_role_name = "fluent-bit-logger"
iam_role_use_name_prefix = true
iam_role_description = "Fluent-bit-logging for node group 1"
iam_role_tags = {
Name = "fb-ng-2"
      }
    }
    two = {
      (same config, with obvious naming changes)
    }
  }
...
}
</code></pre>
<ol start="2">
<li><p>The above step worked but I am trying to setup fluent bit through terraform's helm resource and based on the solution I am following I am required to input only one IAM node role or that's what I make of it.</p>
</li>
<li><p>Use the <code>eks_managed_node_group_defaults</code> attribute to create and
assign <em>hopefully the same</em> IAM role to both the node groups and this is how I did it.</p>
</li>
</ol>
<pre><code>module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.26.6"
cluster_name = local.cluster_name
cluster_version = "1.21"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
attach_cluster_primary_security_group = true
# Disabling and using externally provided security groups
create_security_group = false
# fluent-bit (IAM-policy-role)
create_iam_role = true
iam_role_name = "fluent-bit-logger"
iam_role_use_name_prefix = false
iam_role_description = "Fluent-bit-logging - default for node groups"
iam_role_tags = {
Name = "fb-ng-default"
}
</code></pre>
<p><strong>WHAT I EXPECT TO GET</strong>
A message of successful EKS cluster module execution (including the expected common node group role for both node groups within).</p>
<p><strong>THE ERROR I GET</strong></p>
<pre><code>Error: failed creating IAM Role (fluent-bit-logger): EntityAlreadyExists: Role with name fluent-bit-logger already exists.
│ status code: 409, request id: fx11xxax-axex-4xxx-b749-09xx8x8xx17x
│
│ with module.eks-cluster.module.eks.module.eks_managed_node_group["two"].aws_iam_role.this[0],
│ on .terraform/modules/eks-cluster.eks/modules/eks-managed-node-group/main.tf line 431, in resource "aws_iam_role" "this":
│ 431: resource "aws_iam_role" "this" {
</code></pre>
| <p>The error message states that you are attempting to create the same role twice, which would cause the AWS API to error. This is consistent with your config given the argument value:</p>
<pre><code>create_iam_role = true
</code></pre>
<p>for two different EKS node groups. You would need to change the value to <code>false</code>, manage the role with e.g. <code>aws_iam_role.fluent_bit_logger</code>, and then (best practices) update accordingly:</p>
<pre><code>iam_role_name = aws_iam_role.fluent_bit_logger.name
</code></pre>
<p>Otherwise, you could manage the role within the EKS module declaration by using the config you shared in the first part of the question.</p>
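<p>For illustration, a minimal sketch of the externally managed role (the names and the attached policy are assumptions you should adapt to your fluent bit setup):</p>
<pre><code>resource "aws_iam_role" "fluent_bit_logger" {
  name = "fluent-bit-logger"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# permissions fluent bit needs to write to CloudWatch Logs
resource "aws_iam_role_policy_attachment" "fluent_bit_cloudwatch" {
  role       = aws_iam_role.fluent_bit_logger.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}
</code></pre>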
|
<p>My Kubernetes ingress has the following:</p>
<pre><code> ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: alb
...
...
# -- Defines which ingress controller will implement the resource
ingressClassName: ""
</code></pre>
<p>I'm a bit confused about the difference between "kubernetes.io/ingress.class" and "ingressClassName.".</p>
<p>I believe "kubernetes.io/ingress.class" needs to match the ingressClass as defined in the AWS ALB Ingress Controller.</p>
<p>However, then I'm confused about the role of "ingressClassName"? As that seems to be the same thing.</p>
<p>Any clarity would be appreciated.</p>
| <p>Actually, both refer to the same thing, but the <code>kubernetes.io/ingress.class</code> annotation is deprecated from Kubernetes v1.22+, and <code>ingressClassName</code> was introduced in <code>1.18</code>, so if you are using a higher version you can use <code>ingressClassName</code>.</p>
<blockquote>
<p>Before the <code>IngressClass</code> resource and <code>ingressClassName</code> field were added in Kubernetes 1.18, Ingress classes were specified with a <code>kubernetes.io/ingress.class</code> annotation on the Ingress. This annotation was never formally defined, but was widely supported by Ingress controllers.</p>
</blockquote>
<blockquote>
<p><strong>The newer <code>ingressClassName</code> field on Ingresses is a replacement for that annotation</strong>, but is not a direct equivalent. While the annotation was generally used to reference the name of the Ingress controller that should implement the Ingress, <strong>the field is a reference to an IngressClass resource that contains additional Ingress configuration</strong>,</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation" rel="nofollow noreferrer">ingress-deprecated-annotation</a></p>
<p>Your example is</p>
<pre><code> ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: alb
</code></pre>
<p>its equivalent value is <code>className: "nginx"</code></p>
<pre><code> ingress:
enabled: true
className: "alb" -> ingressClassName
</code></pre>
<p>if you check the <code>ingress</code> template, it will be like this</p>
<pre><code> ingressClassName: {{ .Values.ingress.className }}
</code></pre>
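<p>For completeness, a sketch of the <code>IngressClass</code> resource that <code>ingressClassName</code> points to; the controller string below is the one commonly used by the AWS Load Balancer Controller, so verify it against your controller's documentation:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.aws/alb
</code></pre>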
|
<p>I have 3 services in 3 different namespaces and I want my ingress rules to map to these backends on path-based routes.
Can someone please guide me on this?
I am using nginx ingress inside an Azure Kubernetes cluster.</p>
| <p>A basic example with an assumption that your <code>nginx ingress</code> is working correctly inside your <code>AKS</code> would be following:</p>
<p>List of <code>Pods</code> with their <code>Services</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Pod</th>
<th>Namespace</th>
<th>Service name</th>
</tr>
</thead>
<tbody>
<tr>
<td>nginx</td>
<td>alpha</td>
<td>alpha-nginx</td>
</tr>
<tr>
<td>nginx</td>
<td>beta</td>
<td>beta-nginx</td>
</tr>
<tr>
<td>nginx</td>
<td>omega</td>
<td>omega-nginx</td>
</tr>
</tbody>
</table>
</div><hr />
<p><code>Ingress</code> definition for this particular setup:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: alpha-ingress
namespace: alpha
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: "kubernetes.kruk.lan"
http:
paths:
- path: /alpha(/|$)(.*)
pathType: Prefix
backend:
service:
name: alpha-nginx
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: beta-ingress
namespace: beta
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: "kubernetes.kruk.lan"
http:
paths:
- path: /beta(/|$)(.*)
pathType: Prefix
backend:
service:
name: beta-nginx
port:
number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: omega-ingress
namespace: omega
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: "kubernetes.kruk.lan"
http:
paths:
- path: /omega(/|$)(.*)
pathType: Prefix
backend:
service:
name: omega-nginx
port:
number: 80
</code></pre>
<p>In this example the <code>Ingress</code> resources will rewrite requests for the <strong>same domain name</strong> and route the traffic to Services in different namespaces, i.e. <code>alpha</code>, <code>beta</code>, <code>omega</code>.</p>
<p>When you have finalized your <code>Ingress</code> resources, you can use <code>curl</code> to validate your configuration.</p>
<pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/alpha | grep -i "<h1>"
<h1>Welcome to nginx from ALPHA namespace!</h1>
</code></pre>
<pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/beta | grep -i "<h1>"
<h1>Welcome to nginx from BETA namespace!</h1>
</code></pre>
<pre class="lang-bash prettyprint-override"><code>curl kubernetes.kruk.lan/omega | grep -i "<h1>"
<h1>Welcome to nginx from OMEGA namespace!</h1>
</code></pre>
<p>I'd encourage you to check following docs on rewrites:</p>
<ul>
<li><em><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress-nginx: Examples: Rewrite</a></em></li>
</ul>
<hr />
<p>PS: <code>Pods</code> are default <code>nginx</code> containers/images with added text to <code>/usr/share/nginx/html/index.html</code></p>
|
<p>I tried to run <code>microk8s add-node</code> on the first machine, and on the second machine I tried <code>microk8s join <>:<>/<> --worker</code>, but it returns this:</p>
<blockquote>
<p>the host name of the joining node does not resolve to the IP Refusing join (400)</p>
</blockquote>
| <p>You need to make sure the control plane node can also resolve the hostname of the workers.</p>
<p>Add an entry for the worker's hostname to <code>/etc/hosts</code> on the control plane node, e.g.:</p>
<pre><code>192.168.0.10 workerName
</code></pre>
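<p>To verify the fix before retrying the join, you can check that the name now resolves from the control plane node (hostname and IP below are the placeholders from the example above):</p>
<pre><code># should print the entry you just added
getent hosts workerName
# expected output: 192.168.0.10    workerName
</code></pre>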
|
<p>I have tried to deploy Kafka in k8s, and I need to persist its volume with a hostPath, but when the volume configuration is added to the deployment file, this error shows up in the Kafka pod and the pod state becomes CrashLoopBackOff:</p>
<pre><code>mkdir: cannot create directory ‘/bitnami/config’: Permission denied
</code></pre>
<p>I think I have to change the permissions so the pod can create this directory.</p>
<p>Deployment.yml:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: kafka-broker
name: kafka-broker
namespace: kafka
spec:
replicas: 1
selector:
matchLabels:
app: kafka-broker
template:
metadata:
labels:
app: kafka-broker
spec:
containers:
- env:
- name: ALLOW_PLAINTEXT_LISTENER
value: "yes"
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-service:2181
- name: KAFKA_LISTENERS
value: PLAINTEXT://:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://:9092
image: bitnami/kafka
imagePullPolicy: IfNotPresent
name: kafka-broker
ports:
- containerPort: 9092
volumeMounts:
- name: kafka-data
readOnly: false
mountPath: "/bitnami/kafka"
volumes:
- name: kafka-data
hostPath:
path: /data/kafka-data</code></pre>
</div>
</div>
</p>
| <p>I have solved the problem by changing the ownership of the path where the pod data is mounted, on the worker servers, with this command:</p>
<pre><code>sudo chown -R 1001:1001 /data/kafka-data
</code></pre>
<p>But I think this solution is not best practice.</p>
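<p>If you prefer to keep the fix inside the manifest instead of running <code>chown</code> manually on every worker, one possible alternative is an init container that fixes the ownership of the hostPath before the Kafka container starts. This is a sketch; the UID/GID 1001 matches the non-root user the Bitnami image runs as, so adjust it if your image differs. It would go under the pod <code>spec</code> next to <code>containers</code>:</p>
<pre><code> initContainers:
 - name: fix-kafka-data-permissions
 image: busybox
 # chown the mounted hostPath so the non-root Kafka user can write to it
 command: ["sh", "-c", "chown -R 1001:1001 /bitnami/kafka"]
 securityContext:
 runAsUser: 0
 volumeMounts:
 - name: kafka-data
 mountPath: /bitnami/kafka
</code></pre>
<p>This way only the init container runs as root for the ownership change, and the main Kafka container stays unprivileged.</p>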
|
<p>I am looking for disk space usage metrics for an EKS cluster that can be monitored.
I came across three different metric alerts: <br>
1.</p>
<pre><code>k8s-high-filesystem-usage:
name: "(k8s) High Filesystem Usage Detected"
type: metric alert
query: |
avg(last_10m):avg:kubernetes.filesystem.usage_pct{*} by {cluster_name} > 90
message: |
{{#is_warning}}
{{cluster_name.name}} filesystem usage greater than 80% for 10 minutes
{{/is_warning}}
{{#is_alert}}
{{cluster_name.name}} filesystem usage greater than 90% for 10 minutes
{{/is_alert}}
</code></pre>
<ol start="2">
<li></li>
</ol>
<pre><code>k8s-high-disk-usage:
name: "(k8s) High Disk Usage Detected"
type: metric alert
query: |
min(last_5m):min:kubelet.volume.stats.used_bytes{*} by {cluster_name} / avg:kubernetes.kubelet.volume.stats.capacity_bytes{*} by {cluster_name} * 100 > 90
message: |
({{cluster_name.name}} High disk usage detected
</code></pre>
<ol start="3">
<li></li>
</ol>
<pre><code>k8s-high-disk-usage:
name: "(k8s) High Disk Usage Detected"
type: metric alert
query: |
min(last_5m):min:system.disk.used{*} by {host,cluster_name} / avg:system.disk.total{*} by {host,cluster_name} * 100 > 90
message: |
({{cluster_name.name}} High disk usage detected on {{host.name}}
</code></pre>
<p>What do these three metrics mean? When should I use each of them?</p>
| <p>This is a confusing area that is poorly documented. I'm glad you asked the question.</p>
<p>First, some background on each metric.</p>
<h3><code>system.disk.used</code></h3>
<p>This metric is the most straightforward: disk space used, in bytes, of the disk partitions on the k8s host. This is a core check collected by the Datadog agent. Find the source for this metric in <a href="https://github.com/DataDog/datadog-agent/blob/ffb7ee79992815a77533f7950b7c1c45ae91fcd8/pkg/collector/corechecks/system/disk/disk.go" rel="nofollow noreferrer"><code>corechecks/system/disk/disk.go</code></a>. The check will report disk usage for each volume on the host.</p>
<h3><code>kubernetes.filesystem.usage_pct</code></h3>
<p>This metric reports disk space used for each node in a k8s cluster. The data is pulled from the metrics published by the kubelet under <code>/stats/summary</code>. You can query the data directly using <code>kubectl</code>, e.g.</p>
<pre><code>kubectl get --raw /api/v1/nodes/<node name>/proxy/stats/summary
</code></pre>
<p>The code can be found by tracing the calls in <a href="https://github.com/DataDog/datadog-agent/blob/ffb7ee79992815a77533f7950b7c1c45ae91fcd8/pkg/collector/corechecks/cluster/orchestrator/" rel="nofollow noreferrer">the cluster orchestrator</a> and <a href="https://github.com/DataDog/datadog-agent/blob/ffb7ee79992815a77533f7950b7c1c45ae91fcd8/pkg/util/kubernetes/kubelet/kubelet.go" rel="nofollow noreferrer">kubelet util</a> files. This metric also reports disk usage percentage <em>by pod</em>, <em>device</em>, and other potentially useful tags.</p>
<h3><code>kubernetes.kubelet.volume.stats.used_bytes</code></h3>
<p>This metric reports data about pods persistent volume claims. You can find out how many bytes are used by each pvc. This metric will only exist for pods with persistent volume claims. This is also in the <a href="https://github.com/DataDog/datadog-agent/blob/ffb7ee79992815a77533f7950b7c1c45ae91fcd8/pkg/collector/corechecks/cluster/orchestrator/processors/k8s/persistentvolumeclaim.go" rel="nofollow noreferrer">cluster/orchestrator code base</a>.</p>
<p>So, with that background in mind, when would you use each metric?</p>
<p>Use <code>system.disk.used</code> to track <em>the disk usage at the node level</em>. If you want to monitor the disk usage of hosts, watch this value. You should monitor on the <code>device</code> tag - you will be most interested in the physical disk partitions and the Docker volumes. You can probably ignore the <code>shm</code> and <code>tmpfs</code> volumes (virtual memory). Note that since this is a core check, this metric is reported for <em>any</em> host with the datadog agent installed, not just k8s hosts.</p>
<p>Use <code>kubernetes.filesystem.usage_pct</code> to track <em>disk usage by k8s hosts</em>. It probably makes sense to monitor with <code>cluster_name</code> <em>and</em> with <code>host</code>, and to use the max value, e.g. update your query to:</p>
<pre><code>avg(last_10m):max:kubernetes.filesystem.usage_pct{*} by {cluster_name,host}
</code></pre>
<p>If you want pod-level usage, you can also add <code>pod_name</code> to the query.</p>
<p>Finally, use <code>kubernetes.kubelet.volume.stats.used_bytes</code> to monitor disk space of persistent volume claims. You'll want to add the <code>persistentvolumeclaim</code> tag to the query so you know which claim you're looking at.</p>
|
<p>I just switched from ForkPool to gevent with concurrency (5) as the pool method for Celery workers running in Kubernetes pods. After the switch I've been getting a non recoverable erro in the worker:</p>
<p><code>amqp.exceptions.PreconditionFailed: (0, 0): (406) PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p>
<p>The broker logs gives basically the same message:</p>
<p><code>2021-11-01 22:26:17.251 [warning] <0.18574.1> Consumer None4 on channel 1 has timed out waiting for delivery acknowledgement. Timeout used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p>
<p>I have the <code>CELERY_ACK_LATE</code> set up, but was not familiar with the necessity to set a timeout for the acknowledgement period. And that never happened before using processes. Tasks can be fairly long (60-120 seconds sometimes), but I can't find a specific setting to allow that.</p>
<p>I've read in another post in other forum a user who set the timeout on the broker configuration to a huge number (like 24 hours), and was also having the same problem, so that makes me think there may be something else related to the issue.</p>
<p>Any ideas or suggestions on how to make worker more resilient?</p>
| <p>The accepted answer is the correct answer. However, if you have an existing RabbitMQ server running and do not want to restart it, you can dynamically set the configuration value by running the following command on the RabbitMQ server:</p>
<p><code>rabbitmqctl eval 'application:set_env(rabbit, consumer_timeout, 36000000).'</code></p>
<p>This will set the new timeout to 10 hrs (36000000ms). For this to take effect, you need to restart your workers though. Existing worker connections will continue to use the old timeout.</p>
<p>You can check the current configured timeout value as well:</p>
<p><code>rabbitmqctl eval 'application:get_env(rabbit, consumer_timeout).'</code></p>
<p>If you are running RabbitMQ via Docker image, here's how to set the value: Simply add <code>-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="-rabbit consumer_timeout 36000000"</code> to your <code>docker run</code> OR set the environment <code>RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS</code> to <code>"-rabbit consumer_timeout 36000000"</code>.</p>
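<p>If you manage the broker with a configuration file instead, the equivalent static setting (applied on broker restart) should be a single line in <code>rabbitmq.conf</code>; the value below mirrors the 10-hour example above:</p>
<pre><code>consumer_timeout = 36000000
</code></pre>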
<p>Hope this helps!</p>
|
<p>I have a question related to Kubernetes Ingress-nginx, I want to use <a href="http://nginx.org/en/docs/http/ngx_http_map_module.html" rel="nofollow noreferrer">ngx_http_map_module</a> to define a new attribute for <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/" rel="nofollow noreferrer">log-format-upstream</a>. The respective part in <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">helm chart</a> where I have defined my map looks like this:</p>
<pre><code>containerPort:
http: 80
https: 443
config:
log-format-upstream: $time_iso8601, $proxy_protocol_addr, $proxy_add_x_forwarded_for, $req_id, $remote_user, $bytes_sent, $request_time, $status, $host, $server_protocol, $uri $uri_category
http-snippet: |
map $uri $uri_category {
~(*/)([0-9]{3,}+)(/*)$ $2;
}
configAnnotations: {}
</code></pre>
<p>However, it gives me following error:</p>
<pre><code>nginx: [emerg] unexpected "{" in /tmp/nginx-cfg1517276787:255
</code></pre>
<p>The line 255 in the config looks like this:</p>
<pre><code> proxy_ssl_session_reuse on;
map $uri $uri_category { #Line: 255
~(*/)([0-9]{3,}+)(/*)$ $2;
}
upstream upstream_balancer {
</code></pre>
<p>I suspect that I haven't defined <code>http-snippet</code> and the map at the right location in the chart, but I am not sure where exactly it should go either.</p>
| <p><a href="https://stackoverflow.com/a/49440631">Related answer</a>: Surround the regex in double-quotes; Nginx uses <code>{</code> and <code>}</code> for defining blocks.</p>
<p>For example:</p>
<pre><code> map $uri $uri_category {
"~[0-9]{3,}" 'FOO';
}
server {
location / {
try_files $uri$uri_category =404;
}
}
</code></pre>
<p>That config appends <code>'FOO'</code> to three+ consecutive digits.</p>
<pre><code>/123 -> /123FOO
/4444 -> /4444FOO
</code></pre>
<hr />
<p>In your case, I think the regex should be something like:</p>
<p><code>"~(.*/)([0-9]{3,})(/.*)$" $2;</code></p>
|
<p>Turning the mTLS mode to STRICT in Istio gives the error: msg="Error performing http GET at <a href="http://172.xx.xx.xx:xxxx" rel="nofollow noreferrer">http://172.xx.xx.xx:xxxx</a> read: connection reset by peer",
but turning the mTLS mode to PERMISSIVE works fine.</p>
<p>Is there any solution so that the error can be resolved in STRICT mode?</p>
<p>Sample of PeerAuthentication:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: "default"
spec:
mtls:
mode: STRICT
</code></pre>
| <p>To resolve, ensure that istio-injection is turned on for all pods that are trying to communicate with each other so that they have the istio sidecar proxy running. This is because when the Istio Envoy proxies talk with each other they will automatically encrypt their communication. An Istio sidecar proxy will reject traffic if it doesn't come from another Istio sidecar that it trusts.</p>
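<p>For example, sidecar injection is typically enabled per namespace with a label, after which existing workloads need to be restarted so the proxy gets injected (namespace name below is a placeholder):</p>
<pre><code># enable automatic sidecar injection for the namespace
kubectl label namespace my-namespace istio-injection=enabled
# restart workloads so their pods get recreated with the sidecar
kubectl rollout restart deployment -n my-namespace
</code></pre>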
<p>Alternatively, you could lower the security by changing your mutual TLS setting in your PeerAuthentication to <code>PERMISSIVE</code>.</p>
|
<p>I am wondering if there is any way to schedule pods according to node CPU utilization. Most of our pods in EKS are scheduled on a single node even though the rest of the nodes have a lot of CPU left. Our cron jobs are trying to spin up new pods on that same node and failing to start because of the low available CPU. I believe the default kube-scheduler should take care of this, but it's not happening in our case.</p>
<p>So, is there an option to schedule pods according to CPU utilization, or can we achieve this with scheduling strategies like node selector/affinity/resource requests & limits etc.?</p>
<p>Thanks in Advance!</p>
| <p>The scheduler relies on the CPU request set for the pod and not the CPU usage.</p>
<p>You need to set a CPU request that is high enough so that the scheduler knows that the node already containing a pod is not sufficient and will schedule it on another node that is able to host it.</p>
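<p>Requests are set per container in the pod spec; a minimal sketch (the values are placeholders you would size for your workload):</p>
<pre><code>    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
</code></pre>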
<p><strong>EDIT:</strong></p>
<p>If you absolutely want to spread your pods over your different nodes, you can use a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="nofollow noreferrer">topology-spread-constraint</a></p>
<pre><code>topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app: myApp
</code></pre>
<p>Note: with the <code>ScheduleAnyway</code> config, the scheduler will prioritise spreading the pods, and if that is not possible, it will schedule them anyway. You can also choose not to allow that by setting <code>DoNotSchedule</code>.</p>
|
<p>I'm looking into a new update to my kubernetes cluster in Azure. However, I'm not sure how to do this. I have been able to build an ingress controller like this one:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "test.fullname" . -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "test.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ .port }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ .port }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>My values is the following:</p>
<pre><code>replicaCount: 1
image:
repository: test01.azurecr.io/test
tag: update1
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 2000
targetPort: http
protocol: TCP
ingress:
enabled: true
className: ""
annotations:
appgw.ingress.kubernetes.io/use-private-ip: 'true'
kubernetes.io/ingress.class: azure/application-gateway
hosts:
- host: test.com
paths:
- path: /test
pathType: Prefix
port: 80
tls: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
</code></pre>
<p>My pod is ready and it seems that the service is ready. However, the test.com domain is not working. I added a DNS record for my domain and I used my cluster's IP to make sure the domain will be available. However, I still have an issue reaching the domain; the error message is the following:</p>
<pre><code>Connection timed out && This site can’t be reached
</code></pre>
<p>Does anyone know a better workaround for this?</p>
| <p>In Kubernetes you have <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controllers</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resources. What you have is the definition of an Ingress, not an Ingress Controller. An Ingress will not work unless there is an Ingress Controller installed in your cluster.</p>
<p>However, in AKS (Azure Kubernetes Service), it is possible to bind your Ingress resources to an <a href="https://learn.microsoft.com/en-us/azure/application-gateway/overview" rel="nofollow noreferrer">Azure Application Gateway</a>, which is an Azure resource outside of your cluster.</p>
<p>To achieve this you need <a href="https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview" rel="nofollow noreferrer">AGIC</a> (Application Gateway Ingress Controller) which will be in charge of forwarding your Ingress configuration to the Application Gateway. You have already achieved this partially by adding these annotations on the Ingress resources you want to have configured there:</p>
<pre><code>annotations:
appgw.ingress.kubernetes.io/use-private-ip: 'true'
kubernetes.io/ingress.class: azure/application-gateway
</code></pre>
<p><strong>Summary</strong>:</p>
<p>You have two options:</p>
<ol>
<li>Install an Ingress Controller such as <a href="https://docs.nginx.com/nginx-ingress-controller/" rel="nofollow noreferrer">nginx</a> or <a href="https://doc.traefik.io/traefik/" rel="nofollow noreferrer">traefik</a> and adapt the annotations on your Ingress resources accordingly.</li>
<li>Make sure you have an Application Gateway deployed in your subscription, AGIC installed in your cluster, and all the configuration needed to allow AGIC to modify the Application Gateway.</li>
</ol>
<p>If it is the first time you are working with Ingresses and Azure, I strongly recommend you follow the first option.</p>
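<p>For reference, installing the NGINX ingress controller with Helm is usually just a couple of commands (release name and namespace are arbitrary choices):</p>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>
<p>After that you would set <code>className: "nginx"</code> in your chart's ingress values and drop the <code>azure/application-gateway</code> annotations.</p>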
|
<p>I would like to scale my deployment based on a custom logging metric, but I'm not able to make that work, I created already the custom metric and I'm also able to see it in the metric explorer but for some reason the stackdriver adapter is not able to get the metric values.</p>
<p><a href="https://i.stack.imgur.com/HVEUM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HVEUM.png" alt="Custom logging metric" /></a></p>
<p><a href="https://i.stack.imgur.com/MZYVm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MZYVm.png" alt="Custom metric in the metric explorer" /></a></p>
<p><a href="https://i.stack.imgur.com/wOcF4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wOcF4.png" alt="Custom metric name in metric explorer" /></a></p>
<p>This is my hpa.yaml</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: nginx-hpa
spec:
minReplicas: 1
maxReplicas: 5
metrics:
- external:
metricName: logging.googleapis.com|user|http_request_custom
targetValue: "20"
type: External
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx
</code></pre>
<p>But I'm always getting the following error:</p>
<pre><code>"unable to get external metric default/logging.googleapis.com|user|http_request_custom/nil: unable to fetch metrics from external metrics API: the server could not find the requested resource (get logging.googleapis.com|user|http_request_custom.external.metrics.k8s.io)"
</code></pre>
<p>Should I do something different? Any ideas?</p>
| <p>Not sure whether you have created the service account and granted access to the adapter; however, there are two models of the custom metrics adapter: the legacy resource model and the new resource model.</p>
<p>If the adapter is up and running, did you check the logs of its pod?</p>
<p>To install the new resource model:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
</code></pre>
<p>Reference YAML: you can use it this way for further metrics.</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: pubsub
spec:
minReplicas: 1
maxReplicas: 5
metrics:
- external:
metric:
name: pubsub.googleapis.com|subscription|num_undelivered_messages
selector:
matchLabels:
resource.labels.subscription_id: echo-read
target:
type: AverageValue
averageValue: 2
type: External
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: pubsub
</code></pre>
<p>Ref : <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub_4" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#pubsub_4</a></p>
|
<p>We have a k8s operator (based on kubebuilder) which works as expected; now we need support for listening to secrets on the cluster.</p>
<p>The following code is working, however I get events for <strong>all the secrets</strong> in the cluster, which is <strong>not efficient</strong>.</p>
<p>I WANT to get the event <strong>only for a specific secret</strong>, let's say a secret with specific <code>labels/annotations</code>. How can we do it?</p>
<pre><code>func (r *InvReconciler) SetupWithManager(mgr ctrl.Manager) error {
manager := ctrl.NewControllerManagedBy(mgr).
For(&corev1alpha1.Inv{}, builder.WithPredicates(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.AnnotationChangedPredicate{}))).
WithOptions(controller.Options{
})
manager = manager.Watches(&source.Kind{Type: &v1.Secret{}}, handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
return r.secretHandler.GetSecret(a.GetNamespace(), a.GetName())
}))
return manager.Complete(r)
}
</code></pre>
<p>this is the function</p>
<pre><code>func (secretReq secretHandler) GetSecret(namespace string, name string) []reconcile.Request {
fmt.Println("secret is: ", namespace, "--", name)
return nil
}
</code></pre>
<p>Let's say I have a secret like the following, and only for this secret (with label <code>foo: bar</code>) I'll get the event when it is created or modified:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
labels:
foo: bar
name: mysecret
namespace: dev
type: Opaque
data:
USER_NAME: YWRtaW4=
PASSWORD: dGVzdBo=
</code></pre>
<p>I'm <strong>not</strong> talking about an <code>if statement</code> after I get the event, as that still brings all the secret events in the cluster.</p>
| <p>According to this <a href="https://github.com/operator-framework/operator-sdk/issues/694" rel="nofollow noreferrer">github</a> source, you should be able to select specific objects (e.g. secrets) with <code>EnqueueRequestForObject</code>. However, it is <strong>not</strong> possible (yet) to watch only for specific secret CRUD changes.</p>
<blockquote>
<p><strong>EnqueueRequestForObject</strong> to watch for your CRD resource changes. In
your CRD reconciler, you'd fetch all of the TLS secrets using a label
selector based on the search definition and then run your merge logic
with the matched secrets.</p>
<p><strong>EnqueueRequestFromMapFunc</strong> to watch for
secret changes and trigger a reconcile of one or more CRs. In your
mapper function, you'd fetch all of the CRs. For each CR that has a
search definition that matches the passed in secret, you'd create a
new reconcile.Request for the CR, and return the list of requests,
which would trigger your CRD reconciler for each CR that matched.</p>
</blockquote>
<p>The cleanest way is using a label selector and then merge the results with your existing code. An example of using a label selector is given in this <a href="https://stackoverflow.com/a/56356932/7950592">post</a>:</p>
<pre><code>func GetSecret(version string) (retVal interface{}, err error){
clientset := GetClientOutOfCluster()
labelSelector := metav1.LabelSelector{MatchLabels: map[string]string{"version":version}}
listOptions := metav1.ListOptions{
LabelSelector: labels.Set(labelSelector.MatchLabels).String(),
Limit: 100,
}
secretList, err := clientset.CoreV1().Secrets("namespace").List(listOptions)
retVal = secretList.Items[0]
return retVal, err
}
</code></pre>
|
<p>This output says that I'm running kubernetes with <code>containerd</code> as the container runtime:</p>
<pre><code>k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-worker3 Ready <none> 12d v1.24.4+k3s1 10.16.24.123 <none> Ubuntu 20.04.2 LTS 5.15.0-48-generic containerd://1.6.6-k3s1
k8s-worker1 Ready <none> 12d v1.24.4+k3s1 10.16.24.121 <none> Ubuntu 20.04.2 LTS 5.13.0-44-generic containerd://1.6.6-k3s1
k8s-master Ready control-plane,master 12d v1.24.4+k3s1 10.16.24.120 <none> Ubuntu 20.04.4 LTS 5.15.0-46-generic containerd://1.6.6-k3s1
k8s-worker2 Ready <none> 12d v1.24.4+k3s1 10.16.24.122 <none> Ubuntu 20.04.2 LTS 5.13.0-44-generic containerd://1.6.6-k3s1
</code></pre>
<p>I'm deploying one of my pods, it gets scheduled on node <code>k8s-worker3</code>, and <code>kubectl describe pods/mypod</code> says the image was already on the node.</p>
<p>But when I run <code>ctr</code> on the node it shows that there NO images:</p>
<pre><code>user@k8s-worker3:~$ sudo ctr images list
REF TYPE DIGEST SIZE PLATFORMS LABELS
</code></pre>
<p>And <code>docker images</code> doesn't show the correct version of the image.</p>
<p>Here's the processes running containerd:</p>
<pre><code>user@k8s-worker3:~$ ps -ef | grep container
root 985 1 0 15:23 ? 00:00:00 /usr/bin/containerd
root 1106 1 0 15:23 ? 00:00:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 1312 1057 0 15:23 ? 00:00:12 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
root 1918 1 0 15:23 ? 00:00:03 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 5abd3c3104abf812422757d9534c33475819cbf43b64845dd23f535193fed09e -address /run/k3s/containerd/containerd.sock
root 1970 1 0 15:23 ? 00:00:00 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8c339c15ab8abb3050810b5fcaed817363fcf3b06800b554711b1fa0e95006e2 -address /run/k3s/containerd/containerd.sock
root 3231 1 0 15:25 ? 00:00:01 /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2/bin/containerd-shim-runc-v2 -namespace k8s.io -id 354aa1df3c2a8492a4842efe3882785553501d77f61d9b5ef0bf3343ace2a518 -address /run/k3s/containerd/containerd.sock
user 4957 3094 0 15:48 pts/0 00:00:00 grep --color=auto container
</code></pre>
<p>So what is happening here? Where is the image on the node??</p>
| <p>Try <strong>crictl images list</strong> as shown below</p>
<p><a href="https://i.stack.imgur.com/pEtOK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pEtOK.png" alt="enter image description here" /></a></p>
|
<p>I'll need to monitor java springboot containers on kubernetes.</p>
<p>I'll probably use the <a href="https://docs.datadoghq.com/fr/containers/kubernetes/installation/?tab=helm" rel="nofollow noreferrer">installation process using helm</a> to deploy the agent on the nodes.</p>
<p>I'll probably use the <a href="https://docs.datadoghq.com/getting_started/containers/autodiscovery/" rel="nofollow noreferrer">annotations on pods</a> to avoid configuration file managements</p>
<p>I saw in the documentation that there was a jar client that you can add to each pod to monitor the containers.</p>
<p>If I need to monitor a Spring Boot application, do I have to install both the Datadog agent on the nodes + the Datadog agent on the pods to reach Spring Boot,
OR will the Datadog agent on the nodes be able to monitor a Spring Boot application running in a pod using only annotations and environment variables?</p>
| <p>Datadog comes with a Deployment and a DaemonSet:</p>
<ul>
<li>cluster agent (for Kubernetes metrics): Deployment</li>
<li>node agent (for tracing and logs): DaemonSet</li>
</ul>
<pre><code>helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install <RELEASE_NAME> -f values.yaml --set datadog.apiKey=<DATADOG_API_KEY> datadog/datadog --set targetSystem=<TARGET_SYSTEM>
</code></pre>
<blockquote>
<p>This chart adds the Datadog Agent to all nodes in your cluster with a <strong>DaemonSet</strong>. It also optionally deploys the kube-state-metrics chart and uses it as an additional source of metrics about the cluster. A few minutes after installation, Datadog begins to report hosts and metrics.</p>
</blockquote>
<p><strong>Logs:</strong>
For logs and APM you need some extra config</p>
<pre><code>datadog:
logs:
enabled: true
containerCollectAll: true
</code></pre>
<p><a href="https://docs.datadoghq.com/containers/kubernetes/log/?tab=helm" rel="nofollow noreferrer">data-k8-logs-collection</a></p>
<p>Once everything is done, it's time to add auto-discovery.
Again, there is no need to install anything for auto-discovery, unless you need <a href="https://docs.datadoghq.com/containers/kubernetes/apm/?tab=helm" rel="nofollow noreferrer">APM</a> (profiling).</p>
<p>All you need to add</p>
<pre><code> ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.check_names: |
["openmetrics"]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.init_configs: |
[{}]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.instances: |
[
{
"prometheus_url": "http://%%host%%:5000/internal/metrics",
"namespace": "my_springboot_app",
"metrics": [ "*" ]
}
]
</code></pre>
<p>Replace <code>5000</code> with the port the container listens on. Again, this is only required to push Prometheus/OpenMetrics metrics to Datadog.</p>
<p>If you just need logs, there is no need for any extra fancy stuff; <code>containerCollectAll: true</code> is enough for log collection.</p>
<p><strong>APM</strong></p>
<p>You need to add the Java agent; add this to the Dockerfile:</p>
<pre><code>RUN wget --no-check-certificate -O /app/dd-java-agent.jar https://dtdg.co/latest-java-tracer
</code></pre>
<p>and then you need to update <code>CMD</code> so that the agent collects tracing/APM/profiling data:</p>
<pre><code>java -javaagent:/app/dd-java-agent.jar -Ddd.profiling.enabled=$DD_PROFILING_ENABLED -XX:FlightRecorderOptions=stackdepth=256 -Ddd.logs.injection=$DD_LOGS_INJECTION -Ddd.trace.sample.rate=$DD_TRACE_SAMPLE_RATE -Ddd.service=$DD_SERVICE -Ddd.env=$DD_ENV -J-server -Dhttp.port=5000 -jar sfdc-core.jar
</code></pre>
<p><a href="https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/java/?tab=containers" rel="nofollow noreferrer">trace_collection_java</a></p>
|
<p>I'm using <code>Kubernetes version: 1.19.16</code> on bare metal <code>Ubuntu-18.04lts</code> server. When i tried to deploy the <code>nginx-ingress</code> yaml file it always fails with below errors.</p>
<p>Following steps followed to deploy nginx-ingress,</p>
<pre><code>$ git clone https://github.com/nginxinc/kubernetes-ingress.git
cd kubernetes-ingress/deployments
kubernetes-ingress/deployments$ git branch
* main
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
</code></pre>
<pre><code>$ kubectl get pods -n nginx-ingress -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-75c4bd64bd-mm52x 0/1 Error 2 21s 10.244.1.5 k8s-master <none> <none>
</code></pre>
<pre><code>$ kubectl -n nginx-ingress get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-75c4bd64bd-mm52x 0/1 CrashLoopBackOff 12 38m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress 0/1 1 0 38m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-75c4bd64bd 1 1 0 38m
</code></pre>
<pre><code>$ kubectl logs nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
W1003 04:53:02.833073 1 flags.go:273] Ignoring unhandled arguments: []
I1003 04:53:02.833154 1 flags.go:190] Starting NGINX Ingress Controller Version=2.3.1 PlusFlag=false
I1003 04:53:02.833158 1 flags.go:191] Commit=a8742472b9ddf27433b6b1de49d250aa9a7cb47e Date=2022-09-16T08:09:31Z DirtyState=false Arch=linux/amd64 Go=go1.18.5
I1003 04:53:02.844374 1 main.go:210] Kubernetes version: 1.19.16
F1003 04:53:02.846604 1 main.go:225] Error when getting IngressClass nginx: ingressclasses.networking.k8s.io "nginx" not found
</code></pre>
<pre><code>$ kubectl describe pods nginx-ingress-75c4bd64bd-mm52x -n nginx-ingress
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m6s default-scheduler Successfully assigned nginx-ingress/nginx-ingress-75c4bd64bd-mm52x to k8s-worker-1
Normal Pulled 87s (x5 over 3m5s) kubelet Container image "nginx/nginx-ingress:2.3.1" already present on machine
Normal Created 87s (x5 over 3m5s) kubelet Created container nginx-ingress
Normal Started 87s (x5 over 3m5s) kubelet Started container nginx-ingress
Warning BackOff 75s (x10 over 3m3s) kubelet Back-off restarting failed container
</code></pre>
<p>Nginx Ingress controller Deployment file <a href="https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/main/deployments/deployment/nginx-ingress.yaml" rel="nofollow noreferrer">Link</a> for the reference.</p>
<p>As I'm using the <code>kubernetes-ingress.git</code> repository's main branch, I'm not sure whether it is compatible with my Kubernetes version or not.</p>
<p>Can anyone share some pointers to solve this?</p>
| <p>I think you missed applying the IngressClass resource named "nginx", which is why the controller is not able to find it: <a href="https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/blob/main/deployments/common/ingress-class.yaml#L4</a></p>
<pre><code>kubectl apply -f common/ingress-class.yaml
</code></pre>
<p>You can follow the steps from this document: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/</a></p>
|
<p>What's the best method to enforce/block root containers running in an EKS cluster running managed nodes and applied to all namespaces (except kube-system) v1.22+ ? I tried the below but root containers are still able to run. Is this the latest method <a href="https://kubernetes.io/docs/tutorials/security/cluster-level-pss/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/security/cluster-level-pss/</a>?</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: pod-security
spec:
privileged: true
seLinux:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
runAsUser:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
volumes:
- '*'
</code></pre>
| <p>The answer is to use a tool like Kyverno or Datree to enforce such policies and audit them from the point the cluster is built.</p>
<p><a href="https://kyverno.io/policies/pod-security/baseline/disallow-privileged-containers/disallow-privileged-containers/" rel="nofollow noreferrer">https://kyverno.io/policies/pod-security/baseline/disallow-privileged-containers/disallow-privileged-containers/</a></p>
|
<p>I'm using the following Airflow version inside my Docker container and I am currently having some issues related to a broken DAG</p>
<pre><code>FROM apache/airflow:2.3.4-python3.9
</code></pre>
<p>I have other DAGs running with the same argument 'request_cpu' that are perfectly functional, so I'm not sure what the issue could be.</p>
<pre><code>Broken DAG: [/home/airflow/airflow/dags/my_project.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 858, in __init__
self.resources = coerce_resources(resources)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: Resources.__init__() got an unexpected keyword argument 'request_cpu'
</code></pre>
<p>This is my current DAG configuration</p>
<pre><code># DAG configuration
DAG_ID = "my_project_id"
DAG_DESCRIPTION = "description"
DAG_IMAGE = image
default_args = {
"owner": "airflow",
"depends_on_past": False,
"max_active_tasks": 1,
"max_active_runs": 1,
"email_on_failure": True,
"email": ["[email protected]"],
"retries": 0,
"email_on_retry": False,
"image_pull_policy": "Always",
}
# Define desired resources.
compute_resources = {
# Cpu: 500m milliCPU is about half cpu, other values, 1, 2, 4... for full cpu allocation
"request_cpu": "500m",
# Memory: Mi for Megabytes or Gi for Gigabytes
"request_memory": "512Mi",
"limit_cpu": "500m",
"limit_memory": "1Gi",
}
with DAG(
DAG_ID,
default_args=default_args,
start_date=datetime(2022, 5, 9),
schedule_interval="0 21 */16 * *", # Every 16 days or twice per month
max_active_runs=1,
max_active_tasks=1,
catchup=False,
description=DAG_DESCRIPTION,
tags=["my tags"],
) as dag:
# AWS credentials
creds = tools.get_config_params(key="AWS-keys")
my_task = KubernetesPodOperator(
namespace="airflow",
image=DAG_IMAGE,
image_pull_secrets=[k8s.V1LocalObjectReference("docker-registry")],
container_resources=compute_resources,
env_vars={
"AWS_ACCESS_KEY_ID": creds["access_key"],
"AWS_SECRET_ACCESS_KEY": creds["secret_access_key"],
"EXECUTION_DATE": "{{ execution_date }}",
},
cmds=["python3", "my_project.py"],
is_delete_operator_pod=True,
in_cluster=False,
name="my-project-name",
task_id="my-task",
config_file=os.path.expanduser("~") + "/.kube/config",
get_logs=True,
resources=compute_resources,
)
</code></pre>
| <p>First <code>resources</code> is <a href="https://github.com/apache/airflow/blob/6b75be43171eafc45825d043ef051638aa103ccd/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L222-L223" rel="nofollow noreferrer">deprecated</a> so you should use only <code>container_resources</code>.</p>
<p><code>container_resources</code> expects a <code>V1ResourceRequirements</code> object, not a <code>dict</code>. You should do:</p>
<pre><code>from kubernetes.client import models as k8s
compute_resources=k8s.V1ResourceRequirements(
requests={
'memory': '512Mi',
'cpu': '500m'
},
limits={
'memory': '1Gi',
        'cpu': '500m'
}
)
</code></pre>
<p>Then</p>
<pre><code> my_task = KubernetesPodOperator(..., container_resources=compute_resources)
</code></pre>
|
<p>Are the resources in a kubernetes YAML manifest created in sequence?</p>
<p>Say I have a manifest file like so</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: default
data:
  prop.value: "1"
---
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: registry.k8s.io/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: PROP_VALUE
valueFrom:
configMapKeyRef:
name: app-config
key: prop.value
restartPolicy: Never
</code></pre>
<p>Will ConfigMap be created before Deployment, so Deployment can use the correct ConfigMap value?</p>
| <p>Yes: <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/" rel="noreferrer">Manage Deployment</a></p>
<blockquote>
<p>The resources will be created in the order they appear in the file.</p>
</blockquote>
<p>But this should not matter too much in Kubernetes: even if the Pod were created before the ConfigMap, the kubelet would keep retrying container creation and start it once the ConfigMap is available.</p>
|
<p>I am a beginner who is using Prometheus and Grafana to monitor the values of a REST API.
Prometheus, json-exporter, and Grafana were all installed with their Helm charts; Prometheus was installed with the default values.yaml, and json-exporter with a custom values.yaml.
I checked that Prometheus has the json-exporter's ServiceMonitor set as a target, but I couldn't see its metrics.
How can I check the metrics? Below are the environment, screenshots and code.</p>
<p><strong>environment</strong> :</p>
<ul>
<li>kubernetes : v1.22.9</li>
<li>helm : v3.9.2</li>
<li>prometheus-json-exporter helm chart : v0.5.0</li>
<li>kube-prometheus-stack helm chart : 0.58.0</li>
</ul>
<p><strong>screenshots</strong> :
<a href="https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing</a></p>
<p><strong>values.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>in custom_jsonexporter_values.yaml
# Default values for prometheus-json-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: quay.io/prometheuscommunity/json-exporter
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: []
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: []
podSecurityContext: {}
# fsGroup: 2000
# podLabels:
# Custom labels for the pod
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 7979
targetPort: http
name: http
serviceMonitor:
## If true, a ServiceMonitor CRD is created for a prometheus operator
## https://github.com/coreos/prometheus-operator
##
enabled: true
namespace: monitoring
scheme: http
# Default values that will be used for all ServiceMonitors created by `targets`
defaults:
additionalMetricsRelabels: {}
interval: 60s
labels:
release: prometheus
scrapeTimeout: 60s
targets:
- name : pi2
url: http://xxx.xxx.xxx.xxx:xxxx
labels: {} # Map of labels for ServiceMonitor. Overrides value set in `defaults`
interval: 60s # Scraping interval. Overrides value set in `defaults`
scrapeTimeout: 60s # Scrape timeout. Overrides value set in `defaults`
additionalMetricsRelabels: {} # Map of metric labels and values to add
ingress:
enabled: false
className: ""
annotations: []
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: []
tolerations: []
affinity: []
configuration:
config: |
---
modules:
default:
metrics:
- name: used_storage_byte
path: '{ .used }'
help: used storage byte
values:
used : '{ .used }'
labels: {}
- name: free_storage_byte
path: '{ .free }'
help: free storage byte
labels: {}
values :
free : '{ .free }'
- name: total_storage_byte
path: '{ .total }'
help: total storage byte
labels: {}
values :
total : '{ .total }'
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
rules: []
additionalVolumes: []
# - name: password-file
# secret:
# secretName: secret-name
additionalVolumeMounts: []
# - name: password-file
# mountPath: "/tmp/mysecret.txt"
# subPath: mysecret.txt
</code></pre>
| <p>Firstly you can check the targets page on the Prometheus UI to see if a) your desired target is even defined and b) if the endpoint is reachable and being scraped.</p>
<p>However, you may need to troubleshoot a little if either of the above is not the case:</p>
<p>It is important to understand what is happening. You have deployed a Prometheus Operator to the cluster. If you have used the default values from the helm chart, you also deployed a <code>Prometheus</code> custom resource(CR). This instance is what is telling the Prometheus Operator how to ultimately configure the Prometheus running inside the pod. Certain things are static, like global metric relabeling for example, but most are dynamic, such as picking up new targets to actually scrape. Inside the <code>Prometheus</code> CR you will find options to specify <code>serviceMonitorSelector</code> and <code>serviceMonitorNamespaceSelector</code> (The behaviour is the same also for probes and podmonitors so I'm just going over it once). Assuming you leave the default set like <code>serviceMonitorNamespaceSelector: {}</code>, Prometheus Operator will look for ServiceMonitors in <strong>all</strong> namespaces on the cluster to which it has access via its serviceAccount. The <code>serviceMonitorSelector</code> field lets you specify a label and value combination that must be present on a <code>serviceMonitor</code> that <strong>must</strong> be present for it to be picked up. Once a or multiple serviceMonitors are found, that match the criteria in the selectors, Prometheus Operator adjusts the configuration in the actual Prometheus instance(tl;dr version) so you end up with proper scrape targets.</p>
<p>Step 1 for trouble shooting: Do your selectors match the labels and namespace of the serviceMonitors? Actually check those. The default on the prometheus operator helm chart expects a label <code>release: prometheus-operator</code> and in your config, you don't seem to add that to your json-exporter's serviceMonitor.</p>
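<p>A quick way to check step 1 from the command line (the <code>monitoring</code> namespace comes from the question's values; the resource names may differ in your setup):</p>
<pre><code># what the Prometheus CR is selecting on
kubectl -n monitoring get prometheus -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
# labels actually present on your ServiceMonitors
kubectl -n monitoring get servicemonitors --show-labels
</code></pre>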
<p>Step 2: The same behaviour as outline for how serviceMonitors are picked up, is happening in turn inside the serviceMonitor itself, so make sure that your service actually matches what is specced out in the serviceMonitor.</p>
<p>To deep dive further into the options you have and what the fields do, check the <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor" rel="nofollow noreferrer">API documentation</a>.</p>
|
<p>I get the following warning/message when I run some k8s related commands</p>
<blockquote>
<p>Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update</p>
</blockquote>
<p>and then I know I should run the command like so:</p>
<p><code>aws eks update-kubeconfig --name cluster_name --dry-run</code></p>
<p>I think the potential change will be client-side only and will not cause any change on the server side - the actual cluster. I just wanted some verification of this, or otherwise. Many thanks</p>
| <p>Yes, <code>update-kubeconfig</code> does not make any changes to the cluster. It will only update your local <code>.kube/config</code> file with the cluster info. Note that with the <code>--dry-run</code> flag, no change will be made at all - the resulting configuration will just be printed to <code>stdout</code>.</p>
|
<p>My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:</p>
<pre><code>Failed to pull image "registry.digitalocean.com/XXXX/php:1.1.39": rpc error: code = Unknown desc = failed to pull and unpack image
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to resolve reference
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
</code></pre>
<p>I have added the kubernetes cluster using the DigitalOcean Container Registry Integration, which shows up successfully both on the registry and in the settings for the kubernetes cluster.</p>
<p><a href="https://i.stack.imgur.com/hOkVJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hOkVJ.png" alt="enter image description here" /></a></p>
<p>I can confirm the above address <code>registry.digitalocean.com/XXXX/php:1.1.39</code> matches the one in the registry. I wonder if I’m misunderstanding how the token / login integration works with the registry, but I’m under the impression that this was a “one click” thing and that the cluster would automatically get the connection to the registry after that.</p>
<p>I have tried by logging helm into a registry before pushing, but this did not work (and I wouldn't really expect it to, the cluster should be pulling the image).</p>
<p>It's not completely clear to me how the image pull secrets are supposed to be used.</p>
<p>My helm deployment chart is basically the default for API Platform:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "api-platform.fullname" . }}
labels:
{{- include "api-platform.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "api-platform.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "api-platform.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "api-platform.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}-caddy
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.caddy.image.pullPolicy }}
env:
- name: SERVER_NAME
value: :80
- name: PWA_UPSTREAM
value: {{ include "api-platform.fullname" . }}-pwa:3000
- name: MERCURE_PUBLISHER_JWT_KEY
valueFrom:
secretKeyRef:
name: {{ include "api-platform.fullname" . }}
key: mercure-publisher-jwt-key
- name: MERCURE_SUBSCRIBER_JWT_KEY
valueFrom:
secretKeyRef:
name: {{ include "api-platform.fullname" . }}
key: mercure-subscriber-jwt-key
ports:
- name: http
containerPort: 80
protocol: TCP
- name: admin
containerPort: 2019
protocol: TCP
volumeMounts:
- mountPath: /var/run/php
name: php-socket
#livenessProbe:
# httpGet:
# path: /
# port: admin
#readinessProbe:
# httpGet:
# path: /
# port: admin
resources:
{{- toYaml .Values.resources | nindent 12 }}
- name: {{ .Chart.Name }}-php
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.php.image.pullPolicy }}
env:
{{ include "api-platform.env" . | nindent 12 }}
volumeMounts:
- mountPath: /var/run/php
name: php-socket
readinessProbe:
exec:
command:
- docker-healthcheck
initialDelaySeconds: 120
periodSeconds: 3
livenessProbe:
exec:
command:
- docker-healthcheck
initialDelaySeconds: 120
periodSeconds: 3
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumes:
- name: php-socket
emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
</code></pre>
<p>How do I authorize the kubernetes cluster to pull from the registry? Is this a helm thing or a kubernetes only thing?</p>
<p>Thanks!</p>
| <p>The problem that you have is that you do not have an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">image pull secret</a> for your cluster to use to pull from the registry.</p>
<p>You will need to add this to give your cluster a way to authorize its requests to the registry.</p>
<h2>Using the DigitalOcean kubernetes Integration for Container Registry</h2>
<p>DigitalOcean provides a way to add image pull secrets to a kubernetes cluster in your account. You can link the registry to the cluster in the settings of the registry. Under "DigitalOcean Kubernetes Integration" select edit, then select the cluster you want to link the registry to.</p>
<p><img src="https://i.stack.imgur.com/Gq6sG.png" alt="DigitalOceanKubernetesIntegration" /></p>
<p>This action adds an image pull secret to all namespaces within the cluster and will be used by default (unless you specify otherwise).</p>
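<p>If for some reason the secret is not picked up by default, you can also reference it explicitly through the <code>imagePullSecrets</code> value that your deployment template already supports. The secret name below is an assumption; check the actual name with <code>kubectl get secrets</code> (DigitalOcean typically names it after the registry):</p>
<pre><code>imagePullSecrets:
  - name: registry-xxxxxxx
</code></pre>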
|
<p>I'm trying to deploy a custom pod on minikube and I'm getting the following message regardless of my twicks:</p>
<pre><code>Failed to load logs: container "my-pod" in pod "my-pod-766c646c85-nbv4c" is waiting to start: image can't be pulled
Reason: BadRequest (400)
</code></pre>
<p>I did all sorts of experiments based on <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a> and <a href="https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/" rel="nofollow noreferrer">https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/</a> without success.
I ended up trying to use <code>minikube image load myimage:latest</code> and reference it in the container spec as:</p>
<pre><code> ...
containers:
- name: my-pod
image: myimage:latest
ports:
- name: my-pod
containerPort: 8080
protocol: TCP
...
</code></pre>
<p>Should/can I use <code>minikube image</code>?
If so, should I use the full image name <code>docker.io/library/myimage:latest</code> or just the image suffix <code>myimage:latest</code>?
Is there anything else I need to do to make minikube locate the image?
Is there a way to get the logs of the bad request itself to see what is going on (I don't see anything in the api server logs)?</p>
<p>I also see the following error in the minikube system:</p>
<pre><code>Failed to load logs: container "registry-creds" in pod "registry-creds-6b884645cf-gkgph" is waiting to start: ContainerCreating
Reason: BadRequest (400)
</code></pre>
<p>Thanks!
Amos</p>
| <p>You should set the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">imagePullPolicy</a> to <code>IfNotPresent</code>. Changing that will tell kubernetes to not pull the image if it does not need to.</p>
<pre class="lang-yaml prettyprint-override"><code> ...
containers:
- name: my-pod
image: myimage:latest
imagePullPolicy: IfNotPresent
ports:
- name: my-pod
containerPort: 8080
protocol: TCP
...
</code></pre>
<p>A quirk of kubernetes is that if you specify an image with the <code>latest</code> tag as you have here, it will default to using <code>imagePullPolicy=Always</code>, which is why you are seeing this error.</p>
<p><a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting" rel="nofollow noreferrer">More on how kubernetes decides the default image pull policy</a></p>
<p>If you need your image to always be pulled in production, consider using <a href="https://helm.sh/docs/" rel="nofollow noreferrer">helm</a> to template your kubernetes yaml configuration.</p>
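<p>As a quick sanity check: with the policy set to <code>IfNotPresent</code>, the image only has to exist inside the minikube node, which <code>minikube image load</code> already takes care of. You can confirm it is visible to the cluster's container runtime:</p>
<pre class="lang-bash prettyprint-override"><code># load the locally built image into the minikube node (already done in your case)
minikube image load myimage:latest

# the image should appear in the list (typically as docker.io/library/myimage:latest)
minikube image ls | grep myimage
</code></pre>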
|
<p>I'm contemplating using Hashicorp vault in my Kubernetes, to store some dynamic secrets. I did find out that I can set the sidecar injector to periodically retrieve the secret and app can look for file changes etc... all fine.</p>
<p>But after solid research, I could not find out how to update the key/pair from a scheduled job in Kubernetes. Basically I need to run a container that does some magic, retrieves a new token, and then updates the Vault pair, with kv commands or somehow... bonus points if I do not have to use any API keys, just a service account, the same as for reading the values.</p>
<p>Assuming the vault is running on Kubernetes where the pods will be.</p>
| <p>You can use the <a href="https://www.vaultproject.io/api-docs/secret/kv/kv-v2#patch-secret" rel="nofollow noreferrer">http API</a> to interact with the vault.</p>
<p>Given the assumption that your service account has permission to update the secret, you could run a container with a small script.</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
set -eu
# get the mounted service account token
token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
# update the secret using curl
curl \
--header "X-Vault-Token: $token" \
--header "Content-Type: application/merge-patch+json" \
--request PATCH \
--data @/path/to/payload.json \
"$VAULT_ADDR/v1/secret/data/my-secret"
</code></pre>
<p>You need to tell your CronJob (or Job) to use the service account that has those permissions, of course.</p>
<p>This could look roughly like this. It's probably better practice to build a custom image. I am mounting the script just for demo purposes.</p>
<pre><code>---
apiVersion: batch/v1
kind: Job
metadata:
name: patch-secret
spec:
ttlSecondsAfterFinished: 100
template:
spec:
serviceAccount: my-service-account
restartPolicy: Never
containers:
- name: patcher
image: index.docker.io/bluebrown/netutils
command: [sh, "-e", "/opt/my-org/update-secret.sh"]
volumeMounts:
- name: scripts
mountPath: /opt/my-org
volumes:
- name: scripts
configMap:
name: patch-secret-scripts
---
apiVersion: v1
kind: ConfigMap
metadata:
name: patch-secret-scripts
data:
update-secret.sh: |
token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
curl \
--header "X-Vault-Token: $token" \
--header "Content-Type: application/merge-patch+json" \
--request PATCH \
--data @/path/to/payload.json \
"$VAULT_ADDR/v1/secret/data/my-secret"
</code></pre>
<p>You need to know where the payload is coming from. I left that part out in the example.</p>
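<p>Purely as an illustration, for the KV v2 patch endpoint used above the payload is expected to wrap the changed keys in a <code>data</code> object. A minimal sketch of producing it inside the job (key names and values are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># write the patch payload; only the keys listed here get updated in the secret
cat > /tmp/payload.json <<'EOF'
{
  "data": {
    "my-token": "new-token-value"
  }
}
EOF
</code></pre>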
|
<p>I created a token for my service account using the command 'kubectl create token admin-user'. But after a while it becomes unusable and I have to create it again. How can I generate a token for my service account without an expiry time?</p>
| <p>Try creating a token valid for one year using the command below. You can set the duration as appropriate, say <strong>--duration=87600h for 10 years</strong>, and so on.</p>
<pre><code>kubectl create token admin-user --duration=8760h
</code></pre>
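<p>If you really need a token that never expires, the legacy ServiceAccount token Secret still works; a sketch (long-lived tokens are a security risk, so prefer bounded durations where possible):</p>
<pre class="lang-bash prettyprint-override"><code># run in the namespace where the admin-user service account lives
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF

# read the generated, non-expiring token back out
kubectl get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d
</code></pre>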
|
<p>I am trying to deploy an image from a Space repository to AWS EKS. So far I have managed to successfully save my docker image to Space, but I am stuck at finding a way to upload my image to my cluster.</p>
<p>So far I've created the following to save my docker image to the registry.
Does someone know how I could push this towards AWS EKS? Thank you in advance for taking the time to help me!</p>
<pre class="lang-kotlin prettyprint-override"><code>job("Build Image and save to registry") {
startOn {
gitPush {
branchFilter {
+"refs/heads/main"
}
}
}
docker {
resources {
cpu = 512
memory = 1024
}
build {
context = "."
file = "Dockerfile"
}
push("<my-private>.registry.jetbrains.space/p/repo/repo/image:latest")
}
}
</code></pre>
| <p>It is not currently possible out-of-the-box to push to private registries via the <code>docker</code> top-level DSL. The feature will be added soon, as most of the puzzle pieces have been put in place now. You can track <a href="https://youtrack.jetbrains.com/issue/SPACE-9234/Authentication-to-private-registries" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/SPACE-9234/Authentication-to-private-registries</a> for more info about this.</p>
<p>In the meantime, some workarounds were presented in the YouTrack issue. Please check them out.</p>
|
<p>In the Rancher I can see the amount of pods I have in the cluster and the total amount I can have.
How can I get this maximum amount of pods that my cluster is able to have with prometheus?</p>
<p><a href="https://i.stack.imgur.com/UELoO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UELoO.png" alt="Rancher example" /></a></p>
| <p>You can build the visualisation based on these two queries.</p>
<p><strong>Total allocatable pods:</strong></p>
<pre><code>sum(kube_node_status_allocatable {resource="pods"})
</code></pre>
<p>You can also cross-verify the result (assuming all nodes have the same pod capacity):</p>
<pre><code>capacity=$(kubectl get node $(kubectl get node | awk '{if (NR==2) print $1}') -o jsonpath='{.status.capacity.pods}')
nodes=$(kubectl get nodes --no-headers | wc -l)
echo $((capacity * nodes))
</code></pre>
<p><a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="nofollow noreferrer">Node Metrics</a></p>
<p><strong>Total running pods:</strong></p>
<pre><code>sum(kube_pod_status_ready{condition="true"})
</code></pre>
<p>Cross verification:</p>
<pre><code>kubectl get pods -A --no-headers | wc -l
</code></pre>
<p><a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="nofollow noreferrer">pod-metrics</a></p>
<p>So you can visualise this in Grafana</p>
<p><a href="https://i.stack.imgur.com/5cXwY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5cXwY.png" alt="enter image description here" /></a></p>
|
<p>I've been trying to deploy a small API using FastAPI(and uvicorn to serve) framework on GKE.
I made a deployment and an associated service that are working well but I have trouble understanding how https should be implemented.</p>
<p>Is it internal to FastAPI? Or is it on GKE's side?</p>
| <p>It mostly depends on the requirements you have.</p>
<p><strong>Option 1</strong> : HTTPS termination at LB level</p>
<p>If you are using a <strong>load balancer</strong> with <strong>GKE</strong> and don't need an <strong>end-to-end HTTPS</strong> connection, you can add the <strong>SSL/TLS</strong> certificate at the load balancer level and terminate SSL there; traffic to the backend will then be plain HTTP.</p>
<p><strong>Option 2</strong> : End to end encryption with HTTPS</p>
<p>If you want end-to-end encryption with <strong>HTTPS</strong>, you have to wire the certificate into the application itself, so that the server process terminates TLS with that certificate.</p>
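<p>For option 2, if the API is served by uvicorn, a minimal sketch of terminating TLS in the container itself (the module name and certificate paths are placeholders, e.g. mounted from a Kubernetes secret):</p>
<pre class="lang-bash prettyprint-override"><code># serve FastAPI over HTTPS directly from uvicorn
uvicorn main:app --host 0.0.0.0 --port 8443 \
  --ssl-keyfile /certs/tls.key \
  --ssl-certfile /certs/tls.crt
</code></pre>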
<p>In both cases you could create a self-signed certificate and use it, but that is not a good option.</p>
<p>If you don't want to use <strong>self-signed</strong> certificate, you can use the GCP <a href="https://cloud.google.com/certificate-manager/docs/overview" rel="nofollow noreferrer">cert-manager</a> to create/use the certificates.</p>
<p>here is the tutorial with steps to use the managed certificates with <strong>GKE</strong> : <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a></p>
|
<p>I am creating a StatefulSet in Kubernetes, initially with 0 replicas, and I will later change the replicas value to a number (e.g. 6 or 7). Will the replicas be updated, and is it possible to create a StatefulSet with 0 replicas?</p>
| <p>I have tried the above use case and was able to create a StatefulSet with 0 replicas and then scale it out. Please try it and follow the document for more info: <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/</a></p>
<pre><code>deepak.mourya ~ % kubectl get statefulset
NAME READY AGE
web 0/0 33s
deepak.moury~ % kubectl scale statefulset web --replicas=3
statefulset.apps/web scaled
deepak.mourya ~ % kubectl get statefulset
NAME READY AGE
web 3/3 68s
</code></pre>
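<p>For reference, a minimal sketch of a StatefulSet created with <code>replicas: 0</code> up front and scaled out afterwards (names and image are placeholders; the headless service is assumed to exist already):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # existing headless service
  replicas: 0               # start with no pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx        # placeholder image
        ports:
        - containerPort: 80
EOF

# later, scale out to the desired replica count
kubectl scale statefulset web --replicas=3
</code></pre>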
|
<p>We followed <a href="https://github.com/DataDog/datadog-operator/blob/main/docs/getting_started.md" rel="noreferrer">these instructions</a> to set up DataDog in our Kubernetes 1.22 cluster, using their operator. This was installed via helm with no customisations.</p>
<p>The operator, cluster-agent, and per-node agent pods are all running as expected. We know that the agents are able to communicate successfully with the DataDog endpoint because our new cluster shows up in the Infrastructure List view of DataDog.</p>
<p>However, logs from our application's pods <em>aren't</em> appearing in DataDog and we're struggling to figure out why.</p>
<p>Some obvious things we made sure to confirm:</p>
<ul>
<li><code>agent.log.enabled</code> is true in our agent spec (full YAML included below).</li>
<li>our application pods' logs are present in <code>/var/log/pods/</code>, and contain the log lines we were expecting.</li>
<li>the DataDog agent is able to see these log files.</li>
</ul>
<p>So it seems that <em>something</em> is going wrong in between the agent and the logs being available in the DataDog UI. Does anyone have any ideas for how to debug this?</p>
<hr />
<p>Configuration of our agents:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: datadoghq.com/v1alpha1
kind: DatadogAgent
metadata:
name: datadog
namespace: datadog
spec:
agent:
apm:
enabled: false
config:
tolerations:
- operator: Exists
image:
name: "gcr.io/datadoghq/agent:latest"
log:
enabled: true
process:
enabled: false
processCollectionEnabled: false
clusterAgent:
config:
admissionController:
enabled: true
mutateUnlabelled: true
clusterChecksEnabled: true
externalMetrics:
enabled: true
image:
name: "gcr.io/datadoghq/cluster-agent:latest"
replicas: 1
clusterChecksRunner: {}
credentials:
apiSecret:
keyName: api-key
secretName: datadog-secret
appSecret:
keyName: app-key
secretName: datadog-secret
features:
kubeStateMetricsCore:
enabled: false
logCollection:
enabled: true
orchestratorExplorer:
enabled: false
</code></pre>
<p>Here are the environment variables for one of the DataDog agents:</p>
<pre><code>DD_API_KEY : secretKeyRef(datadog-secret.api-key)
DD_CLUSTER_AGENT_AUTH_TOKEN : secretKeyRef(datadog.token)
DD_CLUSTER_AGENT_ENABLED : true
DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME : datadog-cluster-agent
DD_COLLECT_KUBERNETES_EVENTS : false
DD_DOGSTATSD_ORIGIN_DETECTION : false
DD_DOGSTATSD_SOCKET : /var/run/datadog/statsd/statsd.sock
DD_EXTRA_CONFIG_PROVIDERS : clusterchecks endpointschecks
DD_HEALTH_PORT : 5555
DD_KUBERNETES_KUBELET_HOST : fieldRef(v1:status.hostIP)
DD_LEADER_ELECTION : false
DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL : false
DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE : true
DD_LOGS_ENABLED : true
DD_LOG_LEVEL : INFO
KUBERNETES : yes
</code></pre>
| <p>If you are able to see metrics, then for the missing logs I can see two possible reasons:</p>
<ul>
<li>Log collection was not enabled during the helm installation; enable it explicitly:</li>
</ul>
<pre><code>helm upgrade -i datadog --set datadog.apiKey=mykey datadog/datadog --set datadog.logs.enabled=true
</code></pre>
<ul>
<li>Wrong region configuration, by default it expects <code>US</code>.</li>
</ul>
<pre><code>helm upgrade -i datadog --set datadog.apiKey=my-key datadog/datadog --set datadog.site=us5.datadoghq.com
</code></pre>
<p>If these two are correct, make sure the pods write their logs to stdout/stderr, as the default log path already looks correct:</p>
<pre><code> - name: logpodpath
mountPath: /var/log/pods
mountPropagation: None
</code></pre>
<p>Apart from that, you either need to whitelist the specific containers to collect logs from, or set the environment variable below to true so the agent collects logs from all containers.</p>
<pre><code>DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true
</code></pre>
|
<p>I have already deployed Spark on Kubernetes, below is the deployment.yaml,</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
namespace: default
spec:
type: Python
pythonVersion: "3"
mode: cluster
image: "user/pyspark-app:1.0"
imagePullPolicy: Always
mainApplicationFile: local:///app/pyspark-app.py
sparkVersion: "3.1.1"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: spark
executor:
cores: 1
instances: 2
memory: "512m"
labels:
version: 3.1.1
</code></pre>
<p>Below is the service.yaml:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: spark-operator-role
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
- kind: ServiceAccount
name: spark
namespace: default
</code></pre>
<p>GCP Spark operator is also installed on Kubernetes</p>
<p>Below are the services running:</p>
<pre><code>pyspark-pi-84dad9839f7f5f43-driver-svc ClusterIP None <none> 7078/TCP,7079/TCP,4040/TCP 2d14h
</code></pre>
<p>Now I want to run a beam application on this driver. Please find the sample code for beam application below:</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
options = PipelineOptions([
"--runner=PortableRunner",
"--job_endpoint=http://127.0.0.1:4040/",
"--environment_type=DOCKER",
"--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"
])
# lets have a sample string
data = ["this is sample data", "this is yet another sample data"]
# create a pipeline
pipeline = beam.Pipeline(options=options)
counts = (pipeline | "create" >> beam.Create(data)
| "split" >> beam.ParDo(lambda row: row.split(" "))
| "pair" >> beam.Map(lambda w: (w, 1))
| "group" >> beam.CombinePerKey(sum))
# lets collect our result with a map transformation into output array
output = []
def collect(row):
output.append(row)
return True
counts | "print" >> beam.Map(collect)
# Run the pipeline
result = pipeline.run()
# lets wait until result a available
result.wait_until_finish()
# print the output
print(output)
</code></pre>
<p>When I am trying to run the above apache beam application, it is throwing the below error:</p>
<pre><code>$ python beam2.py
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
Traceback (most recent call last):
File "beam2.py", line 31, in <module>
result = pipeline.run()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 565, in run
return self.runner.run_pipeline(self, self._options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 438, in run_pipeline
job_service_handle = self.create_job_service(options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 317, in create_job_service
return self.create_job_service_handle(server.start(), options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\job_server.py", line 54, in start
grpc.channel_ready_future(channel).result(timeout=self._timeout)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_utilities.py", line 139, in result
self._block(timeout)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_utilities.py", line 85, in _block
raise grpc.FutureTimeoutError()
grpc.FutureTimeoutError
</code></pre>
<p>I think the problem is with the pipeline options, mainly with job_endpoint. Without the pipeline options the application runs fine and prints the output to the console.</p>
<p>Which IP address and host should I provide to the job endpoint to make it work on Spark?</p>
| <ul>
<li><p>You have the runner environment that instantiates the <code>Beam</code> job : <code>Kubernetes</code></p>
</li>
<li><p>In the execution phase your <code>Beam</code> job uses the <code>Docker</code> image : <code>environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"</code></p>
</li>
</ul>
<p>To work correctly, the runner needs to have the same versions used by the image, in this case:</p>
<ul>
<li>Beam Python 2.33.0</li>
<li>Python 3.7</li>
</ul>
<p>You need to install Beam Python 2.33.0 package and Python 3.7 on <code>Kubernetes</code>.</p>
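<p>In practice that usually means baking the matching versions into the Spark worker image. A rough sketch, assuming a pip-capable base image (the exact image is up to you):</p>
<pre class="lang-bash prettyprint-override"><code># run during the worker image build; versions must match the SDK harness container
python3 --version                      # should report Python 3.7.x
pip3 install "apache-beam==2.33.0"
</code></pre>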
|
<p>I'm following this <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Link</a> to install <code>nginx-ingress-controller</code> on my bare metal server <code>Kubernetes-v.1.19.16</code></p>
<p>The below commands i have executed as part of installation.</p>
<pre><code>$ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.4.0
$ cd kubernetes-ingress/deployments
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f rbac/apdos-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f common/ingress-class.yaml
$ kubectl apply -f daemon-set/nginx-ingress.yaml
</code></pre>
<p>I have followed <code>DaemonSet</code> method.</p>
<pre><code>$ kubectl get all -n nginx-ingress
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-bcrk5 0/1 Running 0 19m
pod/nginx-ingress-ndpfz 0/1 Running 0 19m
pod/nginx-ingress-nvp98 0/1 Running 0 19m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress 3 3 0 3 0 <none> 19m
</code></pre>
<p>For all three <code>nginx-ingress</code> pods same error it shown.</p>
<pre><code>$ kubectl describe pods nginx-ingress-bcrk5 -n nginx-ingress
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 38m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-bcrk5 to node-4
Normal Pulling 38m kubelet Pulling image "nginx/nginx-ingress:2.4.0"
Normal Pulled 37m kubelet Successfully pulled image "nginx/nginx-ingress:2.4.0" in 19.603066401s
Normal Created 37m kubelet Created container nginx-ingress
Normal Started 37m kubelet Started container nginx-ingress
Warning Unhealthy 3m13s (x2081 over 37m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
</code></pre>
<pre><code>$ kubectl logs -l app=nginx-ingress -n nginx-ingress
E1007 03:18:37.278678 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
W1007 03:18:55.714313 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E1007 03:18:55.714361 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
W1007 03:19:00.542294 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E1007 03:19:00.542340 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
</code></pre>
<p>Still <code>READY</code> and <code>UP-TO-DATE</code> state showing <code>0</code>, Ideally it show <code>3</code> in both the categories. Please let me know what i'm missing here as part of installation?</p>
<p>Any help is appreciated.</p>
| <p>I'd recommend installing it using <code>helm</code></p>
<p>See <a href="https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/main/deployments/helm-chart</a></p>
<pre class="lang-bash prettyprint-override"><code>helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ingress nginx-stable/nginx-ingress \
--namespace $NAMESPACE \
--version $VERSION
</code></pre>
<p>You can look for versions compatible with your Kubernetes cluster version using:</p>
<pre class="lang-bash prettyprint-override"><code>helm search repo nginx-stable/nginx-ingress --versions
</code></pre>
<p>When the installation has finished, you should see an ingress controller service that holds an <code>EXTERNAL-IP</code>:</p>
<pre class="lang-bash prettyprint-override"><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.0.XXX.XXX XX.XXX.XXX.XX 80:30578/TCP,443:31874/TCP 548d
</code></pre>
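<p>You can also confirm that the controller pods are healthy and that an IngressClass was registered for your Ingress resources to reference:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -n $NAMESPACE
kubectl get ingressclass
</code></pre>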
|
<p>I'm following this <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">Link</a> to install <code>nginx-ingress-controller</code> on my bare metal server <code>Kubernetes-v.1.19.16</code></p>
<p>The below commands i have executed as part of installation.</p>
<pre><code>$ git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v2.4.0
$ cd kubernetes-ingress/deployments
$ kubectl apply -f common/ns-and-sa.yaml
$ kubectl apply -f rbac/rbac.yaml
$ kubectl apply -f rbac/ap-rbac.yaml
$ kubectl apply -f rbac/apdos-rbac.yaml
$ kubectl apply -f common/default-server-secret.yaml
$ kubectl apply -f common/nginx-config.yaml
$ kubectl apply -f common/ingress-class.yaml
$ kubectl apply -f daemon-set/nginx-ingress.yaml
</code></pre>
<p>I have followed <code>DaemonSet</code> method.</p>
<pre><code>$ kubectl get all -n nginx-ingress
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-bcrk5 0/1 Running 0 19m
pod/nginx-ingress-ndpfz 0/1 Running 0 19m
pod/nginx-ingress-nvp98 0/1 Running 0 19m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress 3 3 0 3 0 <none> 19m
</code></pre>
<p>For all three <code>nginx-ingress</code> pods same error it shown.</p>
<pre><code>$ kubectl describe pods nginx-ingress-bcrk5 -n nginx-ingress
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 38m default-scheduler Successfully assigned nginx-ingress/nginx-ingress-bcrk5 to node-4
Normal Pulling 38m kubelet Pulling image "nginx/nginx-ingress:2.4.0"
Normal Pulled 37m kubelet Successfully pulled image "nginx/nginx-ingress:2.4.0" in 19.603066401s
Normal Created 37m kubelet Created container nginx-ingress
Normal Started 37m kubelet Started container nginx-ingress
Warning Unhealthy 3m13s (x2081 over 37m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
</code></pre>
<pre><code>$ kubectl logs -l app=nginx-ingress -n nginx-ingress
E1007 03:18:37.278678 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.VirtualServer: failed to list *v1.VirtualServer: the server could not find the requested resource (get virtualservers.k8s.nginx.org)
W1007 03:18:55.714313 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
E1007 03:18:55.714361 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.Policy: failed to list *v1.Policy: the server could not find the requested resource (get policies.k8s.nginx.org)
W1007 03:19:00.542294 1 reflector.go:424] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
E1007 03:19:00.542340 1 reflector.go:140] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1alpha1.TransportServer: failed to list *v1alpha1.TransportServer: the server could not find the requested resource (get transportservers.k8s.nginx.org)
</code></pre>
<p>Still <code>READY</code> and <code>UP-TO-DATE</code> state showing <code>0</code>, Ideally it show <code>3</code> in both the categories. Please let me know what i'm missing here as part of installation?</p>
<p>Any help is appreciated.</p>
| <p>With the branch below, I was able to see all <code>nginx-ingress</code> pods running.</p>
<pre><code>git clone https://github.com/nginxinc/kubernetes-ingress/
cd kubernetes-ingress/deployments
git checkout v1.10.0
</code></pre>
|
<p>I have a kubernetes application using AWS EKS. With the below details:</p>
<p>Cluster:
+ Kubernetes version: 1.15
+ Platform version: eks.1</p>
<p>Node Groups:
+ Instance Type: t3.medium
+ 2(Minimum) - 2(Maximum) - 2(Desired) configuration </p>
<p>[Pods]
+ 2 active pods</p>
<p>[Service]
+ Configured Type: ClusterIP
+ metadata.name: k8s-eks-api-service</p>
<p>[rbac-role.yaml]
<a href="https://pastebin.com/Ksapy7vK" rel="nofollow noreferrer">https://pastebin.com/Ksapy7vK</a></p>
<p>[alb-ingress-controller.yaml]
<a href="https://pastebin.com/95CwMtg0" rel="nofollow noreferrer">https://pastebin.com/95CwMtg0</a></p>
<p>[ingress.yaml]
<a href="https://pastebin.com/S3gbEzez" rel="nofollow noreferrer">https://pastebin.com/S3gbEzez</a></p>
<pre><code>When I tried to pull the ingress details. Below are the values (NO ADDRESS)
Host: *
ADDRESS:
</code></pre>
<p>My goal is to know why the address has no value. I expect to have a private or public address that can be used by other services in my application.</p>
| <p>The solution that fitted my case was adding ingressClassName to ingress.yaml, or alternatively configuring a default ingressClass.</p>
<blockquote>
<p>add ingressClassName in ingress.yaml</p>
</blockquote>
<pre><code>#ingress.yaml
metadata:
name: ingress-nginx
...
spec:
ingressClassName: nginx <-- add this
rules:
...
</code></pre>
<p>or</p>
<blockquote>
<p>edit ingressClass yaml</p>
</blockquote>
<pre><code>$ kubectl edit ingressclass <ingressClass Name> -n <ingressClass namespace>
#ingressClass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
annotations:
ingressclass.kubernetes.io/is-default-class: "true" <-- add this
....
</code></pre>
<p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/ingress_class/" rel="nofollow noreferrer">link</a></p>
|
<p>We have an OpenShift 4.8 cluster with 3 master nodes and 10 worker nodes in Azure. All the worker and master nodes are added under the same load balancer. I am a bit confused about how ingress traffic reaches the cluster. When someone accesses the DNS of their application, traffic comes through the load balancer over port 80/443 to any of the cluster nodes (including the masters), but the ingress controller pods are running only on one or two nodes. How exactly traffic reaches to the correct ingress controller pods? Also once the traffic reaches the node how exactly it identifies the correct ingress host to forward traffic to?
Another question around this is, why both master and worker nodes are added under the same load balancer?</p>
| <blockquote>
<p>How exactly traffic reaches to the correct ingress controller pods? Also once the traffic reaches the node how exactly it identifies the correct ingress host to forward traffic to?</p>
</blockquote>
<p>The ingress controller doesn't need to be deployed on every compute node, because it knows the full path to every pod that is exposed through a route.</p>
<p><strong>How to know which nodes are available</strong></p>
<p>The load balancer has a health check feature that probes a port or an HTTP endpoint on each node. That is how it knows which nodes currently have working ingress pods.</p>
<p><strong>How to reach the ingress controller</strong></p>
<p>The ingress opens ports in the pod, not on the node. On a cloud provider like Azure, OpenShift creates a LoadBalancer Service for the ingress. That provisions a load balancer in Azure and binds ports on the nodes (hosts) to receive requests from outside the OpenShift cluster. Those node ports are assigned randomly, and the LoadBalancer Service configures the Azure load balancer to reach them, so you don't need to worry about which ports are opened on the nodes.</p>
<p><strong>How to transfer requests to the correct pods</strong></p>
<p>The ingress controller is based on HAProxy working as an L7 proxy. A request to the ingress controller carries a host name, which is matched against the routes you defined; that is how the request is led to the correct pod.</p>
<blockquote>
<p>Another question around this is, why both master and worker nodes are added under the same load balancer?</p>
</blockquote>
<p>The ingress controller is a pod, so if you don't specify a node selector it can be scheduled onto any node in the OpenShift cluster.
Since the pods could end up on different nodes over time, the load balancer fronts all of the nodes to be prepared for that.</p>
|
<p>I am trying to merge some data into a Delta table in a streaming application on k8s, using spark-submit in cluster mode.</p>
<p>I am getting the error below. It works fine in k8s local mode and on my laptop, but none of the operations related to Delta Lake work in k8s cluster mode.</p>
<p>Below are the library versions I am using; is it some compatibility issue?</p>
<pre><code>SPARK_VERSION_DEFAULT=3.3.0
HADOOP_VERSION_DEFAULT=3
HADOOP_AWS_VERSION_DEFAULT=3.3.1
AWS_SDK_BUNDLE_VERSION_DEFAULT=1.11.974
</code></pre>
<p>below is the error message</p>
<blockquote>
<p>py4j.protocol.Py4JJavaError: An error occurred while calling o128.saveAsTable. : java.util.concurrent.ExecutionException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 4) (192.168.15.250 executor 2): java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.sql.catalyst.expressions.ScalaUDF.f of type scala.Function1 in instance of org.apache.spark.sql.catalyst.expressions.ScalaUDF</p>
</blockquote>
| <p>I was finally able to resolve this issue. The cause was that dependent jars such as delta and kafka were not available on the executors, as described in the SO answer below:</p>
<p><a href="https://stackoverflow.com/questions/73473309/cannot-assign-instance-of-scala-collection-immutable-listserializationproxy-to">cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.sql.execution.datasources.v2.DataSourceRDD</a></p>
<p>I added the jars to the spark/jars folder in the Docker image and the issue got resolved.</p>
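<p>A rough sketch of how that can look inside the image build (jar file names and versions are placeholders; match them to your Spark and Delta versions):</p>
<pre class="lang-bash prettyprint-override"><code># run while building the Spark image; $SPARK_HOME points at the Spark install
cp delta-core_2.12-<version>.jar \
   spark-sql-kafka-0-10_2.12-<version>.jar \
   "$SPARK_HOME/jars/"
</code></pre>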
|
<p>I have .Net Core 3.1 web api which is deployed to AKS. I have created Azure App Insights instance to write the logs.
I followed
<a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcore6" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcore6</a>
to configure the .Net application.</p>
<p>Added Microsoft.ApplicationInsights Nuget package</p>
<p>Added connection string in appsettings</p>
<p>Added services.AddApplicationInsightsTelemetry(); in startup.cs</p>
<p>Running api from local pc I can see telemetry being logged in Visual Studio output.</p>
<p>But when I deployed to Azure nothing is flowing into App Insights. Absolutely nothing.</p>
<p>I am new to this and checked the pod logs but didn't find anything in them. The connection string is correct.</p>
<p>From my local pc I tried to write to Actual App Insights. But although I can see telemetry in Visual Studio nothing is going to Azure.
I am assuming because "Accept data ingestion from public networks not connected through a Private Link Scope" is false for the App Insight instance.So this is also not helping me to debug.I cannot change this setting.</p>
<p>The Azure account is linked to On Premise network.</p>
<p>Can someone point out to me what could be the issue?</p>
| <p>Microsoft.ApplicationInsights.AspNetCore was not working when using the connection string. When I changed to the InstrumentationKey, the logs started flowing. Weird, as the Microsoft recommendation is to use the connection string.</p>
|
<p>I'm trying to build one Helm chart for different environments. According to many tutorials such a scheme should work, but my structure does not read values from the dependency repository; Helm just ignores it.</p>
<p>I have following folder structure</p>
<pre><code>helm
- charts
- core-web
- Chart.yaml
- values.yaml
- templates
- frontend
- Chart.yaml
- values.yaml
- templates
- prod
- Chart.yaml
- values.yaml
- dev
- Chart.yaml
- values.yaml
</code></pre>
<p>prod/Chart.yaml</p>
<pre><code>apiVersion: v1
name: test
version: 1.0.0
dependencies:
- name: core-web
version: "1.37.0"
repository: file://../charts/core-web/
- name: frontend
version: "1.6.0"
repository: "file://../charts/frontend"
</code></pre>
<p>From helm folder i execute following command</p>
<pre><code>helm install ./prod --dry-run --generate-name -n sandbox -f prod/values.yaml
Error: INSTALLATION FAILED: found in Chart.yaml, but missing in charts/ directory: core-web, frontend
</code></pre>
<p>If I move the charts folder into the prod folder, then everything works.
Why does helm not accept the file path from the dependency repository?
It should: <a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">https://helm.sh/docs/helm/helm_dependency/</a></p>
<p>Thanks for the help.</p>
| <p>I tried to replicate the issue; it seems the dependency charts have simply not been pulled into the chart's charts/ directory yet.</p>
<p>You can verify which path helm is using to look for the dependency <code>charts</code>.</p>
<pre><code>helm template test ./prod
#output Error: found in Chart.yaml, but missing in charts/ directory: backend, web-app
</code></pre>
<p>Then I verified the path helm is looking at:</p>
<pre><code>helm dep ls ./prod
</code></pre>
<p>From the output it is clear that helm reports the <strong>status</strong> <code>missing</code>, because it is still looking for the charts inside the <code>prod</code> folder.</p>
<pre><code>NAME VERSION REPOSITORY STATUS
backend 1.2.3 file://charts/backend/ missing
web-app 1.2.3 file://charts/web-app/ missing
</code></pre>
<p>So, to fix this:</p>
<pre><code>helm dependency update ./prod
</code></pre>
<p>Then I can see:</p>
<pre><code>helm dep ls ./prod
</code></pre>
<p><a href="https://i.stack.imgur.com/2DJqe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2DJqe.png" alt="enter image description here" /></a></p>
|
<p>I was checking the Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">documentation</a> for pulling images. In it, I saw two policies, IfNotPresent and Always. For "Always" it's stated that</p>
<blockquote>
<p>If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.</p>
</blockquote>
<p>I am unable to understand what is local here. Is it a node, pod, or cluster? What is the difference between Always and IfNotPresent if it is at node level? It's very confusing.</p>
| <p><code>Always</code>, as the name suggests, will cause the container runtime to attempt to pull a new version of the image from the repository every time it tries to create the container. The "locally cached" image the documentation talks about lives on the individual node (in the kubelet's container runtime image store), not in the pod or the cluster.</p>
<p>In docker, this is like doing:</p>
<p><code>docker run --pull=always nginx</code></p>
<p><code>IfNotPresent</code> will pull the image if it does not exist <strong>on the node that is attempting to create the container</strong></p>
<p>This is like doing:</p>
<p><code>docker run --pull=missing nginx</code></p>
<p>or</p>
<p><code>docker run nginx</code></p>
|
<p>I am unsure what the difference between "plain calico"</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>and the "calico tigera" (operator) is.</p>
<pre><code>helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.24.1\
--create-namespace -f values.yaml --namespace tigera-operator
</code></pre>
<p>I only really need a CNI, ideally the least contorted.
My impression is that the tigera is somehow a "new extended version" and it makes me
sad to see suddenly a much fuller K8s cluster because of this
(seems hence like mainly the devs of Calico wanted to get funding and needed to blow up
the complexity for fame of their product, but I might be wrong hence the question)</p>
<pre><code>root@cp:~# kubectl get all -A | grep -e 'NAMESPACE\|calico'
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver pod/calico-apiserver-8665d9fcfb-6z7sv 1/1 Running 0 7m30s
calico-apiserver pod/calico-apiserver-8665d9fcfb-95rlh 1/1 Running 0 7m30s
calico-system pod/calico-kube-controllers-78687bb75f-ns5nj 1/1 Running 0 8m3s
calico-system pod/calico-node-2q8h9 1/1 Running 0 7m43s
calico-system pod/calico-typha-6d48dfd49d-p5p47 1/1 Running 0 7m47s
calico-system pod/csi-node-driver-9gjc4 2/2 Running 0 8m4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-apiserver service/calico-api ClusterIP 10.105.6.52 <none> 443/TCP 7m30s
calico-system service/calico-kube-controllers-metrics ClusterIP 10.105.39.117 <none> 9094/TCP 8m3s
calico-system service/calico-typha ClusterIP 10.102.152.6 <none> 5473/TCP 8m5s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8m4s
calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 8m4s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 7m30s
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 8m3s
calico-system deployment.apps/calico-typha 1/1 1 1 8m4s
NAMESPACE NAME DESIRED CURRENT READY AGE
calico-apiserver replicaset.apps/calico-apiserver-8665d9fcfb 2 2 2 7m30s
calico-system replicaset.apps/calico-kube-controllers-78687bb75f 1 1 1 8m3s
calico-system replicaset.apps/calico-typha-588b4ff644 0 0 0 8m4s
calico-system replicaset.apps/calico-typha-6d48dfd49d 1 1 1 7m47s
</code></pre>
| <p>The CNI is a small network plugin that is used for allocating IP addresses and wiring up pod networking, while the Tigera operator is responsible for installing and managing the whole Calico stack (networking plus extra components such as the Calico API server) that connects nodes and services.</p>
|
<p>We're using kubernetes with ingresses, cert-manager (v1.9.1) and the letsencrypt issuer.
Our use case is the following:
We have a multitenant solution with 3 services that allows people to change their domain names (1 for each service).
Each time someone is changing a domain name, it triggers the update of the ingress related to the service. This update triggers the generation of a new order by the cert manager.</p>
<p>We meet 2 issues:</p>
<ul>
<li>When someone is changing one domain name (making it replaced in the spec.tls[] and spec.rules[] sections of the ingress), the order created seems to regenerate all the letsencrypt certificates from the ingress and not only the one changed (which leads to the letsencrypt issue "too many certificates already issued" after some tries).</li>
<li>When one of the certificates of an order has an error (due to the previous issue, "too many certificates already issued"), the other certificates in the order (that do normally not have any problem) seem not to be generated either.</li>
</ul>
<p>What would be the best strategy to avoid these issues with our use case? (Maybe we would have to create one ingress by domain name? Or is there a way to have one order by certificate and not to trigger the regeneration of existing certificates on the update of an ingress?)</p>
<p>--- EDITED ---</p>
<p>Here is the ingress (with {hidden} fields and renaming for privacy):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: server-ingress
namespace: {hidden}
annotations:
cert-manager.io/issuer: letsencrypt-prod
spec:
ingressClassName: nginx
tls:
- hosts:
- test1.customer-domain.fr
- test1.our-company.com
- test2.our-company.com
secretName: our-company-server-tls
rules:
- host: test1.customer-domain.fr
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: server-v3-24-5
servicePort: 8080
- host: test1.our-company.com
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: server-v3-24-5
servicePort: 8080
- host: test2.our-company.com
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: server-v3-24-5
servicePort: 8080
status:
loadBalancer:
ingress:
- ip: {hidden}
</code></pre>
<p>Thank you,</p>
| <blockquote>
<p>The order created seems to regenerate all the letsencrypt certificates from the ingress and not only the one changed</p>
</blockquote>
<p>Based on the discussion, this is because you are using the same secret name for all the hosts in the ingress. You need a different secret name for each host in the TLS section of the ingress; that way, changing one host will not recreate the orders for all the certificates.</p>
<p>So this should work,</p>
<pre><code> tls:
- secretName: test1.customer-domain.fr
hosts:
- test1.customer-domain.fr
- secretName: test1.our-company.com
hosts:
- test1.our-company.com
rules:
- host: test1.customer-domain.fr
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: server-v3-24-5
servicePort: 8080
- host: test1.our-company.com
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: server-v3-24-5
servicePort: 8080
</code></pre>
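<p>With one secret per host, cert-manager creates an independent Certificate (and underlying Order) for each host, so you can watch that only the changed domain gets re-ordered:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get certificate,order -n <namespace>
</code></pre>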
|
<p>I am unsure what the difference between "plain calico"</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>and the "calico tigera" (operator) is.</p>
<pre><code>helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.24.1\
--create-namespace -f values.yaml --namespace tigera-operator
</code></pre>
<p>I only really need a CNI, ideally the least contorted.
My impression is that the tigera is somehow a "new extended version" and it makes me
sad to see suddenly a much fuller K8s cluster because of this
(seems hence like mainly the devs of Calico wanted to get funding and needed to blow up
the complexity for fame of their product, but I might be wrong hence the question)</p>
<pre><code>root@cp:~# kubectl get all -A | grep -e 'NAMESPACE\|calico'
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver pod/calico-apiserver-8665d9fcfb-6z7sv 1/1 Running 0 7m30s
calico-apiserver pod/calico-apiserver-8665d9fcfb-95rlh 1/1 Running 0 7m30s
calico-system pod/calico-kube-controllers-78687bb75f-ns5nj 1/1 Running 0 8m3s
calico-system pod/calico-node-2q8h9 1/1 Running 0 7m43s
calico-system pod/calico-typha-6d48dfd49d-p5p47 1/1 Running 0 7m47s
calico-system pod/csi-node-driver-9gjc4 2/2 Running 0 8m4s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
calico-apiserver service/calico-api ClusterIP 10.105.6.52 <none> 443/TCP 7m30s
calico-system service/calico-kube-controllers-metrics ClusterIP 10.105.39.117 <none> 9094/TCP 8m3s
calico-system service/calico-typha ClusterIP 10.102.152.6 <none> 5473/TCP 8m5s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8m4s
calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 8m4s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
calico-apiserver deployment.apps/calico-apiserver 2/2 2 2 7m30s
calico-system deployment.apps/calico-kube-controllers 1/1 1 1 8m3s
calico-system deployment.apps/calico-typha 1/1 1 1 8m4s
NAMESPACE NAME DESIRED CURRENT READY AGE
calico-apiserver replicaset.apps/calico-apiserver-8665d9fcfb 2 2 2 7m30s
calico-system replicaset.apps/calico-kube-controllers-78687bb75f 1 1 1 8m3s
calico-system replicaset.apps/calico-typha-588b4ff644 0 0 0 8m4s
calico-system replicaset.apps/calico-typha-6d48dfd49d 1 1 1 7m47s
</code></pre>
| <p>Tigera is a Cloud-Native Application Protection Platform (CNAPP).</p>
<p>For your use case you just want the first option, the plain Calico CNI.</p>
|
<p>I am trying to run a beam application on spark on kubernetes.</p>
<p>beam-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: spark-beam-jobserver
spec:
serviceName: spark-headless
selector:
matchLabels:
app: spark-beam-jobserver
template:
metadata:
labels:
app: spark-beam-jobserver
app.kubernetes.io/instance: custom_spark
app.kubernetes.io/name: spark
spec:
containers:
- name: spark-beam-jobserver
image: apache/beam_spark_job_server:2.33.0
imagePullPolicy: Always
ports:
- containerPort: 8099
name: jobservice
- containerPort: 8098
name: artifact
- containerPort: 8097
name: expansion
volumeMounts:
- name: beam-artifact-staging
mountPath: "/tmp/beam-artifact-staging"
command: [
"/bin/bash", "-c", "./spark-job-server.sh --job-port=8099 --spark-master-url=spark://spark-primary:7077"
]
volumes:
- name: beam-artifact-staging
persistentVolumeClaim:
claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
name: spark-beam-jobserver
labels:
app: spark-beam-jobserver
spec:
selector:
app: spark-beam-jobserver
type: NodePort
ports:
- port: 8099
nodePort: 32090
name: job-service
- port: 8098
nodePort: 32091
name: artifacts
# type: ClusterIP
# ports:
# - port: 8099
# name: job-service
# - port: 8098
# name: artifacts
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: spark-primary
spec:
serviceName: spark-headless
replicas: 1
selector:
matchLabels:
app: spark
template:
metadata:
labels:
app: spark
component: primary
app.kubernetes.io/instance: custom_spark
app.kubernetes.io/name: spark
spec:
containers:
- name: primary
image: docker.io/secondcomet/spark-custom-2.4.6
env:
- name: SPARK_MODE
value: "master"
- name: SPARK_RPC_AUTHENTICATION_ENABLED
value: "no"
- name: SPARK_RPC_ENCRYPTION_ENABLED
value: "no"
- name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
value: "no"
- name: SPARK_SSL_ENABLED
value: "no"
ports:
- containerPort: 7077
name: masterendpoint
- containerPort: 8080
name: ui
- containerPort: 7078
name: driver-rpc-port
- containerPort: 7079
name: blockmanager
livenessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
resources:
limits:
cpu: 1.0
memory: 1Gi
requests:
cpu: 0.5
memory: 0.5Gi
---
apiVersion: v1
kind: Service
metadata:
name: spark-primary
labels:
app: spark
component: primary
spec:
type: ClusterIP
ports:
- name: masterendpoint
port: 7077
targetPort: 7077
- name: rest
port: 6066
targetPort: 6066
- name: ui
port: 8080
targetPort: 8080
- name: driver-rpc-port
protocol: TCP
port: 7078
targetPort: 7078
- name: blockmanager
protocol: TCP
port: 7079
targetPort: 7079
selector:
app: spark
component: primary
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: spark-children
labels:
app: spark
spec:
serviceName: spark-headless
replicas: 1
selector:
matchLabels:
app: spark
template:
metadata:
labels:
app: spark
component: children
app.kubernetes.io/instance: custom_spark
app.kubernetes.io/name: spark
spec:
containers:
- name: docker
image: docker:19.03.5-dind
securityContext:
privileged: true
volumeMounts:
- name: dind-storage
mountPath: /var/lib/docker
env:
- name: DOCKER_TLS_CERTDIR
value: ""
resources:
limits:
cpu: 1.0
memory: 1Gi
requests:
cpu: 0.5
memory: 100Mi
- name: children
image: docker.io/secondcomet/spark-custom-2.4.6
env:
- name: DOCKER_HOST
value: "tcp://localhost:2375"
- name: SPARK_MODE
value: "worker"
- name: SPARK_MASTER_URL
value: "spark://spark-primary:7077"
- name: SPARK_WORKER_MEMORY
value: "1G"
- name: SPARK_WORKER_CORES
value: "1"
- name: SPARK_RPC_AUTHENTICATION_ENABLED
value: "no"
- name: SPARK_RPC_ENCRYPTION_ENABLED
value: "no"
- name: SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED
value: "no"
- name: SPARK_SSL_ENABLED
value: "no"
ports:
- containerPort: 8081
name: ui
volumeMounts:
- name: beam-artifact-staging
mountPath: "/tmp/beam-artifact-staging"
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 0.5
memory: 1Gi
volumes:
- name: dind-storage
emptyDir:
- name: beam-artifact-staging
persistentVolumeClaim:
claimName: spark-beam-pvc
---
apiVersion: v1
kind: Service
metadata:
name: spark-children
labels:
app: spark
component: children
spec:
type: ClusterIP
ports:
- name: ui
port: 8081
targetPort: 8081
selector:
app: spark
component: children
---
apiVersion: v1
kind: Service
metadata:
name: spark-headless
spec:
clusterIP: None
selector:
app.kubernetes.io/instance: custom_spark
app.kubernetes.io/name: spark
type: ClusterIP
</code></pre>
<pre><code>$ kubectl get all --namespace spark-beam
NAME READY STATUS RESTARTS AGE
pod/spark-beam-jobserver-0 1/1 Running 0 58m
pod/spark-children-0 2/2 Running 0 58m
pod/spark-primary-0 1/1 Running 0 58m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
AGE
service/spark-beam-jobserver NodePort 10.97.173.68 <none> 8099:32090/TCP,8098:32091/TCP
58m
service/spark-children ClusterIP 10.105.209.30 <none> 8081/TCP
58m
service/spark-headless ClusterIP None <none> <none>
58m
service/spark-primary ClusterIP 10.109.32.126 <none> 7077/TCP,6066/TCP,8080/TCP,7078/TCP,7079/TCP 58m
NAME READY AGE
statefulset.apps/spark-beam-jobserver 1/1 58m
statefulset.apps/spark-children 1/1 58m
statefulset.apps/spark-primary 1/1 58m
</code></pre>
<p>beam-application.py</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
class ConvertToByteArray(beam.DoFn):
def __init__(self):
pass
def setup(self):
pass
def process(self, row):
try:
yield bytearray(row + '\n', 'utf-8')
except Exception as e:
raise e
def run():
options = PipelineOptions([
"--runner=PortableRunner",
"--job_endpoint=localhost:32090",
"--save_main_session",
"--environment_type=DOCKER",
"--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"
])
with beam.Pipeline(options=options) as p:
lines = (p
| 'Create words' >> beam.Create(['this is working'])
| 'Split words' >> beam.FlatMap(lambda words: words.split(' '))
| 'Build byte array' >> beam.ParDo(ConvertToByteArray())
| 'Group' >> beam.GroupBy() # Do future batching here
| 'print output' >> beam.Map(print)
)
if __name__ == "__main__":
run()
</code></pre>
<p>When I am trying to run the python application in my conda environment:
python beam-application.py</p>
<p>I am getting the below error :</p>
<pre><code> File "beam.py", line 39, in <module>
run()
File "beam.py", line 35, in run
| 'print output' >> beam.Map(print)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 586, in __exit__
self.result = self.run()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 565, in run
return self.runner.run_pipeline(self, self._options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 440, in run_pipeline
job_service_handle.submit(proto_pipeline)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 114, in submit
prepare_response.staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 218, in stage
staging_session_token)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\artifact_service.py", line 237, in offer_artifacts
for request in requests:
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 426, in __next__
return self._next()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_channel.py", line 826, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNAVAILABLE: WSA Error"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2022-10-10T14:38:39.520460502+00:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNAVAILABLE: WSA Error {grpc_status:14, created_time:"2022-10-10T14:38:39.520457024+00:00"}]}"
>
</code></pre>
<p>I am not sure where exactly is the problem.</p>
<p>What should I pass in job_endpoint and artifact_endpoint?</p>
<p>I also tried port-forwarding :</p>
<pre><code>kubectl port-forward service/spark-beam-jobserver 32090:8099 --namespace spark-beam
kubectl port-forward service/spark-primary 8080:8080 --namespace spark-beam
kubectl port-forward service/spark-children 8081:8081 --namespace spark-beam
</code></pre>
| <p>I suppose this is based on <a href="https://github.com/cometta/python-apache-beam-spark" rel="nofollow noreferrer">https://github.com/cometta/python-apache-beam-spark</a>?</p>
<p><code>spark-beam-jobserver</code> is using service type <code>NodePort</code>. So, if running in a local (minikube) cluster, you won't need any port forwarding to reach the job server.</p>
<p>You should be able to submit a Python job from your local shell using the following pipeline options:</p>
<pre><code> --job_endpoint=localhost:32090
--artifact_endpoint=localhost:32091
</code></pre>
<p>Note that your Python code above is missing the <code>artifact_endpoint</code>. You have to provide both endpoints.</p>
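<p>If the cluster is not local (so the NodePorts are not reachable as <code>localhost</code>), forwarding both the job and artifact ports gives the same effect. A sketch based on the service definition above:</p>
<pre class="lang-bash prettyprint-override"><code># localhost:32090 -> job service (8099), localhost:32091 -> artifact service (8098)
kubectl port-forward service/spark-beam-jobserver 32090:8099 32091:8098 --namespace spark-beam
</code></pre>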
|
<p>I have an Ansible task almost identical to the top answer here: <a href="https://stackoverflow.com/questions/53198576/ansible-playbook-wait-until-all-pods-running">Ansible playbook wait until all pods running</a></p>
<pre class="lang-yaml prettyprint-override"><code>- name: Wait for all control-plane pods become created
shell: "kubectl get po --namespace=kube-system --selector tier=control-plane --output=jsonpath='{.items[*].metadata.name}'"
register: control_plane_pods_created
until: item in control_plane_pods_created.stdout
retries: 10
delay: 30
with_items:
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- name: Wait for control-plane pods become ready
shell: "kubectl wait --namespace=kube-system --for=condition=Ready pods --selector tier=control-plane --timeout=600s"
register: control_plane_pods_ready
- debug: var=control_plane_pods_ready.stdout_lines
</code></pre>
<p>As shown in his example it prints 'FAILED' 3 times:</p>
<pre class="lang-json prettyprint-override"><code>TASK [Wait for all control-plane pods become created] ******************************
FAILED - RETRYING: Wait all control-plane pods become created (10 retries left).
FAILED - RETRYING: Wait all control-plane pods become created (9 retries left).
FAILED - RETRYING: Wait all control-plane pods become created (8 retries left).
changed: [localhost -> localhost] => (item=etcd)
changed: [localhost -> localhost] => (item=kube-apiserver)
changed: [localhost -> localhost] => (item=kube-controller-manager)
changed: [localhost -> localhost] => (item=kube-scheduler)
TASK [Wait for control-plane pods become ready] ********************************
changed: [localhost -> localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"control_plane_pods_ready.stdout_lines": [
"pod/etcd-localhost.localdomain condition met",
"pod/kube-apiserver-localhost.localdomain condition met",
"pod/kube-controller-manager-localhost.localdomain condition met",
"pod/kube-scheduler-localhost.localdomain condition met"
]
}
</code></pre>
<p>For my implementation, the loop fails more than 3 times more like 20 times... so it clogs up my logs... but this is expected behaviour.</p>
<p>So how can I only print 'FAILED' once all the retries have been used up?</p>
<p>I hope my question makes sense,
Thanks</p>
| <blockquote>
<p><em>How can I only print 'FAILED - RETRYING' once all the retries have been used up?</em></p>
</blockquote>
<p>I understand that you are referring to <code>until</code> <code>retries</code>, i.e. <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#retrying-a-task-until-a-condition-is-met" rel="nofollow noreferrer">Retrying a task until a condition is met</a>, and that the message <code>FAILED</code> belongs to the loop and not to the final task result.</p>
<p>Running a short fail test</p>
<pre class="lang-yaml prettyprint-override"><code>---
- hosts: localhost
become: false
gather_facts: false
tasks:
- name: Show fail test
shell:
cmd: exit 1 # to fail
register: result
# inner loop
until: result.rc == 0 # which will never happen
retries: 3 # times therefore
delay: 1 # second
# outer
loop: [1, 2, 3] # times over
failed_when: item == 3 and result.rc != 0 # on last outer loop run only
no_log: true # for outer loop content
</code></pre>
<p>resulting in an output of</p>
<pre class="lang-yaml prettyprint-override"><code>PLAY [localhost] **********************************
FAILED - RETRYING: Show fail test (3 retries left).
FAILED - RETRYING: Show fail test (2 retries left).
FAILED - RETRYING: Show fail test (1 retries left).
TASK [Show fail test] *****************************
changed: [localhost] => (item=None)
FAILED - RETRYING: Show fail test (3 retries left).
FAILED - RETRYING: Show fail test (2 retries left).
FAILED - RETRYING: Show fail test (1 retries left).
changed: [localhost] => (item=None)
FAILED - RETRYING: Show fail test (3 retries left).
FAILED - RETRYING: Show fail test (2 retries left).
FAILED - RETRYING: Show fail test (1 retries left).
failed: [localhost] (item=None) => changed=true
attempts: 3
censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
fatal: [localhost]: FAILED! => changed=true
censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
</code></pre>
<p>it seems that it is not possible to suppress the interim message of Ansible's <code>until</code> <code>retries</code> loop on playbook level, neither with <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html#defining-failure" rel="nofollow noreferrer">Defining failure</a> nor by <a href="https://docs.ansible.com/ansible/latest/reference_appendices/logging.html#protecting-sensitive-data-with-no-log" rel="nofollow noreferrer">Protecting sensitive data with <code>no_log</code></a> or <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html#limiting-loop-output-with-label" rel="nofollow noreferrer">Limiting loop output</a>.</p>
<blockquote>
<p><em>... but this is expected behaviour.</em></p>
</blockquote>
<p>Right, unless you are not addressing <a href="https://docs.ansible.com/ansible/devel/plugins/callback.html" rel="nofollow noreferrer">Callback plugins</a> and <a href="https://docs.ansible.com/ansible/devel/plugins/callback.html#setting-a-callback-plugin-for-ansible-playbook" rel="nofollow noreferrer">Setting a (other) callback plugin for ansible-playbook</a> or <a href="https://docs.ansible.com/ansible/devel/dev_guide/developing_plugins.html#developing-callbacks" rel="nofollow noreferrer">Developing (own) Callback plugin</a> the message will remain.</p>
<p><strong>Similar Q&A</strong></p>
<ul>
<li><a href="https://stackoverflow.com/a/48727195/6771046">How to change the interim message of Ansible's <code>until</code> <code>retries</code> loop?</a></li>
</ul>
<p><strong>Further Information</strong></p>
<ul>
<li><a href="https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/callback/default.py#L374" rel="nofollow noreferrer"><code>ansible/plugins/callback/default.py</code></a></li>
</ul>
<p><strong>Possible Solution</strong></p>
<ul>
<li>Ansible Issue #<a href="https://github.com/ansible/ansible/issues/32584" rel="nofollow noreferrer">32584</a></li>
<li><a href="https://docs.ansible.com/ansible/latest/collections/community/general/diy_callback.html" rel="nofollow noreferrer"><code>diy</code> callback – Customize the output</a></li>
</ul>
|
<p>I am trying to run .net core application with Azure SignalR services(free tier). The .net core app is deployed in Azure Kubernetes Service. I have an Angular frontend app that tries to connect to the WebSocket. Below are my configurations in Program.cs file:</p>
<pre><code> services.AddCors(options => options.AddPolicy("testing", builder =>
{
builder.WithOrigins("https://somebackendserver.com");
builder.AllowCredentials();
builder.AllowAnyHeader();
builder.AllowAnyMethod();
}));
services.AddSignalR(options =>
{
options.EnableDetailedErrors = true;
}).AddAzureSignalR(connectionStringSignalR);
app.UseCors("testing");
app.UseEndpoints(configure =>
{
configure.MapHub<GenerationNotificationHub>("/hub");
});
</code></pre>
<p>This is my Angular side code to create a connection:</p>
<pre><code>public createConnection = (): void => {
this.hubConnection = new signalR.HubConnectionBuilder()
.configureLogging(signalR.LogLevel.Error)
.withUrl(`https://somebackendserver.com/hub`,
{
accessTokenFactory: () => this.sessionService.get(SignalrNotificationService.accessTokenStorageKey),
transport: signalR.HttpTransportType.WebSockets,
skipNegotiation: true
})
.withAutomaticReconnect()
.build();
this.hubConnection.start().then().catch();
}
</code></pre>
<p>When the solutions are deployed in AKS I get the following error in browser console window:</p>
<blockquote>
<p>Error: Failed to start the connection: Error: WebSocket failed to
connect. The connection could not be found on the server, either the
endpoint may not be a SignalR endpoint, the connection ID is not
present on the server, or there is a proxy blocking WebSockets. If you
have multiple servers check that sticky sessions are enabled.</p>
</blockquote>
<p>This is the server-side error log</p>
<blockquote>
<p>Failed to connect to
'(Primary)https://xxx.service.signalr.net(hub=GenerationNotificationHub)',
will retry after the back off period. Error detail: Unable to connect
to the remote server. Received an unexpected EOF or 0 bytes from the
transport stream.. Id: 958c67ab-1e91-4983-83ad-bfaf02bc48da</p>
</blockquote>
<p>And this is the Postman error when I try to connect to the WebSocket:</p>
<blockquote>
<p>Status Code: 503 WebSocket request made to a site where WebSockets are
disabled. Request Headers Sec-WebSocket-Version: 13 Sec-WebSocket-Key:
AjiVAXGSpcYCbiGbftHbcg== Connection: Upgrade Upgrade: websocket
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits
Host: somebackendserver.com Response Headers Content-Length: 27
Content-Type: text/html Date: Mon, 19 Sep 2022 13:44:16 GMT Server:
Microsoft-IIS/10.0</p>
</blockquote>
<p>The application works fine when I try to run it on localhost but something seems off when I deploy it as Kubernetes service in Azure.</p>
<p>EDIT: We have a Kong API Gateway for managing our API Gateway services, and I am suspecting it is somehow blocking SignalR websocket network connections. I keep getting this CORS error</p>
<blockquote>
<p>Access to fetch at >'https://api.XXX.dev.XXXXXX.com/hub/generation/negotiate?negotiateVersion=1' from origin 'https://XXXX.dev.XXXXXX.com' has been >blocked by CORS policy:
Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value >'https://portal.api.dev.XXXX.com' that is not equal to the supplied >origin. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' >to fetch the resource with CORS disabled.</p>
</blockquote>
| <p>I was able to figure out what went wrong here. We are using Kong API gateway 2.7.1 for our organization. This version of Kong does not support the WS/WSS protocols, which explains errors like:</p>
<blockquote>
<p>Status Code: 503 WebSocket request made to a site where WebSockets are
disabled.</p>
</blockquote>
<p>Or</p>
<blockquote>
<p>Error: WebSocket failed to connect. The connection could not be found
on the server, either the endpoint may not be a SignalR endpoint, the
connection ID is not present on the server, or there is a proxy
blocking WebSockets</p>
</blockquote>
<p>Another clue was that SignalR WebSockets was working on my local machine but not on the Cloud.</p>
<p>From Kong 3.0 Enterprise onwards, we have WebSocket support.
I hope this helps.</p>
|
<p>I have already deployed Spark on Kubernetes, below is the deployment.yaml,</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
namespace: default
spec:
type: Python
pythonVersion: "3"
mode: cluster
image: "user/pyspark-app:1.0"
imagePullPolicy: Always
mainApplicationFile: local:///app/pyspark-app.py
sparkVersion: "3.1.1"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: spark
executor:
cores: 1
instances: 2
memory: "512m"
labels:
version: 3.1.1
</code></pre>
<p>Below is the service.yaml:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: spark-operator-role
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: edit
subjects:
- kind: ServiceAccount
name: spark
namespace: default
</code></pre>
<p>GCP Spark operator is also installed on Kubernetes</p>
<p>Below are the services running:</p>
<pre><code>pyspark-pi-84dad9839f7f5f43-driver-svc ClusterIP None <none> 7078/TCP,7079/TCP,4040/TCP 2d14h
</code></pre>
<p>Now I want to run a beam application on this driver. Please find the sample code for beam application below:</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
options = PipelineOptions([
"--runner=PortableRunner",
"--job_endpoint=http://127.0.0.1:4040/",
"--environment_type=DOCKER",
"--environment_config=docker.io/apache/beam_python3.7_sdk:2.33.0"
])
# lets have a sample string
data = ["this is sample data", "this is yet another sample data"]
# create a pipeline
pipeline = beam.Pipeline(options=options)
counts = (pipeline | "create" >> beam.Create(data)
| "split" >> beam.ParDo(lambda row: row.split(" "))
| "pair" >> beam.Map(lambda w: (w, 1))
| "group" >> beam.CombinePerKey(sum))
# lets collect our result with a map transformation into output array
output = []
def collect(row):
output.append(row)
return True
counts | "print" >> beam.Map(collect)
# Run the pipeline
result = pipeline.run()
# lets wait until result a available
result.wait_until_finish()
# print the output
print(output)
</code></pre>
<p>When I am trying to run the above apache beam application, it is throwing the below error:</p>
<pre><code>$ python beam2.py
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
Traceback (most recent call last):
File "beam2.py", line 31, in <module>
result = pipeline.run()
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\pipeline.py", line 565, in run
return self.runner.run_pipeline(self, self._options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 438, in run_pipeline
job_service_handle = self.create_job_service(options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\portable_runner.py", line 317, in create_job_service
return self.create_job_service_handle(server.start(), options)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\apache_beam\runners\portability\job_server.py", line 54, in start
grpc.channel_ready_future(channel).result(timeout=self._timeout)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_utilities.py", line 139, in result
self._block(timeout)
File "C:\Users\eapasnr\Anaconda3\envs\oden2\lib\site-packages\grpc\_utilities.py", line 85, in _block
raise grpc.FutureTimeoutError()
grpc.FutureTimeoutError
</code></pre>
<p>I think the problem is with the pipeline options, mainly with job_endpoint. Without pipeline options the application is running fine and giving the output to the console.</p>
<p>Which IP address and host should I provide to the job end point to make it work on spark.</p>
| <p>There's a couple of requirements to be met to use the <code>PortableRunner</code> with Spark:</p>
<ul>
<li>You have to also run a Spark job-server alongside your cluster. It takes care of submitting the application to spark (and will function as driver). Note, the Spark version of the job-server should closely match the version of your cluster!</li>
<li>Both, <code>job_endpoint</code> and <code>artifact_endpoint</code> have to point to the job-server (using respective ports) and not to Spark itself.</li>
<li>Finally, to use <code>environment_type=DOCKER</code>, you have to make sure Docker is installed on your Spark workers.</li>
</ul>
<p>Unfortunately Beam's documentation isn't great in that area. But I suggest you have a look at <a href="https://beam.apache.org/documentation/runners/spark/#running-on-a-pre-deployed-spark-cluster" rel="nofollow noreferrer">https://beam.apache.org/documentation/runners/spark/#running-on-a-pre-deployed-spark-cluster</a></p>
|
<p>We have a k8s cluster and postgres pods are running in it. Our backend services connect to postgres K8s service. I am trying to introduce PgBouncer as proxy when connecting to postgres pods. Postgres has ssl mode enabled and we have our own ROOT_CA, and an intermediate CA that is signed by the ROOT_CA. We create certificates for postgres signed by this intermediate CA. I am using the same intermediate CA to sign certificates for pgbouncer and creating a Self signed certificate for pgbouncer. I am mounting these certificates on pgbouncer pod. When I configure the backend service to redirect traffic to pgbouncer service and through pgbouncer to postgres,I am seeing below error:</p>
<pre><code>2022-10-06 11:43:13.030 1 DEBUG parse_ini_file: 'verbose' = '2' ok:1
2022-10-06 11:43:13.033 1 NOISE event: 128, SBuf: 192, PgSocket: 400, IOBuf: 4108
2022-10-06 11:43:13.033 1 LOG file descriptor limit: 1048576 (H:1048576), max_client_conn: 100, max fds possible: 110
2022-10-06 11:43:13.033 1 DEBUG pktbuf_dynamic(128): 0x55a4515583c0
2022-10-06 11:43:13.033 1 DEBUG make_room(0x55a4515583c0, 4): realloc newlen=256
2022-10-06 11:43:13.033 1 DEBUG pktbuf_dynamic(128): 0x55a4515585c0
2022-10-06 11:43:13.034 1 NOISE connect(3, unix:/tmp/.s.PGSQL.5432) = No such file or directory
2022-10-06 11:43:13.034 1 DEBUG adns_create_context: udns 0.4
2022-10-06 11:43:13.034 1 DEBUG add_listen: 0.0.0.0:5432
2022-10-06 11:43:13.035 1 NOISE old TCP_DEFER_ACCEPT on 7 = 0
2022-10-06 11:43:13.035 1 NOISE install TCP_DEFER_ACCEPT on 7
2022-10-06 11:43:13.035 1 LOG listening on 0.0.0.0:5432
2022-10-06 11:43:13.035 1 DEBUG add_listen: ::/5432
2022-10-06 11:43:13.035 1 NOISE old TCP_DEFER_ACCEPT on 8 = 0
2022-10-06 11:43:13.035 1 NOISE install TCP_DEFER_ACCEPT on 8
2022-10-06 11:43:13.035 1 LOG listening on ::/5432
2022-10-06 11:43:13.035 1 DEBUG add_listen: unix:/tmp/.s.PGSQL.5432
2022-10-06 11:43:13.036 1 LOG listening on unix:/tmp/.s.PGSQL.5432
2022-10-06 11:43:13.036 1 LOG process up: pgbouncer 1.9.0, libevent 2.1.8-stable (epoll), adns: udns 0.4, tls: LibreSSL 2.6.5
2022-10-06 11:43:16.166 1 NOISE new fd from accept=10
2022-10-06 11:43:16.167 1 NOISE resync: done=0, parse=0, recv=0
2022-10-06 11:43:16.167 1 NOISE C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 pkt='!' len=8
2022-10-06 11:43:16.167 1 NOISE C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 C: req SSL
2022-10-06 11:43:16.167 1 NOISE C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 P: nak
2022-10-06 11:43:16.167 1 NOISE resync: done=8, parse=8, recv=8
2022-10-06 11:43:16.167 1 DEBUG C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 P: got connection: 127.0.0.6:38446 -> 127.0.0.6:5432
2022-10-06 11:43:16.167 1 NOISE safe_accept(7) = Resource temporarily unavailable
2022-10-06 11:43:16.167 1 NOISE resync: done=0, parse=0, recv=0
2022-10-06 11:43:16.167 1 NOISE C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 pkt='!' len=69
2022-10-06 11:43:16.168 1 DEBUG C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 got var: user=<user_name>
2022-10-06 11:43:16.168 1 DEBUG C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 got var: database=<database_name>
2022-10-06 11:43:16.168 1 DEBUG C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 using application_name: pg_isready
2022-10-06 11:43:16.168 1 NOISE cstr_get_pair: "host"="postgres-cluster"
2022-10-06 11:43:16.168 1 NOISE cstr_get_pair: "port"="5432"
2022-10-06 11:43:16.168 1 NOISE cstr_get_pair: "auth_user"="<user_name>"
2022-10-06 11:43:16.168 1 DEBUG pktbuf_dynamic(128): 0x55a4515686c0
2022-10-06 11:43:16.168 1 LOG C-0x55a451568fb0: (nodb)/(nouser)@127.0.0.6:38446 registered new auto-database: db=<database_name>
2022-10-06 11:43:16.168 1 DEBUG C-0x55a451568fb0: <user_name>/(nouser)@127.0.0.6:38446 pause_client
2022-10-06 11:43:16.168 1 NOISE S-0x55a45156ddf0: <database_name>/<user_name>@(bad-af):0 inet socket: postgres-db-pg-cluster
2022-10-06 11:43:16.168 1 NOISE S-0x55a45156ddf0: <database_name>/<user_name>@(bad-af):0 dns socket: postgres-db-pg-cluster
2022-10-06 11:43:16.168 1 NOISE dns: new req: postgres-cluster
2022-10-06 11:43:16.168 1 DEBUG zone_register(postgres-cluster)
2022-10-06 11:43:16.168 1 NOISE udns_timer_setter: ctx=0x55a4515665e0 timeout=0
2022-10-06 11:43:16.168 1 NOISE dns: udns_launch_query(postgres-cluster)=0x55a4515688e0
2022-10-06 11:43:16.168 1 NOISE udns_timer_cb
2022-10-06 11:43:16.168 1 NOISE udns_timer_setter: ctx=0x55a4515665e0 timeout=4
2022-10-06 11:43:16.168 1 DEBUG launch_new_connection: already progress
2022-10-06 11:43:16.168 1 NOISE udns_io_cb
2022-10-06 11:43:16.168 1 NOISE udns_result_a4: postgres-cluster: 1 ips
2022-10-06 11:43:16.168 1 NOISE DNS: postgres-cluster[0] = 10.43.169.137:0 [STREAM]
2022-10-06 11:43:16.168 1 NOISE dns: deliver_info(postgres-cluster) addr=10.43.169.137:0
2022-10-06 11:43:16.168 1 DEBUG S-0x55a45156ddf0: <database_name>/<user_name>@(bad-af):0 dns_callback: inet4: 10.43.169.137:5432
2022-10-06 11:43:16.168 1 DEBUG S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 launching new connection to server
2022-10-06 11:43:16.169 1 NOISE udns_timer_setter: ctx=0x55a4515665e0 timeout=-1
2022-10-06 11:43:16.169 1 DEBUG launch_new_connection: already progress
2022-10-06 11:43:16.169 1 DEBUG S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 S: connect ok
2022-10-06 11:43:16.169 1 LOG S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 new connection to server (from 10.42.2.120:54000)
2022-10-06 11:43:16.169 1 NOISE S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 P: SSL request
2022-10-06 11:43:16.169 1 DEBUG launch_new_connection: already progress
2022-10-06 11:43:16.175 1 NOISE resync: done=0, parse=0, recv=0
2022-10-06 11:43:16.175 1 NOISE S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 launching tls
2022-10-06 11:43:16.175 1 NOISE resync: done=1, parse=1, recv=1
2022-10-06 11:43:16.175 1 NOISE tls_handshake: err=-2
2022-10-06 11:43:16.175 1 DEBUG launch_new_connection: already progress
2022-10-06 11:43:16.183 1 NOISE tls_handshake: err=-2
2022-10-06 11:43:16.183 1 DEBUG launch_new_connection: already progress
2022-10-06 11:43:16.184 1 NOISE tls_handshake: err=-1
**2022-10-06 11:43:16.184 1 WARNING TLS handshake error: handshake failed: error:1401E418:SSL routines:CONNECT_CR_FINISHED:tlsv1 alert unknown ca**
2022-10-06 11:43:16.184 1 LOG S-0x55a45156ddf0: <database_name>/<user_name>@10.43.169.137:5432 closing because: server conn crashed? (age=0)
2022-10-06 11:43:16.184 1 NOISE tls_close
</code></pre>
<p>Postgres logs show below error:</p>
<pre><code>
"postgres","aorsdb","authentication","connection authenticated:
identity=""postgres"" method=md5
(/home/postgres/pgdata/pgroot/data/pg_hba.conf:12)
"postgres","aorsdb","authentication","connection authorized:
user=postgres database=aorsdb application_name=pgwatch2 SSL enabled
(protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256) "could
not accept SSL connection: certificate verify failed
</code></pre>
<p>Can anyone suggest what might be wrong and what I can look at?
pgbouncer.ini file:</p>
<pre><code>[databases]
* = host=postgres-cluster port=5432 auth_user=postgres
[pgbouncer]
listen_addr = *
listen_port = 5432
pool_mode = session
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
server_tls_sslmode = require
server_tls_key_file = /etc/pgbouncer/certs/server.key
server_tls_cert_file = /etc/pgbouncer/certs/server.crt
server_tls_ca_file = /etc/pgbouncer/certs/ca.crt
verbose = 2
</code></pre>
<p>Few questions:</p>
<ul>
<li>Since both postgres and pgbouncer's certificates are signed by the same CA what can be the issue?</li>
<li>Also I want to understand the flow, so based on the ip addresses in the logs, it is pgbouncer trying to communicate to postgres(as expected), so I thought process is that postgres is unable to recognise the CA and throws this error, is it correct or is it the other way around(pgbouncer is rejecting and throwing this error). Ideally I am hoping this should not happen as both are signed by the same CA.</li>
</ul>
<p>I have done multiple things like mounting certificates in pgbouncer pod signed by the ROOT_CA itself, mounting certificate chain for pgbouncer(Generated certificate, Intermediate CA, ROOT CA) but no luck so far.</p>
| <p>You are using client certs. I don't know if you intend to be doing that, as client certs are kind of rare, and your files containing the client cert and client key have odd names for such use.</p>
<p>Maybe you meant to use client_tls_key_file and client_tls_cert_file in your config, rather than the corresponding server_* properties.</p>
<p>Assuming you are not intending to use client certs but just server certs, then
from pgbouncer's perspective: client_tls_key_file and client_tls_cert_file are what you use to prove to the incoming clients that you are who you claim to be, and server_tls_ca_file is what you use to prove to yourself that the real server is who it claims to be.</p>
|
<p>this is my service information</p>
<p><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE</code></p>
<p><code>k8s-demo NodePort 10.103.77.121 192.168.96.10 80:32236/TCP 24h</code></p>
<p>when I <code>curl 192.168.96.10:32236</code>,returns <code>curl: (56) Recv failure: Connection reset by peer</code></p>
<p>but I can use minikube service command to access this service locally</p>
<p>configuration file below</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2022-10-10T08:27:35Z"
labels:
app: k8s-demo
name: k8s-demo
namespace: default
resourceVersion: "65370"
uid: 9ccd2662-a3df-4e25-9691-3f6e3421d78b
spec:
clusterIP: 10.103.77.121
clusterIPs:
- 10.103.77.121
externalIPs:
- 192.168.96.10
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 32236
port: 80
protocol: TCP
targetPort: 8080
selector:
app: k8s-demo
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>I already set the externalIP and port for this service, but it seems didn't work, I wonder what's wrong with the configuration, thanks !!</p>
| <p>To have external access to a NodePort service, the node should have a public IP address assigned to it. 192.168.96.10 is a private IP address.</p>
<p>IANA has assigned the following 3 IP ranges for private networks:</p>
<pre><code>Address ranges to be use by private networks are:
Class A: 10.0.0.0 to 10.255.255.255
Class B: 172.16.0.0 to 172.31.255.255
Class C: 192.168.0.0 to 192.168.255.255
</code></pre>
<p>Ref: <a href="https://www.ibm.com/docs/en/networkmanager/4.2.0?topic=translation-private-address-ranges" rel="nofollow noreferrer">https://www.ibm.com/docs/en/networkmanager/4.2.0?topic=translation-private-address-ranges</a></p>
|
<p>I have Kubernetes version 1.24.3, and I created a new service account named "deployer", but when I checked it, it shows it doesn't have any secrets.</p>
<p>This is how I created the service account:</p>
<pre><code>kubectl apply -f - << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: deployer
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: deployer-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources:
- deployments
verbs: ["list", "get", "describe", "apply", "delete", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: deployer-crb
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: deployer-role
subjects:
- kind: ServiceAccount
name: deployer
namespace: default
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: token-secret
annotations:
kubernetes.io/service-account.name: deployer
EOF
</code></pre>
<p>When I checked it, it shows that it doesn't have secrets:</p>
<pre><code>cyber@manager1:~$ kubectl get sa deployer
NAME SECRETS AGE
deployer 0 4m32s
cyber@manager1:~$ kubectl get sa deployer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"deployer","namespace":"default"}}
creationTimestamp: "2022-10-13T08:36:54Z"
name: deployer
namespace: default
resourceVersion: "2129964"
uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
</code></pre>
<p>And this is the secret that should be associated to the above service account:</p>
<pre><code>cyber@manager1:~$ kubectl get secrets token-secret -o yaml
apiVersion: v1
data:
ca.crt: <REDACTED>
namespace: ZGVmYXVsdA==
token: <REDACTED>
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"deployer"},"name":"token-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
kubernetes.io/service-account.name: deployer
kubernetes.io/service-account.uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
creationTimestamp: "2022-10-13T08:36:54Z"
name: token-secret
namespace: default
resourceVersion: "2129968"
uid: d960c933-5e7b-4750-865d-e843f52f1b48
type: kubernetes.io/service-account-token
</code></pre>
<p>What can be the reason?</p>
<p><strong>Update:</strong>
The answer helps, but for the record, it doesn't matter: the token works even though it shows 0 secrets:</p>
<pre><code>kubectl get pods --token `cat ./token` -s https://192.168.49.2:8443 --certificate-authority /home/cyber/.minikube/ca.crt --all-namespaces
</code></pre>
<p><strong>Other Details:</strong><br />
I am working on Kubernetes version 1.24:</p>
<pre><code>cyber@manager1:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>You can delete it by running:</p>
<pre><code>kubectl delete clusterroles deployer-role
kubectl delete clusterrolebindings deployer-crb
kubectl delete sa deployer
kubectl delete secrets token-secret
</code></pre>
<p>Reference to Kubernetes 1.24 changes:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#no-really-you-must-read-this-before-you-upgrade" rel="nofollow noreferrer">Change log 1.24</a></li>
<li><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">Creating secret through the documentation</a></li>
</ul>
| <p>Shouldn't the roleRef reference the deployer ClusterRole, which has the name <code>deployer-role</code>? I would try to replace</p>
<pre><code>roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: sdr
</code></pre>
<p>with</p>
<pre><code>roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
  name: deployer-role
</code></pre>
|
<p>I have Kubernetes version 1.24.3, and I created a new service account named "deployer", but when I checked it, it shows it doesn't have any secrets.</p>
<p>This is how I created the service account:</p>
<pre><code>kubectl apply -f - << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: deployer
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: deployer-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources:
- deployments
verbs: ["list", "get", "describe", "apply", "delete", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: deployer-crb
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: deployer-role
subjects:
- kind: ServiceAccount
name: deployer
namespace: default
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: token-secret
annotations:
kubernetes.io/service-account.name: deployer
EOF
</code></pre>
<p>When I checked it, it shows that it doesn't have secrets:</p>
<pre><code>cyber@manager1:~$ kubectl get sa deployer
NAME SECRETS AGE
deployer 0 4m32s
cyber@manager1:~$ kubectl get sa deployer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"deployer","namespace":"default"}}
creationTimestamp: "2022-10-13T08:36:54Z"
name: deployer
namespace: default
resourceVersion: "2129964"
uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
</code></pre>
<p>And this is the secret that should be associated to the above service account:</p>
<pre><code>cyber@manager1:~$ kubectl get secrets token-secret -o yaml
apiVersion: v1
data:
ca.crt: <REDACTED>
namespace: ZGVmYXVsdA==
token: <REDACTED>
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"deployer"},"name":"token-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
kubernetes.io/service-account.name: deployer
kubernetes.io/service-account.uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
creationTimestamp: "2022-10-13T08:36:54Z"
name: token-secret
namespace: default
resourceVersion: "2129968"
uid: d960c933-5e7b-4750-865d-e843f52f1b48
type: kubernetes.io/service-account-token
</code></pre>
<p>What can be the reason?</p>
<p><strong>Update:</strong>
The answer helps, but for the record, it doesn't matter: the token works even though it shows 0 secrets:</p>
<pre><code>kubectl get pods --token `cat ./token` -s https://192.168.49.2:8443 --certificate-authority /home/cyber/.minikube/ca.crt --all-namespaces
</code></pre>
<p><strong>Other Details:</strong><br />
I am working on Kubernetes version 1.24:</p>
<pre><code>cyber@manager1:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>You can delete it by running:</p>
<pre><code>kubectl delete clusterroles deployer-role
kubectl delete clusterrolebindings deployer-crb
kubectl delete sa deployer
kubectl delete secrets token-secret
</code></pre>
<p>Reference to Kubernetes 1.24 changes:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#no-really-you-must-read-this-before-you-upgrade" rel="nofollow noreferrer">Change log 1.24</a></li>
<li><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">Creating secret through the documentation</a></li>
</ul>
| <p>Based on the change log, the auto-generation of tokens is no longer available for every service account.</p>
<blockquote>
<p>The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. <strong>When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount</strong>. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/" rel="nofollow noreferrer">token-request-v1</a></p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/108309" rel="nofollow noreferrer">stops auto-generation of legacy tokens because they are less secure</a></p>
<p><a href="https://github.com/argoproj/argo-workflows/issues/8320#issuecomment-1271354662" rel="nofollow noreferrer">work-around</a></p>
<p>or you can use</p>
<pre><code>kubectl create token SERVICE_ACCOUNT_NAME
kubectl create token deployer
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-" rel="nofollow noreferrer">Request a service account token.</a></p>
|
| <p>I was trying to uninstall a helm release in my AKS cluster using <code>helm uninstall RELEASE_NAME</code>, but it seems like it failed. The failure happened because, for some reason, all of the nodes in my cluster went into a NotReady state when I used <code>helm uninstall</code>.</p>
<p>Since then, I got all the nodes to get back up and running and the cluster is functioning as it should. Now, when I do try <code>helm list</code>, I don't get to see this release present anymore, but doing a <code>helm list -a</code> shows me that the state of the release is still in <code>uninstalling</code> state. I have tried quite a few things, but it has been stuck there now for the last 3-4 days.</p>
<p>A few things that I tried was to use <code>helm uninstall RELEASE_NAME</code>, <code>helm delete --purge RELEASE_NAME</code> but these commands throw an error because the release is already in <code>uninstalling</code> state. I tried modifying the helm secrets in the cluster for this specific release but that didn't help either. <code>Helm3</code> is being used so it is not like I can restart the tiller pod to maybe stabilize this.</p>
<p>Does <code>Helm3</code> use some kind of a finalizer mechanism which can be modified to rectify this or
is there no alternate way in which I can perhaps try to delete this release? I want to use the same release name for the specific API in the future too.</p>
<p>Any help will be really appreciated.</p>
| <p>Based on the discussion, the following steps resolve the issue.</p>
<pre><code>helm hist releasename
helm rollback releasename versionnumber-with-status-deployed
</code></pre>
<p>If this did not help, then delete the Helm release secret for each version:</p>
<pre><code>helm hist releasename
kubectl get secrets
kubectl delete secret sh.helm.release.v1.RELEASE_NAME.vN   # one per revision
</code></pre>
|
| <p>I want to know how a label can be set while a namespace is getting created in e2e.
This line of code simply creates a namespace <a href="https://github.com/kubernetes/kubernetes/blob/v1.25.0/test/e2e/framework/framework.go#L239" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.25.0/test/e2e/framework/framework.go#L239</a>, and it picks the default <code>pod-security.kubernetes.io/</code> setting, which is <code>restricted</code>. I want the test framework to create the namespace with <code>pod-security.kubernetes.io</code> set to <code>Privileged</code>.</p>
| <p>This label is set by an admission controller. Setting it manually will cause the controller to reject your pod.</p>
<p>To get the correct Pod Security Policy (PSP) assigned, you need RBAC rules that allow the <code>use</code> verb on that PSP: <a href="https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#via-rbac" rel="nofollow noreferrer">https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#via-rbac</a></p>
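<p>For example, a minimal sketch of that RBAC side — the PSP name <code>privileged</code>, the ClusterRole name, and the subject are assumptions and need to match your cluster:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-psp
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]   # assumed PSP name
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: e2e-use-privileged-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-psp
subjects:
- kind: ServiceAccount
  name: default                   # assumed test service account
  namespace: my-e2e-namespace     # assumed test namespace
</code></pre>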
<p>Also, when several PSPs can be used with a particular Pod, they are applied in lexicographical order: <a href="https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#policy-order" rel="nofollow noreferrer">https://v1-24.docs.kubernetes.io/docs/concepts/security/pod-security-policy/#policy-order</a></p>
|
<p>Does anyone know why <code>restartPolicy</code> field is defined on the Pod level instead of the container level?</p>
<p>It would seem that this setting is more closely related to the container, not the Pod.</p>
<p>Then how to controll restart policy of single container in multi-container Pod?</p>
| <p>I think <strong>restart policy</strong> is part of the <strong>POD</strong> spec.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: 1st
image: image-1
command: ["./bash", "-test1"]
- name: 2nd
image: image-2
command: ["./bash", "-test2"]
restartPolicy: Never
</code></pre>
<p>The restart policy gets set at the <strong>POD spec</strong> level and gets applied to all the containers in the <strong>POD</strong>, even an <strong>init container</strong> if one is there.</p>
<p>If there are <strong>multiple containers</strong> inside the <strong>POD</strong>, we have to consider them as <strong>tightly coupled</strong>.</p>
<p><strong>The official documentation says something like this</strong>: <a href="https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods" rel="nofollow noreferrer">link</a></p>
<blockquote>
<p>Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located
containers that are tightly coupled and need to share resources. These
co-located containers form a single cohesive unit of service—for
example, one container serving data stored in a shared volume to the
public, while a separate sidecar container refreshes or updates those
files. The Pod wraps these containers, storage resources, and an
ephemeral network identity together as a single unit.</p>
</blockquote>
<blockquote>
<p>Note: Grouping multiple co-located and co-managed containers in a
single Pod is a relatively advanced use case. You should use this
pattern only in specific instances in which your containers are
tightly coupled.</p>
</blockquote>
<p>If you want to restart or control a <strong>single container</strong> in a <strong>POD</strong>, you won't be able to do it; by <strong>POD</strong> design you have to move that container out into its own <strong>POD</strong>, as sketched below.</p>
<p>Even the container restart policy you see in the docs talks about the <strong>POD spec</strong> restart policy only.</p>
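<p>A minimal sketch of that workaround, reusing the example spec above — the second container gets its own Pod and therefore its own restart policy:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-2nd
spec:
  containers:
  - name: 2nd
    image: image-2
    command: ["./bash", "-test2"]
  restartPolicy: Always   # independent of the first Pod's "Never"
</code></pre>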
|
<p>When trying to use the helm function: lookup, I do not get any result at all as expected.</p>
<p>My Secret that I try to read looks like this</p>
<pre><code>apiVersion: v1
data:
adminPassword: VG9wU2VjcmV0UGFzc3dvcmQxIQ==
adminUser: YWRtaW4=
kind: Secret
metadata:
annotations:
sealedsecrets.bitnami.com/cluster-wide: "true"
name: activemq-artemis-broker-secret
namespace: common
type: Opaque
</code></pre>
<p>The template helm chart that should load the adminUser and adminPassword data looks like this</p>
<pre><code>apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
name: {{ .Values.labels.app }}
namespace: common
spec:
{{ $secret := lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret" }}
adminUser: {{ $secret.data.adminUser }}
adminPassword: {{ $secret.data.adminPassword }}
</code></pre>
<p>When deploying this using ArgoCD I get the following error:</p>
<pre><code>failed exit status 1: Error: template: broker/templates/deployment.yaml:7:23:
executing "broker/templates/deployment.yaml" at <$secret.data.adminUser>:
nil pointer evaluating interface {}.adminUser Use --debug flag to render out invalid YAML
</code></pre>
<p>Both the secret and the deployment is in the same namespace (common).</p>
<p>If I try to get the secret with kubectl it works as below</p>
<pre><code>kubectl get secret activemq-artemis-broker-secret -n common -o json
{
"apiVersion": "v1",
"data": {
"adminPassword": "VG9wU2VjcmV0UGFzc3dvcmQxIQ==",
"adminUser": "YWRtaW4="
},
"kind": "Secret",
"metadata": {
"annotations": {
"sealedsecrets.bitnami.com/cluster-wide": "true"
},
"creationTimestamp": "2022-10-10T14:40:49Z",
"name": "activemq-artemis-broker-secret",
"namespace": "common",
"ownerReferences": [
{
"apiVersion": "bitnami.com/v1alpha1",
"controller": true,
"kind": "SealedSecret",
"name": "activemq-artemis-broker-secret",
"uid": "edff38fb-a966-47a6-a706-cb197ac1797d"
}
],
"resourceVersion": "127303988",
"uid": "0679fc5c-7465-4fe1-9197-b483073e93c2"
},
"type": "Opaque"
}
</code></pre>
<p>What is wrong here. I use helm version: 3.8.1 and Go version: 1.75</p>
| <p>This error is the result of two parts <em>working</em> together:</p>
<p>First, helm's <code>lookup</code> only works in a running cluster, not when running <code>helm template</code> (without <code>--validate</code>). If run in that manner it returns nil. (It is usually used as <code>lookup ... | default dict {}</code>, to avoid a nasty error message).</p>
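<p>A minimal sketch of that guard applied to the template from the question — this only prevents the render-time nil error; the fallback values are placeholders you would have to choose yourself:</p>
<pre><code>{{ $secret := (lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret") | default (dict) }}
{{ $data := $secret.data | default (dict) }}
adminUser: {{ $data.adminUser | default "" }}
adminPassword: {{ $data.adminPassword | default "" }}
</code></pre>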
<p>Second, you're deploying with ArgoCD that is actually running <code>helm template</code> internally when deploying a helm chart. See open issue: <a href="https://github.com/argoproj/argo-cd/issues/5202" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/issues/5202</a> . The issue mentions a plugin that can be used to change this behaviour. However, doing so requires some reconfiguration of argocd itself, which is not trivial and is not without side effects.</p>
|
<p>I'm using <code>kubectl</code> to access the API server on my minikube cluster on Ubuntu,
but when I try to use a <code>kubectl</code> command I get a certificate-expired error:</p>
<pre><code>/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
</code></pre>
<p>Here's my kubectl config:</p>
<pre><code>/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://127.0.0.1:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: REDACTED
root@ayoub-Lenovo-ideapad-720S-13IKB:/home/ayoub# /home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://127.0.0.1:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: REDACTED
root@ayoub-Lenovo-ideapad-720S-13IKB:/home/ayoub#
</code></pre>
<p>How I can renew this certificate?</p>
| <p>minikube delete - deletes the local Kubernetes cluster - worked for me</p>
<p>reference:
<b> <a href="https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950</a> </b></p>
|
<p>I created HPA on our k8s cluster which should auto-scale on 90% memory utilization. However, it scales UP without hitting the target percentage. I use the following config:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
namespace: {{ .Values.namespace }}
name: {{ include "helm-generic.fullname" . }}
labels:
{{- include "helm-generic.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "helm-generic.fullname" . }}
minReplicas: 1
maxReplicas: 2
metrics:
- type: Resource
resource:
name: memory
targetAverageUtilization: 90
</code></pre>
<p>So for this config it creates 2 pods which is the maxReplicas number. If I add 4 for maxReplicas it will create 3.</p>
<p>This is what i get from kubectl describe hpa</p>
<pre><code>$ kubectl describe hpa -n trunkline
Name: test-v1
Namespace: trunkline
Labels: app.kubernetes.io/instance=test-v1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=helm-generic
app.kubernetes.io/version=0.0.0
helm.sh/chart=helm-generic-0.1.3
Annotations: meta.helm.sh/release-name: test-v1
meta.helm.sh/release-namespace: trunkline
CreationTimestamp: Wed, 12 Oct 2022 17:36:54 +0300
Reference: Deployment/test-v1
Metrics: ( current / target )
**resource memory on pods (as a percentage of request): 59% (402806784) / 90%**
resource cpu on pods (as a percentage of request): 11% (60m) / 80%
Min replicas: 1
Max replicas: 2
Deployment pods: **2 current / 2** desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
</code></pre>
<p>As you see, the pods' memory % is 59 with a target of 90, which I would expect to produce only 1 pod.</p>
| <p>The horizontal pod autoscaler has a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">very specific formula</a> for calculating the target replica count:</p>
<pre class="lang-none prettyprint-override"><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
</code></pre>
<p>With the output you show, <code>currentMetricValue</code> is 59% and <code>desiredMetricValue</code> is 90%. Multiplying that by the <code>currentReplicas</code> of 2, you get about 1.3 replicas, which gets rounded up to 2.</p>
<p>This formula, and especially the <code>ceil()</code> round-up behavior, can make HPA very slow to scale down, especially with a small number of replicas.</p>
<p>More broadly, autoscaling on Kubernetes-observable memory might not work the way you expect. Most programming languages are garbage-collected (C, C++, and Rust are the most notable exceptions) and garbage collectors as a rule tend to allocate a large block of operating-system memory and reuse it, rather than return it to the operating system if load decreases. If you have a pod that reaches 90% memory from the Kubernetes point of view, it's possible that memory usage will never decrease. You might need to autoscale on a different metric, or attach an external metrics system like Prometheus to get more detailed memory-manager statistics you can act on.</p>
|
<p>How to patch "db.password" in the following cm with kustomize?</p>
<p>comfigmap:</p>
<pre><code>apiVersion: v1
data:
dbp.conf: |-
{
"db_properties": {
"db.driver": "com.mysql.jdbc.Driver",
"db.password": "123456",
"db.user": "root"
}
}
kind: ConfigMap
metadata:
labels: {}
name: dbcm
</code></pre>
| <p>Create a placeholder in your file and replace it with the real data while applying kustomize.</p>
<p>your code will be like this:</p>
<pre><code>#!/bin/bash
sed -i "s/PLACE-HOLDER/123456/g" db_config.yaml
kustomize config.yaml >> kustomizeconfig.yaml
kubectl apply -f kustomizeconfig.yaml -n foo
</code></pre>
<p>And the db_config file will be:</p>
<pre><code>apiVersion: v1
data:
dbp.conf: |-
{
"db_properties": {
"db.driver": "com.mysql.jdbc.Driver",
"db.password": "PLACE_HODLER",
"db.user": "root"
}
}
kind: ConfigMap
metadata:
labels: {}
name: dbcm
</code></pre>
<p>NB: This should run in the pipeline, against the copy of the config file cloned from the repo, so the real file in the repo won't be updated.</p>
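<p>If you would rather stay inside kustomize itself: kustomize cannot patch inside the opaque <code>dbp.conf</code> string, but a strategic-merge patch can override the whole key, password included. A sketch (file names are assumptions; newer kustomize versions use the <code>patches</code> field instead of <code>patchesStrategicMerge</code>):</p>
<pre><code># kustomization.yaml
resources:
- configmap.yaml
patchesStrategicMerge:
- dbcm-patch.yaml
</code></pre>
<pre><code># dbcm-patch.yaml -- replaces the whole dbp.conf value
apiVersion: v1
kind: ConfigMap
metadata:
  name: dbcm
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
</code></pre>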
|
<p>I have a cluster and set up kubelet on a node (name is <code>myNode</code>) with the <code>static</code> CPU Manager Policy. So I've started kubelet with <code>--cpu-manager-policy=static</code> (to set the static policy) and <code>--reserved-cpus=1</code> (to make sure kubelet has one core to run on exclusively) as explained <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy" rel="nofollow noreferrer">here</a>.</p>
<p>Checking <code>/var/lib/kubelet/cpu_manager_state</code> it gives me</p>
<pre class="lang-bash prettyprint-override"><code>cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0-3","checksum":611748604}
</code></pre>
<p>which should be fine. I then start a pod with the following pod spec</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: wl
labels:
app: wl
spec:
containers:
- name: wl
image: docker.io/polinux/stress:latest
imagePullPolicy: IfNotPresent
command: ["/bin/sh","-c"]
args: ["echo 'workload' && stress --cpu 4"]
resources:
requests:
cpu: 1
limits:
cpu: 1
nodeName: myNode
</code></pre>
<p>and start it. It get's scheduled on the desired node "myNode". I then check for the processes with</p>
<pre class="lang-bash prettyprint-override"><code>ps aux | grep stress
root 2966141 0.2 0.0 780 4 ? Ss 10:54 0:00 stress --cpu 4
root 2966154 27.1 0.0 780 36 ? R 10:54 0:02 stress --cpu 4
root 2966155 26.7 0.0 780 36 ? R 10:54 0:02 stress --cpu 4
root 2966156 28.6 0.0 780 36 ? R 10:54 0:02 stress --cpu 4
root 2966157 27.3 0.0 780 36 ? R 10:54 0:02 stress --cpu 4
</code></pre>
<p>and then which CPUs they are running on with</p>
<pre class="lang-bash prettyprint-override"><code>ps -o pid,psr,comm -p 2966154 2966155 2966156 2966157
PID PSR COMMAND
2966154 0 stress
2966155 1 stress
2966156 2 stress
2966157 3 stress
</code></pre>
<p>It looks like there are 4 processes running, but all of them on different CPUs. I would have expected that the Pod fails to run since it's allowed to only run on one core while the <code>stress --cpu 4</code> wants to start 4 threads on 4 CPUs. With the default CPU Manager Policy, this would be the expected behavior, but I've configured the static one.</p>
<p>Any hint what the problem could be?</p>
| <p>You also need to provide memory request and limit in order to qualify for the <em>Guaranteed</em> tier and exclusive cores:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: wl
labels:
app: wl
spec:
containers:
- name: wl
image: docker.io/polinux/stress:latest
imagePullPolicy: IfNotPresent
command: ["/bin/sh","-c"]
args: ["echo 'workload' && stress --cpu 4"]
resources:
requests:
cpu: "1"
memory: "200Mi"
limits:
cpu: "1"
memory: "200Mi"
nodeName: myNode
</code></pre>
<p>Verify the Pod by <code>kubectl describe pod wl</code></p>
|
<p>I have Kubernetes running (K3s on TrueNAS scale). I've deployed Prometheus and Grafana and am able to access the metrics in Grafana. I now want to create a stacked line chart that shows memory usage by namespace and total memory used by Kubenetes.</p>
<p>I got it working without the total with this query: <code>sum (container_memory_working_set_bytes{namespace!=""}) by(namespace)</code> (see screen shot 1).</p>
<p>I needed to add the <code>{namespace!=""}</code> or it would add an entry with the same name as the query (see screen shot 2). I don't understand what that value represents, but it's not the total I'm after!</p>
<p>How can I include a total of the memory used in the tooltip (without it appearing as a line in the chart)?</p>
<p><a href="https://i.stack.imgur.com/cKErH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cKErH.png" alt="Screen shot 1 of query" /></a>
<a href="https://i.stack.imgur.com/Cgenj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cgenj.png" alt="Screen shot 2 of query not filtered with namespace!=""" /></a></p>
| <p>I think I figured it out. I still have the query <code>sum (container_memory_working_set_bytes {namespace!=""} ) by(namespace)</code>.</p>
<p>Then added a transformation "Add field from calculation", again with the defaults. I thought this would only work for the properties listed at the time of creating the transformation/query, but spinning up a new container did get it automatically added to the chart.</p>
<pre><code>Mode = Reduce row
Field name = all selected
Calculation = Total
Alias = Total
Replace all fields = False
</code></pre>
<p>Then in the panel on the right, configure these settings:</p>
<pre><code>Graph styles > Style: Lines
Graph styles > Fill opacity: 40
Graph styles > Stack series: Normal
Standard options > Unit: Byte(IEC)
</code></pre>
<p>Finally, also in the panel on the right, add an override (see Grafana query screen shot):</p>
<pre><code>Add field override > Fields with name: Total
Add override property > Graph styles > Stack series: 100%
</code></pre>
<p><strong>End Result</strong>
<a href="https://i.stack.imgur.com/FJVvu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FJVvu.png" alt="End result" /></a></p>
<p><strong>Grafana query</strong>
<a href="https://i.stack.imgur.com/hfuwP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hfuwP.png" alt="Grafana query" /></a></p>
<p><strong>Grafana transformations</strong>
<a href="https://i.stack.imgur.com/xwM5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xwM5w.png" alt="Grafana transformations" /></a></p>
|
<p>I created an operator for my application and want to create a service monitor for it.
The Prometheus operator was created.
The monitoring Prometheus library was imported and the service monitor CRD was created in my k8s cluster.
Here is the Go code for this object:</p>
<pre class="lang-go prettyprint-override"><code>package controllers
import (
"context"
"fmt"
appsv1alpha1 "k8s-operator/api/v1alpha1"
monitoring "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"gopkg.in/yaml.v2"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
// ensureSvcMonitor ensures SvcMonitor is Running in a namespace.
func (r *MyappReconciler) ensureSvcMonitor(request reconcile.Request,
instance *appsv1alpha1.Myapp,
svcmonitor *monitoring.ServiceMonitor,
) (*reconcile.Result, error) {
// See if SvcMonitor already exists and create if it doesn't
found := &monitoring.ServiceMonitor{}
err := r.Get(context.TODO(), types.NamespacedName{
Name: svcmonitor.Name,
Namespace: instance.Namespace,
}, found)
if err != nil && errors.IsNotFound(err) {
// Create the SvcMonitor
err = r.Create(context.TODO(), svcmonitor)
if err != nil {
// SvcMonitor creation failed
return &reconcile.Result{}, err
} else {
// SvcMonitor creation was successful
return nil, nil
}
} else if err != nil {
// Error that isn't due to the SvcMonitor not existing
return &reconcile.Result{}, err
}
return nil, nil
}
// backendSvcMonitor is a code for creating a SvcMonitor
func (r *MyappReconciler) backendSvcMonitor(v *appsv1alpha1.Myapp) *monitoring.ServiceMonitor {
svcmonitor := &monitoring.ServiceMonitor{
TypeMeta: metav1.TypeMeta{
Kind: "ServiceMonitor",
APIVersion: "monitoring.coreos.com/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: v.Spec.Name + "-svcmonitor",
Namespace: v.Namespace},
Spec: monitoring.ServiceMonitorSpec{
Endpoints: []monitoring.Endpoint{{
Port: v.Spec.Name,
}},
Selector: metav1.LabelSelector{
MatchLabels: labels(v),
},
},
}
controllerutil.SetControllerReference(v, svcmonitor, r.Scheme)
yamlData, _ := yaml.Marshal(&svcmonitor)
fmt.Println(string(yamlData))
return svcmonitor
}
</code></pre>
<p>Here is the part which I call these function to create this object in my controller:</p>
<pre class="lang-go prettyprint-override"><code>if instance.Spec.Servicemonitorenable {
result, err = r.ensureSvcMonitor(req, instance, r.backendSvcMonitor(instance))
if result != nil {
log.Error(err, "Servicemonitor Not ready")
return *result, err
}
}
</code></pre>
<p>But when I create the custom resource instance I get this error:</p>
<pre class="lang-none prettyprint-override"><code>"error": "no kind is registered for the type v1.ServiceMonitor in scheme \"pkg/runtime/scheme.go:100\""}
</code></pre>
| <p>In your <code>main.go</code>, you need to add <code>monitoring/v1</code> to the <code>scheme</code> injected into <code>controller-runtime</code> i.e.:</p>
<pre class="lang-golang prettyprint-override"><code>// main.go
package main
import (
"os"
ctrl "sigs.k8s.io/controller-runtime"
monitoring "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
"k8s.io/apimachinery/pkg/runtime"
)
var (
scheme = runtime.NewScheme()
)
func init() {
monitoring.AddToScheme(scheme)
}
func main() {
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
// ... other options here
})
// Start Manager
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
os.Exit(1)
}
}
</code></pre>
|
<p>I'm currently trying to add ArgoCD to my project and am struggling with pulling an image from my GitLab container registry.</p>
<p>Here's my values yaml:</p>
<pre class="lang-yaml prettyprint-override"><code># Default values for oc-backend.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: registry.gitlab.com/open-concepts/open-concepts-backend
tag: master
pullPolicy: Always
imagePullSecrets:
- name: registry-credentials
nameOverride: "oc-app"
fullnameOverride: "oc-backend-test"
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: NodePort
port: 80
ingress:
enabled: false
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>And the secret yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: registry-credentials
namespace: argocd
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: xxxx
</code></pre>
<p>I know for a fact that the dockerconfigjson works because I've tested it beforehand.</p>
<pre><code>➜ kubectl get secret registry-credentials --namespace=argocd
NAME TYPE DATA AGE
registry-credentials kubernetes.io/dockerconfigjson 1 56m
</code></pre>
<p>Yet, I'm still getting denied errors via ArgoCD. Can someone check whether my configuration is correct?</p>
<p>TIA</p>
| <p>Solution: I had removed the user, password and email before encoding the <code>.dockerconfigjson</code>. After including them again, it's now working!</p>
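<p>For anyone hitting the same issue, here is a sketch of generating the secret so that server, username, password and email are all included (the credential values are placeholders; a GitLab deploy token or personal access token works as the password):</p>
<pre><code>kubectl create secret docker-registry registry-credentials \
  --namespace argocd \
  --docker-server=registry.gitlab.com \
  --docker-username=<gitlab-username> \
  --docker-password=<deploy-token-or-PAT> \
  --docker-email=<email> \
  --dry-run=client -o yaml
</code></pre>
<p>The generated <code>.dockerconfigjson</code> value can then be dropped into the Secret manifest from the question.</p>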
|
<p>I am trying to create a Kubernetes cluster with the intention of hosting a Docker registry, but after installing kubectl (via Homebrew on Mac) along with minikube I get <code>The connection to the server localhost:8080 was refused - did you specify the right host or port?</code> when I run <code>kubectl version</code> or any other command. I have previously used the Docker Desktop app with Kubernetes, so I don't know if there is any config I need to replace.</p>
<p>I have discovered there is no context set in the kubectl config: if I run <code>kubectl config get-contexts</code>, nothing is listed.</p>
| <p><a href="https://discuss.kubernetes.io/t/the-connection-to-the-server-localhost-8080-was-refused-did-you-specify-the-right-host-or-port/1464/9" rel="nofollow noreferrer">This thread</a> mentions:</p>
<blockquote>
<p>That error should only come up if you have no contexts configured in your client.<br />
If you run <code>kubectl config view</code> and you get something like this:</p>
<pre><code>$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p>Then no contexts are configured.</p>
</blockquote>
<p>And:</p>
<blockquote>
<p>Getting <code>kubectl</code> to run really depends on how you installed it.<br />
Basically, if you install and have a proper config file, it should always work.</p>
<p>So, either an old file from a previous installation is there or something silly like that (although usually difficult to spot).</p>
<p>Also, make sure the commands don’t fail (some on the post pasted that the step to copy the <code>kubectl config</code> failed). That is the way to authorize to the cluster, so it won’t never work if that step doesn’t work</p>
</blockquote>
<p>Example of possible resolution:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes
</code></pre>
<p>(From "<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#more-information" rel="nofollow noreferrer">Creating a cluster with <code>kubeadm</code></a>")</p>
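<p>Since the question uses minikube rather than kubeadm, a likely fix (assuming a default minikube install) is simply to start minikube, which writes a context into <code>~/.kube/config</code>:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start
kubectl config get-contexts
kubectl config use-context minikube
kubectl get nodes
</code></pre>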
|
<p>I need to run PowerShell as a container in Kubernetes.</p>
<p>I am using the following deployment file, <code>sample.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
</code></pre>
<p>When I run <code>kubectl apply -f sample.yaml</code></p>
<p>I get the following error on <code>kubectl get pods</code></p>
<pre><code>powershell 0/1 CrashLoopBackOff 3 (50s ago) 92s
</code></pre>
<p>I checked the log with <code>kubectl logs powershell</code>:</p>
<pre><code>PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> ←[?1h
</code></pre>
<p>But when I run the same image as a Docker container with the following command, it works:</p>
<pre><code>docker run --rm -it mcr.microsoft.com/powershell:latest
</code></pre>
| <p>In Kubernetes the pod has no interactive TTY attached (unlike <code>docker run -it</code>), so <code>pwsh</code> exits immediately and the pod crash-loops. If you want to keep the container running, give it a command that does not exit, like this YAML:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
command: ["pwsh"]
args: ["-Command", "Start-Sleep", "3600"]
</code></pre>
<pre><code>
[root@master1 ~]# kubectl get pod powershell
NAME READY STATUS RESTARTS AGE
powershell 1/1 Running 0 3m32s
[root@master1 ~]# kubectl exec -it powershell -- pwsh
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> date
Thu Oct 13 12:50:24 PM UTC 2022
PS />
</code></pre>
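<p>Alternatively, as a sketch that mirrors <code>docker run -it</code>, you can keep <code>pwsh</code> alive by giving the container stdin and a TTY:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: powershell
spec:
  containers:
    - name: powershell
      image: mcr.microsoft.com/powershell:latest
      stdin: true   # keep stdin open, like docker run -i
      tty: true     # allocate a TTY, like docker run -t
</code></pre>
<p>You can then connect with <code>kubectl attach -it powershell</code> or <code>kubectl exec -it powershell -- pwsh</code>.</p>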
|
<p>Currently I run a curl container and connect directly to its terminal to verify connectivity to services, and to check that we can connect on some port to an external service or a service maintained by some other team.</p>
<pre><code>kubectl run curl -it --rm --image=curlimages/curl -- sh
</code></pre>
<p>Now the problem is that I have to run a curl container on a Node that has taints. Is there a way to run this container by providing tolerations from the kubectl CLI?</p>
<p>For reference, I am using the AKS service and we use Helm for deployment. In order to schedule workloads on the tainted nodes, we use a combination of tolerations and node affinity. The configs are given below.</p>
<pre><code> spec:
tolerations:
- key: "mendix"
operator: "Equal"
value: "true"
effect: "NoSchedule"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: appType
operator: In
values:
- mendix
</code></pre>
| <p>You can do something like this, if you need to run it on a specific node that is tainted (it will run despite any taints):</p>
<pre><code>kubectl run curl -it --rm --image=curlimages/curl --overrides \
'{"spec":{"tolerations":[{"operator":"Exists"}]},"nodeName":"mynode"}' \
-- sh
</code></pre>
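<p>If you prefer to tolerate only the specific taint from the question, and target the <code>appType=mendix</code> nodes with a plain <code>nodeSelector</code> (a simpler stand-in for the <code>nodeAffinity</code> block shown in the question), an equivalent sketch would be:</p>
<pre><code>kubectl run curl -it --rm --image=curlimages/curl --overrides \
'{"spec":{"tolerations":[{"key":"mendix","operator":"Equal","value":"true","effect":"NoSchedule"}],"nodeSelector":{"appType":"mendix"}}}' \
-- sh
</code></pre>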
|
<p>When we call the Tekton trigger API through an EventListener, I get the following response, while in the background the pipeline runs for a certain period of time.</p>
<p>Is there any API to fetch the status of this pipelineRun using the given <code>eventID</code> or <code>eventListenerUID</code>?</p>
<pre><code>{
"eventListener": "listener",
"namespace": "default",
"eventListenerUID": "ea71a6e4-9531-43a1-94fe-6136515d938c",
"eventID": "14a657c3-6816-45bf-b214-4afdaefc4ebd"
}
</code></pre>
<p>Ref - <a href="https://github.com/tektoncd/triggers/blob/main/docs/eventlisteners.md#labels-in-eventListeners" rel="nofollow noreferrer">https://github.com/tektoncd/triggers/blob/main/docs/eventlisteners.md#labels-in-eventListeners</a></p>
| <p>You can use that eventID resolving your pipelinerun:</p>
<pre><code>kubectl -n <your-namespace> get pr -l triggers.tekton.dev/tekton-eventid=<your-event-id>
</code></pre>
<p>Then extract the STATUS column, or use jsonpath to inspect <code>status.conditions[0]</code>; its <code>status</code> and <code>reason</code> fields tell you whether the run is still executing, succeeded or failed.</p>
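<p>For example, a one-liner (namespace and event ID are placeholders) that prints each matching PipelineRun together with the reason of its <code>Succeeded</code> condition:</p>
<pre><code>kubectl -n <your-namespace> get pr \
  -l triggers.tekton.dev/tekton-eventid=<your-event-id> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[0].reason}{"\n"}{end}'
</code></pre>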
|
<p>I am using azure kubernetes for backend deployment. I have 2 URLs one is API URL(api.project.com) and other one is BFF URL(bff.project.com).</p>
<p>From Web application, instead of calling API URL(api.project.com) they use BFF URL(bff.project.com) which internally calls the API URL(api.project.com) and sends the response.</p>
<p>I now want to restrict direct usage of the API URL (api.project.com), even from REST API clients (like Postman, Insomnia, ...); it should only work when triggered from the BFF URL (bff.project.com).</p>
<p>We have used nginx-ingress for subdomain creation, and both URLs (BFF and API) are in the same cluster.</p>
<p>Is there any firewall or built-in Azure service to resolve the above-mentioned problem?</p>
<p>Thanks in Advance :)</p>
| <p>You want to keep your api private, only accessible from another K8S service, so don't expose it using your ingress controller and it simply won't be accessible outside K8S to any client.</p>
<p>This means that you lose the api.project.com address (although you can get that back if you really want to, it seems unnecessary). The BFF would then access the API via the URL: <code>http://<service-name>.<namespace>.svc.cluster.local:<service-port></code>, which in your case might be:</p>
<pre><code>http://api.api-ns.svc.cluster.local
</code></pre>
<p>Assuming you haven't used TLS (http rather than https), the service is called <code>api</code>, it's running on port 80 (which it should be) and the namespace is called <code>api-ns</code> (namespace names must be valid DNS labels, so underscores aren't allowed).</p>
<p>Should you need to provide temporary access to the API for developers to use, say, postman, then they can use port-forwarding to provide that in a dev environment without allowing external access all the time.</p>
<p>However, this won't restrict access to BFF alone. Any service running in K8S could access the API. If you need/want to restrict things further, then you have a lot of options.</p>
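<p>One common option is a <code>NetworkPolicy</code> that only admits traffic to the API pods from the BFF pods. This is only a sketch: the labels (<code>app: api</code>, <code>app: bff</code>) and the namespace are assumptions, and it takes effect only if the AKS cluster has a network policy engine enabled (Azure network policies or Calico):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-bff-only
  namespace: api-ns            # assumed namespace of the API service
spec:
  podSelector:
    matchLabels:
      app: api                 # assumed label on the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        # selects pods labelled app: bff in any namespace;
        # narrow the namespaceSelector if the BFF lives elsewhere
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app: bff         # assumed label on the BFF pods
      ports:
        - protocol: TCP
          port: 80
</code></pre>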
|
<p>No terminal comes up in Lens; the Lens terminal just shows <strong>connecting...</strong></p>
| <p><strong>Root Cause</strong></p>
<p>By default, Lens uses PowerShell, but in this setup it needs the WSL shell. Changing the terminal shell to WSL solves the issue; we also have to add the path for <code>wsl.exe</code> in the Lens application.</p>
<p>In the background, Lens calls the <strong>WSL</strong> shell but is unable to find it.</p>
<p><strong>Solution</strong></p>
<p>We can solve this issue by setting up the system environment variables.</p>
<ol>
<li>Go to Preferences and set <strong>Terminal</strong> to <strong>wsl.exe</strong>.</li>
<li>Set the environment for <strong>wsl.exe</strong>: go to the System Variables and add its location to the <strong>PATH</strong>.</li>
</ol>
<p><a href="https://i.stack.imgur.com/71WHh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/71WHh.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/YJywz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YJywz.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/KZSlj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZSlj.png" alt="enter image description here" /></a></p>
|
<p>After researching the topic I found <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest#rest-resource:-v1beta1.projects.locations.clusters" rel="nofollow noreferrer">this</a> documentation for retrieving k8s cluster data from GCP. However, I could not find any code examples utilizing those APIs, and when I import <code>google</code> from <code>googleapis</code> I can't find the function in it that would be used for that purpose. For example, to get SQL data there is sqladmin, but there is nothing for retrieving k8s data. So what property of <code>google</code> do I need?</p>
| <p>This is confusing.</p>
<p>There are 2 distinct APIs that you must use:</p>
<ol>
<li>Google's Kubernetes Engine API (see <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/" rel="nofollow noreferrer">link</a>). This API is used to create, read, update and delete Kubernetes clusters. Google provides a Node.js SDK documented <a href="https://cloud.google.com/nodejs/docs/reference/container/latest" rel="nofollow noreferrer">here</a>.</li>
<li>The standard, generic Kubernetes API (see <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">link</a>). This API is used to create, read, update and delete resources <strong>on</strong> (any) Kubernetes cluster and, for obvious(ly good) reasons, it is the API that you must use to interact with (an existing) Kubernetes Engine cluster too (because these are just like every other Kubernetes cluster). Kubernetes provides <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">official and community-supported libraries</a> that implement the Kubernetes API. You'll need to pick one of the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">community-supported libraries</a> for Node.js (as there's no official library).</li>
</ol>
<p>The general process is to:</p>
<ol>
<li>Use the Kubernetes Engine API to e.g. <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters/get" rel="nofollow noreferrer"><code>projects.locations.cluster.get</code></a> details of an existing GKE cluster</li>
<li>Use the returned <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters#Cluster" rel="nofollow noreferrer"><code>Cluster</code></a> object to build a configuration object (the equivalent of building a <code>context</code> object in a kubeconfig file)<sup>go</sup></li>
<li>Use the <code>context</code> object with the Kubernetes API library to authenticate to the cluster and program it e.g. list Deployments, create Services etc.</li>
</ol>
<p><sup>go</sup>-- I have <a href="https://gist.github.com/DazWilkin/9506c0b9677d53a3e11b7457ed21cbe7" rel="nofollow noreferrer">code</a> for this step but it's written in Golang not JavaScript.</p>
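<p>As a minimal sketch of step 1 in Node.js (the project, location and cluster names are placeholders, and it assumes the <code>@google-cloud/container</code> package from the SDK linked above plus Application Default Credentials):</p>
<pre class="lang-js prettyprint-override"><code>// Fetch details of an existing GKE cluster via the Kubernetes Engine API.
const container = require('@google-cloud/container');

async function main() {
  const client = new container.v1.ClusterManagerClient();

  // Placeholder resource name: projects/<project>/locations/<location>/clusters/<cluster>
  const name = 'projects/my-project/locations/us-central1/clusters/my-cluster';

  const [cluster] = await client.getCluster({name});

  // The returned Cluster object carries what step 2 needs,
  // e.g. the endpoint and the cluster CA certificate.
  console.log(cluster.endpoint);
  console.log(cluster.masterAuth.clusterCaCertificate);
}

main().catch(console.error);
</code></pre>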
|