prompt | response
---|---
<p>I am using fabric8.io to orchestrate application containers in Kubernetes. I am looking to create a Service that manages a pod with a certain label on a certain port. Is there a specific example of the API that does this? I couldn't find it in the examples.</p>
<p><a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java#L75" rel="nofollow">https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java#L75</a></p>
<p>There don't seem to be Javadocs available?</p>
| <p>Fabric8's Kubernetes Client uses a generated model and DSL that has the <strong>exact</strong> same structure as the JSON and YAML configuration.</p>
<p>So in order to create a Service instance that looks like:</p>
<pre><code>{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "myservice"
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 8080
      }
    ],
    "selector": {
      "key1": "value1"
    },
    "portalIP": "172.30.234.134",
    "type": "ClusterIP"
  }
}
</code></pre>
<p>You can use the following code:</p>
<pre><code>Service service = new ServiceBuilder()
    .withNewMetadata()
        .withName("myservice")
    .endMetadata()
    .withNewSpec()
        .addNewPort()
            .withProtocol("TCP")
            .withPort(80)
            .withNewTargetPort(8080)
        .endPort()
        .addToSelector("key1", "value1")
        .withPortalIP("172.30.234.134")
        .withType("ClusterIP")
    .endSpec()
    .build();
</code></pre>
<p>If you don't need to hold a reference to the service object and just want to create it, you can inline it like:</p>
<pre><code>client.services().createNew()
    .withNewMetadata()
        .withName("myservice")
    .endMetadata()
    .withNewSpec()
        .addNewPort()
            .withProtocol("TCP")
            .withPort(80)
            .withNewTargetPort(8080)
        .endPort()
        .addToSelector("key1", "value1")
        .withPortalIP("172.30.234.134")
        .withType("ClusterIP")
    .endSpec()
    .done();
</code></pre>
<p>It's even more compact than the JSON equivalent, because default values can be omitted and some things, like the selector, can optionally be inlined in a single line.</p>
<p>This applies not only to Service, but to every single Kubernetes/OpenShift resource.</p>
<p>If you have the JSON or YAML in place you can load them easily by providing an input stream to the client:</p>
<pre><code>Service service = client.services().load(inputStream).get();
</code></pre>
<p>There are more options here, like directly creating the service:</p>
<pre><code>Service newService = client.services().load(inputStream).create();
</code></pre>
<p>It always helps to remember that the structure is always the same regardless of language or format. Pretty much anything can be inlined, so tab completion in your IDE can be really helpful.</p>
|
<p>I clearly understand the usage of the docker option <code>--net=container:NAME_or_ID</code>, and I have also read the Kubernetes source code for how a container is configured to use the network of the <code>InfraContainer</code>, so I think the only work the process in the <code>gcr.io/google_containers/pause:0.8.0</code> container does is "pause"; it never does any complex work like "receiving", "sending" or "routing". </p>
<p>But I am not sure about this because I cannot find the <code>Dockerfile</code> of <code>gcr.io/google_containers/pause:0.8.0</code>, so I need someone who knows it well to tell me the truth. Thanks!</p>
| <p>In Kubernetes, each pod has an IP and within a pod there exists a so-called infrastructure container, which is the first container that the Kubelet instantiates. It acquires the pod’s IP and sets up the network namespace. All the other containers in the pod then join the infra container’s network and IPC namespace. The infra container has network bridge mode enabled and all the other containers in the pod share its namespace via container mode. The initial process that runs in the infra container does effectively nothing, since its sole purpose is to act as the home for the namespaces.</p>
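<p>For illustration only (the container names below are made up), this is roughly what the kubelet arranges, expressed as plain docker commands:</p>
<pre><code># Start the infra/pause container, which owns the pod's network namespace.
docker run -d --name pod-infra gcr.io/google_containers/pause:0.8.0

# Application containers then join its network and IPC namespaces.
docker run -d --name pod-app \
  --net=container:pod-infra \
  --ipc=container:pod-infra \
  nginx
</code></pre>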
|
<p>I have created a cluster of three nodes: one master, two minions.
How to check the cluster IP in Kubernetes?
Is it the IP of the master node?</p>
| <p><strong>ClusterIP</strong> can mean 2 things: a type of service which is only accessible within a Kubernetes cluster, or the internal ("virtual") IP of components within a Kubernetes cluster. Assuming you're asking about finding the internal IP of a cluster, it can be accessed in 3 ways (using the <a href="http://kubernetes.io/v1.0/examples/simple-nginx.html" rel="noreferrer">simple-nginx example</a>):</p>
<ol>
<li><p>Via command line <code>kubectl</code> utility:</p>
<pre><code>$ kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.123.253.27
LoadBalancer Ingress: 104.197.129.240
Port: <unnamed> 80/TCP
NodePort: <unnamed> 30723/TCP
Endpoints: 10.120.0.6:80
Session Affinity: None
No events.
</code></pre></li>
<li><p>Via the kubernetes API (here I've used <code>kubectl proxy</code> to route through localhost to my cluster):</p>
<pre><code>$ kubectl proxy &
$ curl -G http://localhost:8001/api/v1/namespaces/default/services/my-nginx
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": <omitted>,
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 80,
        "nodePort": 30723
      }
    ],
    "selector": {
      "run": "my-nginx"
    },
    "clusterIP": "10.123.253.27",
    "type": "LoadBalancer",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "104.197.129.240"
        }
      ]
    }
  }
}
</code></pre></li>
<li><p>Via the <code>$<NAME>_SERVICE_HOST</code> environment variable within a Kubernetes container (in this example <code>my-nginx-yczg9</code> is the name of a pod in the cluster):</p>
<pre><code>$ kubectl exec my-nginx-yczg9 -- sh -c 'echo $MY_NGINX_SERVICE_HOST'
10.123.253.27
</code></pre></li>
</ol>
<p>More details on service IPs can be found in the <a href="http://kubernetes.io/v1.0/docs/user-guide/services.html" rel="noreferrer">Services in Kubernetes</a> documentation, and the previously mentioned <a href="http://kubernetes.io/v1.0/examples/simple-nginx.html" rel="noreferrer">simple-nginx example</a> is a good example of exposing a service outside your cluster with the <code>LoadBalancer</code> service type.</p>
|
<p>I am following the <a href="https://github.com/kubernetes/kubernetes/blob/v1.0.6/docs/getting-started-guides/docker.md" rel="nofollow">Running Kubernetes locally via Docker</a> guide and I am unable to get the master to start normally.</p>
<p><strong>Step One: Run etcd</strong></p>
<p><code>docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data</code></p>
<p>The etcd container appears to start normally. I don't see any errors with <code>docker logs</code> and I end up with an etcd process listening on 4001.</p>
<p><strong>Step Two: Run the master</strong></p>
<p><code>docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests</code></p>
<p>I believe this is where my issues begin. Below is the output from <code>docker logs</code>:</p>
<pre>
W1021 13:23:04.093281 1 server.go:259] failed to set oom_score_adj to -900: write /proc/self/oom_score_adj: permission denied
W1021 13:23:04.093426 1 server.go:462] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W1021 13:23:04.093445 1 server.go:424] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I1021 13:23:04.093503 1 server.go:271] Using root directory: /var/lib/kubelet
I1021 13:23:04.093519 1 plugins.go:69] No cloud provider specified.
I1021 13:23:04.093526 1 server.go:290] Successfully initialized cloud provider: "" from the config file: ""
I1021 13:23:05.126191 1 docker.go:289] Connecting to docker on unix:///var/run/docker.sock
I1021 13:23:05.126396 1 server.go:651] Adding manifest file: /etc/kubernetes/manifests
I1021 13:23:05.126409 1 file.go:47] Watching path "/etc/kubernetes/manifests"
I1021 13:23:05.126416 1 server.go:661] Watching apiserver
E1021 13:23:05.127148 1 reflector.go:136] Failed to list *api.Pod: Get http://localhost:8080/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused
E1021 13:23:05.127295 1 reflector.go:136] Failed to list *api.Service: Get http://localhost:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused
E1021 13:23:05.127336 1 reflector.go:136] Failed to list *api.Node: Get http://localhost:8080/api/v1/nodes?fieldSelector=metadata.name%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused
I1021 13:23:05.343848 1 plugins.go:56] Registering credential provider: .dockercfg
W1021 13:23:05.394268 1 container_manager_linux.go:96] Memory limit 0 for container /docker-daemon is too small, reset it to 157286400
I1021 13:23:05.394284 1 container_manager_linux.go:100] Configure resource-only container /docker-daemon with memory limit: 157286400
I1021 13:23:05.395019 1 plugins.go:180] Loaded volume plugin "kubernetes.io/aws-ebs"
I1021 13:23:05.395040 1 plugins.go:180] Loaded volume plugin "kubernetes.io/empty-dir"
I1021 13:23:05.395052 1 plugins.go:180] Loaded volume plugin "empty"
I1021 13:23:05.395068 1 plugins.go:180] Loaded volume plugin "kubernetes.io/gce-pd"
I1021 13:23:05.395080 1 plugins.go:180] Loaded volume plugin "gce-pd"
I1021 13:23:05.395098 1 plugins.go:180] Loaded volume plugin "kubernetes.io/git-repo"
I1021 13:23:05.395112 1 plugins.go:180] Loaded volume plugin "git"
I1021 13:23:05.395124 1 plugins.go:180] Loaded volume plugin "kubernetes.io/host-path"
I1021 13:23:05.395136 1 plugins.go:180] Loaded volume plugin "kubernetes.io/nfs"
I1021 13:23:05.395147 1 plugins.go:180] Loaded volume plugin "kubernetes.io/secret"
I1021 13:23:05.395156 1 plugins.go:180] Loaded volume plugin "kubernetes.io/iscsi"
I1021 13:23:05.395166 1 plugins.go:180] Loaded volume plugin "kubernetes.io/glusterfs"
I1021 13:23:05.395178 1 plugins.go:180] Loaded volume plugin "kubernetes.io/persistent-claim"
I1021 13:23:05.395194 1 plugins.go:180] Loaded volume plugin "kubernetes.io/rbd"
I1021 13:23:05.395274 1 server.go:623] Started kubelet
I1021 13:23:05.395296 1 server.go:63] Starting to listen on 0.0.0.0:10250
I1021 13:23:05.395507 1 server.go:82] Starting to listen read-only on 0.0.0.0:10255
</pre>
<p><strong>Step Three: Run the service proxy</strong></p>
<p><code>docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2</code></p>
<p>The docker logs from this step contained similar errors to what I saw in Step Two.</p>
<pre>
I1021 13:32:03.177004 1 server.go:88] Running in resource-only container "/kube-proxy"
I1021 13:32:03.177432 1 proxier.go:121] Setting proxy IP to 192.168.19.200 and initializing iptables
E1021 13:32:03.195731 1 api.go:108] Unable to load services: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused
E1021 13:32:03.195924 1 api.go:180] Unable to load endpoints: Get http://127.0.0.1:8080/api/v1/endpoints: dial tcp 127.0.0.1:8080: connection refused
</pre>
<p><code>docker ps</code> output:</p>
<pre>
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
576d15c22537 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube proxy --m" About an hour ago Up About an hour high_pasteur
a98637c9d523 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube kubelet -" About an hour ago Up 34 minutes drunk_jones
618afb1de613 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd " 2 hours ago Up 2 hours high_yonath
</pre>
<p>The first error from Step Two's logs led me to believe the error may have something to do with iptables.</p>
<p><code>iptables -L</code> output:</p>
<pre>
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
</pre>
<p><code>docker exec -ti a98637c9d523 cat /etc/kubernetes/manifests/master.json</code> output:</p>
<pre>
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "k8s-master"},
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "controller-manager",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [
          "/hyperkube",
          "controller-manager",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
      },
      {
        "name": "apiserver",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [
          "/hyperkube",
          "apiserver",
          "--portal-net=10.0.0.1/24",
          "--address=127.0.0.1",
          "--etcd-servers=http://127.0.0.1:4001",
          "--cluster-name=kubernetes",
          "--v=2"
        ]
      },
      {
        "name": "scheduler",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [
          "/hyperkube",
          "scheduler",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
      }
    ]
  }
}
</pre>
<p>Docker version 1.8.3</p>
<p>Kernel version 4.2.3</p>
<p>Any insight would be greatly appreciated.</p>
| <p>Can you downgrade your docker version to 1.7.2 first? I did exactly what you did above with docker 1.7.2, and everything works.</p>
<pre><code>$ curl 127.0.0.1:8080/
{
"paths": [
"/api",
"/api/v1",
"/api/v1beta3",
"/healthz",
"/healthz/ping",
"/logs/",
"/metrics",
"/resetMetrics",
"/swagger-ui/",
"/swaggerapi/",
"/ui/",
"/version"
]
}
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0141e596414c gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube proxy -- 15 minutes ago Up 15 minutes nostalgic_nobel
10634ce798e9 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube schedule 16 minutes ago Up 16 minutes k8s_scheduler.b725e775_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_43562383
5618a39eb11d gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube apiserve 16 minutes ago Up 16 minutes k8s_apiserver.70750283_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_e5d145be
25f336102b26 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube controll 16 minutes ago Up 16 minutes k8s_controller-manager.aad1ee8f_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_fe538b9b
7f1391840920 gcr.io/google_containers/pause:0.8.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD.e4cc795_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_26fd84fd
a11715435f45 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube kubelet 17 minutes ago Up 17 minutes jovial_hodgkin
a882a1a4b917 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd 18 minutes ago Up 18 minutes adoring_hodgkin
</code></pre>
<p>There are a couple of known issues with docker 1.8.3, especially <a href="https://github.com/docker/docker/issues/17190" rel="nofollow">docker#17190</a>. We had to work around that issue through <a href="https://github.com/kubernetes/kubernetes/pull/16052" rel="nofollow">kubernetes#16052</a>, but those changes are not cherry-picked into the Kubernetes 1.0 release. From the output you posted above, I noticed that there is no pause container. Could you also run <code>docker ps -a</code> to check if some containers are dead, and copy & paste the output of <code>docker logs <dead-container></code> here?</p>
<p>I will file an issue to make sure the Kubernetes 1.1 release works fine with docker 1.8.3. Thanks!</p>
|
<p>I am trying to set up cluster logging following the link below:</p>
<p><a href="http://kubernetes.io/v1.0/docs/getting-started-guides/logging-elasticsearch.html" rel="nofollow">http://kubernetes.io/v1.0/docs/getting-started-guides/logging-elasticsearch.html</a></p>
<p>my config-default.sh</p>
<pre><code># Optional: Enable node logging.
ENABLE_NODE_LOGGING=**true**
LOGGING_DESTINATION=${LOGGING_DESTINATION:-**elasticsearch**}
# Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_LOGGING=true
ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1}
</code></pre>
<p>Command</p>
<pre><code>$ sudo kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-dns-v9-epplg 4/4 Running 0 20h
kube-ui-v3-i4von 1/1 Running 0 18h
</code></pre>
<p>As you can see, I enabled logging and set the logging destination to elasticsearch. I don't see elasticsearch-logging, fluentd-elasticsearch, or kibana-logging when I do <code>get pods</code>. It seems like the replication controller, service, or pods are not created. Do I need to do anything else to bring up Elasticsearch and Kibana?</p>
| <p>Where are you starting your cluster? I tried to reproduce this on GCE using both the 1.0.7 release and from HEAD and wasn't able to. </p>
<p>Using the 1.0.7 release:</p>
<pre><code>$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-v1-6x82b 1/1 Running 0 3m
elasticsearch-logging-v1-s4bj5 1/1 Running 0 3m
fluentd-elasticsearch-kubernetes-minion-ijpr 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-nrya 1/1 Running 0 2m
fluentd-elasticsearch-kubernetes-minion-ppls 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-sy4x 1/1 Running 0 2m
kibana-logging-v1-6qka9 1/1 Running 0 3m
kube-dns-v8-9hyzm 4/4 Running 0 3m
kube-ui-v1-11r3b 1/1 Running 0 3m
monitoring-heapster-v6-4uzam 1/1 Running 1 3m
monitoring-influx-grafana-v1-euc3a 2/2 Running 0 3m
</code></pre>
<p>From head:</p>
<pre><code>$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-v1-9gqs8 1/1 Running 0 3m
elasticsearch-logging-v1-edb97 1/1 Running 0 3m
etcd-server-events-kubernetes-master 1/1 Running 0 3m
etcd-server-kubernetes-master 1/1 Running 0 3m
fluentd-elasticsearch-kubernetes-master 1/1 Running 0 2m
fluentd-elasticsearch-kubernetes-minion-6id6 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-n25a 1/1 Running 0 1m
fluentd-elasticsearch-kubernetes-minion-x4wa 1/1 Running 0 1m
heapster-v10-ek03n 1/1 Running 0 3m
kibana-logging-v1-ybsad 1/1 Running 0 3m
kube-apiserver-kubernetes-master 1/1 Running 0 3m
kube-controller-manager-kubernetes-master 1/1 Running 0 3m
kube-dns-v9-dkmad 4/4 Running 0 3m
kube-scheduler-kubernetes-master 1/1 Running 0 3m
kube-ui-v3-mt7nw 1/1 Running 0 3m
l7-lb-controller-b56yf 2/2 Running 0 3m
monitoring-influxdb-grafana-v2-lxufh 2/2 Running 0 3m
</code></pre>
<p>The only thing I changed in <code>config-default.sh</code> is the KUBE_LOGGING_DESTINATION variable from gcp to elasticsearch:</p>
<pre><code>$ git diff cluster/gce/config-default.sh
diff --git a/cluster/gce/config-default.sh b/cluster/gce/config-default.sh
index fd31820..2e37ebc 100755
--- a/cluster/gce/config-default.sh
+++ b/cluster/gce/config-default.sh
@@ -58,7 +58,7 @@ ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-googleinfluxdb}"
# Optional: Enable node logging.
ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}"
-LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-gcp}" # options: elasticsearch, gcp
+LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-elasticsearch}" # options: elasticsearch, gcp
# Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up.
ENABLE_CLUSTER_LOGGING="${KUBE_ENABLE_CLUSTER_LOGGING:-true}"
</code></pre>
|
<p>TL;DR Kubernetes allows all containers to access all other containers on the entire cluster, this seems to greatly increase the security risks. How to mitigate?</p>
<p>Unlike <a href="https://www.docker.com/" rel="nofollow">Docker</a>, where one would usually only allow network connection between containers that need to communicate (via <code>--link</code>), each <em>Pod</em> on <a href="http://kubernetes.io/" rel="nofollow">Kubernetes</a> can access all other Pods on that <em>cluster</em>.</p>
<p>That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL, running on Kubernetes, a compromised Nginx would be able to access the database.</p>
<p>People used to run all those on a single machine, but that machine would have serious periodic updates (more than containers), and SELinux/AppArmor for serious people.</p>
<p>One can mitigate the risks a bit by having each project (if you have various independent websites, for example) run on its own cluster, but that seems wasteful.</p>
<p>The current <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/security.md" rel="nofollow">Kubernetes security</a> seems to be very incomplete. Is there already a way to have a decent security for production?</p>
| <p>As @tim-hockin says, we do plan to have a way to partition the network. </p>
<p>But, IMO, for systems with more moving parts, (which is where Kubernetes should really shine), I think it will be better to focus on application security. </p>
<p>Taking your three-layer example, the PHP pod should be authorized to talk to the database, but the Nginx pod should not. So, if someone figures out a way to execute an arbitrary command in the Nginx pod, they might be able to send a request to the database Pod, but it should be rejected as not authorized.</p>
<p>I prefer the application-security approach because:</p>
<ul>
<li>I don't think the <code>--links</code> approach will scale well to 10s of different microservices or more. It will be too hard to manage all the links.</li>
<li>I think as the number of devs in your org grows, you will need fine grained app-level security anyhow. </li>
</ul>
<p>In terms of being like docker compose, it looks like docker compose currently only works on single machines, according to this page:
<a href="https://github.com/docker/compose/blob/master/SWARM.md" rel="nofollow">https://github.com/docker/compose/blob/master/SWARM.md</a></p>
|
<p>I'm trying to get Kubernetes running on some local machines running CoreOS. I'm loosely following <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/coreos/coreos_multinode_cluster.html" rel="nofollow">this guide</a>. Everything seems to be up and running, and I'm able to connect to the api via kubectl. However, when I try to create a pod, I get this error: </p>
<pre><code>Pod "redis-master" is forbidden: Missing service account default/default: <nil>
</code></pre>
<p>Doing <code>kubectl get serviceAccounts</code> confirms that I don't have any ServiceAccounts:</p>
<pre><code>NAME SECRETS
</code></pre>
<p>According to the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/service-accounts.md#using-multiple-service-accounts" rel="nofollow">documentation</a>, each namespace should have a default ServiceAccount. Running <code>kubectl get namespace</code> confirms that I have the default namespace:</p>
<pre><code>NAME LABELS STATUS
default <none> Active
</code></pre>
<p>I'm brand new to Kubernetes and CoreOS, so I'm sure there's something I'm overlooking, but I can't for the life of me figure out what's going on. I'd appreciate any pointers.</p>
<p>UPDATE</p>
<p>It appears the kube-controller-manager isn't running. When I try to run it, I get this message:</p>
<pre><code>I1104 21:09:49.262780 26292 plugins.go:69] No cloud provider specified.
I1104 21:09:49.262935 26292 nodecontroller.go:114] Sending events to api server.
E1104 21:09:49.263089 26292 controllermanager.go:217] Failed to start service controller: ServiceController should not be run without a cloudprovider.
W1104 21:09:49.629084 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly.
W1104 21:09:49.629322 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly.
W1104 21:09:49.636082 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly.
W1104 21:09:49.638712 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly.
</code></pre>
<p>Since I'm running this locally, I don't have a cloud provider. I tried to define <code>--cloud-provider=""</code> but it still complains with the same error.</p>
| <p>The default service account for each namespace is created by the service account controller, which is a loop that is part of the kube-controller-manager binary. So, verify that binary is running, check its logs for anything that suggests it can't create a service account, and make sure you set "--service-account-private-key-file=somefile" to a file that has a valid PEM key.</p>
<p>Alternatively, if you want to make some progress without service accounts, and come back to that later, you can disable the admission controller that is blocking your pods by removing the "ServiceAccount" option from your api-server's <code>--admission-controllers</code> flag. But you will probably want to come back and fix that later.</p>
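<p>As a minimal sketch only (the key path below is a placeholder, and other flags you already pass are omitted), the relevant piece of the controller manager invocation looks like this:</p>
<pre><code># Hypothetical example: give the controller manager a valid PEM key so the
# service account controller can mint tokens and the default service account.
kube-controller-manager \
  --master=http://127.0.0.1:8080 \
  --service-account-private-key-file=/srv/kubernetes/server.key
</code></pre>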
|
<p>Kubernetes UI dashboard shows (this matches the free -m on this minion)</p>
<p>Memory: 7.29 GB / 7.84 GB</p>
<p>This overall memory usage is gradually increasing over time. I am trying to get a view into this memory growth using the <em>Kubernetes/Grafana</em> default dashboard for this metric: memory/usage_bytes_gauge. However, I see the following on the minion:</p>
<ul>
<li>The units do not match, i.e. approx 7 GB used vs 200 MiB on the plot</li>
<li>Memory change is all over the place in the graph, as opposed to a gradual increase</li>
</ul>
| <p>Can you plot <code>memory/working_set_bytes_gauge</code> instead of <code>memory/usage_bytes_gauge</code>?
The kube UI might be using working set which correlates with free.
<code>memory/usage</code> includes pages that the kernel can reclaim on demand.</p>
|
<p>After 30-45 minutes, chunked HTTP connection to API server is dropped:</p>
<pre><code>Transmission Control Protocol, Src Port: http-alt (8080), Dst Port: 55782 (55782), Seq: 751, Ack: 88, Len: 0
.... 0000 0001 0001 = Flags: 0x011 (FIN, ACK)
</code></pre>
<p>This happens regardless of the activity level, i.e. it happens for connections that were idle for a long time but also for the ones that had notifications coming for the whole duration of the connection. HTTP 1.0 (with <code>Connection: Keep-Alive</code> header) just ends the original request, while HTTP 1.1, which is keepalive by default, sends <code>400 Bad Request</code> before dropping the connection.</p>
<p>Is it possible to get a watch connection which remains alive for a long period of time?</p>
| <p>Once you're certain your client properly handles disconnections, you can use the following kube-apiserver flag to control how long apiserver lets the watches stay open:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/admin/kube-apiserver.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/admin/kube-apiserver.md</a></p>
<p><code>--min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load.
</code></p>
<p>Test with a small value, run in production with a large value.</p>
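<p>On the client side, a minimal sketch of a watch that can be resumed after the server closes the connection (the resourceVersion value here is just a placeholder):</p>
<pre><code># Start a chunked watch; -N disables curl's output buffering.
curl -N "http://localhost:8080/api/v1/pods?watch=true&resourceVersion=12345"
# When the connection is dropped, re-issue the request with the last
# resourceVersion you observed to pick up where you left off.
</code></pre>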
|
<p>I would like to expand/shrink the number of kubelets being used by kubernetes cluster based on resource usage. I have been looking at the code and have some idea of how to implement it at a high level.</p>
<p>I am stuck on 2 things:</p>
<ol>
<li><p>What will be a good way for accessing the cluster metrics (via Heapster)? Should I try to use the kubedns for finding the heapster endpoint and directly query the API or is there some other way possible? Also, I am not sure on how to use kubedns to get the heapster URL in the former.</p></li>
<li><p>The rescheduler which expands/shrinks the number of nodes will need to kick in every 30 minutes. What will be the best way for it. Is there some interface or something in the code which I can use for it or should I write a code segment which gets called every 30 mins and put it in the main loop?</p></li>
</ol>
<p>Any help would be greatly appreciated :)</p>
| <p>Part 1:</p>
<p>What you said about using kubedns to find heapster and querying that REST API is fine.</p>
<p>You could also write a client interface that abstracts the interface to heapster -- that would help with unit testing.</p>
<p>Take a look at this metrics client:
<a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/metrics/metrics_client.go" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/metrics/metrics_client.go</a>
It doesn't do exactly what you want: it gets per-Pod stats instead of per-cluster or per-node stats. But you could modify it.</p>
<p>In function <code>getForPods</code>, you can see the code that resolves the heapster service and connects to it here:</p>
<pre><code> resultRaw, err := h.client.Services(h.heapsterNamespace).
ProxyGet(h.heapsterService, metricPath, map[string]string{"start": startTime.Format(time.RFC3339)}).
DoRaw()
</code></pre>
<p>where heapsterNamespace is "kube-system" and heapsterService is "heapster".</p>
<p>That metrics client is part of the "horizontal pod autoscaler" implementation. It is solving a slightly different problem, but you should take a look at it if you haven't already. It is described here: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md</a></p>
<p>FYI: The Heapster REST API is defined here:
<a href="https://github.com/kubernetes/heapster/blob/master/docs/model.md" rel="nofollow">https://github.com/kubernetes/heapster/blob/master/docs/model.md</a>
You should poke around and see if there are node-level or cluster-level CPU metrics that work for you.</p>
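<p>As a rough sketch (the model paths come from the heapster docs linked above and may differ between heapster versions), the lookup could look like this:</p>
<pre><code># From inside the cluster, resolve heapster through kube-dns:
curl http://heapster.kube-system.svc.cluster.local/api/v1/model/metrics/

# Or from outside, go through the apiserver proxy:
kubectl proxy &
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/metrics/
</code></pre>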
<p>Part 2:</p>
<p>There is no standard interface for shrinking nodes. It is different for each cloud provider. And if you are on-premises, then you can't shrink nodes. </p>
<p>Related discussion:
<a href="https://github.com/kubernetes/kubernetes/issues/11935" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/11935</a></p>
<p>Side note: Among Kubernetes developers, we typically use the term "rescheduler" when talking about something that rebalances pods across machines by removing a pod from one machine and creating the same kind of pod on another machine. That is a different thing than what you are talking about building. We haven't built a rescheduler yet, but there is an outline of how to build one here:
<a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/rescheduler.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/rescheduler.md</a></p>
|
<p>Upon looking at the docs, there is an API call to delete a single pod, but is there a way to delete <em>all</em> pods in all namespaces?</p>
| <p>There is no command to do exactly what you asked.</p>
<p>Here are some close matches.</p>
<p><strong>Be careful before running any of these commands. Make sure you are connected to the right cluster if you use multiple clusters. Consider running <code>kubectl config view</code> first.</strong></p>
<p>You can delete all the pods in a single namespace with this command:</p>
<pre><code>kubectl delete --all pods --namespace=foo
</code></pre>
<p>You can also delete all deployments in a namespace, which will delete all pods attached to the deployments in that namespace:</p>
<pre><code>kubectl delete --all deployments --namespace=foo
</code></pre>
<p>You can delete all namespaces and every object in every namespace (but not un-namespaced objects, like nodes and some events) with this command:</p>
<pre><code>kubectl delete --all namespaces
</code></pre>
<p>However, the latter command is probably not something you want to do, since it will delete things in the kube-system namespace, which will make your cluster not usable.</p>
<p>This command will delete all the namespaces except kube-system, which might be useful:</p>
<pre><code>for each in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}" | grep -v kube-system);
do
kubectl delete ns $each
done
</code></pre>
|
<p>I have a problem understanding the Kubernetes workflow.
So, as I understand the flow:</p>
<p>You have a master which contains etcd, api-server, controller manager and scheduler.
You have nodes which contain pods (which contain containers), kubelet and a proxy.</p>
<p>The proxy is working as a basic proxy to make it possible for a service to communicate with other nodes.
When a pod dies, the controller manager will see this (it 'reads' the replication controller which describes how many pods there normally are).</p>
<p>Unclear:
The controller manager will inform the API server (I'm not sure about this).
The API server will tell the scheduler to search for a new place for the pod.
After the scheduler has found a good place, the API will inform the kubelet to create a new pod.</p>
<p>I'm not sure about the last scenario. Can you explain the right process in a clear way?
Which component creates the pod and container? Is it the kubelet?</p>
| <p>So it's the kubelet that actually creates the pods and talks to the docker daemon. If you do a <code>docker ps -a</code> on your nodes (as in, not the master) in your cluster, you'll see the containers in your pod running. So the workflow is: run a kubectl command, which goes to the API server, which passes it to the controller. Say that command was to spawn a pod; the controller relays that to the API server, which then goes to the scheduler and tells it to spawn the pod. Then the kubelet is told to spawn said pod.</p>
<p>I suggest reading the Borg paper that Kubernetes is based on to better understand things in further detail. <a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf" rel="nofollow">http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf</a></p>
|
<p>When installing a Kubernetes master node via Docker, docker is configured with bip and mtu for running flannel:<br>
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}<br>
What are the FLANNEL_SUBNET and FLANNEL_MTU variables? How do I set ${FLANNEL_SUBNET} and ${FLANNEL_MTU}?</p>
| <p>I really don't understand your questions, but I can explain how flannel integrates with docker.</p>
<p>Flannel is managing this file:</p>
<pre><code># cat /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
</code></pre>
<p>This sets the docker service to use the values from /run/flannel/docker as environment variables.</p>
<p>Inside /run/flannel/docker flannel is writing the network configuration that docker should use:</p>
<pre><code># cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.16.66.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --iptables=false --ip-masq=false --bip=172.16.66.1/24 --ip-masq=true --mtu=1472 "
</code></pre>
<p>On centos/redhat, the docker systemd script starts the daemon with the following command (taken from /usr/lib/systemd/system/docker.service):</p>
<pre><code>ExecStart=/usr/bin/docker -d $OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
</code></pre>
<p>So it will use only DOCKER_NETWORK_OPTIONS from what flannel offers.</p>
<p>On coreos, the docker daemon is started with:</p>
<pre><code>/usr/lib/coreos/dockerd daemon --host=fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
</code></pre>
|
<p>I am trying to setup a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I am following the Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/ubuntu.md" rel="nofollow">install tutorial for Ubuntu</a>. Everything works fine if I use the 4 nodes (VM + servers) as minions. But when I try to use the VM as just a master, it cannot access the Flannel network. I can create pods, services, etc, but if I try to access a service from the master node (VM), it cannot find the Flannel's IP.</p>
<p>Can I include a master only node to the Kubernetes' internal network (Flannel's net in this case)? If not, any advice in how to access the Kubernetes UI and other services from the master node? </p>
| <p>To have the master node access the cluster network, you can run <code>flanneld</code> and <code>kube-proxy</code> on the master node. This should give you the access you need.</p>
<p>However, adding these components in the context of using the <code>kube-up.sh</code> method may be a little involved. Seems like you may have a few options while remaining mostly within the framework of that tutorial:</p>
<ul>
<li>You could walk through the <code>kube-up.sh</code> scripts and alter them so that they install and configure <code>kube-proxy</code> and <code>flanneld</code> on the master node, but not the <code>kubelet</code>. That may be hard to maintain over time.</li>
<li>You could bring up the cluster as you already have with all 4 nodes running as 'nodes' (the new name for workers that used to be called 'minions'). Then mark the master node as unschedulable (<code>kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": true}}'</code>) as outlined <a href="https://github.com/kubernetes/kubernetes/blob/5c903dbcacb423158e3f363bcbb27eef58f95218/docs/admin/node.md#manual-node-administration" rel="nofollow">here</a>. The master will still show up in node listings and such, but it should not get any pods scheduled to it and should have full network access to the cluster.</li>
<li>You could also bring the cluster up as you already have with 4 nodes and then just log in and remove the <code>kubelet</code> on the master. This is effectively like the above, except the master won't show up in node listings (although you may have to remove it (<code>kubectl delete node $NODENAME</code>) when you remove the kubelet).</li>
</ul>
<p>There are probably other options (there always are!), but hopefully these can get you started.</p>
|
<p>I'm running my rethinkdb container in Kubernetes cluster. Below is what I notice:</p>
<p>Running <code>top</code> on the host, which is CoreOS, the rethinkdb process takes about 3 GB: </p>
<pre><code>$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
981 root 20 0 53.9m 34.5m 20.9m S 15.6 0.4 1153:34 hyperkube
51139 root 20 0 4109.3m 3.179g 22.5m S 15.0 41.8 217:43.56 rethinkdb
579 root 20 0 707.5m 76.1m 19.3m S 2.3 1.0 268:33.55 kubelet
</code></pre>
<p>But running <code>docker stats</code> to check the rethinkdb container, it takes about 7Gb!</p>
<pre><code>$ docker ps | grep rethinkdb
eb9e6b83d6b8 rethinkdb:2.1.5 "rethinkdb --bind al 3 days ago Up 3 days k8s_rethinkdb-3.746aa_rethinkdb-rc-3-eiyt7_default_560121bb-82af-11e5-9c05-00155d070266_661dfae4
$ docker stats eb9e6b83d6b8
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
eb9e6b83d6b8 4.96% 6.992 GB/8.169 GB 85.59% 0 B/0 B
$ free -m
total used free shared buffers cached
Mem: 7790 7709 81 0 71 3505
-/+ buffers/cache: 4132 3657
Swap: 0 0 0
</code></pre>
<p>Can someone explain why the container is taking a lot more memory than the rethinkdb process itself? </p>
<p>I'm running docker v1.7.1, CoreOS v773.1.0, kernel 4.1.5</p>
| <p>In the <code>top</code> command, you are looking at the amount of physical memory used. The <code>stats</code> command also includes the disk-cached RAM, so it's always bigger than the physical amount of RAM. When you really need more RAM, the disk cache will be released for the application to use.</p>
<p>Indeed, the memory usage is pulled via the cgroup <code>memory.usage_in_bytes</code>; you can access it in <code>/sys/fs/cgroup/memory/docker/long_container_id/memory.usage_in_bytes</code>. And according to the Linux doc <code>https://www.kernel.org/doc/Documentation/cgroups/memory.txt</code> section 5.5:</p>
<blockquote>
<p>5.5 usage_in_bytes</p>
<p>For efficiency, as other kernel components, memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing.
usage_in_bytes is affected by the method and doesn't show 'exact'
value of memory (and swap) usage, it's a fuzz value for efficient
access. (Of course, when necessary, it's synchronized.) If you want to
know more exact memory usage, you should use RSS+CACHE(+SWAP) value in
memory.stat(see 5.2).</p>
</blockquote>
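<p>For example, on the node you can compare the fuzzy counter with the RSS/cache breakdown directly (substitute your container's full ID for the placeholder):</p>
<pre><code># The cgroup counter that docker stats reports, cache included:
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes
# The more exact breakdown (rss, cache, swap) from memory.stat:
grep -E '^(rss|cache|swap) ' /sys/fs/cgroup/memory/docker/<container_id>/memory.stat
</code></pre>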
|
<p>The quickstart mentions a few times that, "You should be able to ssh into any node in your cluster ..." (e.g., <a href="http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables</a>). I have tried as described below but I am getting timed out.</p>
<ol>
<li>I used <code>export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash</code> to start the cluster</li>
<li>I have only specified AWS_REGION in my environment</li>
<li>The nodes are residing in VPC and I am able to ping them from a bastion</li>
</ol>
<p>This is the result:
<code>
ubuntu@ip-10-128-1-26:~$ ssh [email protected] -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 170.20.0.248 [170.20.0.248] port 22.
debug1: connect to address 170.20.0.248 port 22: Connection timed out
ssh: connect to host 170.20.0.248 port 22: Connection timed out
ubuntu@ip-10-128-1-26:~$
</code></p>
<p>Any idea or pointers would be appreciated. Thank you.</p>
| <p>It looks like your problem is with making sure the corresponding security group is open to ssh from whichever nodes you'd like to connect from. Make sure it's open to the public IP or the private IP, depending on which you're connecting from. As for the right ssh key to use: it'll be whichever one you set up when spinning up the nodes. You can check that in the EC2 pane of AWS in the "key pairs" sidebar option:</p>
<p><a href="https://i.stack.imgur.com/ww5SQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ww5SQ.png" alt="AWS key pairs image"></a></p>
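<p>If you prefer the CLI, an illustrative sketch (the group ID and CIDR below are placeholders for your node security group and your source address):</p>
<pre><code># Open port 22 on the node security group to a single source address.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.5/32
</code></pre>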
|
<p>I am trying to prepare a dev environment for my team, so we can develop, stage and deploy with the same (or near same) environment.</p>
<p>Getting a Kubernetes Cluster running locally via <a href="http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html" rel="nofollow">http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html</a> was nice and simple. I could then use kubectl to start the pods and services for my application.</p>
<p>However, the services' IP addresses are going to be different each time you start up, which is a problem if your code needs to use them. In Google Container Engine, kube DNS means you can access a service by name, which means the code that uses the service can remain constant between deployments.</p>
<p>Now, I know we could piece together the IP and PORT via environment variables, but I wanted to have as identical a setup as possible.</p>
<p>So I followed some instructions found in various places, both here and in the Kubernetes repo like <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns" rel="nofollow">this</a>.</p>
<p>Sure enough with a little editing of the yml files KubeDNS starts up.</p>
<p>But an nslookup on kubernetes.default fails. The health check on the DNS also fails (because it can't resolve the test look up) and the instance is shut down and restarted.</p>
<p>Running <code>kubectl cluster-info</code> results in:</p>
<pre><code>Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
</code></pre>
<p>So all good. However, hitting that endpoint results in:</p>
<pre><code>{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "no endpoints available for "kube-dns"",
code: 500
}
</code></pre>
<p>I am now at a loss, and know it is something obvious or easy to fix as it seems to all be working. Here is how I start up the cluster and DNS.</p>
<pre><code># Run etcd
docker run --net=host \
-d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd \
--addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# Run the master
docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube:v1.0.6 \
/hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
--address="0.0.0.0" --api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster_dns=10.0.0.10 --cluster_domain=cluster.local
# Run the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.6 \
/hyperkube proxy --master=http://127.0.0.1:8080 --v=2
# forward local port - after this you should be able to user kubectl locally
machine=default; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -L 8080:localhost:8080
</code></pre>
<p>All the containers spin up ok, kubectl get nodes reports ok. Note I pass in the dns flags.</p>
<p>I then start the DNS rc with this file, which is the edited version from <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in" rel="nofollow">here</a></p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - -domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://localhost:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
</code></pre>
<p>Then start the service (again based on the file in the <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-svc.yaml.in" rel="nofollow">repo</a>)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
</code></pre>
<p>I made the assumption based on another SO question that clusterIP is the value I passed into the master, and not the ip of the host machine. I am sure it has to be something obvious or simple that I have missed. Anyone out there who can help?</p>
<p>Thanks!</p>
<p><strong>UPDATE</strong></p>
<p>I found <a href="https://github.com/kubernetes/kubernetes/issues/12534" rel="nofollow">this</a> closed issue over in the GitHub repo. Seems I have an identical problem.</p>
<p>I have added to the thread on GitHub, and tried lots of things but still no progress. I tried using different images, but they had different errors (or the same error representing itself differently, I couldn't tell).</p>
<p>Everything relating to this that I have found suggests IP restrictions, or firewall/security settings. So I decided to curl the api from the container itself.</p>
<pre><code>docker exec 49705c38846a echo $(curl http://0.0.0.0:8080/api/v1/services?labels=)
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 908 100 908 0 0 314k 0 --:--:-- --:--:-- --:--:-- 443k
{ "kind": "ServiceList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/services", "resourceVersion": "948" }, "items": [ { "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "369a9307-796e-11e5-87de-7a0704d1fdad", "resourceVersion": "6", "creationTimestamp": "2015-10-23T10:09:57Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 443, "nodePort": 0 } ], "clusterIP": "10.0.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } ] }
</code></pre>
<p>Seems like a valid response to me, so why the JSON parse error coming from kube2Sky!?</p>
<pre><code>Failed to list *api.Service: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
Failed to list *api.Endpoints: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
</code></pre>
| <p>The problem was with the networking and kube2sky not accessing the API, so it couldn't get the services.</p>
<p>Changing the docker run for the master from,</p>
<pre><code>--config=/etc/kubernetes/manifests
</code></pre>
<p>to</p>
<pre><code>--config=/etc/kubernetes/manifests-multi
</code></pre>
<p>Then in the skydns-rc.yaml, for kube2sky, as well as setting the domain, set the host IP address.</p>
<pre><code>- -kube_master_url=http://192.168.99.100:8080 #<- your docker machine IP
</code></pre>
<p>Without the manifests-multi, the host IP is not accessible.</p>
<p>This was a simple change but took a bit to track down.</p>
<p>I have created a simple set up on GitHub and will maintain this so people don't have to go through this pain just to get a local dev environment up and running.</p>
<p><a href="https://github.com/justingrayston/kubernetes-docker-dns" rel="nofollow">https://github.com/justingrayston/kubernetes-docker-dns</a></p>
|
<p>Kubernetes automatically places a token and certificate in <code>/var/run/secrets/kubernetes.io/serviceaccount</code> of each running container in a pod. This token allows access to the API Server from any container. </p>
<p>Is it possible to either prevent this directory from being added to a container or specify a service account that has zero privileges?</p>
| <p>That token has no explicit permissions. If you run with any authorization mode other than AllowAll, you will find that the account cannot do anything with the API. </p>
<p>If you want to stop injecting API tokens, you can remove the service account admission controller from the list (in apiserver options). </p>
<p>If you want to stop generating tokens completely, you can remove the private key argument from the controller manager start options. </p>
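<p>As an illustrative sketch only (flag names and the plug-in list vary by version, and the other apiserver flags are omitted), dropping ServiceAccount from the admission control list looks like:</p>
<pre><code># Hypothetical excerpt: start the apiserver without the ServiceAccount admission
# controller so pods are neither blocked on nor injected with service accounts.
kube-apiserver \
  --etcd-servers=http://127.0.0.1:4001 \
  --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
</code></pre>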
|
<p><strong>Background:</strong></p>
<p>We're currently using a continuous delivery pipeline and at the end of the pipeline we deploy the generated Docker image to some server(s) together with the latest application configuration (set as environment variables when starting the Docker container). The continuous delivery build number is used as version for the Docker image and it's currently also this version that gets deployed to the server(s). </p>
<p>Sometimes though we need to update the application configuration (environment variables) and reuse an existing Docker image. Today we simply deploy an existing Docker image with an updated configuration.</p>
<p>Now we're thinking of switching to Kubernetes instead of our home-built solution. Thus it would be nice for us if the version number generated by our continuous delivery pipeline is reflected as the pod version in Kubernetes as well (even if we deploy the same version of the Docker image that is currently deployed but with different environment variables).</p>
<p><strong>Question:</strong></p>
<p>I've read the documentation of <a href="https://cloud.google.com/container-engine/docs/kubectl/rolling-update?hl=en" rel="nofollow">rolling-update</a> but it doesn't indicate that you can do a rolling-update and <em>only</em> change the environment variables associated with a pod without changing its version. </p>
<ol>
<li>Is this possible? </li>
<li>Is there a workaround?</li>
<li>Is this something we should avoid altogether and use a different approach that is more "Kubernetes friendly"?</li>
</ol>
| <p>Rolling update just scales down one replicationController and scales up another one. Therefore, it deletes the old pods and makes new pods, at a controlled rate. So, if the new replication controller json file has different env vars and the same image, then the new pods will have that too. </p>
<p>In fact, even if you don't change anything in the json file, except one label value (you have to change some label), then you will get new pods with the same image and env. I guess you could use this to do a rolling restart?</p>
<p>You get to pick what label(s) you want to change when you do a rolling update. There is no formal Kubernetes notion of a "version". You can make a label called "version" if you want, or "contdelivver" or whatever.</p>
<p>I think if I were in your shoes, I would look at two options:</p>
<p>Option 1: put (at least) two labels on the rcs, one for the docker image version (which, IIUC is also a continuous delivery version), and one for the "environment version". This could be a git commit, if you store your environment vars in git, or something more casual. So, your pods could have labels like "imgver=1.3,envver=a34b87", or something like that.</p>
<p>Option 2: store the current best known replication controller, as a json (or yaml) file in version control (git, svn, whatevs). Then use the revision number from version control as a single label (e.g. "version=r346"). This is not the same as your continuous delivery label.
It is a label for the whole configuration of the pod.</p>
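<p>For example (the controller and file names here are hypothetical), a roll-out that keeps the same image but changes env vars and one label value would look like:</p>
<pre><code># web-worker-v2.json differs from the running web-worker-v1 replication
# controller only in its env vars and in one label value (e.g. envver/version).
kubectl rolling-update web-worker-v1 -f web-worker-v2.json
</code></pre>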
|
<p>I'm trying to mount an external nfs share in a Replication Controller. When I create the replication controller, the pod is pending. Getting the details on the pod, I get these events:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Thu, 05 Nov 2015 11:28:33 -0700 Thu, 05 Nov 2015 11:28:33 -0700 1 {scheduler } scheduled Successfully assigned web-worker-hthjq to jolt-server-5
Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedMount Unable to mount volumes for pod "web-worker-hthjq_default": exit status 32
Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedSync Error syncing pod, skipping: exit status 32
</code></pre>
<p>My set up is one master and one node on local machines. These machines are running CoreOS. The nfs share exists on another machine on the network. If I shell into the host machine, I can successfully mount the nfs share, so I believe the export is configured correctly. Looking around online, it seems like the only examples of using nfs shares are those defined within Kubernetes (one pod sharing with another). Is there not a way to mount an external share directly from Kubernetes (I don't want to have to mount it to the host machine and then mount from the host machine to the container).</p>
<p>Here's my Replication Controller:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: web-worker
name: web-worker
spec:
replicas: 1
selector:
component: web-worker
template:
metadata:
labels:
app: task-queue
component: web-worker
spec:
containers:
- command:
- /sbin/my_init
image: quincy/php-apache2:latest
name: web-worker
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- name: code-base
mountPath: /var/www/html
- name: local-secrets
mountPath: /secrets
volumes:
- name: code-base
nfs:
server: thuey.jolt.local
path: "/Users/thuey/Sites/jolt/jolt-web"
- name: local-secrets
secret:
secretName: local-secret
</code></pre>
<p>UPDATE</p>
<p>After thinking about it more, I realized the problem is probably that it can't find the server (thuey.jolt.local). It's probably just looking at the internal DNS. Is that accurate? If so, is there any way around that?</p>
<p>UPDATE</p>
<p>After attempting this again, mounting directly from the pod to the networked nfs server now works.</p>
| <p>With @rwehner's help, I was finally able to figure it out. Checking the kubelet log revealed:</p>
<p><code>Output: mount.nfs: rpc.statd is not running but is required for remote locking.</code></p>
<p>As soon as I got rpcbind running, the mount worked.</p>
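<p>For anyone hitting the same error: on a systemd-based host such as CoreOS, the missing pieces can usually be started with something like the following (unit names vary between distros, so treat this as a sketch):</p>
<pre><code>sudo systemctl start rpcbind
sudo systemctl start rpc-statd
</code></pre>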
|
<p>Currently testing out Kubernetes 1.0.7 on AWS and it creates an external load balancer just fine but I want to know if its possible to create an internal load balancer that is only accessible within the internal subnet. </p>
| <p>Not out of the box (at the time of this writing), but the Kubernetes Ingress api is evolving to support internal loadbalancers. Note the following: </p>
<ol>
<li>Kubernetes Services are round robin loadbalanced by default.</li>
<li>You can deploy something like the service loadbalancer [1] and access your services on the ClusterIP of the loadbalancer pod; just remove the hostPort line in the rc configuration [2] to avoid exposing them on the public IP of the vm.</li>
</ol>
<p>[1] <a href="https://github.com/kubernetes/contrib/tree/master/service-loadbalancer" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/service-loadbalancer</a><br>
[2] <a href="https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/rc.yaml#L35" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/rc.yaml#L35</a></p>
|
<p>What does it mean when the <code>creationTimestamp</code> for the template is <code>null</code>?</p>
<pre><code>"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"name": "kube-template"
}
},
</code></pre>
| <p>The template in the ReplicationControllerSpec defines the template for a Pod that the ReplicationController <em>will</em> create, but since it's an abstract <em>template</em>, it doesn't make sense for it to have a creation time. Once the ReplicationController creates a pod, the CreationTimestamp will be set to the time the Pod was created at.</p>
<p>The ReplicationController also has its own metadata (not in the template) which should include the CreationTime of the ReplicationController object.</p>
|
<p>Or to put it another way, what can I do in kubernetes so that the container is run with the equivalent of --device=/dev/tty10, as an example. Otherwise accessing a device like that gives an error.</p>
<pre><code>[root@87eb47e75ed4 /]# echo foo >> /dev/tty10
bash: /dev/tty10: Operation not permitted
</code></pre>
<p>I haven't found a way currently to achieve this short of making the container privileged. I was hoping there'd be something settable in the securityContext, perhaps.</p>
| <p>Passing the devices to the container is not currently supported in Kubernetes. This issue is tracked in <a href="https://github.com/kubernetes/kubernetes/issues/5607" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/5607</a></p>
|
<p>From the kubernetes docs I see that there is a <a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#dns" rel="noreferrer">DNS based service discovery</a> mechanism. Does Google Container Engine support this. If so, what's the format of DNS name to discover a service running inside Container Engine. I couldn't find the relevant information in the Container Engine docs.</p>
| <p>The DNS name for services is as follows: <code>{service-name}.{namespace}.svc.cluster.local</code>.</p>
<p>Assuming you configured <code>kubectl</code> to work with your cluster you should be able to get your service and namespace details by the following the steps below.</p>
<h2>Get your namespace</h2>
<pre><code>$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
</code></pre>
<p>You should ignore the <code>kube-system</code> entry, because that is for the cluster itself. All other entries are your <code>namespaces</code>. By default there will be one extra namespace called <code>default</code>. </p>
<h2>Get your services</h2>
<pre><code>$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP
broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP
service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP
104.155.61.198
service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP
service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP
</code></pre>
<p>This command lists all the services available in your cluster. So for example, if I want to get the IP address of the <code>service-frontend</code> I can use the following DNS: <code>service-frontend.default.svc.cluster.local</code>.</p>
<h2>Verify DNS with busybox pod</h2>
<p>You can create a busybox pod and use that pod to execute <code>nslookup</code> command to query the DNS server.</p>
<pre><code>$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- image: busybox
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
EOF
</code></pre>
<p>Now you can do an <code>nslookup</code> from the pod in your cluster.</p>
<pre><code>$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server: 10.203.240.10
Address 1: 10.203.240.10
Name: service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
</code></pre>
<p>Here you see that the <code>Address 1</code> entry is the IP of the <code>service-frontend</code> service, the same as the IP address listed by <code>kubectl get services</code>.</p>
|
<p>I am attempting to pull private docker images from Docker Hub. </p>
<pre><code>Error: image orgname/imagename:latest not found
</code></pre>
<p>The info I am seeing on the internet...</p>
<ul>
<li><a href="http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry" rel="nofollow">http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/7954" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/7954</a></li>
</ul>
<p>Leads me to believe I should be able to put something like</p>
<pre><code>{
"https://index.docker.io/v1/": {
"auth": "base64pw==",
"email": "[email protected]"
}
}
</code></pre>
<p>In the kubelet user's $HOME/.dockercfg, and the kubelet will then authenticate with the container registry before attempting to pull.</p>
<p>This doesn't appear to be working. Am I doing something wrong? Is this still possible?</p>
<p>I am using the vagrant provisioner located in <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/cluster</a></p>
<p>Also: I am aware of the ImagePullSecrets method but am trying to figure out why this isn't working.</p>
<p>Update: </p>
<p>I moved /root/.dockercfg to /.dockercfg and it now appears to be pulling private images.</p>
| <p>From: <a href="https://github.com/kubernetes/kubernetes/pull/12717/files" rel="nofollow">https://github.com/kubernetes/kubernetes/pull/12717/files</a></p>
<blockquote>
<p>This function func ReadDockerConfigFile() (cfg DockerConfig, err error) is used to parse config which is stored in:</p>
<pre><code>GetPreferredDockercfgPath() + "/config.json"
workingDirPath + "/config.json"
$HOME/.docker/config.json
/.docker/config.json
GetPreferredDockercfgPath() + "/.dockercfg"
workingDirPath + "/.dockercfg"
$HOME/.dockercfg
/.dockercfg
</code></pre>
<p>The first four are the new type of secret, and the last four are the old type.</p>
</blockquote>
<p>This helps explain why moving the file to /.dockercfg fixed your issue, but not why there was an issue in the first place.</p>
|
<p><strong>Background:</strong></p>
<p>I'm pretty new to the Google's Cloud platform so I want to make sure that I'm not is missing anything obvious. </p>
<p>We're experimenting with GKE and Kubernetes and we'd like to expose some services over https. I've read the documentation for <a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer">http(s) load-balancing</a>, which seems to suggest that you should maintain your own nginx instance that does SSL termination and load balancing. To me this looks quite complex (I'm used to working on AWS and its load-balancer (ELB), which has supported SSL termination for ages).</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Is creating and maintaining an nginx instance the way to go if <em>all</em> you need is SSL termination in GKE? </li>
<li>If so, how is this done? The
<a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer">documentation</a>
doesn't really seem to convey this afaict.</li>
</ol>
| <p>Tl;Dr: Watch this space for Kubernetes 1.2</p>
<p>Till now Kubernetes has only supported L4 loadbalancing. This means the GCE/GKE loadbalancer opens up a tcp connection and just sends traffic to your backend, which is responsible for terminating ssl. As of Kubernetes 1.1, Kubernetes has an "Ingress" resource, but it's currently in Beta and only supports HTTP. It will support different SSL modes in 1.2. </p>
<p>So, how to terminate SSL with a normal Kubernetes service?<br>
<a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/README.md" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/README.md</a></p>
<p>How to create a loadbalancer for this Service?<br>
L4: Change NodePort to LoadBalancer (<a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8</a>)<br>
L7: Deploy a Service loadbalancer (<a href="https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#https" rel="noreferrer">https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#https</a>)</p>
<p>How to create a GCE HTTP loadbalancer through Kubernetes?
<a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/ingress.md#simple-fanout" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/ingress.md#simple-fanout</a></p>
<p>So how to create a GCE HTTPS loadbalancer through Kubernetes?<br>
Coming in 1.2, currently the process is manual. If you're not clear on the exact manual steps reply to this and I will clarify (not sure if I should list all of them here and confuse you even more).</p>
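<p>For the L4 route, the only change on the Kubernetes side is the Service type; a minimal sketch (service name, port and selector are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-https
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: nginx-https
</code></pre>
<p>The container behind this service still terminates SSL itself, as in the https-nginx example linked above.</p>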
|
<p>I deployed kubernetes with flanneld.service enabled in coreos, and then started an hdfs namenode and datanode via a kubernetes replication controller. I also created a kubernetes service for the namenode. The namenode service IP is 10.100.220.223, while the pod IP of the namenode is 10.20.96.4. In my case, one namenode and one datanode happen to be on the same host, and the namenode pod and datanode pod can ping each other successfully.</p>
<p>However I encountered the following two problems when trying to start hdfs datanode:</p>
<ol>
<li><p>If I use the namenode service IP 10.100.220.223 as fs.defaultFS in core-site.xml for the datanode, then when the datanode tries to register itself with the namenode via an RPC request, the namenode gets the wrong IP address for the datanode. Normally it should get the pod IP of the datanode, but in this case the docker0 inet address of the datanode host is reported to the namenode.</p></li>
<li><p>In order to work around this, I used the namenode pod IP 10.20.96.4 in core-site.xml for the datanode. This time the datanode can't be started at all. The error info reports that "k8s_POD-2fdae8b2_namenode-controller-keptk_default_55b8147c-881f-11e5-abad-02d07c9f6649_e41f815f.bridge" is used as the namenode host instead of the namenode pod IP.</p></li>
</ol>
<p>I tried searching for this issue online, but nothing has helped. Could you please help me out with this? Thanks.</p>
| <p>Use the latest Kubernetes and pass <code>--proxy-mode=iptables</code> to the kube-proxy start command; the HDFS cluster works now.</p>
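<p>For reference, the relevant part of the kube-proxy invocation would look something like this (the master address and any other flags are illustrative):</p>
<pre><code>kube-proxy --master=https://10.12.1.85:6443 --proxy-mode=iptables
</code></pre>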
|
<p>Does the Kubernetes volume system support Flocker? If it supports Flocker volumes, could you give an example of using a Flocker volume? Thanks! </p>
| <p>Flocker is supported in Kubernetes release 1.1. A Flocker dataset can be referenced from a PersistentVolume or directly from a Pod volume.</p>
<p><a href="http://kubernetes.io/v1.1/examples/flocker/" rel="nofollow">http://kubernetes.io/v1.1/examples/flocker/</a>
<a href="http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_persistentvolume" rel="nofollow">http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_persistentvolume</a></p>
|
<p>I have a picture below of my mac.</p>
<ul>
<li>K8S Cluster(on VirtualBox, 1*master, 2*workers)</li>
<li>OS Ubuntu 15.04</li>
<li>K8S version 1.1.1</li>
</ul>
<p>When I try to create a pod "busybox.yaml" it goes to pending status.
How can I resolve it?</p>
<p>I pasted the current status below for reference, along with a picture (kubectl describe node).</p>
<pre><code>$ kubectl get nodes
192.168.56.11   kubernetes.io/hostname=192.168.56.11   Ready     7d
192.168.56.12   kubernetes.io/hostname=192.168.56.12   Ready     7d

$ kubectl get ev
1h        39s       217       busybox   Pod       FailedScheduling   {scheduler }   no nodes available to schedule pods

$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
busybox   0/1       Pending   0          1h
</code></pre>
<p>And I also added one more status.
<a href="https://i.stack.imgur.com/vTqc2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vTqc2.jpg" alt="enter image description here"></a></p>
| <p>"kubectl describe pod busybox" or "kubectl get pod busybox -o yaml" output could be useful.</p>
<p>Since you didn't specify, I assume that the busybox pod was created in the default namespace, and that no resource requirements nor nodeSelectors were specified. </p>
<p>In many cluster setups, including vagrant, we create a LimitRange for the default namespace to request a nominal amount of CPU for each pod (.1 cores). You should be able to confirm that this is the case using "kubectl get pod busybox -o yaml".</p>
<p>We also create a number of system pods automatically. You should be able to see them using "kubectl get pods --all-namespaces -o wide".</p>
<p>It is possible for nodes with sufficiently small capacity to fill up with just system pods, though I wouldn't expect this to happen with 2-core nodes.</p>
<p>If the busybox pod were created before the nodes were registered, that could be another reason for that event, though I would expect to see a subsequent event for the reason that the pod remained pending even after nodes were created.</p>
<p>Please take a look at the troubleshooting guide for more troubleshooting tips, and follow up here or on slack (slack.k8s.io) with more information.</p>
<p><a href="http://kubernetes.io/v1.1/docs/troubleshooting.html" rel="nofollow">http://kubernetes.io/v1.1/docs/troubleshooting.html</a></p>
|
<p><strong>Update</strong>: Kubernetes supports adding secrets directly to environment variables now. See pod example on <a href="https://github.com/pmorie/kubernetes/blob/60cf252e8b8acfdc12f99e9b12ce0daa140b96f0/docs/user-guide/secrets/secret-env-pod.yaml" rel="nofollow">github</a></p>
<hr>
<p><strong>Original post</strong>:</p>
<p>I've been using files created by Kubernetes Secrets to store sensitive configs, but I always end up writing an extra layer into the containers or overriding the CMD to get the contents of the secret files into environment variables before running like normal. I'd like a bash script to do this for me. I found a ruby script that does something similar, but my ruby and bash skills aren't quite good enough to complete this. Here's the ruby script from <a href="https://blog.oestrich.org/2015/09/kubernetes-secrets-to-env-file/" rel="nofollow">https://blog.oestrich.org/2015/09/kubernetes-secrets-to-env-file/</a></p>
<pre><code>env = {}
Dir["#{ARGV[1]}/*"].each do |file|
key = file.split("/").last
key = key.gsub("-", "_").upcase
env[key] = File.read(file).strip
end
File.open(ARGV[0], "w") do |file|
env.each do |key, value|
file.puts(%{export #{key}="#{value}"})
end
end
</code></pre>
<p>With a bash script that does something similar to the above, it would be nice if it could be made generic, so that it checks if the directory exists, and if not (e.g. in a plain Docker environment), it will assume that the environment variables are already set by some other means.</p>
<p>How would I write a script to do this?</p>
| <p>I noted your use case in the feature request for exposing secrets as environment variables:
<a href="https://github.com/kubernetes/kubernetes/issues/4710" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/4710</a></p>
<p>It's mainly the quoting that makes this tricky in shell. The following worked for me interactively and should work in a script, but additional quoting would be needed if specified as an argument to "sh -c".</p>
<pre><code>(ls -1 secretdir | while read var ; do echo export ${var}=$(cat secretdir/${var}) ; done; echo yourcommand) | sh -
</code></pre>
<p>There may be more elegant ways to do this.</p>
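<p>Building on that, a rough bash equivalent of the ruby script in the question might look like this; it skips the export step when the secrets directory is absent, so it also works in a plain Docker environment (the directory path and usage convention are assumptions, not anything Kubernetes mandates):</p>
<pre><code>#!/bin/bash
# Usage: secrets-to-env.sh <secret-dir> <command> [args...]
# Exports every file in <secret-dir> as an env var (foo-bar -> FOO_BAR),
# then execs the real command.
SECRET_DIR="$1"
shift

if [ -d "$SECRET_DIR" ]; then
  for file in "$SECRET_DIR"/*; do
    [ -f "$file" ] || continue
    key=$(basename "$file" | tr '-' '_' | tr '[:lower:]' '[:upper:]')
    export "$key=$(cat "$file")"
  done
fi

exec "$@"
</code></pre>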
|
<p>We are moving our ruby microservices to kubernetes and we used to hold environment specific configuration in the <code>config/application.yml</code>. With kubernetes, you can create environment specific files for each service, e.g. <code>config/kubernetes/production.yml</code> etc. </p>
<p>While kubernetes pod configuration file is able to hold environmental variables, it seems that you cannot really hold structured data in there.</p>
<p>For an example, in <code>application.yml</code> we have</p>
<pre><code>development: &development
process:
notifier:
type: 'terminal-notifier'
...
production: &production
process:
notifier:
type: 'airbrake'
api_key: 'xxxx'
host: 'xxx.xxx.com'
...
</code></pre>
<p>Is it reasonable to continue this practice with kubernetes and break the environments up in the <code>application.yml</code> or does kubernetes have some other best practices for provisioning structured configuration for pod?</p>
<p>Note that until all services are migrated, we basically have to hold the configurations as such:</p>
<pre><code>kubernetes_staging:
<<: *staging
...
</code></pre>
| <p>You can do this a few ways: one is to keep doing what you're doing in a single file, another is to use labels to specify which environment's config to use, and the other is to use namespaces. I personally recommend namespaces; this way you can have separate <code>.yml</code> files for each environment that potentially spin up the same pods, but with different configurations. To do this you would have staging, prod, etc. namespaces. Namespaces are also a great way to give the same Kubernetes cluster a concept of staging and production. Additionally you can specify permissions for certain namespaces.</p>
<p>Here are the docs on namespaces <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/namespaces.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/namespaces.md</a></p>
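<p>A namespace itself is just a tiny manifest, and you pick it per command, e.g. (names are examples):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: staging
</code></pre>
<pre><code>kubectl create -f staging-namespace.yaml
kubectl create -f web-worker-rc.yaml --namespace=staging
</code></pre>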
|
<p>I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/ubuntu.md" rel="noreferrer">install tutorial for Ubuntu</a>. Each bare metal server also has 2T of disk space exported using <a href="http://docs.ceph.com/docs/v0.94.5/start/" rel="noreferrer">Ceph 0.94.5</a>. Everything is working fine, but when I try to start a Replication Controller I get the following (kubectl get pods):</p>
<pre><code>NAME READY STATUS RESTARTS AGE
site2-zecnf 0/1 Image: site-img is ready, container is creating 0 12m
</code></pre>
<p>The pod will be in this Not Ready state forever, but, if I kill it and start it again, it will run fine (sometimes I have to repeat this operation a few times though). Once the pod is running, everything works just fine.</p>
<p>If, for some reason, the pod dies, it's restarted by Kubernetes, but it can enter this Not Ready state again. Running:</p>
<pre><code>kubectl describe pod java-site2-crctv
</code></pre>
<p>I get (some fields deleted):</p>
<pre><code>Namespace: default
Status: Pending
Replication Controllers: java-site2 (1/1 replicas created)
Containers:
java-site:
Image: javasite-img
State: Waiting
Reason: Image: javasite-img is ready, container is creating
Ready: False
Restart Count: 0
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Sat, 14 Nov 2015 12:37:56 -0200 Sat, 14 Nov 2015 12:37:56 -0200 1 {scheduler } scheduled Successfully assigned java-site2-crctv to 10.70.2.3
Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedMount Unable to mount volumes for pod "java-site2-crctv_default": exit status 22
Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedSync Error syncing pod, skipping: exit status 22
</code></pre>
<p>The pod cannot mount the volume. But, if I mount the volumes (rbd blocks) by hand in a local folder on all nodes, the problem is gone (pods start without problems). </p>
<p>It seems to me that Kubernetes isn't able to map them (<code>sudo rbd map java-site-vol</code>), only to mount them (<code>sudo mount /dev/rbd/rbd/java-site-vol /...</code>).</p>
<p>Should I map all Ceph volumes that I use or should Kubernetes do that? </p>
| <p>I finally solved the problem. In the yaml files describing the Replication Controllers, I was using <code>keyring:</code> in the volume section:</p>
<pre><code>keyring: "ceph.client.admin.keyring"
</code></pre>
<p>After I <a href="https://ceph.com/planet/bring-persistent-storage-for-your-containers-with-krbd-on-kubernetes/" rel="noreferrer">generated a Ceph secret</a> and changed the yaml files to use <code>secretRef</code>:</p>
<pre><code>secretRef:
name: "ceph-secret"
</code></pre>
<p>Kubernetes was able to map and mount the Ceph volumes and the pods began to start normally. I don't know why using <code>keyring:</code> doesn't work in this case.</p>
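<p>For reference, the secret referred to by <code>secretRef</code> is just a generic secret whose <code>key</code> field holds the base64-encoded Ceph admin key, roughly along these lines (the key below is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFCMTZWMVZvRjVtRXhBQTVrQ1FzN2JCajhWVUxSdzI2Qzg0SEE9PQ==
</code></pre>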
|
<p><strong>Background</strong>
I'd like to connect a <a href="https://hub.docker.com/_/wordpress/" rel="nofollow noreferrer">Wordpress</a> docker container to a Google Cloud SQL instance. By default Google Cloud SQL only exposes an IPv6 address, and preferably I'd like to connect Wordpress to this address, but I can't find a way to do so (see my <a href="https://stackoverflow.com/questions/33733279/how-to-connect-to-mysql-using-ipv6-from-wordpress/">other</a> stackoverflow post for details).</p>
<p><strong>Question</strong></p>
<p>I'd like to know if it's possible to connect to an IPv6 address from a pod running in Kubernetes (GKE)? If so how?</p>
| <p>Currently, Google Cloud Platform <a href="https://cloud.google.com/compute/docs/networking?hl=en#networks" rel="nofollow">Networks</a> only support IPv4, so connecting to IPv6 addresses from GKE is not possible.</p>
|
<p>I'd like to try out the new <a href="http://kubernetes.io/v1.1/docs/user-guide/ingress.html" rel="nofollow">Ingress</a> resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
</code></pre>
<p>using: </p>
<pre><code>$ kubectl create -f test-ingress.yaml
</code></pre>
<p>I end up with the following error message:</p>
<pre><code>error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"]
error: no objects passed to create
</code></pre>
<p>When I run <code>kubectl version</code> it shows: </p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
</code></pre>
<p>But I seem to have the latest <code>kubectl</code> component installed since running <code>gcloud components update kubectl</code> just gives me:</p>
<pre><code>All components are up to date.
</code></pre>
<p>So how do I enable the <code>extensions/v1beta1</code> in Kubernetes/GKE?</p>
| <p>The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the <a href="https://cloud.google.com/container-engine/release-notes#november_12_2015" rel="nofollow">Google Container Engine release notes</a>:</p>
<blockquote>
<p>The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1
APIs like autoscaling will not be available via kubectl until next
week's push of the kubectl binary.</p>
</blockquote>
<p>along with the solution (download the newer binary manually). </p>
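<p>Until then, grabbing the 1.1 client by hand is a one-liner; on Linux it is roughly (version and URL pattern shown as an example):</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubectl
chmod +x kubectl
</code></pre>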
|
<p>In <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/elasticsearch/production_cluster/README.md" rel="nofollow">the Kubernetes example of Elasticsearch production deployment</a>, there is a warning about using <code>emptyDir</code>, and advises to "be adapted according to your storage needs", which is linked to the documentation of persistent storage on Kubernetes.</p>
<p>Is it better to use a persistent storage, which is an external storage for the node, and so needs (high) I/O over network, or can we deploy a reliable Elasticsearch using multiple data nodes with local <code>emptyDir</code> storage?</p>
<p>Context: We're deploying our Kubernetes on commodity hardware, and we prefer not to use SAN for the storage layer (because it doesn't seem like commodity).</p>
| <p>The warning is so that folks don't assume that using <code>emptyDir</code> provides a persistent storage layer. An <code>emptyDir</code> volume will persist as long as the pod is running on the same host. But if the host is replaced or it's disk becomes corrupted, then all data would be lost. Using network mounted storage is one way to work around both of these failure modes. If you want to use replicated storage instead, that works as well. </p>
|
<p>I've read the <a href="http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#lifecycle-hooks-and-termination-notice" rel="nofollow">docs</a> on graceful termination of a pod in Kubernetes but I'm not quite sure how to map my specific use case of shutting down a Java process gracefully.</p>
<p>What I want to do is to run the following bash command as a part of the termination process:</p>
<pre><code>$ kill -SIGTERM `pidof java`
</code></pre>
<p>I've tried this:</p>
<pre><code>...
lifecycle:
preStop:
exec:
command: ["kill", "-SIGTERM", "`pidof java`"]
...
</code></pre>
<p>But nothing happens; the Java shutdown hook doesn't seem to kick in when I stop the pod (<code>kubectl stop pod pod-xxx</code>). I suspect that the <code>pidof java</code> part of the <code>kill</code> command doesn't work (but I'm not sure). How would I do this in Kubernetes?</p>
| <p>I started a bash shell inside the container and executed my command instead and that turned out to work:</p>
<pre><code>command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"]
</code></pre>
<p>Without <code>/bin/bash</code> I couldn't get it working.</p>
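<p>Put back into the pod spec, the preStop hook then looks something like this sketch:</p>
<pre><code>lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"]
</code></pre>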
|
<p>I'm developing a Docker-based web service, where each subscriber has private access to their own Docker container running in the cloud, exposing port 443.</p>
<p>I've used <a href="https://github.com/jwilder/nginx-proxy" rel="nofollow">nginx-proxy/docker-gen</a> successfully to serve multiple Docker containers from the same VM, with just port 443 exposed to the public net.</p>
<p>This works fine ... but what do I do when the subscribers saturate the VM resources?
(As a simple example, I may have a practical limit of 10 subscribers' containers on a single DigitalOcean 2Gb instance serving as a Docker host.)</p>
<p>Eg when subscriber #11 signs up, I need to have a new Docker host ready and waiting to start up that new container.</p>
<p>In other words, I want to do <strong>horizontal autoscaling</strong> of my Docker hosts, responsive to user subscription demand. Doing some service discovery and making the containers publicly-addressable would be nice. </p>
<p>I'm trying to work out what the best solution is. Kubernetes 1.1 seems to support auto-scaling of Pods (ie basically increasing the number of containers...) but not the auto-scaling of the container hosts ("minions" in Kubernetes-speak??)</p>
<p>I've looked at the following projects which seem close to what I need:</p>
<ul>
<li><a href="http://deis.io/" rel="nofollow">Deis</a> - no explicit autoscaling as far as I can tell </li>
<li><a href="http://tsuru.io" rel="nofollow">Tsuru</a> - possible autoscaling solution but limited to count/RAM</li>
<li><a href="https://mesosphere.com/" rel="nofollow">Mesos/Mesosphere</a> - probably much more complex than necessary</li>
</ul>
<p>Can anybody make any useful suggestions?? </p>
| <p>As of Kubernetes v1.1, you can now implement a Horizontal Pod Autoscaler: <a href="http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html" rel="nofollow">http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html</a></p>
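<p>For the pod side of the problem, an autoscaler can be attached to an existing replication controller with a single command, roughly (resource names and thresholds are examples, and flag names may differ slightly between client versions):</p>
<pre><code>kubectl autoscale rc subscriber-frontend --min=2 --max=10 --cpu-percent=80
</code></pre>
<p>Scaling the container hosts themselves still has to be handled outside Kubernetes at this point, e.g. with your cloud provider's instance group autoscaling.</p>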
|
<p>I want to implement a rescheduler like functionality which basically kills pods if it decides that the pods could be rescheduled in a better way (based on requiring less number of nodes/fitting etc). Till now I have created a new kubectl command which I want to run whenever I want to reschedule. I have also looked at the code and rescheduler proposal in the docs.</p>
<p>But, I am unable to access the pod details that would be needed to decide which pod to kill if any from the command's <code>run</code> function(present in <code>pkg/kubectl/cmd/newCommand.go</code>) . I guess I can kill and restart pod using a mechanism similar to that used by <code>kubectl delete</code> and <code>create</code> but I am unable to get all the required pod and node lists.</p>
<p>For example, the <code>objs</code> variable in <code>pkg/kubectl/cmd/get.go</code> (used for <code>kubectl get</code>) contains pod details, but there is no data on which node they are scheduled on or what the resource capacities of that node are. </p>
<p>I would be grateful if someone could give some idea of how to get these details. Also, if it is easier to implement it at some other place instead of as a kubectl command then such suggestions are also welcomed.</p>
| <p>Firstly, please see <a href="https://github.com/kubernetes/kubernetes/issues/11793#issuecomment-150410114" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/11793#issuecomment-150410114</a> if you haven't already.</p>
<blockquote>
<p>I guess I can kill and restart pod using a mechanism similar to that used by kubectl delete and create but I am unable to get all the required pod and node lists.</p>
</blockquote>
<p>I would suggest writing a control loop instead. When you create a pod it's stored in the master and you can easily create a Kubernetes client to retrieve this pod, in the same way the Kubernetes scheduler does. See [1] for how to access the api through a client. With this in mind, I would suggest writing a control loop. There are several existing examples [2], but a very basic controller is the Ingress controller (just so you aren't confused by all the code in production controllers) [3]. </p>
<p>A problem you will face is getting the Kubernetes scheduler to ignore the pod. See discussion on the github issue for solutions.</p>
<p>It is possible to go the route you're on and implement it in kubectl, if you still want to. Run:</p>
<blockquote>
<p>kubectl get pods -o wide --v=7</p>
</blockquote>
<p>Note this output has node names, and kubectl should show you the REST calls it's making in the process. I suspect you will run into problems soon though, as you really don't <em>just</em> want to create/delete, because there's a high chance the scheduler will put the pod on the same node. </p>
<p>[1] <a href="https://stackoverflow.com/questions/33167023/kubernetes-go-client-used-storage-of-nodes-and-cluster/33177703#33177703">kubernetes go client used storage of nodes and cluster</a><br>
[2] <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/controller" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/pkg/controller</a><br>
[3] <a href="https://github.com/kubernetes/contrib/tree/master/Ingress/controllers" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/tree/master/Ingress/controllers</a></p>
|
<p>How do I set ulimit for containers in Kubernetes? (specifically ulimit -u)</p>
| <p>It appears that you can't currently set a ulimit but it is an open issue: <a href="https://github.com/kubernetes/kubernetes/issues/3595" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/3595</a></p>
|
<p>I created a kubernetes cluster for testing, but I cannot create an rc. I got the error <code>reason: 'failedScheduling' no nodes available to schedule pods</code>:</p>
<pre><code>I1112 04:24:34.626614 6 factory.go:214] About to try and schedule pod my-nginx-63t4p
I1112 04:24:34.626635 6 scheduler.go:127] Failed to schedule: &{{ } {my-nginx-63t4p my-nginx- default /api/v1/namespaces/default/pods/my-nginx-63t4p c4198c29-88ef-11e5-af0e-002590fdff2c 1054 0 2015-11-12 03:45:07 +0000 UTC <nil> map[app:nginx] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c414bbd3-88ef-11e5-8682-002590fdf940","apiVersion":"v1","resourceVersion":"1050"}}]} {[{default-token-879cw {<nil> <nil> <nil> <nil> <nil> 0xc20834c030 <nil> <nil> <nil> <nil> <nil>}}] [{nginx nginx [] [] [{ 0 80 TCP }] [] {map[] map[]} [{default-token-879cw true /var/run/secrets/kubernetes.io/serviceaccount}] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil>}] Always 0xc20834c028 <nil> ClusterFirst map[] default false []} {Pending [] <nil> []}}
I1112 04:24:34.626720 6 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"my-nginx-63t4p", UID:"c4198c29-88ef-11e5-af0e-002590fdff2c", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): reason: 'failedScheduling' no nodes available to schedule pods
</code></pre>
<p>The status of the pods looks like:</p>
<pre>
core@core-1-86 ~ $ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE NODE
my-nginx-3w98h 0/1 Pending 0 56m
my-nginx-4fau8 0/1 Pending 0 56m
my-nginx-9zc4f 0/1 Pending 0 56m
my-nginx-fzz5i 0/1 Pending 0 56m
my-nginx-hqqpt 0/1 Pending 0 56m
my-nginx-pm2bo 0/1 Pending 0 56m
my-nginx-rf3tk 0/1 Pending 0 56m
my-nginx-v1dj3 0/1 Pending 0 56m
my-nginx-viiop 0/1 Pending 0 56m
my-nginx-yy23r 0/1 Pending 0 56m
</pre>
<p>The example rc:</p>
<pre><code>core@core-1-85 ~ $ cat wk/rc-nginx.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: my-nginx
spec:
replicas: 10
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>And the node status in the cluster is:</p>
<pre><code>core@core-1-85 ~ $ kubectl get node
NAME LABELS STATUS AGE
10.12.1.90 kubernetes.io/hostname=10.12.1.90 Ready 37m
10.12.1.92 kubernetes.io/hostname=10.12.1.92 Ready 37m
10.12.1.93 kubernetes.io/hostname=10.12.1.93 Ready 37m
10.12.1.94 kubernetes.io/hostname=10.12.1.94 Ready 38m
10.12.1.95 kubernetes.io/hostname=10.12.1.95 Ready 38m
10.12.1.96 kubernetes.io/hostname=10.12.1.96 Ready 38m
10.12.1.97 kubernetes.io/hostname=10.12.1.97 Ready 38m
10.12.1.98 kubernetes.io/hostname=10.12.1.98 Ready 41m
core-1-89 kubernetes.io/hostname=core-1-89 Ready 22m
</code></pre>
| <p>I found the solution: the versions of kube-apiserver, kube-controller-manager and kube-scheduler did not match the kubelet version.</p>
<p>the detail: <a href="https://github.com/kubernetes/kubernetes/issues/17154" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/17154</a></p>
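<p>A quick way to spot such a mismatch is to compare the binaries directly on the master and the nodes, e.g.:</p>
<pre><code>kubectl version
kube-apiserver --version
kubelet --version
</code></pre>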
|
<p>Followed this guide to starting a local-machine kubernetes cluster:
<a href="http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html" rel="nofollow">http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html</a></p>
<p>I've created various pods with .yaml files and everything works; I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0). However, when I create services, the service IPs are in the 10.0.0.x range and are unreachable from other containers. </p>
<p>Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks!</p>
| <p>Tim, I ran it again using your steps; no difference, it didn't work. However, today I switched to the version 1.1 docs here:</p>
<p><a href="http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html" rel="nofollow">http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html</a></p>
<p>and also switched container versions, currently using:</p>
<p>gcr.io/google_containers/etcd:2.2.1</p>
<p>gcr.io/google_containers/hyperkube:v1.1.1</p>
<p>Lo and behold...it works!!! Containers can now talk to services!
Thanks for the responses</p>
|
<p>I'm trying to create a pod with Postgres. After initializing, the Pod has to execute the following command:</p>
<pre><code> "lifecycle": {
"postStart": {
"exec": {
"command": [
"export", "PGPASSWORD=password;", "psql", "-h", "myhost", "-U", "root", "-d", "AppPostgresDB", "<", "/db-backup/backup.sql"
]
}
}
},
</code></pre>
<p>Without this command the pod works perfectly.</p>
<p>I get the following status:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
postgres-import 0/1 ExitCode:137 0 15s
</code></pre>
<p>I get these events:</p>
<pre><code>Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id 15ad0166af04
Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id cfa5f8177beb
Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id d910391582e9
Mon, 16 Nov 2015 16:13:01 +0100 Mon, 16 Nov 2015 16:13:01 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id d910391582e9
Mon, 16 Nov 2015 16:13:11 +0100 Mon, 16 Nov 2015 16:13:11 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id d910391582e9
</code></pre>
<p>What can I do to solve this issue?
Thanks</p>
| <p>Try passing this to the shell:</p>
<pre><code>"command": [
"/bin/bash", "-c", "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql"
]
</code></pre>
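<p>In the pod descriptor from the question, the hook would then look roughly like this:</p>
<pre><code> "lifecycle": {
    "postStart": {
        "exec": {
            "command": [
                "/bin/bash", "-c", "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql"
            ]
        }
    }
},
</code></pre>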
|
<p>I hope everyone here is doing well. I am trying to find a way to add entries to a container's /etc/hosts file while spinning up a pod. I was just wondering if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the container's /etc/hosts when it's being created. Something like "--add-host node1.example.com:${node1ip}", which serves the same purpose for docker as shown below.</p>
<pre><code>docker run \
--name mongo \
-v /home/core/mongo-files/data:/data/db \
-v /home/core/mongo-files:/opt/keyfile \
--hostname="node1.example.com" \
--add-host node1.example.com:${node1ip} \
--add-host node2.example.com:${node2ip} \
--add-host node3.example.com:${node3ip} \
-p 27017:27017 -d mongo:2.6.5 \
--smallfiles \
--keyFile /opt/keyfile/mongodb-keyfile \
--replSet "rs0"
</code></pre>
<p>Any pointers are highly appreciated. Thank you.</p>
<p>Regards,
Aj</p>
| <p>Kubernetes uses the <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/networking.md" rel="nofollow">IP-per-pod model</a>. If I understand correctly, you want to create three mongo pods, and write the IP addresses of the three pods into <code>/etc/hosts</code> of each container. Modifying the <code>/etc/hosts</code> files directly might not be a good idea for many reasons (e.g., the pod may die and be replaced).</p>
<p>For peer discovery in kubernetes, you need to </p>
<ol>
<li>Find out the IP addresses of the peers.</li>
<li>Update your application with the addresses.</li>
</ol>
<p>(1) is achievable using a <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#headless-services" rel="nofollow">Headless Service</a>. (2) requires you to write a sidecar container to run alongside your mongo containers, perform (1), and configure your application. The sidecar container is highly application-specific and you may want to read some <a href="https://stackoverflow.com/questions/30041699/can-i-call-rs-initiate-and-rs-add-from-node-js-using-the-mongodb-driver">related stackoverflow questions</a> about doing this for mongodb. </p>
<p>As for (1), you can create a <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#headless-services" rel="nofollow">Headless Service</a> by using this <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/nodesjs-mongodb/mongo-service.yaml" rel="nofollow">service.yaml</a> with the clusterIP set to None.</p>
<pre><code>spec:
clusterIP: None
</code></pre>
<p>Then, you can create a replication controller which creates the desired number of mongo pods. For example, you can use <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/nodesjs-mongodb/mongo-controller.yaml" rel="nofollow">mongo-controller.yaml</a>, replacing the <code>gcePersistentDisk</code> with a desired local disk volume type (e.g. <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/volumes.md#emptydir" rel="nofollow"><code>emptyDir</code></a> or <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/volumes.md#hostpath" rel="nofollow"><code>hostPath</code></a>), and changing the replica number to 3.</p>
<p>Each of the mongo pods will be assigned an IP address automatically and is labeled with <code>name=mongo</code>. The headless service uses a label selector to find the pods. When querying DNS with the service name from a node or a container, it will return a list of IP addresses of the mongo pods.</p>
<p>E.g.,</p>
<pre><code>$ host mongo
mongo.default.svc.cluster.local has address 10.245.0.137
mongo.default.svc.cluster.local has address 10.245.3.80
mongo.default.svc.cluster.local has address 10.245.1.128
</code></pre>
<p>You can get the addresses in the sidecar container you wrote and configure the mongodb-specific settings accordingly.</p>
|
<p>When watching a replication controller, it returns its most recent <code>replicas</code> count under <code>ReplicationControllerStatus</code>. I could not find anywhere in the documentation what the status of the pod needs to be in order for it to be included there. Is it enough for the pod to be scheduled? I've noticed a replication controller reporting pods in its status even if the pods are still pending.</p>
| <p>Very interesting question! For that to answer I believe we need to walk the Star Wars walk and <em>Use The Source</em>:</p>
<ul>
<li>The <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replication/replication_controller.go#L62" rel="nofollow">ReplicationManager</a> has some hints concerning expectations</li>
<li>Then, there is <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/controller_utils.go" rel="nofollow">controller_utils.go</a> with some more indications </li>
<li>However, the core of the calculation seems to be in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replication/replication_controller_utils.go#L26" rel="nofollow">updateReplicaCount</a></li>
</ul>
<p><strong>UPDATE</strong>: My colleague Stefan Schimanski just pointed out to me that in fact the answer is a bit more complicated; key is <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/controller_utils.go#L394" rel="nofollow">FilterActivePods</a>:</p>
<pre><code>func FilterActivePods(pods []api.Pod) []*api.Pod {
var result []*api.Pod
for i := range pods {
if api.PodSucceeded != pods[i].Status.Phase &&
api.PodFailed != pods[i].Status.Phase &&
pods[i].DeletionTimestamp == nil {
result = append(result, &pods[i])
}
}
return result
}
</code></pre>
<p>This means the ultimate condition is: <strong>pods which have not terminated yet and are not in graceful termination</strong>.</p>
<p>Note that the definition of 'scheduled' in the context of Kubernetes is simply</p>
<pre><code>pod.spec.nodeName != ""
</code></pre>
<p>The Kubelet on a specific node watches the API Server for pods that have a matching <code>nodeName</code> and will then launch the pod on said node.</p>
|
<p><strong>Background:</strong></p>
<p>Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this new port to be publicly accessible as well.</p>
<p><strong>Question:</strong></p>
<p>Is there a good way to update a service without downtime (for example by adding a an additional port to expose)? If I just do:</p>
<pre><code>kubectl replace -f my-service-with-an-additional-port.json
</code></pre>
<p>I get the following error message:</p>
<pre><code>Replace failedspec.clusterIP: invalid value '': field is immutable
</code></pre>
| <p>If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct traffic to pods that use different port numbers.</p>
<p>Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.</p>
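<p>A sketch of the named-port approach (names are examples): the pod template names its ports, and the service refers to them by name, so the pod-side numbers can change without touching the service definition.</p>
<pre><code># in the pod template
ports:
- name: http
  containerPort: 8080
- name: admin
  containerPort: 8081

# in the service
ports:
- name: http
  port: 80
  targetPort: http
- name: admin
  port: 8081
  targetPort: admin
</code></pre>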
|
<p>I've been tasked with evaluating container management solutions. I'm aware there is a large number of options, but we need a production-ready, on-premises solution. What are the options?</p>
| <p>In descending order, from most mature and battle-tested at scale to less so:</p>
<ul>
<li><a href="https://mesosphere.github.io/marathon/docs/native-docker.html" rel="nofollow">Marathon</a>, a Apache Mesos framework</li>
<li><a href="http://kubernetes.io/v1.1/" rel="nofollow">Kubernetes</a></li>
<li><a href="http://blog.docker.com/2015/11/swarm-1-0/" rel="nofollow">Docker Swarm</a></li>
<li>HashiCorp's <a href="https://www.nomadproject.io/" rel="nofollow">Nomad</a></li>
</ul>
|
<p>On my Kubernetes cluster on GKE, I have the following persistent volume claims (PVCs):</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: registry
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
</code></pre>
<p>and:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgresql-blobs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
</code></pre>
<p>Amongst others, I have the following persistent volume defined:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0003
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteOnce
- ReadOnlyMany
gcePersistentDisk:
pdName: registry
fsType: ext4
</code></pre>
<p>Now, both claims claimed the same volume:</p>
<pre><code>bronger:~$ kubectl describe pvc postgresql-blobs registry
Name: postgresql-blobs
Namespace: default
Status: Bound
Volume: pv0003
Labels: <none>
Capacity: 100Gi
Access Modes: RWO,ROX
Name: registry
Namespace: default
Status: Bound
Volume: pv0003
Labels: <none>
Capacity: 100Gi
Access Modes: RWO,ROX
</code></pre>
<p>Funny enough, the PV knows only about one of the claims:</p>
<pre><code>bronger:~$ kubectl describe pv pv0003
Name: pv0003
Labels: <none>
Status: Bound
Claim: default/postgresql-blobs
Reclaim Policy: Retain
Access Modes: RWO,ROX
Capacity: 100Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: registry
FSType: ext4
Partition: 0
ReadOnly: false
</code></pre>
<p>How can I prevent this from happening?</p>
| <p>This is a bug and is fixed by <a href="https://github.com/kubernetes/kubernetes/pull/16432" rel="nofollow">https://github.com/kubernetes/kubernetes/pull/16432</a></p>
|
<p>Is there a way to discover all the endpoints of a headless service from outside the cluster? </p>
<p>Preferably using DNS or Static IPs</p>
| <p>By <a href="http://kubernetes.io/third_party/swagger-ui/#!/api%2Fv1/watchNamespacedEndpointsList" rel="noreferrer">watching changes</a> to a list of Endpoints:</p>
<pre><code>GET /api/v1/watch/namespaces/{namespace}/endpoints
</code></pre>
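<p>If it is easier, the same endpoint list can also be fetched (rather than watched) with kubectl, e.g.:</p>
<pre><code>kubectl get endpoints my-headless-service -o yaml
</code></pre>
<p>(the service name is a placeholder). From outside the cluster, either call has to go through the apiserver's externally reachable address with appropriate credentials.</p>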
|
<p>I've been working with a 6 node cluster for the last few weeks without issue. Earlier today we ran into an open file issue (<a href="https://github.com/kubernetes/kubernetes/pull/12443/files" rel="nofollow">https://github.com/kubernetes/kubernetes/pull/12443/files</a>) and I patched and restarted kube-proxy. </p>
<p>Since then, all rc-deployed pods on ALL nodes BUT node-01 get stuck in the pending state, and there are no log messages stating the cause.</p>
<p>Looking at the docker daemon on the nodes, the containers in the pod are actually running and a delete of the rc removes them. It appears to be some sort of callback issue between the state according to kubelet and the kube-apiserver.</p>
<p>Cluster is running v1.0.3</p>
<p>Here's an example of the state</p>
<pre><code>docker run --rm -it lachie83/kubectl:prod get pods --namespace=kube-system -o wide
NAME READY STATUS RESTARTS AGE NODE
kube-dns-v8-i0yac 0/4 Pending 0 4s 10.1.1.35
kube-dns-v8-jti2e 0/4 Pending 0 4s 10.1.1.34
</code></pre>
<p>get events</p>
<pre><code>Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-i0yac
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-i0yac Pod scheduled {scheduler } Successfully assigned kube-dns-v8-i0yac to 10.1.1.35
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-jti2e Pod scheduled {scheduler } Successfully assigned kube-dns-v8-jti2e to 10.1.1.34
Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-jti2e
</code></pre>
<p>scheduler log</p>
<pre><code>I0916 06:25:42.897814 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-jti2e", UID:"c1cafebe-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670117", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-jti2e to 10.1.1.34
I0916 06:25:42.904195 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-i0yac", UID:"c1cafc69-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670118", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-i0yac to 10.1.1.35
</code></pre>
<p>tailing kubelet log file during pod create</p>
<pre><code>tail -f kubelet.kube-node-03.root.log.INFO.20150916-060744.10668
I0916 06:25:04.448916 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:25:24.449253 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:25:44.449522 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:04.449774 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:24.450400 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:26:44.450995 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:04.451501 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:24.451910 10668 config.go:253] Setting pods for source file : {[] 0 file}
I0916 06:27:44.452511 10668 config.go:253] Setting pods for source file : {[] 0 file}
</code></pre>
<p>kubelet process</p>
<pre><code>root@kube-node-03:/var/log/kubernetes# ps -ef | grep kubelet
root 10668 1 1 06:07 ? 00:00:13 /opt/bin/kubelet --address=10.1.1.34 --port=10250 --hostname_override=10.1.1.34 --api_servers=https://kube-master-01.sj.lithium.com:6443 --logtostderr=false --log_dir=/var/log/kubernetes --cluster_dns=10.1.2.53 --config=/etc/kubelet/conf --cluster_domain=prod-kube-sjc1-1.internal --v=4 --tls-cert-file=/etc/kubelet/certs/kubelet.pem --tls-private-key-file=/etc/kubelet/certs/kubelet-key.pem
</code></pre>
<p>node list</p>
<pre><code>docker run --rm -it lachie83/kubectl:prod get nodes
NAME LABELS STATUS
10.1.1.30 kubernetes.io/hostname=10.1.1.30,name=node-1 Ready
10.1.1.32 kubernetes.io/hostname=10.1.1.32,name=node-2 Ready
10.1.1.34 kubernetes.io/hostname=10.1.1.34,name=node-3 Ready
10.1.1.35 kubernetes.io/hostname=10.1.1.35,name=node-4 Ready
10.1.1.42 kubernetes.io/hostname=10.1.1.42,name=node-5 Ready
10.1.1.43 kubernetes.io/hostname=10.1.1.43,name=node-6 Ready
</code></pre>
| <p>The issue turned out to be an MTU issue between the node and the master. Once that was fixed the problem was resolved.</p>
|
<p>With docker, I can pass the <code>log-driver=syslog</code> command line option to forward container logs to syslog. How do I pass these docker arguments via the Kubernetes yaml/json descriptor?</p>
| <p>Start with the available documentation: in your case, <a href="http://kubernetes.io/v1.1/docs/user-guide/logging.html" rel="nofollow">logging</a> and <a href="http://kubernetes.io/v1.1/docs/user-guide/volumes.html" rel="nofollow">volumes</a>. Taking these two sources together, we arrive at something like the following:</p>
<pre><code>...
containers:
- name: syslogtest
image: ubuntu:14.04
volumeMounts:
- name: logvol
mountPath: /dev/log
readOnly: false
volumes:
- name: logvol
  hostPath:
    path: /dev/log
...
</code></pre>
|
<p>I followed the <a href="http://kubernetes.io/v1.0/examples/rbd/" rel="nofollow">example</a> to use rbd in Kubernetes, but I cannot get it to work. Can anyone help me? The error:</p>
<pre><code>Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702 1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770 1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458 1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
</code></pre>
<p>And The template file I used rbd-with-secret.json:</p>
<pre><code>core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json
{
"apiVersion": "v1",
"id": "rbdpd2",
"kind": "Pod",
"metadata": {
"name": "rbd2"
},
"spec": {
"nodeSelector": {"kubernetes.io/hostname" :"10.12.1.97"},
"containers": [
{
"name": "rbd-rw",
"image": "kubernetes/pause",
"volumeMounts": [
{
"mountPath": "/mnt/rbd",
"name": "rbdpd"
}
]
}
],
"volumes": [
{
"name": "rbdpd",
"rbd": {
"monitors": [
"10.14.1.33:6789",
"10.14.1.35:6789",
"10.14.1.36:6789"
],
"pool": "rbd",
"image": "foo",
"user": "admin",
"secretRef": {"name": "ceph-secret"},
"fsType": "ext4",
"readOnly": true
}
}
]
}
}
</code></pre>
<p>The secret:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK
</code></pre>
<p>the ceph config is in /etc/ceph/</p>
<pre><code>core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph
total 20K
drwxr-xr-x 2 root root 4.0K Nov 6 18:38 .
drwxr-xr-x 26 root root 4.0K Nov 9 17:07 ..
-rw------- 1 root root 63 Nov 4 11:27 ceph.client.admin.keyring
-rw-r--r-- 1 root root 264 Nov 6 18:38 ceph.conf
-rw-r--r-- 1 root root 384 Nov 6 14:35 ceph.conf.orig
-rw------- 1 root root 0 Nov 4 11:27 tmpkqDKwf
</code></pre>
<p>and the key is:</p>
<pre>
core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat
/etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw==
</pre>
| <p>You'll get "no volume plugins matched" if the rbd command isn't installed and in the path. </p>
<p>As the example specifies, you need to ensure that ceph is installed on your Kubernetes nodes. For instance, in Fedora:
$ sudo yum -y install ceph-common</p>
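<p>Since the kubelet shells out to the <code>rbd</code> binary on the node itself, a quick way to verify the prerequisite is in place is something like the following (a minimal sketch; the package name for your distribution may differ, and on CoreOS you would need another installation method entirely):</p>
<pre><code># run on each Kubernetes node
$ which rbd || echo "rbd binary not found in PATH"
# Debian/Ubuntu (assumption: ceph-common also provides rbd there)
$ sudo apt-get install -y ceph-common
</code></pre>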
<p>I'll file an issue to clarify the error messages.</p>
|
<p>I'm following the container engine walkthrough and I see that my VM Instance has a min CPU usage of ~80%. However, if I ssh into this box and run 'top' I see a much lower utilization. Can someone explain this to me as I must be missing something simple. Thank you. <a href="https://i.stack.imgur.com/n2Ut2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n2Ut2.png" alt="Google Developers Console"></a></p>
| <p>There is a known (benevolent) bug in the ingress controller that is automatically added to your cluster in Kubernetes 1.1.1. If you are not using the controller, you can scale the number of replicas to zero:</p>
<pre><code>kubectl scale rc l7-lb-controller --namespace=kube-system --replicas=0
</code></pre>
<p>which should make your CPU usage go back to a normal level. </p>
<p>The ingress controller isn't doing any harm (other than affecting monitoring metrics) and will be automatically nice'd by the kernel if you run other pods on the same node (so it isn't affecting performance of your cluster). </p>
<p>This bug will be fixed in the upcoming 1.1.2 release of Kubernetes. </p>
|
<p>I have a noob question. If I'm using a Docker image that uses a folder located on the host to do something, where should that folder be located in the Kubernetes cluster? I'm OK doing this with Docker since I know where my host filesystem is, but I get lost when I'm on a Kubernetes cluster.</p>
<p>Actually, I don't know if this is the best approach. What I'm trying to do is build a development environment for a <strong>PHP backend</strong>. Since I want every person to be able to run a container environment <strong>with their own files</strong> (which are on their computers), I'm trying to build a sidecar container so that when launching the pod I can pass the files to the PHP container.</p>
<p>The problem is that I'm running Kubernetes to build a development environment for my company using a Vagrant (CoreOS + Kubernetes) solution, since we don't have a cloud service right now, so I can't use a persistent disk. I tried NFS, but it seems to be too much for what I want (just passing some information to the pod regardless of the PC where I am). I also tried to use hostPath in Kubernetes, but the problem is that the machines from which I want to connect to the containers are located outside of the Kubernetes cluster (Vagrant + CoreOS + Kubernetes), so I'm trying to expose some containers on public IPs, but I cannot figure out how to pass the files (located on the machines outside of the cluster) to the containers.</p>
<p>Thanks for your help, I appreciate your comments.</p>
| <p>Not so hard, actually. Checking my gist may give you some tips:</p>
<p><a href="https://gist.github.com/resouer/378bcdaef1d9601ed6aa" rel="nofollow">https://gist.github.com/resouer/378bcdaef1d9601ed6aa</a></p>
<p>See, do not try to consume files from outside, just package them in a docker image, and consume them by sidecar mode.</p>
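<p>To make the sidecar idea concrete, here is a minimal sketch (all names, images and paths are hypothetical): the code image is built with the project files baked in (e.g. a Dockerfile that does <code>COPY . /code</code>), and on start it copies them into an <code>emptyDir</code> volume that the PHP container serves from.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: php-dev
spec:
  volumes:
  - name: code
    emptyDir: {}
  containers:
  # sidecar: copies the baked-in files into the shared volume, then stays alive
  - name: code-sidecar
    image: my-registry/my-php-code:latest
    command: ["/bin/sh", "-c", "cp -r /code/. /shared/ && tail -f /dev/null"]
    volumeMounts:
    - name: code
      mountPath: /shared
  # main container: serves whatever the sidecar copied into the shared volume
  - name: php
    image: php:5.6-apache
    ports:
    - containerPort: 80
    volumeMounts:
    - name: code
      mountPath: /var/www/html
</code></pre>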
|
<p>After I manually install the NFS client package on each node, it works.
But in GKE, slave nodes can be scaled in and out. After a new slave node is created, the NFS client package is missing again.</p>
<p>Is there any way to install software packages when Kubernetes spins up a new slave node?</p>
| <p>Starting last week, new GKE clusters should be on created on 1.1.1 by default, and the <code>nfs-common</code> package is installed on all 1.1.1 clusters. (For existing clusters, you'll need to wait until the hosted master is upgraded, then initiate a node upgrade.)</p>
<p>See <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/nfs/README.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/nfs/README.md</a> for a larger example.</p>
|
<p>I am now experimenting with using Kubernetes and Docker to provision services such as WordPress, Spark and Storm on 10 physical machines.</p>
<p>But after many rounds of launching and terminating Docker containers, the used memory keeps increasing even after I kill all the containers via Kubernetes delete or Docker kill commands.</p>
<p>I noticed that there were lots of containers with status Exited, and after I remove all the exited containers, it frees a lot of memory.</p>
<p>I came up with a solution, which is to add a cron job on each Docker host that removes exited containers.</p>
<p>But is this appropriate? If not, how can I release the memory?</p>
| <p>It is not recommended to use external container garbage collection scripts. Kubernetes relies on exited containers as tombstones to reconstruct the pod status and/or serve logs. Even if you don't care about container logs, if you remove the exited containers before kubernetes examines them and properly records the status, it may cause inaccurate status and restart decisions. This reliance may be eliminated <a href="https://github.com/kubernetes/kubernetes/issues/489" rel="nofollow">in the future</a>.</p>
<p>For now, the best way to achieve more aggressive container garbage collection is through adjusting the parameters, as detailed in this <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/garbage-collection.md" rel="nofollow">guide</a>.</p>
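<p>The parameters in question are kubelet flags; a minimal sketch (flag names as documented in that guide, values purely illustrative, not recommendations):</p>
<pre><code>kubelet ... \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
</code></pre>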
<p>FYI, there are also open issues to improve the garbage collection behavior. <a href="https://github.com/kubernetes/kubernetes/issues/13287" rel="nofollow">#13287</a> is one example.</p>
<p>If you <em>really</em> want to clean up the containers yourself, it is safe to remove containers associated with deleted pods. Removing multiple exited containers that belong to the same pod/container while keeping the most recent few exited containers is also relatively low-risk.</p>
|
<p>I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been <a href="https://stackoverflow.com/questions/33068639/kubernetes-pods-some-die-after-running-for-a-day">noted in this post</a> that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue.</p>
<p>The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this:</p>
<pre><code>kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
kubectl delete service --namespace=kube-system monitoring-influxdb
kubectl delete service --namespace=kube-system monitoring-grafana
</code></pre>
<p>Then if I say:</p>
<pre><code>kubectl get pods --namespace=kube-system
</code></pre>
<p>I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.</p>
| <p>You probably need to remove the manifest files for influxdb from the <code>/etc/kubernetes/addons/</code> directory on your "master" host. Many of the <code>kube-up.sh</code> implementations use a service (usually at <code>/etc/kubernetes/kube-master-addons.sh</code>) that runs periodically and makes sure that all the manifests in <code>/etc/kubernetes/addons/</code> are active.</p>
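<p>A minimal sketch of that clean-up (the exact manifest file names vary between releases, so locate them first rather than assuming a path):</p>
<pre><code># on the master
$ grep -rl influx /etc/kubernetes/addons/
# remove whichever manifest files the grep reports
$ sudo rm /etc/kubernetes/addons/&lt;path-reported-by-grep&gt;
# then delete the already-running resources as before
$ kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
</code></pre>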
<p>You can also restart your cluster, but run <code>export ENABLE_CLUSTER_MONITORING=none</code> before running <code>kube-up.sh</code>. You can see other environment settings that impact the cluster <code>kube-up.sh</code> builds at <code>cluster/aws/config-default.sh</code></p>
|
<p>After I manually install the NFS client package on each node, it works.
But in GKE, slave nodes can be scaled in and out. After a new slave node is created, the NFS client package is missing again.</p>
<p>Is there any way to install software packages when Kubernetes spins up a new slave node?</p>
| <p>Please also see <a href="https://github.com/kubernetes/kubernetes/issues/16741" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/16741</a> where we're discussing nfs and pretty much exactly this problem (amongst others)</p>
|
<p>I have a new Kubernetes cluster on AWS that was built using the <code>kube-up</code> script from v1.1.1. I can successfully access the Elasticsearch/Kibana/KubeUI/Grafana endpoints, but cannot access Heapster/KubeDNS/InfluxDB from my machine, through the API proxy. I have seen some ancillary issues related to this on the K8S project, but no clear identification as to what's going on. From what I can gather, everything is running fine so I'm not sure what is wrong here? I'd really like to use the embedded monitoring of Grafana/Influx/Heapster but the Grafana dashboard is just blank with an series error.</p>
<p><strong>Kubernetes version</strong></p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
</code></pre>
<p><strong>Cluster-info</strong></p>
<pre><code>$ kubectl cluster-info
Kubernetes master is running at https://MASTER_IP
Elasticsearch is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
</code></pre>
<p><strong>Accessing influxDB from the API proxy URL above</strong></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "no endpoints available for service \"monitoring-influxdb\"",
"reason": "ServiceUnavailable",
"code": 503
}
</code></pre>
<p><strong>Endpoint details from the Host</strong></p>
<pre><code>$ curl http://localhost:8080/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb
{
"kind": "Endpoints",
"apiVersion": "v1",
"metadata": {
"name": "monitoring-influxdb",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb",
"uid": "2f75b259-8a22-11e5-b248-028ff74b9b1b",
"resourceVersion": "131",
"creationTimestamp": "2015-11-13T16:18:33Z",
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "InfluxDB"
}
},
"subsets": [
{
"addresses": [
{
"ip": "10.244.1.4",
"targetRef": {
"kind": "Pod",
"namespace": "kube-system",
"name": "monitoring-influxdb-grafana-v2-n6jx1",
"uid": "2f31ed90-8a22-11e5-b248-028ff74b9b1b",
"resourceVersion": "127"
}
}
],
"ports": [
{
"name": "http",
"port": 8083,
"protocol": "TCP"
},
{
"name": "api",
"port": 8086,
"protocol": "TCP"
}
]
}
]
}
</code></pre>
<p><strong>Querying the service from the Host</strong> </p>
<pre><code>$ curl -IL 10.244.1.4:8083
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 13751
Content-Type: text/html; charset=utf-8
Last-Modified: Fri, 14 Nov 2014 21:55:58 GMT
Date: Tue, 17 Nov 2015 21:31:48 GMT
</code></pre>
<p><strong>Monitoring-InfluxDB Service</strong></p>
<pre><code>$ curl http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "monitoring-influxdb",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/monitoring-influxdb",
"uid": "2f715831-8a22-11e5-b248-028ff74b9b1b",
"resourceVersion": "60",
"creationTimestamp": "2015-11-13T16:18:33Z",
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "InfluxDB"
}
},
"spec": {
"ports": [
{
"name": "http",
"protocol": "TCP",
"port": 8083,
"targetPort": 8083
},
{
"name": "api",
"protocol": "TCP",
"port": 8086,
"targetPort": 8086
}
],
"selector": {
"k8s-app": "influxGrafana"
},
"clusterIP": "10.0.35.241",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
</code></pre>
<p><strong>Pod Details</strong></p>
<pre><code>$ kubectl describe pod --namespace=kube-system monitoring-influxdb-grafana-v2-n6jx
Name: monitoring-influxdb-grafana-v2-n6jx1
Namespace: kube-system
Image(s): gcr.io/google_containers/heapster_influxdb:v0.4,beta.gcr.io/google_containers/heapster_grafana:v2.1.1
Node: ip-172-20-0-44.us-west-2.compute.internal/172.20.0.44
Start Time: Fri, 13 Nov 2015 08:21:36 -0800
Labels: k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v2
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: monitoring-influxdb-grafana-v2 (1/1 replicas created)
Containers:
influxdb:
Container ID: docker://564724318ca81d33d6079978d24f78b3c6ff8eb08a9023c845e250eeb888aafd
Image: gcr.io/google_containers/heapster_influxdb:v0.4
Image ID: docker://8b8118c488e431cc43e7ff9060968d88402cc6c38a6390c4221352403aa7ac1b
QoS Tier:
memory: Guaranteed
cpu: Guaranteed
Limits:
memory: 200Mi
cpu: 100m
Requests:
memory: 200Mi
cpu: 100m
State: Running
Started: Fri, 13 Nov 2015 08:22:55 -0800
Ready: True
Restart Count: 0
Environment Variables:
grafana:
Container ID: docker://518dea564a0ee014345e9006da6113fb6584ff1ebc6d0cc9609a608abc995f45
Image: beta.gcr.io/google_containers/heapster_grafana:v2.1.1
Image ID: docker://200e77ba156a5a86879e49667b97afe84dca42b5bb67ab1e06217e6a19c5a6a6
QoS Tier:
cpu: Guaranteed
memory: Guaranteed
Limits:
memory: 100Mi
cpu: 100m
Requests:
cpu: 100m
memory: 100Mi
State: Running
Started: Fri, 13 Nov 2015 08:22:35 -0800
Ready: True
Restart Count: 0
Environment Variables:
INFLUXDB_SERVICE_URL: http://monitoring-influxdb:8086
GF_AUTH_BASIC_ENABLED: false
GF_AUTH_ANONYMOUS_ENABLED: true
GF_AUTH_ANONYMOUS_ORG_ROLE: Admin
GF_SERVER_ROOT_URL: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
Conditions:
Type Status
Ready True
Volumes:
influxdb-persistent-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
grafana-persistent-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-bo89c:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-bo89c
No events.
</code></pre>
| <p>Unfortunately those URLs are incomplete. Influx's ports are named, so you need to say which port you want.</p>
<p><code>https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:http</code> or <code>https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:api</code></p>
<p>There's a bug open to give better errors for this.</p>
|
<p>Using Kubernetes -- Google Container Engine setup -- within the same Google Cloud cluster, I have a front-end service (nginx + AngularJS) and a REST API service (NodeJS API). I don't want to expose the NodeJS API service on a public domain, so 'ServiceType' is set to 'ClusterIP' only. How do we infer NODE_API_SERVICE_HOST and NODE_API_SERVICE_PORT -- i.e. {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT -- inside the AngularJS program?</p>
<pre><code> (function() {
'use strict';
angular
.module('mymodule', [])
.config(["RestangularProvider", function(RestangularProvider) {
var apiDomainHost = process.env.NODE_API_SERVICE_HOST;
var apiDomainPort = process.env.NODE_API_SERVICE_PORT;
RestangularProvider.setBaseUrl('https://'+apiDomainHost+':'+apiDomainPort+'/node/api/v1'); }]);
})();
</code></pre>
<p>Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side. </p>
<p>My Dockerfile is simple; it inherits the nginx stop/start controls.</p>
<pre><code>FROM nginx
COPY public/angular-folder /usr/share/nginx/html/projectSubDomainName
</code></pre>
<p>Page 43 of 62 in <a href="http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes" rel="nofollow">http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes</a> explains that we can invoke the command sh:</p>
<pre><code> "containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
</code></pre>
<p>Is this sustainable across pod restarts? If so, how do I get this variable in AngularJS?</p>
| <blockquote>
<p>Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side.</p>
</blockquote>
<p>If the client is outside the cluster, the only way it will be able to access the NodeJS API is if you expose it to the client's network, which is probably the public internet. If you're concerned about the security implications of that, there are a number of different ways to authenticate the service, such as using <a href="http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html" rel="nofollow">nginx auth_basic</a>.</p>
<blockquote>
<pre><code>"containers": {
{
"name":"container-name",
"command": {
"sh" , "sudo nginx Command-line parameters
'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'"
}
}
}
</code></pre>
<p>Is this sustainable across pod restarts? If so, how do I get this variable in AngularJS?</p>
</blockquote>
<p>Yes, service IP & port is stable, even across pod restarts. As for how to communicate the NODE_API_SERVICE_{HOST,PORT} variables to the client, you will need to inject them from a process running server side (within your cluster) into the response (e.g. directly into the JS code, or as a JSON response).</p>
|
<p>I'd like to implement a sticky-session Ingress controller. Cookies or IP hashing would both be fine; I'm happy as long as the same client is <em>generally</em> routed to the same pod.</p>
<p>What I'm stuck on: it seems like the Kubernetes service model means my connections are going to be proxied randomly no matter what. I can configure my Ingress controller with session affinity, but as soon as the connection gets past that and hits a service, <code>kube-proxy</code> is just going to route me randomly. There's the <code>sessionAffinity: ClientIP</code> flag on services, but that doesn't help me -- the Client IP will always be the internal IP of the Ingress pod.</p>
<p>Am I missing something? Is this possible given Kubernetes' current architecture?</p>
| <p>An ingress controller can completely bypass kube-proxy. The haproxy controller for example, <a href="https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L155" rel="nofollow">does this and goes straight to endpoints</a>. However it <a href="https://github.com/kubernetes/contrib/pull/223" rel="nofollow">doesn't use the Ingress in the typical sense</a>. </p>
<p>You could <a href="https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/README.md" rel="nofollow">do the same with the nginx controller</a>; all you need is to look up the endpoints and insert them instead of the DNS name it currently uses (i.e. swap <a href="https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/controller.go#L52" rel="nofollow">this line</a> for a pointer to an upstream that contains the endpoints).</p>
|
<p>I'm using Kubernetes and I'm trying to create an <a href="http://kubernetes.io/v1.1/docs/user-guide/ingress.html" rel="noreferrer">ingress resource</a>. I create it using:</p>
<pre><code>$ kubectl create -f my-ingress.yaml
</code></pre>
<p>I wait a while and a load balancer doesn't seem to be created. Running:</p>
<pre><code>$ kubectl describe ing my-ingress
</code></pre>
<p>returns:</p>
<pre><code>Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
46s 46s 1 {loadbalancer-controller } ADD my-ingress
23s 11s 2 {loadbalancer-controller } GCE :Quota googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 3.0
</code></pre>
<p>Is there a way to increase the number of backend services that can be created?</p>
| <p>You need to increase the quota assigned for your project. Please see <a href="https://cloud.google.com/compute/docs/resource-quotas" rel="noreferrer">https://cloud.google.com/compute/docs/resource-quotas</a> for the explanation of resource quotas, and follow the link on that page to check and/or request a quota increase.</p>
|
<p>I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.</p>
<p>There's a confound though that I am running on a windows machine.</p>
<p>Their "getting started" documentation in github says you have to run Linux to use kubernetes.</p>
<p>As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance.</p>
<p>From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to</p>
<ol>
<li>Start docker, boot 'default' machine. </li>
<li>Create kubernetes container - configure to communicate with the existing docker 'default' machine</li>
<li>Use kubernetes to manage existing docker. </li>
</ol>
<p>Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a vagrant instance. Does that mean docker, etcd, & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it?)</p>
<p>I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.</p>
| <p>With Windows, you need <strong><a href="https://docs.docker.com/machine/" rel="nofollow">docker-machine</a></strong> and boot2docker VMs to run anything docker related.<br>
There is no (not yet) "docker for Windows".</p>
<p>Note that <a href="https://github.com/kubernetes/kubernetes/issues/7428" rel="nofollow">issue 7428</a> mentioned "Can't run kubernetes within boot2docker".<br>
So even when you <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md" rel="nofollow">follow instructions</a> (from a default VM created with docker-machine), you might still <a href="https://gist.github.com/acroca/dac7cdc196b26c6ae65d" rel="nofollow">get errors</a>:</p>
<pre><code>➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
</code></pre>
<p>The alternative would be to try on a full-fledge Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a <a href="http://tinycorelinux.net/" rel="nofollow">TinyCore distro</a>).</p>
|
<p>Still new to containers and Kubernetes here, but I am dabbling with deploying a cluster on Google Container Engine and was wondering if you can use a Docker Hub hosted image to deploy containers, so in my .yaml configuration file I'd say:</p>
<pre><code> ...
image: hub.docker.com/r/my-team/my-image:latest
...
</code></pre>
<p>Is this possible? Or does one have to download/build the image locally and then upload it to Google Container Registry?</p>
<p>Thanks so much</p>
| <p>Yes, it is possible. The Replication Controller template or Pod spec image isn't special. If you specify <code>image: redis</code> you will get the latest tag of the official Docker Hub library Redis image, just as if you did <code>docker pull redis</code>.</p>
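<p>So, a minimal sketch for the snippet in the question (using the asker's <code>my-team/my-image</code> name): for Docker Hub images you drop the registry hostname entirely, or optionally write it as <code>docker.io/...</code>:</p>
<pre><code>...
containers:
- name: my-app
  image: my-team/my-image:latest   # pulled from Docker Hub, same as `docker pull my-team/my-image`
...
</code></pre>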
|
<p>I have already googled this subject and found a few threads. Based on these threads I have followed the steps below, but I am facing a problem.</p>
<p>Basically, I want to create a docker image for mysql and then connect to it from my host machine (Mac OS X).</p>
<p>Based on <a href="https://stackoverflow.com/questions/33001750/connect-to-mysql-in-a-docker-container-from-the-host">this</a> post , I have to share the mysql unix socket with the host. towards this I have done the following steps</p>
<pre><code>1. Start docker quick terminal
2. docker run --name mysql -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest
3. docker exec -it mysql bash
4. mysql -uroot -p
5. create database MyDB;
6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
7. exit;
8. mkdir /Users/abhi/host
9. docker run -it -v /host:/shared mysql/mysql-server:latest
</code></pre>
<p>Now I get the error</p>
<pre><code>MacBook-Pro:~$ docker run -it -v /Users/abhi/host:/shared mysql/mysql-server
error: database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
</code></pre>
<p>But you see that I have provided the password and initialized my database.</p>
<p>All I want is that from my host machine, I can connect to the mysql database running inside docker.</p>
<p>EDIT:: ----- solution which worked ------</p>
<p>Thanks RICO. Finally the steps which worked for me are</p>
<pre><code>1. Start docker quick terminal
2. docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest
3. docker exec -it mysql bash
4. mysql -uroot -p
5. create database MyDB;
or:
CREATE USER 'root'@'%' IDENTIFIED BY 'root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
7. exit;
8. docker-machine env default
</code></pre>
<p>Use the IP address obtained in step 8. port is 3306, user is root, password is password, database is MyDB.</p>
<p>Connection is successful!</p>
| <p>So basically you need to expose the mysql port to your host:</p>
<pre><code>docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest
</code></pre>
<p>Then you can access from your host using the mysql command line:</p>
<pre><code>mysql -h127.0.0.1 -ppassword -uroot
</code></pre>
<p>Not sure why you are trying to run another container to connect (perhaps you meant linking two containers)</p>
<p>If you are using Mac (or Windows) with docker-machine you want to connect to the IP address of your docker-machine VM. For example:</p>
<pre><code>$ docker-machine ssh default
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.9.0, build master : 16e4a2a - Tue Nov 3 19:49:22 UTC 2015
Docker version 1.9.0, build 76d6bc9
docker@default:~$ ifconfig eth1
eth1 Link encap:Ethernet HWaddr 08:00:27:E6:C7:20
inet addr:192.168.99.100 Bcast:192.168.99.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fee6:c720/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18827 errors:0 dropped:0 overruns:0 frame:0
TX packets:10280 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1791527 (1.7 MiB) TX bytes:2242596 (2.1 MiB)
</code></pre>
<p>Then connect to:</p>
<pre><code>mysql -h192.168.99.100 -ppassword -uroot
</code></pre>
|
<p>I set up a 4-node cluster (1 master, 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following:</p>
<blockquote>
<p>{"user":"bob", "readonly": true, "namespace": "projectgino"}</p>
</blockquote>
<p>I want user bob to only be able to look at resources in projectgino. I'm having problems using kubectl command line as user Bob. When I run the following command </p>
<blockquote>
<p>kubectl get pods --token=xxx --namespace=projectgino --server=<a href="https://xxx.xxx.xxx.xx:6443" rel="nofollow">https://xxx.xxx.xxx.xx:6443</a></p>
</blockquote>
<p>I get the following error</p>
<blockquote>
<p>error: couldn't read version from server: the server does not allow access to the requested resource</p>
</blockquote>
<p>I traced the kubectl command line code and the problem seems to be caused by kubectl calling the function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the version of Kubernetes. This call fails because the REST path doesn't contain the namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and it fails on the namespace check.</p>
<p>I haven't moved up to the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area.</p>
<p>Does anybody know how to configure Kubernetes to get around the problem?</p>
| <p>This is missing functionality in the <a href="https://github.com/kubernetes/kubernetes/blob/e024e55e8e54628e76b74b16a74435dffa761d99/pkg/auth/authorizer/abac/abac_test.go#L113" rel="nofollow">ABAC authorizer</a>. The fix is in progress: <a href="https://github.com/kubernetes/kubernetes/pull/16148" rel="nofollow">#16148</a>.</p>
<p>As for a workaround, from <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/admin/authorization.md#request-attributes" rel="nofollow">the authorization doc</a>:</p>
<blockquote>
<p>For miscellaneous endpoints, like
/version, the resource is the empty string.</p>
</blockquote>
<p>So you may be able to solve by defining a policy:</p>
<blockquote>
<p>{"user":"bob", "readonly": true, "resource": ""}</p>
</blockquote>
<p>(note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.</p>
|
<p>Is it in any way possible to configure a Kubernetes cluster that utilizes resources from multiple IaaS providers at the same time, e.g. a cluster running partially on GCE and AWS? Or a Kubernetes cluster running on your bare metal and an IaaS provider? Maybe in combination with some other tools like Mesos? Are there any other tools like Kubernetes that provide this capability? If it's not possible with Kubernetes, what would one have to do in order to provide that feature?</p>
<p>Any help or suggestions would be very much appreciated.</p>
| <p>There is currently no supported way to achieve what you're trying to do. But there is a Kubernetes project under way to address it, which goes under the name of Kubernetes Cluster Federation, alternatively known as "Ubernetes". Further details are available here:</p>
<p><a href="http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole" rel="nofollow">http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole</a>
<a href="http://tinyurl.com/ubernetesv2" rel="nofollow">http://tinyurl.com/ubernetesv2</a>
<a href="http://tinyurl.com/ubernetes-wg-notes" rel="nofollow">http://tinyurl.com/ubernetes-wg-notes</a></p>
|
<p>Standard practice for a rolling update of hosts behind load balancer is to gracefully take the hosts out of rotation. This can be done by marking the host "un-healthy" and ensuring the host is no longer receiving requests from the load balancer. </p>
<p>Does Kubernetes do something similar for pods managed by a ReplicationController and servicing a LoadBalancer Service?</p>
<p>I.e., does Kubernetes take a pod out of the LoadBalancer rotation, ensure incoming traffic has died-down, and only then issue pod shutdown?</p>
| <p>Actually, once you delete the pod, it will be in "terminating" state until it is destroyed (after terminationGracePeriodSeconds) which means it is removed from the service load balancer, but still capable of serving existing requests.</p>
<p>We also use "readiness" health checks, and preStop is synchronous, so you could make your preStop hook mark the readiness of the pod to be false, and then wait for it to be removed from the load balancer, before having the preStop hook exit.</p>
|
<p>I am using <strong>Kubernetes</strong> to deploy a <strong>Rails application</strong> to <strong>Google Container Engine</strong>.</p>
<p>The database is using <strong>Google Cloud SQL</strong>.</p>
<p>I know the database's ip address and set it into my Kubernetes config file:</p>
<pre><code># web-controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: web
name: web-controller
spec:
replicas: 2
selector:
name: web
template:
metadata:
labels:
name: web
spec:
containers:
- name: web
image: gcr.io/my-project-id/myapp:v1
ports:
- containerPort: 3000
name: http-server
env:
- name: RAILS_ENV
value: "production"
- name: DATABASE_URL
value: "mysql2://[my_username]:[my_password]@[database_ip]/myapp"
</code></pre>
<p>Then create:</p>
<pre><code>$ kubectl create -f web-controller.yml
</code></pre>
<p>From the pod log I saw:</p>
<pre><code>$ kubectl logs web-controller-038dl
Lost connection to MySQL server at 'reading initial communication packet', system error: 0
/usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `connect'
/usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `initialize'
...
</code></pre>
<p>I can see the <strong>LoadBalancer Ingress</strong> IP address on the <strong>Kubernetes UI</strong> page in the web service section.</p>
<p>From the <strong>Google Developers Console -> Storage -> SQL</strong>, I selected the running db and clicked the link. From <strong>Access Control -> Authorization -> Authorized Networks</strong>, I added a new item with that IP. But the result was the same.</p>
| <p>You would need to create the SSL cert like Yu-Ju Hong said, then you would have to tell Ruby to use the certificate when connecting, something like:</p>
<p><a href="http://makandracards.com/makandra/1701-use-ssl-for-amazon-rds-mysql-and-your-rails-app" rel="nofollow">http://makandracards.com/makandra/1701-use-ssl-for-amazon-rds-mysql-and-your-rails-app</a></p>
<p>The relevant bit is:</p>
<p><code>sslca: /path/to/mysql-ssl-ca-cert.pem</code></p>
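<p>For example, a minimal <code>config/database.yml</code> sketch (credentials and the cert path are placeholders; the cert would need to be baked into the image or mounted into the pod):</p>
<pre><code>production:
  adapter: mysql2
  host: &lt;database_ip&gt;
  username: &lt;my_username&gt;
  password: &lt;my_password&gt;
  database: myapp
  sslca: /path/to/mysql-ssl-ca-cert.pem
</code></pre>
<p>If you prefer to keep using <code>DATABASE_URL</code>, the same option can usually be passed as a query parameter, e.g. <code>mysql2://user:pass@host/myapp?sslca=/path/to/mysql-ssl-ca-cert.pem</code> -- though it is worth verifying that your Rails version forwards it to the adapter.</p>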
|
<p>We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how?</p>
| <p>Looking at your question, I believe by account you mean project.</p>
<p>The <a href="https://cloud.google.com/container-registry/docs/#pulling_from_the_registry" rel="nofollow">command</a> for pulling an image from the registry is:</p>
<pre><code>$ gcloud docker pull gcr.io/your-project-id/example-image
</code></pre>
<p>This means as long as your account is a member of the project which the image belongs to, you can pull the image from that project to any other projects that your account is a member of.</p>
|
<p>I have a Kubernetes cluster running on Google Compute Engine and I would like to assign static IP addresses to my external services (<code>type: LoadBalancer</code>). I am unsure about whether this is possible at the moment or not. I found the following sources on that topic:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md#type-loadbalancer" rel="noreferrer">Kubernetes Service Documentation</a> lets you define an external IP address, but it fails with <em>cannot unmarshal object into Go value of type []v1.LoadBalancerIngress</em></li>
<li>The <a href="https://stackoverflow.com/questions/29770679/how-to-expose-kubernetes-service-to-public-without-hardcoding-to-minion-ip">publicIPs field</a> seems to let me specify external IPs, but it doesn't seem to work either</li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/10323" rel="noreferrer">This Github issue</a> states that what I'm trying to do is not supported yet, but will be in Kubernetes v1.1</li>
<li>The <a href="http://kubernetes.io/v1.0/docs/user-guide/services.html#choosing-your-own-ip-address" rel="noreferrer">clusterIP field</a> also lets me specify an IP address, but fails with "<em>provided IP is not in the valid range</em>"</li>
</ul>
<p>I feel like the usage of static IPs is quite important when setting up web services. Am I missing something here? I'd be very grateful if somebody could enlighten me here!</p>
<p>EDIT: For clarification: I am not using Container Engine, I set up a cluster myself using the official installation instructions for Compute Engine. All IP addresses associated with my k8s services are marked as "ephemeral", which means recreating a kubernetes service may lead to a different external IP address (which is why I need them to be static).</p>
| <p><strong>TL;DR</strong> Google Container Engine running Kubernetes <strong>v1.1</strong> supports <code>loadBalancerIP</code> just mark the auto-assigned IP as <strong>static</strong> first.</p>
<p>Kubernetes v1.1 supports <a href="https://kubernetes.io/docs/api-reference/v1.7/#servicespec-v1-core" rel="noreferrer">externalIPs</a>:</p>
<pre><code>apiVersion: v1
kind: Service
spec:
type: LoadBalancer
loadBalancerIP: 10.10.10.10
...
</code></pre>
<p>So far there isn't a really good consistent documentation on how to use it on GCE. What is sure is that this IP must first be one of your pre-allocated <strong>static</strong> IPs.</p>
<p>The <a href="https://cloud.google.com/compute/docs/load-balancing/http/cross-region-example" rel="noreferrer">cross-region load balancing</a> documentation is mostly for Compute Engine and not Kubernetes/Container Engine, but it's still useful especially the part "Configure the load balancing service".</p>
<p>If you just create a Kubernetes LoadBalancer on GCE, it will create a Compute Engine > Network > Network load balancing > Forwarding Rule pointing to a target pool made of the machines in your cluster (normally only those running the Pods matching the service selector). It looks like deleting a namespace doesn't nicely clean up those created rules.</p>
<hr>
<h2>Update</h2>
<p>It is actually now supported (even though under documented):</p>
<ol>
<li>Check that you're running Kubernetes 1.1 or later (under <a href="https://console.cloud.google.com/project/_/kubernetes/list" rel="noreferrer">GKE</a> edit your cluster and check "Node version")</li>
<li>Allocate static IPs under <a href="https://console.cloud.google.com/project/_/networking/addresses/list" rel="noreferrer">Networking > External IP addresses</a>, either:
<ul>
<li>Deploy once without <code>loadBalancerIP</code>, wait until you have an external IP allocated when you run <code>kubectl get svc</code>, then look up that IP in the list on that page and change it from <em>Ephemeral</em> to <em>Static</em>.</li>
<li>Click "Reserve a static address", Regional, in the region of your cluster, attached to None.</li>
</ul></li>
<li>Edit your <em>LoadBalancer</em> to have <code>loadBalancerIP=10.10.10.10</code> as above (adapt to the IP that was given to you by Google).</li>
</ol>
<p>Now if you delete your LoadBalancer or even your namespace, it'll preserve that IP address upon re-deploying on that cluster.</p>
<hr>
<h2>Update 2016-11-14</h2>
<p>See also <a href="https://beroux.com/english/articles/kubernetes/?part=2" rel="noreferrer">Kubernetes article</a> describing how to set up a static IP for single or multiple domains on Kubernetes.</p>
|
<p>All</p>
<p>I am running computational Monte Carlo jobs on Google Compute Engine. The last time I ran them was September, and things have changed a bit since then. I used to run a lot of jobs with <code>kubectl</code> from some pod.json file; no RC, no restart, a fire-and-forget setup. After I started jobs I used to list pods (<code>kubectl get pods</code>) and typically the output looked like:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
r8o3il08c25-y0z10 1/1 Running 0 56m
r8o3il08c25-y0z15 0/1 Pending 0 56m
</code></pre>
<p>After one is done and a second is started, I used to get output like:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
r8o3il08c25-y0z10 1/1 Exit:0 0 1h
r8o3il08c25-y0z15 1/1 Running 0 1h
</code></pre>
<p>So I could, using a simple <code>grep</code>, get a picture of how many are running, how many are pending, and how many are done, and query exit codes (to check whether some pods had errors), etc.</p>
<p>Now the output with the latest SDK (Google Cloud SDK 0.9.87) looks like this:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
</code></pre>
<p>All finished pods are now invisible.</p>
<p>Could I get the old behavior back? Why was it changed?</p>
| <p><a href="https://github.com/kubernetes/kubernetes/pull/12112" rel="nofollow">PR #12112</a> changed <code>kubectl get pods</code> to not show terminated pods by default. You can get the old behavior (show all pods) by using <code>kubectl get pods -a</code></p>
|
<p>I've been struggling with setting up the Jenkins Kubernetes Plugin on the Google Container Engine.</p>
<p>I have the plugin installed but I think all my builds are still running on master.</p>
<p>I haven't found any good documentation or guides on configuring this.</p>
<p><strong>UPDATE</strong></p>
<p>I removed the master executor from my Jenkins image. So now my builds aren't running on master, but they have no executor, so they don't run at all. They just wait in the queue forever.</p>
| <p>You'll need to tell Jenkins how and where to run your builds by adding your Kubernetes cluster as a 'cloud' in the Jenkins configuration. Go to <code>Manage Jenkins -> Configure System -> Cloud -> Add new cloud</code> and select 'Kubernetes'. You'll find the server certificate key, user name and password in your local kubectl configuration (usually in <code>~/.kube/config</code>). The values for 'Kubernetes URL' and 'Jenkins URL' depend on your cluster setup. </p>
<p>Next, you'll need to configure the docker images that should be used to run your builds by selecting 'Add Docker Template'. Use labels to define which tasks should be run with which image!</p>
<p><a href="https://www.youtube.com/watch?v=PFCSSiT-UUQ&index=21&list=PL69nYSiGNLP0Ljwa9J98xUd6UlM604Y-l" rel="noreferrer">Here</a>'s a good video tutorial and <a href="https://www.cloudbees.com/blog/demand-jenkins-slaves-kubernetes-and-google-container-engine" rel="noreferrer">here</a> you'll find a nice tutorial which explains everything in detail.</p>
|
<p>We need to know about <code>pods</code> network isolation.</p>
<p>Is there a possibility to access <code>one pod</code> from <code>another one</code> in the cluster? Maybe by dividing into <code>namespace</code>s?</p>
<p>We also need <code>pod</code> membership in local networks which are not accessible from outside.</p>
<p>Any plans? Will it be available soon?</p>
| <p>In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other. </p>
<p>To get isolation, you'll need to customize your install to prevent cross namespace communication. One way to do this is to use OpenContrail. They recently wrote a <a href="http://www.opencontrail.org/kube-o-contrail-get-your-hands-dirty-with-kubernetes-and-opencontrail/" rel="nofollow">blog post</a> describing an example deployment using the Guestbook from the Kubernetes repository. </p>
|
<p>On a new GKE cluster created at v1.1.1, using the latest kubectl (from gcloud components update), when deleting resources (say a pod), sometimes kubectl get pods shows them in a 'Terminating' state, and other times they are removed from the kubectl get pods output right away.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
cassandra 1/1 Terminating 0 44s
</code></pre>
<p>Is this new behavior of kubectl? I don't recall it doing this at my prior levels.</p>
| <p>Yes, it is new behavior in <a href="https://github.com/kubernetes/kubernetes/releases/tag/v1.1.1" rel="nofollow noreferrer">v1.1.1</a>. PR <a href="https://stackoverflow.com/questions/33836696/gke-1-1-1-and-kubectl-delete-resource-terminating">#9165</a> added graceful deletion of pods, which causes them to appear in the "Terminating" state for a short amount of time. Issue <a href="https://github.com/kubernetes/kubernetes/issues/1535" rel="nofollow noreferrer">#1535</a> has some more background discussion.</p>
|
<p>Hi all, we are looking for a practical and tested guide or reference for Kubernetes master high availability, or another solution for master node failover.</p>
| <p>There are definitely folks running Kubernetes HA masters in production following the instructions for <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/admin/high-availability.md" rel="nofollow">High Availability Kubernetes Clusters</a>. As noted at the beginning of that page, it's an advanced use case and requires in-depth knowledge of how the Kubernetes master components work. </p>
|
<p>I am using flocker volumes. Should I install Powerstrip? I have installed flocker, but not Powerstrip. Creating a flocker pod fails:</p>
<blockquote>
<p>Unable to mount volumes for pod "flocker-web-3gy69_default": Get
<a href="https://localhost:4523/v1/configuration/datasets" rel="nofollow">https://localhost:4523/v1/configuration/datasets</a>: x509: certificate is
valid for control-service, hostname, not localhost.</p>
</blockquote>
<p>I have set <code>FLOCKER_CONTROL_SERVICE_BASE_URL</code> and <code>MY_NETWORK_IDENTITY</code> in the flocker-docker-plugin.service file.</p>
| <p>You do not need to install Powerstrip anymore. (it's been deprecated)</p>
<p>Powerstrip was a useful tool early on to prototype Docker extensions, but we've moved on since Docker added the volume plugin API. (Powerstrip was essentially a precursor to Docker plugins.)
<code>docker run --volume-driver=flocker ...</code>
<code>docker volume create --name &lt;volume-name&gt; -d flocker</code></p>
<p>If you have the docker plugin installed you should be fine.
Instructions on manual plugin setup are located here:</p>
<p><a href="http://doc-dev.clusterhq.com/install/install-node.html" rel="nofollow">http://doc-dev.clusterhq.com/install/install-node.html</a></p>
|
<p>This official document shows how to run a command in a YAML config file:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/</a></p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod’s contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
env:
- name: MESSAGE
value: "hello world"
command: ["/bin/sh","-c"]
args: ["/bin/echo \"${MESSAGE}\""]
</code></pre>
<p>If I want to run more than one command, how do I do that?</p>
| <pre><code>command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
</code></pre>
<p><strong>Explanation:</strong> The <code>command ["/bin/sh", "-c"]</code> says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands, and <code>&&</code> conditionally runs the following command if the first succeed. In the above example, it always runs <code>command one</code> followed by <code>command two</code>, and only runs <code>command three</code> if <code>command two</code> succeeded.</p>
<p><strong>Alternative:</strong> In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own <a href="https://docs.docker.com/v1.8/reference/builder/">Dockerfile</a> is the way to go. Look at the <a href="https://docs.docker.com/v1.8/reference/builder/#run">RUN</a> directive in particular.</p>
|
<p>I have a Kubernetes setup on bare-metal CoreOS.
So far I have handled the connection from the outside world to services with an nginx reverse proxy.</p>
<p>I'm trying the new Ingress resource.
For now I have added a simple ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kube-ui
spec:
backend:
serviceName: kube-ui
servicePort: 80
</code></pre>
<p>that starts like this:</p>
<pre><code>INGRESS
NAME RULE BACKEND ADDRESS
kube-ui - kube-ui:80
</code></pre>
<p>My question is how to connect from the outside internet to that ingress point, as this resource has no ADDRESS ... ?</p>
| <p>POSTing this to the API server will have no effect if you have not configured an Ingress controller. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found <a href="http://kubernetes.io/docs/user-guide/ingress/" rel="nofollow">here</a>.</p>
|
<p>I am going through the Openshift V3 documentation and got confused by services and routes details.</p>
<p>The description in <a href="https://docs.openshift.org/latest/architecture/infrastructure_components/kubernetes_infrastructure.html#service-proxy" rel="noreferrer">service</a> says that:</p>
<blockquote>
<p>Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends.</p>
</blockquote>
<p>it can forward TCP/UDP stream while description in <a href="https://docs.openshift.org/latest/architecture/core_concepts/routes.html#routers" rel="noreferrer">routes</a> says:</p>
<blockquote>
<p>Routers support the following protocols:</p>
<p>HTTP</p>
<p>HTTPS (with SNI)</p>
<p>WebSockets</p>
<p>TLS with SNI</p>
</blockquote>
<p>Basically, my requirement is to run an SIP application which runs over UDP and port 5060. </p>
<p>Please help me understand what is meant by service and route in the above context, and whether I can deploy my application on OpenShift V3. I found a few related questions, but those are fairly old.</p>
<p><strong>EDIT</strong>
Tagged Kubernetes because it is also used underneath, and maybe someone from that community can help.</p>
<p>Thanks</p>
| <p>Routes are HTTP, HTTPS, or TCP wrapped with TLS. You can use a service with a "node port", which load-balances your app instances over TCP or UDP at a high port exposed on each node.</p>
<p>Routes point to services to get their source data, but since routes expect to be able to identify which backend service to route traffic to by looking at the incoming HTTP Host header or TLS SNI info, routes today only support those protocols. </p>
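<p>A minimal sketch of such a service for the SIP case (the selector label and the node port value are assumptions; the node port must fall within the cluster's configured node-port range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sip
spec:
  type: NodePort
  selector:
    app: sip-server      # hypothetical label on your SIP pods
  ports:
  - protocol: UDP
    port: 5060
    targetPort: 5060
    nodePort: 30060      # exposed on every node
</code></pre>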
|
<p>I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable.</p>
| <p>You cannot add or remove containers in a running pod. If you are using replication controller, <code>kubectl rolling-update</code> is the easiest solution, but this will require editing the pod template. That said, are you sure you need to add your containers to the existing pod? Unless strictly necessary, it's better to just run the new containers in a separate pod, e.g. with <code>kubectl run <name> --image=<image></code></p>
<p><em>Note: This is the generic kubernetes answer, there may be a more elegant solution for OpenShift</em></p>
|
<p>When we create a YAML file for a replication controller, we can specify labels for the pod that is being created.</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: redis
spec:
template:
metadata:
labels:
app: redis
tier: backend
</code></pre>
<p>Can the containers that reside in this pod access those label values?</p>
| <p>Check out the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">Downward API</a>, which allows the container to know more about itself.</p>
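<p>A minimal sketch using the downward API volume to expose the pod's labels as a file inside the container (container name and image are placeholders):</p>
<pre><code>spec:
  containers:
  - name: app
    image: redis
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
</code></pre>
<p>The container can then read the key="value" pairs from <code>/etc/podinfo/labels</code>.</p>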
|
<p>I feel very confused when using Kubernetes!
Where can I find the specific API reference for the components of Kubernetes, such as <code>pod</code>, <code>service</code>, <code>volumes</code>, and <code>Persistent Volumes</code>, and so on, which I need when creating the components from configuration files?</p>
<p>Who can help me?</p>
 | <p>Sorry about this question, I have found it.</p>
<p><a href="http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html" rel="nofollow">http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html</a></p>
|
<p>I'm trying to use kubectl exec to enter one of my containers, but I'm getting stuck on this error.</p>
<pre><code>$ kubectl exec -it ubuntu -- bash
error: Unable to upgrade connection: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs",
"code": 500
}
</code></pre>
<p>I have configured kubectl with my CA certificate and admin keys, etc according to this guide <a href="https://coreos.com/kubernetes/docs/1.0.6/configure-kubectl.html" rel="noreferrer">https://coreos.com/kubernetes/docs/1.0.6/configure-kubectl.html</a></p>
<h2>Update</h2>
<p>I also found the same error in the API server's logs</p>
<pre><code>E1125 17:33:16.308389 1 errors.go:62] apiserver received an error that is not an unversioned.Status: x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs
</code></pre>
<p>Does this mean I have configured the certs incorrectly on my worker/master nodes or on kubectl on my local machine?</p>
| <p>If you used this command to create your certificate:</p>
<pre><code>openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out server-cert.pem
</code></pre>
<p>Then your issue can be resolved by regenerating the certificate with an <code>-extfile extfile.cnf</code> that adds the node IP as a subject alternative name:</p>
<pre><code>echo subjectAltName = IP:worker_node_ip > extfile.cnf
openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
-out server-cert.pem -extfile extfile.cnf
</code></pre>
<p>You can specify any number of IP addresses, such as IP:127.0.0.1,IP:127.0.1.1 (non localhost as well).</p>
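<p>You can check that the regenerated certificate really contains the IP SANs before distributing it:</p>
<pre><code>openssl x509 -in server-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
</code></pre>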
|
<blockquote>
<p>Relates to
<a href="https://stackoverflow.com/questions/31664060/how-to-call-a-service-exposed-by-a-kubernetes-cluster-from-another-kubernetes-cl?rq=1">How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in same project</a>.</p>
<p>Asking again since Kubernetes has been changes a lot since July.</p>
</blockquote>
<p><strong>Context:</strong></p>
<p>I'm working on an infrastructure with multiple clusters serving different purposes, e.g.:</p>
<ul>
<li>Cluster A runs services/apps creating data for consumption</li>
<li>Cluster B runs services/apps consuming data created by apps in cluster A</li>
<li>Cluster C runs data services like Redis, Memcache, etc.</li>
</ul>
<p>All clusters are in the <code>default</code> namespace.</p>
<p><strong>Problem:</strong></p>
<p>In Kubernetes, each cluster gets its own kubernetes (in the <code>default</code> namespace) and kube-dns (in the <code>kube-system</code> namespace) service with a different IP.</p>
<p>What happens with this setup is that, services in cluster A and B above can't discover (in service discovery terminology), let's say, Redis in cluster C. So a <code>nslookup redis.default.svc.cluster.local</code> from one of the services in cluster A/B comes back with <code>** server can't find redis.default.svc.cluster.local: NXDOMAIN</code>. <em>Note:</em> This works from within cluster C.</p>
<p>I've read as many documents as I found about kube-dns, and pretty much all assume one cluster setup.</p>
<p><strong>Clusters info:</strong></p>
<p>Here are <code>/etc/resolv.conf</code> from two different clusters showing DNS nameservers with no common kube-dns ancestor:</p>
<p>Cluster A:</p>
<pre><code>nameserver 10.67.240.10
nameserver 169.254.169.254
nameserver 10.240.0.1
search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. 1025230764914.google.internal. google.internal.
</code></pre>
<p>Cluster C:</p>
<pre><code>nameserver 10.91.240.10
nameserver 169.254.169.254
nameserver 10.240.0.1
search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. google.internal.
options ndots:5
</code></pre>
<p>Both clusters have these services running with their respective IPs for their cluster in the <code>kube-system</code> namespace:</p>
<pre><code>NAME LABELS SELECTOR
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns
kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui
monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster
</code></pre>
<p>What is the ideal fix/update to this setup that can get the shared services discovered across all Kubernetes clusters in a GCE environment?</p>
| <p>This is one of the large problems that Kubernetes is trying to solve with <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md#cross-cluster-service-discovery" rel="nofollow noreferrer">Cross-Cluster Service Discovery</a> as a part of the Cluster Federation plans. You can also check out/contribute to the <a href="https://groups.google.com/forum/#!forum/kubernetes-sig-federation" rel="nofollow noreferrer">Federation SIG</a>.</p>
<p>If you've used one of the <s>hacks </s>solutions described <a href="https://stackoverflow.com/questions/31664060/how-to-call-a-service-exposed-by-a-kubernetes-cluster-from-another-kubernetes-cl?rq=1">here</a>, you might be able to hack up your <code>/etc/resolv.conf</code> to also search the nameserver from the other cluster. Be careful, because this may run you into truncation issues.</p>
<p>You might also be able to modify the <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in" rel="nofollow noreferrer">sky-dns RC</a> for your clusters to include an extra kube2sky pod that points at the other cluster's kubernetes service (I haven't tried this, or thought through all of the implications).</p>
<p>Neither of the two hacks I've described above would prevent name collision, so you'd have to manually prevent that.</p>
|
<p>I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable.</p>
| <p>There is no command today that makes it easy to add a container to the pod template for an RC or deployment. You can use oc new-app to quickly generate deployment configs that have multiple containers with</p>
<pre><code>oc new-app php+apache+somethingelse
</code></pre>
<p>But this won't let you deeply customize those containers.</p>
<p>Agree this would be nice to have - as a mode to "run", perhaps.</p>
|
<p>I know how to mount git repo when I start pod. See: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: server
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /mypath
name: git-volume
volumes:
- name: git-volume
gitRepo:
repository: "git@somewhere:me/my-git-repository.git"
revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
</code></pre>
<p>That's perfect, but it means that I need to clone the whole repository. What I need is to "clone" only one file.</p>
<pre><code> - name: git-volume
gitRepo:
repository: "git@somewhere:me/my-git-repository.git/some/long/path/to/specific/file/configuration.cfg"
</code></pre>
<p>Is it possible?</p>
<p>Or can I mount some volume and execute some command in it? Something like: </p>
<pre><code>...
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /mypath
name: git-volume
command: wget htttp://gitrepo/path/to/file/config.conf
</code></pre>
<p>Thanks.</p>
| <p>You can't clone only one file. <code>gitRepo</code> executes <code>git clone</code> which only allows you to clone the entire repository. </p>
<p><code>volumeMounts</code> doesn't support executing command in it.</p>
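<p>If the file happens to be reachable over HTTP (e.g. through a raw-file URL on your git server — an assumption about your setup), a common workaround is to fetch it into an <code>emptyDir</code> volume by overriding the container's command. A rough sketch, assuming a fetch tool such as <code>wget</code> is available in the image:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    command: ["/bin/sh", "-c"]
    # fetch the single file first, then start nginx in the foreground
    args: ["wget -O /mypath/configuration.cfg http://gitrepo/path/to/file/configuration.cfg && exec nginx -g 'daemon off;'"]
    volumeMounts:
    - mountPath: /mypath
      name: config-volume
  volumes:
  - name: config-volume
    emptyDir: {}
</code></pre>
<p>This trades the gitRepo volume for a plain HTTP fetch, so authentication and revision pinning are up to you.</p>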
|
<p>I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/ubuntu.md" rel="nofollow">install tutorial for Ubuntu</a>. Each bare metal server also has 2T of disk space exported using <a href="http://docs.ceph.com/docs/v0.94.5/start/" rel="nofollow">Ceph 0.94.5</a>. Everything was working fine, but when one node failed to start (it wasn't able to mount a partition), the only service the cluster was providing also stopped working. I ran some commands:</p>
<pre><code>$ kubectl get nodes
NAME LABELS STATUS
10.70.2.1 kubernetes.io/hostname=10.70.2.1 Ready,SchedulingDisabled
10.70.2.2 kubernetes.io/hostname=10.70.2.2 Ready
10.70.2.3 kubernetes.io/hostname=10.70.2.3 NotReady
10.70.2.4 kubernetes.io/hostname=10.70.2.4 Ready
</code></pre>
<p>It just showed that I had a node down.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
java-mysql-5v7iu 1/1 Running 1 5d
java-site-vboaq 1/1 Running 0 4d
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
java-mysql name=java-mysql name=java-mysql ***.***.3.12 3306/TCP
java-site name=java-site name=java-site ***.***.3.11 80/TCP
kubernetes component=apiserver,provider=kubernetes <none> ***.***.3.1 443/TCP
</code></pre>
<p>It showed all pods and services working fine. However, I could not connect to one of the pods (<code>java-site-vboaq</code>):</p>
<pre><code>$ kubectl exec java-site-vboaq -i -t -- bash
error: Error executing remote command: Error executing command in container: container not found ("java-site")
</code></pre>
<p>But, the pods weren't even running on the downed node:</p>
<pre><code>$ kubectl describe pod java-mysql-5v7iu
Image(s): mysql:5
Node: 10.70.2.2/10.70.2.2
Status: Running
$ kubectl describe pod java-site-vboaq
Image(s): javasite-img
Node: 10.70.2.2/10.70.2.2
Status: Running
</code></pre>
<p>After the downed node (<code>10.70.2.3</code>) was back, everything went back to normal.</p>
<p>How do I fix this problem? If a node is out, I want Kubernetes to migrate pods accordingly and keep the services working. Does it have to do with the fact that the downed node was stuck during boot (waiting for a partition to mount) rather than being 100% down?</p>
 | <p>A few potential problems here:</p>
<p>1) Ceph needs its nodes to be up and running to be accessible: did you say the nodes were mounting disk from a different Ceph cluster, or is the Ceph cluster running on the same nodes? If the same nodes, then it makes sense that the drive not being accessible paralyzes K8s.</p>
<p>2) There is a bug (at least it was there on 1.0.6 and not sure if it was resolved) about pods not being able to start when trying to mount a disk that was already mounted on a different node, because it was never unmounted. This is a K8s issue to follow (sorry can't seem to find the link right now)</p>
<p>3) etcd may also get stuck waiting for node 3 if it only has 2 nodes, as it needs a majority to elect a master.</p>
|
<p>I have two separate Kubernetes clusters that will be used for a staging and a production environment, respectively. I want to have YAML manifests for the Kubernetes API objects I will be submitting to each cluster, but some of the objects will have slightly different configurations between the two environments.</p>
<p>As a made up but illustrative example, imagine running an internal Docker registry on each cluster, one using S3 as the storage back end and one using the GCS back end. The registry container can accept these configuration values as environment variables or read from a file, both of which Kubernetes manifests support, but how should I populate these values for each environment?</p>
<p>Essentially what I want is a way to have a manifest that looks something like this, where the <code>$()</code> syntax is variable interpolation that would happen on the server when the manifest is submitted:</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: foo
spec:
containers:
- name: foo
image: foo
env:
- name: bar
value: $(etcdctl get /path/to/bar)
</code></pre>
<p>I could write templates that use placeholders for the values and then process the template, pulling in real values from some external source, to produce the final manifest that is actually submitted to the cluster. However, I thought I'd ask first in case a tool that does this already exists, or there is some Kubernetes-blessed way of doing this that I'm not aware of. It certainly seems like something that many people will need to do in order to use Kubernetes.</p>
<p>Other ideas I've had include using some combination of etcd, confd, and scripts on the host nodes, but this starts to get into the realm of host configuration management which I want to avoid at pretty much all costs. I'm using CoreOS and the hosts are provisioned entirely through coreos-cloudinit. In other words, nothing is manipulated on the host system that is not defined at the time the node is created, so traditional configuration management tools like Ansible are out.</p>
<p>I'm aware of Kubernetes's secrets system, but some of these variable values could be quite large, and most of them are not secrets.</p>
| <p>You can't really do this right now. The issue to follow if you're interested in templating is
<a href="https://github.com/kubernetes/kubernetes/issues/11492" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/11492</a></p>
<p>Also, this is currently a topic of discussion in the configuration SIG
<a href="https://groups.google.com/forum/#!forum/kubernetes-sig-config" rel="nofollow">https://groups.google.com/forum/#!forum/kubernetes-sig-config</a></p>
<p>I'd suggest you register your interest in the feature in one of those places.</p>
|
<p>If kube-master or etcd service down in my kubernetes cluster, can my Pod/Service still work? Also, if the network is still work?</p>
| <p>The containers in a pod continue to run, yes. If the master components are not available this means no new pods/services can be launched, but existing ones continue to operate. Note that this behaviour is also one of the good practices and lessons learned from <a href="http://research.google.com/pubs/pub43438.html" rel="noreferrer">Borg</a>.</p>
|
<p>I have a proxy service that wraps 3 pods (say pod A, pod B, pod C). Some container inside pod A needs to get virtual IPs of other two pods. How can I do this?</p>
| <p>Two options:</p>
<ol>
<li>Talk to the Kubernetes API to get the endpoints for the service. (either with <code>kubectl get endpoints SVCNAME</code> or by GETing the <code>/api/v1/namespaces/{namespace}/endpoints/{svcname}</code> path on the apiserver)</li>
<li>Less likely to be of use, but if you <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#headless-services" rel="nofollow">create a service without a cluster IP</a>, the DNS for that service will return a list of the IP addresses of the backing pods rather than a virtual IP address.</li>
</ol>
<p>The IPs returned in either case are the IP addresses of all the pods backing the service.</p>
|
<p>Both Kubernetes Pods and the results of Docker Compose scripts (henceforth: "Compositions") appear to result in clusters of virtual computers.</p>
<p>The computers in the clusters can all be configured to talk to each other so you can write a single script that mirrors your entire end-to-end production config. A single script allows you to deploy that cluster on any container-host.</p>
<p>Given the similarities between the two systems, I'm struggling to understand what the differences are between the two. </p>
<p>Why would I choose one over the other? Are they mutually exclusive systems, or can I run compositions in Kubernetes?</p>
<p>Are there any critical considerations that need to be accounted for when designing for a container system? If I am designing the architecture for a site <em>today</em> and would <em>like</em> to try and build a container-based system. What are the highest priority things I should design for? (as compared to building on a single machine system)</p>
 | <p><a href="https://github.com/docker/compose" rel="noreferrer"><code>docker compose</code></a> is just a way to declare the containers you have to start (<del>it has no notion of node or cluster</del>, unless it launches swarm master and swarm nodes, but that is <a href="https://docs.docker.com/swarm/" rel="noreferrer"><code>docker swarm</code></a>).<br>
Update July 2016, 7 months later: docker 1.12 blurs the lines and <a href="https://docs.docker.com/engine/swarm/" rel="noreferrer">includes a "swarm mode"</a>.</p>
<p>It is vastly different from <a href="http://kubernetes.io/" rel="noreferrer">kubernetes</a>, a google tool to manage thousands of containers groups as Pod, over tens or hundreds of machines.</p>
<p>A <a href="http://kubernetes.io/v1.0/docs/user-guide/pods.html" rel="noreferrer">Kubernetes Pod</a> would <a href="http://googlecloudplatform.blogspot.fr/2015/01/everything-you-wanted-to-know-about-Kubernetes-but-were-afraid-to-ask.html" rel="noreferrer">be closer from a docker swarm</a>:</p>
<blockquote>
<p>Imagine individual Docker containers as packing boxes. The boxes that need to stay together because they need to go to the same location or have an affinity to each other are loaded into shipping containers.<br>
In this analogy, the packing boxes are Docker containers, and the shipping containers are Kubernetes pods.</p>
</blockquote>
<p>As <a href="https://stackoverflow.com/questions/33946144/what-are-the-differences-between-kubernetes-pods-and-docker-composes-composur/33946256#comment64413899_33946256">commented below</a> by <a href="https://stackoverflow.com/users/1686628/ealeon">ealeon</a>:</p>
<blockquote>
<p>I think pod is equivalent to compose except that kubernetes can orchestrated pods, whereas there is nothing orchestrating compose unless it is used with swarm like you've mentioned.</p>
</blockquote>
<p>You can <a href="http://sebgoa.blogspot.fr/2015/04/1-command-to-kubernetes-with-docker.html" rel="noreferrer">launch kubernetes commands with docker-compose by the way</a>.</p>
<p><a href="https://i.stack.imgur.com/33iS2.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/33iS2.jpg" alt="http://3.bp.blogspot.com/-MDUsXIy-lt0/VMuhJ9jBefI/AAAAAAAAA0I/qPuy0N8UXWA/s1600/Screen%2BShot%2B2015-01-30%2Bat%2B7.19.27%2BAM.png"></a></p>
<blockquote>
<p>In terms of how Kubernetes differs from other container management systems out there, such as Swarm, Kubernetes is the third iteration of cluster managers that Google has developed. </p>
</blockquote>
<p>You can hear more about kubernetes in the <a href="https://www.gcppodcast.com/post/episode-3-kubernetes-and-google-container-engine/" rel="noreferrer">episode #3 of Google Cloud Platform Podcast</a>.</p>
<p>While it is true both can create a multi-container application, a Pod also serves as a unit of deployment and horizontal scaling/replication, which docker compose does not provide.<br>
Plus, you don't create a pod directly, but use controllers (like replication controllers). </p>
<p>POD lives within a larger platform which offers Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management.<br>
Docker-compose lives... on its own, with its <code>docker-compose.yml</code> file</p>
|
<p>I am trying to run a shell script at the start of a docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile though). The current setup fails to start the container with <code>Command not found file ./prod_start.sh does not exist</code>. Any idea how to fix this?</p>
<pre><code>app/
...
Dockerfile
prod_start.sh
web-controller.yaml
Gemfile
...
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM ruby
RUN mkdir /backend
WORKDIR /backend
ADD Gemfile /backend/Gemfile
ADD Gemfile.lock /backend/Gemfile.lock
RUN bundle install
</code></pre>
<p>web-controller.yaml</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: backend
labels:
app: myapp
tier: backend
spec:
replicas: 1
selector:
app: myapp
tier: backend
template:
metadata:
labels:
app: myapp
tier: backend
spec:
volumes:
- name: secrets
secret:
secretName: secrets
containers:
- name: my-backend
command: ['./prod_start.sh']
image: gcr.io/myapp-id/myapp-backend:v1
volumeMounts:
- name: secrets
mountPath: /etc/secrets
readOnly: true
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 80
name: http-server
</code></pre>
 | <p>After a lot of experimentation I believe adding the script to the <code>Dockerfile</code>:</p>
<pre><code>ADD prod_start.sh /backend/prod_start.sh
</code></pre>
<p>And then calling the command like this in the <code>yaml</code> controller file:</p>
<pre><code>command: ['/bin/sh', './prod_start.sh']
</code></pre>
<p>Fixed it.</p>
|
<p>I want to send multiple entrypoint commands to a Docker container via the <code>command</code> field of a Kubernetes config file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hello-world
spec: # specification of the pod’s contents
restartPolicy: Never
containers:
- name: hello
image: "ubuntu:14.04"
command: ["command1 arg1 arg2 && command2 arg3 && command3 arg 4"]
</code></pre>
<p>But it seems like it does not work. What is the correct format of sending multiple commands in the command tag?</p>
 | <p>There can only be a single entrypoint in a container... if you want to run multiple commands like that, make bash the entrypoint, and make all the other commands an argument for bash to run:</p>
<p><code>command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]</code></p>
|
<p>From the <a href="http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html" rel="nofollow">performance test report</a> we can see that <code>kubernetes</code> can support 100 nodes.</p>
<p>To run the same test, I set up a 100-node <code>kubernetes</code> cluster, but the <code>kube-apiserver</code> became slow once the cluster was up. That meant when I typed <code>kubectl get nodes</code>, it hung and never got a response.</p>
<p>To find the reason, I checked the connections of <code>kube-apiserver</code> and found there were about 660+ ESTABLISHED connections on port 8080 (I used the insecure port of the apiserver), and when I stopped some (about 20) slaves, the <code>apiserver</code> recovered. So I suspect the reason the <code>kube-apiserver</code> became slow is too much concurrency.</p>
<p>So I wonder how Google set up a 100-node cluster? Is there something wrong with my setup?</p>
<p>PS: The <code>--max-requests-inflight</code> of <code>kube-apiserver</code> has been set to 0.</p>
| <p>The doc you link to describes the methodology used (specifically the master VM size). The cluster is created in Google Compute Engine using the default cluster/kube-up.sh script from the repository, with all the default settings implied by that.</p>
<p>How large is the master that you're using? If it's really small, it's possible that it could struggle with a lot of nodes and pods.</p>
|
<p><strong>I have</strong></p>
<ul>
<li>Kubernetes: v.1.1.1</li>
<li>iptables v1.4.21 </li>
<li>kernel: 4.2.0-18-generic which come with Ubuntu wily</li>
<li>Networking is done via L2 VLAN terminated on switch</li>
<li>no cloud provider </li>
</ul>
<p><strong>what I do</strong></p>
<p>I'm experimenting with iptables mode for kube-proxy. I have enabled it with <code>--proxy_mode=iptables</code> argument. It seems some rule is missing:</p>
<pre><code>iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 8 packets, 459 bytes)
pkts bytes target prot opt in out source destination
2116 120K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2 packets, 120 bytes)
pkts bytes target prot opt in out source destination
718 45203 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain POSTROUTING (policy ACCEPT 5 packets, 339 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351
Chain KUBE-NODEPORTS (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 MARK set 0x4d415351
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195
0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 MARK set 0x4d415351
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873
Chain KUBE-SEP-5IXMK7UWPGVTWOJ7 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.8 0.0.0.0/0 /* mngbox/jumpbox:ssh */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp to:10.116.160.8:22
Chain KUBE-SEP-BNPLX5HQYOZINWEQ (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ tcp to:10.116.161.6:8086
Chain KUBE-SEP-CJMHKLXPTJLTE3OP (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.254.2 0.0.0.0/0 /* default/kubernetes: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ tcp to:10.116.254.2:6443
Chain KUBE-SEP-GSM3BZTEXEBWDXPN (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns */ MARK set 0x4d415351
0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.116.160.7:53
Chain KUBE-SEP-OAYOAJINXRPUQDA3 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.116.160.7:53
Chain KUBE-SEP-PJJZDQNXDGWM7MU6 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.5 0.0.0.0/0 /* default/docker-registry-fe:tcp */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp to:10.116.160.5:443
Chain KUBE-SEP-RWODGLKOVWXGOHUR (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ tcp to:10.116.161.6:8083
Chain KUBE-SEP-WE3Z7KMHA6KPJWKK (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-grafana: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ tcp to:10.116.161.6:8080
Chain KUBE-SEP-YBQVM4LA4YMMZIWH (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.161.3 0.0.0.0/0 /* kube-system/monitoring-heapster: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ tcp to:10.116.161.3:8082
Chain KUBE-SEP-YMZS7BLP4Y6MWTX5 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.9 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ tcp to:10.116.160.9:5000
Chain KUBE-SEP-ZDOOYAKDERKR43R3 (1 references)
pkts bytes target prot opt in out source destination
0 0 MARK all -- * * 10.116.160.10 0.0.0.0/0 /* default/kibana-logging: */ MARK set 0x4d415351
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ tcp to:10.116.160.10:5601
Chain KUBE-SERVICES (2 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SVC-JRXTEHDDTAFMSEAS tcp -- * * 0.0.0.0/0 10.116.0.48 /* kube-system/monitoring-grafana: cluster IP */ tcp dpt:80
0 0 KUBE-SVC-CK6HVV5A27TDFNIA tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:api cluster IP */ tcp dpt:8086
0 0 KUBE-SVC-DKEW3YDJFV3YJLS2 tcp -- * * 0.0.0.0/0 10.116.0.6 /* infra/docker-registry-backend:docker-registry-backend cluster IP */ tcp dpt:5000
0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
0 0 KUBE-SVC-WEHLQ23XZWSA5ZX3 tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:http cluster IP */ tcp dpt:8083
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.1.142 /* default/docker-registry-fe:tcp cluster IP */ tcp dpt:443
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 MARK set 0x4d415351
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 ADDRTYPE match dst-type LOCAL
0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
0 0 KUBE-SVC-7IHGTXJ4CF2KVXJZ tcp -- * * 0.0.0.0/0 10.116.1.126 /* kube-system/monitoring-heapster: cluster IP */ tcp dpt:80
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.1.175 /* mngbox/jumpbox:ssh cluster IP */ tcp dpt:2345
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 MARK set 0x4d415351
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 ADDRTYPE match dst-type LOCAL
0 0 KUBE-SVC-6N4SJQIF3IX3FORG tcp -- * * 0.0.0.0/0 10.116.0.1 /* default/kubernetes: cluster IP */ tcp dpt:443
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.0.233 /* default/kibana-logging: cluster IP */ tcp dpt:8888
0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 MARK set 0x4d415351
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 ADDRTYPE match dst-type LOCAL
0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-CJMHKLXPTJLTE3OP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */
Chain KUBE-SVC-7IHGTXJ4CF2KVXJZ (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YBQVM4LA4YMMZIWH all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */
Chain KUBE-SVC-B6ZEWWY2BII6JG2L (3 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-ZDOOYAKDERKR43R3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */
Chain KUBE-SVC-CK6HVV5A27TDFNIA (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-BNPLX5HQYOZINWEQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */
Chain KUBE-SVC-DKEW3YDJFV3YJLS2 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-YMZS7BLP4Y6MWTX5 all -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-OAYOAJINXRPUQDA3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-GLKZVFIDXOFHLJLC (4 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-5IXMK7UWPGVTWOJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */
Chain KUBE-SVC-JRXTEHDDTAFMSEAS (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-WE3Z7KMHA6KPJWKK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-GSM3BZTEXEBWDXPN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain KUBE-SVC-WEHLQ23XZWSA5ZX3 (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-RWODGLKOVWXGOHUR all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */
Chain KUBE-SVC-XZFGDLM7GMJHZHOY (4 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-PJJZDQNXDGWM7MU6 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */
</code></pre>
<p>When I send a request to the service IP, in my case 10.116.0.2, I get an error:</p>
<pre><code>;; connection timed out; no servers could be reached
</code></pre>
<p>while a request to the 10.116.160.7 server works fine.
I can see that the traffic is not hitting the kube-proxy rules at all, so something is probably missing.</p>
<p>I will highly appreciate any hint about missing rule</p>
<p><em>EDIT</em>
I've updated my initial question with the missing information requested by thokin. He pointed to a really good way to debug the iptables rules for kube-proxy, and I could identify my problem with:</p>
<pre><code>for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.160.7 -j LOG --log-prefix "DBG@$c: "; done
for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done
</code></pre>
<p>Then I've executed the following commands:</p>
<pre><code># nslookup kubernetes.default.svc.psc01.cluster 10.116.160.7
Server:     10.116.160.7
Address:    10.116.160.7#53

Name:   kubernetes.default.svc.psc01.cluster
Address: 10.116.0.1

# nslookup kubernetes.default.svc.psc01.cluster 10.116.0.2
;; connection timed out; no servers could be reached
</code></pre>
<p>As a result I got a different "source" address and outgoing interface for each of them:</p>
<pre><code>[701768.263847] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=12436 PROTO=UDP SPT=54501 DPT=53 LEN=62
[702620.454211] DBG@OUTPUT: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702620.454224] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702626.318258] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318263] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318266] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318270] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318284] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.250.252 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
</code></pre>
<p>So, by adding the route </p>
<pre><code>ip route add 10.116.0.0/23 dev docker0
</code></pre>
<p>Now it's working fine!</p>
 | <p>For future reference, the output of <code>iptables-save</code> is much easier to read (to me anyway).</p>
<p>I don't see anything missing here.</p>
<p><code>KUBE-SERVICES</code> traps 10.116.0.2 port 53/UDP and passes it to <code>KUBE-SVC-TCOU7JCQXEZGVUNU</code></p>
<p><code>KUBE-SVC-TCOU7JCQXEZGVUNU</code> has just one endpoint so jumps to <code>KUBE-SEP-GSM3BZTEXEBWDXPN</code></p>
<p><code>KUBE-SEP-GSM3BZTEXEBWDXPN</code> DNATs to 10.116.160.7 port 53/UDP</p>
<p>If you assert that 10.116.160.7 works while 10.116.0.2 does not, that is strange indeed. It suggests that the iptables rules are not triggering at all. Are you testing from the node itself or from a container?</p>
<p>What networking are you using? L3 (underlay?) Flannel? OVS? Something else?</p>
<p>What cloud provider (if any)?</p>
<p>First step to debug: run: <code>for c in PREROUTING OUTPUT; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done</code></p>
<p>That will log any packets that iptables sees to your service IP. Now look at <code>dmesg</code>.</p>
|
<p>I have a kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2</p>
<p>When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description like so:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1beta3",
"metadata": {
"name": "api"
},
"spec": {
"ports": [
{
"name": "default",
"port": 80,
"targetPort": 80
}
],
"publicIPs": ["172.1.1.15", "172.1.1.16"],
"selector": {
"app": "api"
}
}
}
</code></pre>
<p>Then I can add these IPs to an ELB load balancer and route traffic to those machines.</p>
<p>But for this to work I need to maintain the list of all the machines in my cluster in all the services that I am running, which feels wrong.</p>
<p>What's the currently recommended way to solve this? </p>
<ul>
<li>If I know the PortalIP of a service is there a way to make it routable in the AWS VPC infrastructure? </li>
<li>Is it possible to assign external static (Elastic) IPs to Services and have those routed?</li>
</ul>
<p>(I know of <code>createExternalLoadBalancer</code>, but that does not seem to support AWS yet)</p>
 | <p>If someone reaches this question, I want to let you know that external load balancer support is available in the latest Kubernetes version.</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">Link to the documentation</a></p>
|
<p>I created a volume using the following command.</p>
<pre><code>aws ec2 create-volume --size 10 --region us-east-1 --availability-zone us-east-1c --volume-type gp2
</code></pre>
<p>Then I used the file below to create a pod that uses the volume. But when I login to the pod, I don't see the volume. Is there something that I might be doing wrong? Did I miss a step somewhere? Thanks for any insights.</p>
<pre><code>---
kind: "Pod"
apiVersion: "v1"
metadata:
name: "nginx"
labels:
name: "nginx"
spec:
containers:
-
name: "nginx"
image: "nginx"
volumeMounts:
- mountPath: /test-ebs
name: test-volume
volumes:
- name: test-volume
# This AWS EBS volume must already exist.
awsElasticBlockStore:
volumeID: aws://us-east-1c/vol-8499707e
fsType: ext4
</code></pre>
 | <p>I just stumbled across the same thing and found out, after some digging, that they actually changed the volume mount syntax. Based on that knowledge I created this PR to update the documentation. See <a href="https://github.com/kubernetes/kubernetes/pull/17958" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/17958</a> to track that and for more info; follow the link to the bug and the original change, which doesn't include the doc update. (SO prevents me from posting more than two links, apparently.)</p>
<p>If that still doesn't do the trick for you (as it does for me) it's probably because of <a href="https://stackoverflow.com/a/32960312/3212182">https://stackoverflow.com/a/32960312/3212182</a> which will be fixed in one of the next releases I guess. At least I can't see it in the latest release notes.</p>
|
<p>I am trying to set up Kubernetes in AWS and am following the guides at <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode</a></p>
<p>I couldn't understand what is meant by hyperkube. Can someone please explain to me what it is and how it works?</p>
<p>And another question I have is while running the command </p>
<pre><code>sudo docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/dev:/dev \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--privileged=true \
--pid=host \
-d \
gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
/hyperkube kubelet \
--api-servers=http://localhost:8080 \
--v=2 --address=0.0.0.0 --enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
</code></pre>
<p>it is starting the one pod by default. From the command documentation, it looks like it is getting the pod manifest from the <code>--config=/etc/kubernetes/manifests-multi</code> attribute. But this directory is not present in my host. can somebody please tell me from where it is getting this pod manifest?</p>
| <p>Kubernetes is a set of daemons/binaries:</p>
<ul>
<li><code>kube-apiserver</code> (AKA the master), </li>
<li><code>kubelet</code> (start/stop containers, sync conf.),</li>
<li><code>kube-scheduler</code> (resources manager)</li>
<li><code>kube-controller-manager</code> (monitor RC, and maintain the desired state)</li>
<li><code>kube-proxy</code> (expose services on each node)</li>
<li><code>kubectl</code> (CLI)</li>
</ul>
<p>The <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/images/hyperkube" rel="noreferrer">hyperkube</a> binary is an all in one binary (in a way similar to <code>busybox</code>), combining all the previously separate binaries.</p>
<p>The following command:</p>
<pre><code>hyperkube kubelet \
--api-servers=http://localhost:8080 \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--hostname-override=127.0.0.1 \
--config=/etc/kubernetes/manifests-multi \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local
</code></pre>
<p>runs the daemon <code>kubelet</code>.</p>
|
<p>I was going through a Kubernetes tutorial on YouTube and found the following UI, which demonstrates the pod and service arrangement of a Kubernetes cluster. How can I install this UI in my Kubernetes setup?</p>
<p><a href="https://i.stack.imgur.com/UGa0Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UGa0Z.png" alt="enter image description here"></a> </p>
| <p>In order to use this UI, go to the <a href="https://github.com/saturnism/gcp-live-k8s-visualizer" rel="nofollow">saturnism/gcp-live-k8s-visualizer</a> GitHub repo and follow the steps, there. </p>
|
<p><strong>Background</strong></p>
<p>We're using Jenkins to deploy a new version of a Kubernetes (k8s) replication controller to our test or prod cluster. The test and prod (k8s) clusters are located under different (google cloud platform) projects. We have configured two profiles for our gcloud SDK on Jenkins, one for test (test-profile) and one for prod (prod-profile). We have defined a managed script in Jenkins that performs the rolling update for our replication controller. The problems is that I cannot find a way to control to which project I want to target the <code>kubectl rolling-update</code> command (you can specify which cluster but not which project afict). So right now our script that does the rolling update to our test server looks something like this:</p>
<pre><code>gcloud config configurations activate test-profile && kubectl rolling-update ...
</code></pre>
<p>While this works it could be extremely dangerous if two jobs run concurrently for different environments. Say that job 1 targets the test environment and job 2 targets prod. If job2 switches the active profile to "prod-profile" before job 1 has executed its <code>rolling-update</code> command job 1 will target to wrong project and in worse case update the wrong replication controller (if the clusters have the same name).</p>
<p><strong>Question</strong></p>
<p>Is there a way to specify which project that a kubectl command is targeting (for example during a rolling update) that is safe to run concurrently?</p>
 | <p>You can pass the <code>--cluster=</code> or <code>--context=</code> flag to kubectl to set the target for a single invocation. For example, if I have two clusters, "foo" and "bar", in my ~/.kube/config:</p>
<pre><code>$ kubectl --cluster=foo get pods
NAME READY STATUS RESTARTS AGE
foo-ht1qh 1/1 Running 0 3h
foo-wf8f4 1/1 Running 0 3h
foo-yvgpd 1/1 Running 0 3h
</code></pre>
<p>vs</p>
<pre><code>$ kubectl --cluster=bar get pods
NAME READY STATUS RESTARTS AGE
bar-de4h7 1/1 Running 0 9h
bar-c4g03 1/1 Running 0 9h
bar-2sprd 1/1 Running 0 9h
</code></pre>
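<p>Applied to the deployment script from the question, each Jenkins job can name its own context explicitly instead of switching the global gcloud profile (the context names below are assumptions that would mirror your two profiles in the Jenkins user's <code>~/.kube/config</code>):</p>
<pre><code># test job
kubectl --context=test-context rolling-update ...

# prod job
kubectl --context=prod-context rolling-update ...
</code></pre>
<p>Because the target is scoped per invocation, two jobs running concurrently can no longer interfere with each other's cluster selection.</p>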
|