<p>I'm able to connect to an <strong><a href="https://aws.amazon.com/elasticache/" rel="noreferrer">ElastiCache</a> Redis instance in a VPC</strong> from <strong>EC2 instances</strong>. But I would like to know if there is a way to connect to an ElastiCache Redis node outside of Amazon EC2 instances, such as from my <em>local</em> dev setup or VPS instances provided by other vendors.</p>
<p>Currently, when trying from my local setup:</p>
<pre><code>redis-cli -h my-node-endpoint -p 6379
</code></pre>
<p>I only get a timeout after some time.</p>
| <p>SSH port forwarding should do the trick. Try running this from your client:</p>
<pre><code>ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>
</code></pre>
<p>Then from your client</p>
<pre><code>redis-cli -h 127.0.0.1 -p 6379
</code></pre>
<p>It works for me.</p>
<p>Please note that the default port for redis is <code>6379</code>, not <code>6739</code>. Also make sure you allow the security group of the EC2 node that you are using to connect to your redis instance into your cache security group.</p>
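<p>If you'd rather open that security-group rule from the command line, here is a minimal sketch, assuming a VPC-based cluster whose cache node uses an EC2 security group (the group IDs are hypothetical placeholders):</p>
<pre><code># allow the EC2 node's security group (sg-yyyyyyyy) to reach the
# cache's security group (sg-xxxxxxxx) on the redis port
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp \
    --port 6379 \
    --source-group sg-yyyyyyyy
</code></pre>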
<p>Also, AWS now supports accessing your cluster directly; more info <a href="https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html" rel="noreferrer">here</a>.</p>
|
<p>I've got the following ReplicationController JSON defined:</p>
<pre><code>{
  "id": "PHPController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "php"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "PHPController",
          "volumes": [{ "name": "wordpress", "path": "/mnt/nfs/wordpress_a", "hostDir": "/mnt/nfs/wordpress_a"}],
          "containers": [{
            "name": "php",
            "image": "internaluser/php53",
            "ports": [{"containerPort": 80, "hostPort": 9021}],
            "volumeMounts": [{"name": "wordpress", "mountPath": "/mnt/nfs/wordpress_a"}]
          }]
        }
      },
      "labels": {"name": "php"}
    }
  },
  "labels": {"name": "php"}
}
</code></pre>
<p>The container starts correctly when run with "docker run -t -i -p 0.0.0.0:9021:80 -v /mnt/nfs/wordpress_a:/mnt/nfs/wordpress_a:rw internaluser/php53".</p>
<p>/mnt/nfs/wordpress_a is an NFS share, mounted on all of the minions. Each minion has full RW access and I have verified that the share is present.</p>
<p>After creating the pod containers with the Replication Controller, I can see that the volume was never actually bound, and/or incorrectly mounted:</p>
<pre><code> "Volumes": {
"/mnt/nfs/wordpress_a": "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7"
},
"VolumesRW": {
"/mnt/nfs/wordpress_a": true
}
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LxcConf": null,
"Privileged": false,
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "9021"
}
]
},
</code></pre>
<p>I find it strange that the container believes /mnt/nfs/wordpress_a is mapped to "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7".</p>
<p>From the kubelet log:</p>
<p>Desired [10.101.4.15]: [{Namespace:etcd Name:c823da9e-4437-11e4-a3b1-0050568421eb Manifest:{Version:v1beta1 ID:c823da9e-4437-11e4-a3b1-0050568421eb UUID:c823da9e-4437-11e4-a3b1-0050568421eb Volumes:[{Name:wordpress Source:}] Containers:[{Name:php Image:internaluser/php53 Command:[] WorkingDir: Ports:[{Name: HostPort:9021 ContainerPort:80 Protocol:TCP HostIP:}] Env:[{Name:SERVICE_HOST Value:10.1.1.1}] Memory:0 CPU:0 VolumeMounts:[{Name:wordpress ReadOnly:false MountPath:/mnt/nfs/wordpress_a}] LivenessProbe: Lifecycle: Privileged:false}] RestartPolicy:{Always:0xa99a20 OnFailure: Never:}}}]</p>
<p>Does anyone have experience with this sort of thing? I've been driving myself crazy troubleshooting this. Thanks!</p>
| <p>Solved. The volumes syntax was incorrect.
<a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1446" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/issues/1446</a></p>
|
<p>I'm trying to evaluate <a href="https://github.com/GoogleCloudPlatform/kubernetes" rel="noreferrer">Kubernetes</a>. I'm interested in running Kubernetes on a CoreOS cluster, but the official documentation doesn't have much; it only has two references to the CoreOS blog. I'm currently following the guide <a href="https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1/" rel="noreferrer">running kubernetes example on CoreOS part 1</a>.</p>
<p><strong>My apiserver.service:</strong></p>
<pre><code>[Unit]
ConditionFileIsExecutable=/opt/kubernetes/bin/apiserver
Description=Kubernetes API Server
[Unit]
ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/controller-manager \
--etcd_servers=http://127.0.0.1:4001 \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/apiserver \
--address=127.0.0.1 \
--port=8080 \
--etcd_servers=http://127.0.0.1:4001 \
--machines=127.0.0.1 \
--logtostderr=true
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
</code></pre>
<p><strong>My controller-manager.service:</strong></p>
<pre><code>[Unit]
ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/controller-manager \
--etcd_servers=http://127.0.0.1:4001 \
--master=127.0.0.1:8080 \
--logtostderr=true
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
</code></pre>
<p><strong>My kubelet.service:</strong></p>
<pre><code>[Unit]
ConditionFileIsExecutable=/opt/kubernetes/bin/kubelet
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/kubelet \
--address=127.0.0.1 \
--port=10250 \
--hostname_override=127.0.0.1 \
--etcd_servers=http://127.0.0.1:4001 \
--logtostderr=true
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
</code></pre>
<p><strong>My proxy.service</strong></p>
<pre><code>[Unit]
ConditionFileIsExecutable=/opt/kubernetes/bin/proxy
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true
Restart=on-failure
RestartSec=1
[Install]
WantedBy=multi-user.target
</code></pre>
<p>The problem arises when I create a Kubernetes redis pod.</p>
<p><strong>When I execute command:</strong></p>
<pre><code>/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods
</code></pre>
<p><strong>the error output appears after a long wait:</strong></p>
<pre><code>{Kind:"", ID:"", CreationTimestamp:"", SelfLink:"", ResourceVersion:0x0}, Status:"failure", Details:"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"redis\", CreationTimestamp:\"\", SelfLink:\"\", ResourceVersion:0x0}, Labels:map[string]string{\"name\":\"redis\"}, DesiredState:api.PodState{Manifest:api.ContainerManifest{Version:\"v1beta1\", ID:\"redis\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"redis\", Image:\"registry.vc.datys.cu:5000/redis\", Command:[]string(nil), WorkingDir:\"\", Ports:[]api.Port{api.Port{Name:\"\", HostPort:6379, ContainerPort:6379, Protocol:\"\", HostIP:\"\"}}, Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:api.LivenessProbe{Enabled:false, Type:\"\", HTTPGet:api.HTTPGetProbe{Path:\"\", Port:\"\", Host:\"\"}, InitialDelaySeconds:0}}}}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}}", Code:500}
</code></pre>
<p>NOTE: When I execute <code>sudo systemctl status proxy</code> it returns:</p>
<pre><code>● proxy.service - Kubernetes Proxy
Loaded: loaded (/etc/systemd/system/proxy.service; disabled)
Active: active (running) since Fri 2014-08-08 14:21:36 UTC; 8s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1036 (proxy)
CGroup: /system.slice/proxy.service
└─1036 /opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true
Aug 08 14:21:42 core-01 proxy[1036]: I0808 14:21:42.074694 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true]
Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074763 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57]
Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074791 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57]
Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075337 01036 logs.go:38] etcd DEBUG: get [registry/services/specs http://127.0.0.1:4001] [%!s(MISSING)]
Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075501 01036 logs.go:38] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/services...rted=true]
Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075528 01036 logs.go:38] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/...thod GET]
Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078524 01036 logs.go:38] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registr...rted=true]
Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078824 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true]
Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078897 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57]
Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078925 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57]
Hint: Some lines were ellipsized, use -l to show in full.
</code></pre>
<p>And when I execute <code>/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 list /pods</code> it returns:</p>
<pre><code>Name Image(s) Host Labels
---------- ---------- ---------- ----------
redis dockerfile/redis 127.0.0.1/ name=redis
</code></pre>
<p>What's the problem? I appreciate any idea or collaboration.</p>
<p><strong>EDIT 1:</strong></p>
<p><strong>My redis.json:</strong></p>
<pre><code>{
"id": "redis",
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "redis",
"containers": [{
"name": "redis",
"image": "registry.vc.datys.cu:5000/redis",
"ports": [{
"containerPort": 6379,
"hostPort": 6379
}]
}]
}
},
"labels": {
"name": "redis"
}
}
</code></pre>
| <p>The errors you are seeing in your log indicate that the pod you are trying to create has a port conflict with the Kubernetes API server: both are trying to use port 8080. Fix this by changing the pod to run on another port.</p>
<p>You'll have to clean things up first: remove the redis pod using the kubecfg command, update the pod.json file, and try again.</p>
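<p>A minimal sketch of the cleanup step, assuming the same kubecfg invocation style used in the question:</p>
<pre><code>/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 delete pods/redis
</code></pre>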
<p>Also, please follow the quickstart guide found at <a href="https://github.com/kelseyhightower/kubernetes-coreos" rel="nofollow">https://github.com/kelseyhightower/kubernetes-coreos</a>. Kubernetes is a fast-moving project, so that blog will become out of date fairly quickly.</p>
|
<p>Can you think of Azure Resource Manager as the equivalent to what kubernetes is for Docker?</p>
| <p>I think that the two are slightly different (caveat: I have only cursory knowledge of Resource Manager).</p>
<p>Azure Resource Manager lets you think about a collection of separate resources as a single composite application. Much like Google's Deployment Manager. It makes it easier to create repeatable deployments, and make sense of a big collection of heterogeneous resources as belonging to a single app.</p>
<p>Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type (a cluster). It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. So it has an API that can be used during runtime to deploy and wire in your containers, dynamically scale up/scale down your cluster, and it will make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).</p>
|
<p>On AWS, I'm hosting Multiple (totally different) Domains on EC2 covered by an ELB on top. I already have 1 Wildcard SSL Cert for 1 Domain and its childs. (xxxx.site1.com)</p>
<p>Then now can I add one more Single SSL Cert (on same ELB) for 1 another different Domain, like (www.site2.com) please? </p>
<p>I'm asking this because some articles say it won't work and will just crash.<br /></p>
<p>Please kindly advise.</p>
| <p>No. The only way you could do it is if you use a second port for HTTPS connections (other than 443), which doesn't apply to real-world scenarios since 443 is the default port for HTTPS.</p>
<p>Having said that, you can simply create a second ELB and assign your second wildcard certificate to it. You can also forward your traffic to the same backend server as the one where the first ELB is forwarding its traffic to.</p>
<p>Hope this helps.</p>
|
<p>I'm trying to follow the directions to get the Google Cloud Platform kubernetes GuestBook example running. I've got a "kubernetes-guestbook-example" project ID with billing enabled in the Google Developer's Console under my account, and I do a "gcloud auth login" to ensure I'm running as that account. Step zero says that I should make sure I have "turned up a Kubernetes cluster". I think I've done this. I also run "gcloud config set project kubernetes-guestbook-example". Then I run "hack/dev-build-and-up.sh" from the kubernetes subdirectory and it produces:</p>
<pre><code>Building local go components
Building release tree
Packaging release
Building launch script
Uploading to Google Storage
Release pushed (devel/jamesfremen/r20141001-192247).
Starting cluster using provider: gce
Release: gs://kubernetes-releases-68782/devel/jamesfremen/r20141001-192247
Project: kubernetes-guestbook-example (autodetected from gcloud config)
Error: The resource 'projects/kubernetes-guestbook-example' was not found
</code></pre>
<p>It seems to be a minor path issue. I think it's looking for examples/guestbook but I'm not sure how to configure it properly. Any help would be appreciated!</p>
| <p>It's been a while, but I recall that the problem occurred because billing must be enabled for the application in the Google Developer Console. An easy catch once you get used to the platform, but non-obvious when you're climbing the learning curve.</p>
|
<p>So I figured I should start using Ansible Galaxy when possible, instead of writing my own roles. I just installed my first role and it was installed to <code>/etc/local/ansible/roles</code> (I am on OSX).</p>
<p>Now I wonder how you install these roles where you actually need them. Do I just copy the role to where I need it, or is there an Ansible way of doing it?</p>
| <p>Yes, you would copy them according to a sample project structure:</p>
<pre><code>site.yml
webservers.yml
fooservers.yml
kubernetes.yaml
roles/
    common/
        files/
        templates/
        tasks/
        handlers/
        vars/
        meta/
    webservers/
        files/
        templates/
        tasks/
        handlers/
        vars/
        meta/
    kubernetes/
        files/
        templates/
        tasks/
        handlers/
        vars/
        meta/
</code></pre>
<p>or you can just run <code>ansible-galaxy</code> with the <code>-p ROLES_PATH</code> or <code>--roles-path=ROLES_PATH</code> option to install it under <code>/your/project/root</code></p>
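<p>For example (the role name is a hypothetical placeholder):</p>
<pre><code>ansible-galaxy install --roles-path /your/project/root/roles username.rolename
</code></pre>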
<p>You can also use the <code>/etc/local/ansible</code> directory as your project root if you'd like to.</p>
<p>Additionally, you can get help by running the command <code>ansible-galaxy install --help</code></p>
|
<p>What exactly is the difference between Apache's Mesos and Google's Kubernetes?
I understand both are server cluster management software. Can anyone elaborate where the main differences are - when would which framework be preferred?</p>
<p>Why would you want to use <a href="http://googlecloudplatform.blogspot.ch/2014/08/mesosphere-collaborates-with-kubernetes-and-google-cloud-platform.html">Kubernetes on top of Mesosphere</a>?</p>
| <p>Kubernetes is an open source project that brings 'Google style' cluster management capabilities to the world of virtual machines, or 'on the metal' scenarios. It works very well with modern operating system environments (like CoreOS or Red Hat Atomic) that offer up lightweight computing 'nodes' that are managed for you. It is written in Golang and is lightweight, modular, portable and extensible. We (the Kubernetes team) are working with a number of different technology companies (including Mesosphere who curate the Mesos open source project) to establish Kubernetes as the standard way to interact with computing clusters. The idea is to reproduce the patterns that we see people needing to build cluster applications based on our experience at Google. Some of these concepts include:</p>
<ul>
<li><em>pods</em> — a way to group containers together</li>
<li><em>replication controllers</em> — a way to handle the lifecycle of containers</li>
<li><em>labels</em> — a way to find and query containers, and</li>
<li><em>services</em> — a set of containers performing a common function. </li>
</ul>
<p>So with Kubernetes alone you will have something that is simple, easy to get up-and-running, portable and extensible that adds 'cluster' as a noun to the things that you manage in the lightest weight manner possible. Run an application on a cluster, and stop worrying about an individual machine. In this case, cluster is a flexible resource just like a VM. It is a logical computing unit. Turn it up, use it, resize it, turn it down quickly and easily. </p>
<p>With Mesos, there is a fair amount of overlap in terms of the basic vision, but the products are at quite different points in their lifecycle and have different sweet spots. Mesos is a distributed systems kernel that stitches together a lot of different machines into a logical computer. It was born for a world where you own a lot of physical resources to create a big static computing cluster. The great thing about it is that lots of modern scalable data processing applications run well on Mesos (Hadoop, Kafka, Spark) and it is nice because you can run them all on the same basic resource pool, along with your new age container packaged apps. It is somewhat more heavyweight than the Kubernetes project, but is getting easier and easier to manage thanks to the work of folks like Mesosphere. </p>
<p>Now what gets really interesting is that Mesos is currently being adapted to add a lot of the Kubernetes concepts and to support the Kubernetes API. So it will be a gateway to getting more capabilities for your Kubernetes app (high availability master, more advanced scheduling semantics, ability to scale to a very large number of nodes) if you need them, and is well suited to run production workloads (Kubernetes is still in an alpha state).</p>
<p>When asked, I tend to say:</p>
<ol>
<li><p>Kubernetes is a great place to start if you are new to the clustering world; it is the quickest, easiest and lightest way to kick the tires and start experimenting with cluster oriented development. It offers a very high level of portability since it is being supported by a lot of different providers (Microsoft, IBM, Red Hat, CoreOs, MesoSphere, VMWare, etc).</p></li>
<li><p>If you have existing workloads (Hadoop, Spark, Kafka, etc), Mesos gives you a framework that lets you interleave those workloads with each other, and mix in some of the new stuff including Kubernetes apps.</p></li>
<li><p>Mesos gives you an escape valve if you need capabilities that are not yet implemented by the community in the Kubernetes framework. </p></li>
</ol>
|
<p>I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together.</p>
<p>Currently I have tried the following configuration:</p>
<pre><code>{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [{
        "name": "type1",
        "image": "local/image"
      },
      {
        "name": "type2",
        "image": "local/secondary"
      }]
    }
  },
  "labels": {
    "name": "imageTest"
  }
}
</code></pre>
</code></pre>
<p>However when I execute <code>kubecfg -c app.json create /pods</code> I get the following error:</p>
<pre><code>F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B
ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi
partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp":
null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec
:0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa
nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap
i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n
il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\
", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil
), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500}
</code></pre>
<p>How can I modify the configuration accordingly?</p>
<p>Running kubernetes on a vagrant vm (yungsang/coreos).</p>
| <p>The error in question here is "failed to find fit". This generally happens when you have a port conflict (trying to use the same <code>hostPort</code> too many times), or perhaps because you don't have any worker nodes/minions.</p>
<p>I'd suggest you either use the Vagrant file that is in the Kubernetes git repo (see <a href="http://kubernetes.io" rel="nofollow">http://kubernetes.io</a>) as we have been trying to make sure that stays working as Kubernetes is under very active development. If you want to make it work with the CoreOS single machine set up, I suggest you hop on IRC (#google-containers on freenode) and try and get in touch with Kelsey Hightower.</p>
|
<p>I am not sure whether what I am trying to do is possible or the correct way.</p>
<p>One of my colleagues spun up a kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access.</p>
<p>After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using <code>'gcutil ssh --zone us-central1-b kubernetes-master'</code></p>
<p>But when I try to list the existing pods using <code>'cluster/kubecfg.sh list pods'</code>
I see</p>
<pre><code>"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
</code></pre>
<p>I tried to debug from my side but failed to come any conclusion. Any sort of clue will be helpful.</p>
| <p>You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.</p>
|
<p>I tried to create a new cluster in Container Engine in the Google Developers Console.</p>
<p>It finished pretty quickly with a yellow triangle with an exclamation point. I'm assuming that means it didn't work.</p>
<p>Any idea what I could be doing wrong?</p>
| <p>There's a few things that could go wrong. The best option to figure out what's wrong in your situation is to try using the gcloud command line tool, which gives better error information. Information about how to install and use it is in <a href="https://cloud.google.com/container-engine/docs/before-you-begin#install_the_gcloud_command_line_interface" rel="nofollow">Container Engine's documentation</a>.</p>
<p>Other than the default network being removed (as mentioned by Robert Bailey), you may be trying to create more VM instances than you have quota for. You can check what your quota is on the developer console under Compute > Compute Engine > Quota. You're most likely to go over quota on either CPUs or in-use IP addresses, since each VM created is given an ephemeral IP address.</p>
|
<p>How do I run a docker image that I built locally on <a href="https://cloud.google.com/container-engine/">Google Container Engine</a>?</p>
| <p>You can push your image to <a href="https://cloud.google.com/tools/container-registry/" rel="noreferrer">Google Container Registry</a> and reference them from your pod manifest.</p>
<h2>Detailed instructions</h2>
<p>Assuming you have <code>DOCKER_HOST</code> properly set up, a GKE cluster running the latest version of Kubernetes, and the <a href="https://cloud.google.com/sdk/" rel="noreferrer">Google Cloud SDK</a> installed.</p>
<ol>
<li><p>Set up the gcloud tool and your configuration</p>
<pre><code>gcloud components update kubectl
gcloud config set project <your-project>
gcloud config set compute/zone <your-cluster-zone>
gcloud config set container/cluster <your-cluster-name>
gcloud container clusters get-credentials <your-cluster-name>
</code></pre></li>
<li><p>Tag your image</p>
<pre><code>docker tag <your-image> gcr.io/<your-project>/<your-image>
</code></pre></li>
<li><p>Push your image</p>
<pre><code>gcloud docker push gcr.io/<your-project>/<your-image>
</code></pre></li>
<li><p>Create a pod manifest for your container: <code>my-pod.yaml</code></p>
<pre><code>id: my-pod
kind: Pod
apiVersion: v1
desiredState:
manifest:
containers:
- name: <container-name>
image: gcr.io/<your-project>/<your-image>
...
</code></pre></li>
<li><p>Schedule this pod</p>
<pre><code>kubectl create -f my-pod.yaml
</code></pre></li>
<li><p>Repeat from step (4) for each pod you want to run. You can have multiple definitions in a single file using a line with <code>---</code> as delimiter.</p></li>
</ol>
|
<p>I understand the Container Engine is currently on alpha and not yet complete.</p>
<p>From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold.</p>
<p>Is this somewhere on the near future roadmap?</p>
<p>Or is it possible to use the Compute Engine Autoscaler for this? (if so, how?)</p>
| <p>As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine AutoScaler.</p>
<p>There are actually two different kinds of scaling:</p>
<ol>
<li>Scaling up/down the number of worker nodes in the cluster depending on # of containers in the cluster</li>
<li>Scaling pods up and down.</li>
</ol>
<p>Since Kubernetes is an OSS project as well, we'd also like to add a Kubernetes native autoscaler that can scale replication controllers. It's definitely something that's on the roadmap. I expect we will actually have multiple autoscaler implementations, since it can be very application specific...</p>
|
<p>If I start a <a href="https://cloud.google.com/container-engine/" rel="nofollow">Google Container Engine</a> cluster like this:</p>
<pre><code>gcloud container clusters --zone=$ZONE create $CLUSTER_NAME
</code></pre>
<p>I get three worker nodes. How can I create a cluster with more?</p>
| <p>It's possible to create a different number of worker nodes by using the <code>--num-nodes</code> option when you create the cluster, like this:</p>
<pre><code>gcloud container clusters --zone=$ZONE create $CLUSTER_NAME --num-nodes=5
</code></pre>
|
<p>I'm very interested in the new Google Cloud service Google Container Engine, namely in being able to write systems that can scale using containers' properties.</p>
<p>I saw the StackOverflow questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/26899789/autoscaling-in-google-container-engine">Autoscaling in Google Container Engine</a></li>
<li><a href="https://stackoverflow.com/questions/26899733/increasing-the-cluster-size-in-google-container-engine">Increasing the cluster size in Google Container Engine</a></li>
</ul>
<p>And I understood that auto-scaling (and other features) is planned; however, I didn't see any release dates.</p>
<p>When are the referred auto-scale features/integrations be released/available?</p>
<p>When will the Google Container Engine reach Beta (leave Alpha)?</p>
<p>Does Google Container Engine have a roadmap with release dates that can be consulted?</p>
| <p>Kubernetes roadmap is here: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/roadmap.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/roadmap.md</a></p>
<p>Unfortunately, Google Container Engine hasn't released a roadmap yet. </p>
|
<p>There is a problem: I can't link my pod container with persistent storage.</p>
<p>This is the config of my pod, where <code>elastic</code> is the name of the attached disk (same region, mounted and formatted as it should be). When I start the pod with this config I get this error:</p>
<p><code>Unable to mount volumes for pod elastic.etcd</code></p>
<p>I can link my container to any other type of volume, either <code>emptyDir</code> or <code>hostDir</code>, and all work fine, but not in the case of the mounted disk.
And I really can't find a good example of <code>persistentDisk</code> volumes.</p>
<pre><code>id: elastic
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: elastic
    volumes:
      - name: elastic-persistent-storage
        source:
          persistentDisk:
            pdName: elastic
            fsType: ext4
    containers:
      - name: elastic
        image: dockerfile/elasticsearch
        cpu: 1000
        volumeMounts:
          - name: elastic-persistent-storage
            mountPath: /data
        ports:
          - name: elastic
            containerPort: 9200
            hostPort: 9200
labels:
  name: elastic
  role: storage
</code></pre>
<p><code>elastic</code> is the name of the disk in the same project and same region, attached to the master node of the cluster. It is also formatted and mounted.</p>
<p>Thanks!</p>
| <p>There is an example of mounting a PD in the github documentation: <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#creating-a-pd" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#creating-a-pd</a></p>
<p>PDs in GCE can only be attached to a single VM (in read/write mode), so if the disk is already attached to your master then it will not be possible to also attach it to the node on which your pod is scheduled. Try detaching the PD from the master and then scheduling the pod. </p>
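<p>A minimal sketch of that detach step with gcloud, assuming the instance and disk names from the question (adjust the zone to yours):</p>
<pre><code>gcloud compute instances detach-disk kubernetes-master --disk elastic --zone us-central1-b
</code></pre>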
|
<p>I am not sure whether what I am trying to do is possible or the correct way.</p>
<p>One of my colleagues spun up a kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access.</p>
<p>After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using <code>'gcutil ssh --zone us-central1-b kubernetes-master'</code></p>
<p>But when I try to list the existing pods using <code>'cluster/kubecfg.sh list pods'</code>
I see</p>
<pre><code>"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
</code></pre>
<p>I tried to debug from my side but failed to come any conclusion. Any sort of clue will be helpful.</p>
| <p>It is probably due to a not-yet-implemented feature; see this issue:
<a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1886" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/issues/1886</a></p>
<p>You can copy the files from /usr/share/nginx/... on the master
into your home dir and try again.</p>
|
<p>I installed an 8-node kubernetes cluster (1 master + 7 minions) but I faced a networking problem among the minions.</p>
<p>I installed my cluster according to <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_manual_config.md" rel="nofollow">this step-by-step Fedora manual</a>, so I use Fedora 20 with its testing repository to get kubernetes binaries.</p>
<p>After installing, I wanted to try the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/README.md" rel="nofollow">guestbook example</a>, but it seems to me there is a problem with the inter-container networking.</p>
<p>Although the containers/pods are in the running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via netcat), the frontend that is not on the same host as the redis master cannot reach it. The frontend's PHP gives back a network exception.</p>
<p>Can anybody help me why the containers cannot reach each other among the hosts?</p>
<p>I hope I could describe my setup enough accurately and thanks in advance.</p>
| <p>The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes.</p>
<p>For kubernetes to work, the following network set up must be satisfied:</p>
<ol>
<li>Every container should be able to talk to every other container, even across nodes. This means also that the bridge IP range for those containers must not overlap.</li>
<li>Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes.</li>
<li>It is not necessary (but useful) if computers on the network that aren't part of the cluster can reach the containers directly.</li>
</ol>
<p>There are a lot of ways to achieve this -- for instance the set up for vagrant sets up GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch you can probably just do a big layer 2 network w/ bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your set up) is to use something like <a href="https://github.com/coreos/flannel" rel="nofollow">flannel</a>.</p>
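<p>As a concrete starting point with flannel, a minimal sketch of the usual bootstrap for that era (the subnet is an arbitrary example): you publish the overlay network config to etcd once, then run <code>flanneld</code> on every node so each host's docker bridge gets a non-overlapping subnet.</p>
<pre><code># publish the overlay network configuration to etcd (run once, anywhere in the cluster)
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
</code></pre>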
<p>We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.</p>
|
<p>I have read some introduction of these projects, but still cannot get a clear idea of the difference between Kubernetes and Flynn/Deis. Can anyone help?</p>
| <p>Kubernetes is really three things:</p>
<ul>
<li>A way to dynamically schedule containers (actually, sets of containers called pods) to a cluster of machines.</li>
<li>Manage and horizontally scale a lot of those pods using labels and helpers (ReplicationController)</li>
<li>Communicate between sets of pods via services, expose a set of pods externally on a public IP and easily consume external services. This is necessary to deal with the horizontal scaling and the dynamic nature of how pods get placed/scheduled.</li>
</ul>
<p>This is all very much a tool set for managing compute across a set of machines. It <em>isn't</em> a full application PaaS. Kubernetes doesn't have any idea what an "application" is. Generally PaaS systems provide an easy way to take code and get it deployed and managed as an application. In fact, I expect to see specialized PaaS systems built on top of Kubernetes -- that is what RedHat OpenShift is doing.</p>
<p>One way to think about Kubernetes is as a system for "logical" infrastructure (vs. traditional VM cloud systems, which are oriented around managing "physical" machines).</p>
|
<p>I'm running (from Windows 8.1) a Vagrant VM for CoreOS (<a href="https://vagrantcloud.com/yungsang/boxes/coreos" rel="nofollow">yungsang/coreos</a>).</p>
<p>I installed kubernetes according to the guide I found <a href="https://coreos.com/blog/running-kubernetes-example-on-CoreOS-part-1" rel="nofollow">here</a> and created the json for the pod using my images.</p>
<p>When I execute <code>sudo ./kubecfg list /pods</code> I get the following error:</p>
<pre><code>F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused
</code></pre>
<p>Same goes for <code>sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods</code></p>
<p>EDIT: <strong>Update</strong></p>
<p>Instead of running the commands myself I integrated them into the vagrant file (as <a href="http://qiita.com/yungsang/items/3cbab1c9a231a995a23d" rel="nofollow">such</a>).</p>
<p>This makes kubernetes work fine. HOWEVER after some time my vagrant ssh connection gets closed off. I reconnect and any kubernetes commands I specify result in the same error as above.</p>
<p>EDIT 2: <strong>Update</strong></p>
<p>I managed to get it to run again; however, I am unsure whether it will run smoothly.</p>
<p>I had to re-execute the following commands.</p>
<pre><code>sudo systemctl start etcd
sudo systemctl start download-kubernetes
sudo systemctl start apiserver
sudo systemctl start controller-manager
sudo systemctl start kubelet
sudo systemctl start proxy
</code></pre>
<p>I believe it is in fact the apiserver that needs restarting</p>
<p>What is the source of this "timeout"? (Where can I find any logs for this matter?)</p>
| <p>Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides" rel="nofollow">official installation guides</a>. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself.</p>
<p>The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like:</p>
<p><code>./kubectl get pods</code>.</p>
<p>With kubectl you will also need to set an env variable so kubectl knows how to talk to the apiserver:</p>
<p><code>KUBERNETES_MASTER=http://IPADDRESS:8080</code>.</p>
<p>The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a <code>kube-apiserver</code> unit you can look at what's going on by running:</p>
<p><code>journalctl -f -u kube-apiserver</code></p>
<p>from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with:</p>
<p><code>systemctl start kube-apiserver</code></p>
|
<p>I have been working with Docker the last few days and I have created a basic Docker image with a Dockerfile. It just starts a web server. Now, I have been reading further, created an account at Google App Engine, and want to deploy this image to Kubernetes.</p>
<p>I feel lost.</p>
<p>I don't understand how the Dockerfiles that I have locally are meant to be transferred to this platform. Don't I use them at all? I have seen some examples of Pod configuration, but as far as I can see they refer to images on Docker Hub?</p>
<p>Could someone point me in the right direction on what to do?</p>
| <p>I have not worked with Kubernetes, but as far as I know, I think you have to upload your images to a docker image repository. You can use the official hub and upload your images there (public images are free to upload, and you can have one private image for free or pay for more).
Also you can have a private docker image repository, and configure kubernetes to use it.</p>
<p>As I understand from <a href="https://stackoverflow.com/questions/26788485/how-do-i-run-private-docker-images-on-google-container-engine">this post</a>, you can back a locally-running docker repository with a Google Cloud Storage bucket, then configure a kubernetes pod to run that repository, and all the rest of your pods can consume docker images from there.</p>
|
<p>What is the usual way to organize pods in a cluster in Kubernetes?</p>
<p>I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? </p>
<p>So my question is: Would you create one cluster that holds all these services, or multiple clusters?
The same question applies to production, QA, etc. environments. Do I create one cluster for each environment, or do I have them in the same cluster?</p>
| <p>To answer your first question, multiple pods can be scheduled on each node. </p>
<p>One of the best parts about Google Container Engine / Kubernetes is that it is really flexible, so you can structure your services in the way that works best for you. For your specific use case, I think that a single cluster would make sense because all of the applications that you want to run are closely related. You'll want to think a bit about choosing an appropriate size for your cluster (both the number of VMs and the size of each VM) to fit your entire workload. </p>
<p>You can experiment with creating a single cluster for both your QA and Prod workloads, or you can split them across clusters. Until Kubernetes has better support for QoS (for scheduling pods), it probably makes more sense to keep the QA environment separate (and probably sized more modestly). </p>
|
<p>How do pods that are controlled by a replication controller and "hidden" behind a service in Kubernetes write/read data? If I have an application that receives images from the user that need to be persisted, where do I store them? Because of the service in front, I have no control over which node they are stored at if I use volumes.</p>
| <p>I think the "simple" answer to your question is that you will need shared storage under you Kubernetes cluster, so that every pods access the same data. Then it wouldn't matter where the pods are running and which pod is actually executing the service.</p>
<p>Maybe another solution would be <a href="https://github.com/clusterhq/flocker" rel="nofollow noreferrer">Flocker</a>; they describe themselves, in short:</p>
<blockquote>
<p>Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.</p>
</blockquote>
<p>Anyway I think the storage question on Kubernetes or any other dockerized infrastructure is very interesting. </p>
<p>It looks like Google App Engine doesn't support sharing a datastore between apps by default, as pointed out in this <a href="https://stackoverflow.com/questions/8956230/can-i-access-datastore-entities-of-my-other-google-app-engine-applications">SO question</a>.</p>
|
<p>It seems like the best way to deploy a external facing application on Google Cloud would be to create an external load balancer with this line in the service configuration: </p>
<p><code>
{
...
"createExternalLoadBalancer": true
...
}
</code></p>
<p>This doesn't seem to work for AWS. I'm getting the following error when running the service create:</p>
<p><code>requested an external service, but no cloud provider supplied</code></p>
<p>I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, but so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB.</p>
<p>What's the recommended way of doing this on AWS?</p>
| <p>This is still a work in progress.</p>
<p>Please see:
<a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/2672" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/pull/2672</a></p>
<p>That pull request is a proposal that starts to add support for AWS ELBs to Kubernetes; we're working to get it integrated.</p>
<p>Thanks!</p>
|
<p>I am trying to run two Docker containers in the same Kubernetes pod, and I want one of the containers to always run before the other. I remember learning about specifying such a dependency in the pod configuration file, but cannot find that now. The Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod" rel="noreferrer">documentation</a> does not explain it either. </p>
<p>Here is the example pod configuration with two containers I adopted from another <a href="https://stackoverflow.com/questions/25741479/how-to-write-a-kubernetes-pod-configuration-to-start-two-containers">Stackoverflow question</a>. How should I change this pod configuration to run container <code>type1</code> before <code>type2</code>?</p>
<pre><code>{
  "id": "podId",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "podId",
      "containers": [{
        "name": "type1",
        "image": "local/image"
      },
      {
        "name": "type2",
        "image": "local/secondary"
      }]
    }
  },
  "labels": {
    "name": "imageTest"
  }
}
</code></pre>
</code></pre>
<p>Thanks in advance,
Nodir.</p>
| <p>Kubernetes currently does not allow specification of container startup dependencies.</p>
<p>There has been some discussion in GitHub issues <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1996">1996</a> and <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1589">1589</a> that might help you out.</p>
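<p>In the meantime, a common workaround is to have the dependent container wait at startup until the other one is reachable, e.g. via a small entrypoint wrapper. A minimal sketch (the port and start command are hypothetical placeholders; containers in a pod share a network namespace, so localhost works):</p>
<pre><code>#!/bin/sh
# entrypoint for the "type2" container: block until "type1" is up,
# then exec the real process.
until nc -z 127.0.0.1 8080; do
  sleep 1
done
exec /usr/local/bin/start-type2
</code></pre>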
|
<p>I am trying to connect to a Docker container on Google Container Engine(GKE) from my local machine through the internet by TCP protocol. So far I have used Kubernetes services which gives an external IP address, so the local machine can connect to the container on GKE using the service. When we create a service, we can specify only one port and cannot specify the port range. Please see the my-ros-service.yaml below. In this case, we can access the container by 11311 port from outside of GCE.</p>
<p>However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application. </p>
<p>So far I have managed to connect to the container by creating many services which have different port while running the application. But this is not a realistic way to solve the problem. </p>
<p>My question is that:</p>
<p>How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service?</p>
<p>If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container?</p>
<p>Any advice or information you could provide would be greatly appreciated.</p>
<p>Thank you in advance.</p>
<p>my-ros-service.yaml</p>
<pre><code>kind: Service
apiVersion: v1beta1
id: my-ros-service
port: 11311
selector:
  name: my-ros
containerPort: 11311
createExternalLoadBalancer: true
</code></pre>
| <p>I don't think there is currently a better solution than what you are doing. There is already a related issue, <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1802" rel="noreferrer">kubernetes issue 1802</a>, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.</p>
|
<p>We are looking into using Docker plus either Mesos/Marathon or Kubernetes for hosting a cluster. However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. All of the ones that I have seen need to know about at least one other node before they can join the cluster. Some need to know about every node. However, in Kubernetes and Mesos, there's no way to know what those IP addresses are ahead of time.</p>
<p>So, are there any best practices for this? If it helps, some technologies we're looking into deploying as containers are ElasticSearch, ActiveMQ, and MongoDB. There may be others.</p>
| <blockquote>
<p>However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly.</p>
</blockquote>
<p>I think you're talking about HA/replicated/sharded apps here.</p>
<p>At the moment, in kubernetes, you can accomplish this by making an api call listing all the "endpoints" of the service; that will tell you where your peers are running.</p>
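<p>A minimal sketch of that call; the path is assumed from the v1beta1 REST conventions of that era and the service name is a placeholder, so check the API docs for your version:</p>
<pre><code>curl http://<master>:8080/api/v1beta1/endpoints/my-service
</code></pre>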
<p>We'd eventually like to support the use case you describe in a more first-class manner.</p>
<p>I filed <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/3419" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/issues/3419</a> to maybe get something more standardized started here.</p>
|
<p>I've read that AWS does not support Kubernetes and builds their own Docker orchestration engine, EC2 Container Service. However, on the Kubernetes getting-started page there is a guide on how to run Kubernetes on AWS:
<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md</a></p>
<p>Which is right?</p>
| <p>You can install Kubernetes on a normal Amazon <a href="http://aws.amazon.com/ec2/" rel="nofollow noreferrer">EC2</a> server. </p>
<p>The new container service is a separate offering by Amazon, called <a href="http://aws.amazon.com/ecs/" rel="nofollow noreferrer">ECS</a>. </p>
<p>EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">https://aws.amazon.com/eks/</a></p>
<blockquote>
<p>Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.</p>
</blockquote>
|
<p><a href="https://github.com/kubernetes/kubernetes" rel="noreferrer">Kubernetes</a> is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that its somehow figures out what VMs are available/capable of running your Docker container, and then deploys them there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that.</p>
<p><strong>So what is Kubernetes, <em>exactly</em>, and what are some <em>specific</em> problems that it solves?</strong></p>
| <p>The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster.</p>
<p>Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from other layers so that you can scale/update/maintain them independently.</p>
<p>While it is possible to do many of these things at the application layer, such solutions tend to be one-off and brittle; it's much better to have separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.</p>
|
<p>I am trying to consume an event stream provided by the <a href="https://github.com/googlecloudplatform/kubernetes">Kubernetes</a>
api using the <code>requests</code> module. I have run into what looks like a
buffering problem: the <code>requests</code> module seems to lag by one event.</p>
<p>I have code that looks something like this:</p>
<pre><code>r = requests.get('http://localhost:8080/api/v1beta1/watch/services',
                 stream=True)

for line in r.iter_lines():
    print 'LINE:', line
</code></pre>
<p>As Kubernetes emits event notifications, this code will only display
the last event emitted when a new event comes in, which makes it
almost completely useless for code that needs to respond to service
add/delete events.</p>
<p>I have solved this by spawning <code>curl</code> in a subprocess instead of using
the <code>requests</code> library:</p>
<pre><code>p = subprocess.Popen(['curl', '-sfN',
                      'http://localhost:8080/api/watch/services'],
                     stdout=subprocess.PIPE,
                     bufsize=1)

for line in iter(p.stdout.readline, b''):
    print 'LINE:', line
</code></pre>
<p>This works, but at the expense of some flexibility. Is there a way to
avoid this buffering problem with the <code>requests</code> library?</p>
| <p>This behavior is due to a buggy implementation of the <code>iter_lines</code>
method in the <code>requests</code> library.</p>
<p><code>iter_lines</code> iterates over the response content in <code>chunk_size</code> blocks
of data using the <code>iter_content</code> iterator. If there are less than
<code>chunk_size</code> bytes of data available for reading from the remote
server (which will typically be the case when reading the last line of
output), the read operation will block until <code>chunk_size</code> bytes of
data are available.</p>
<p>I have written my own <code>iter_lines</code> routine that operates correctly:</p>
<pre><code>import os

def iter_lines(fd, chunk_size=1024):
    '''Iterates over the content of a file-like object line-by-line.'''
    pending = None
    while True:
        chunk = os.read(fd.fileno(), chunk_size)
        if not chunk:
            break
        if pending is not None:
            chunk = pending + chunk
            pending = None
        lines = chunk.splitlines()
        if lines and lines[-1]:
            pending = lines.pop()
        for line in lines:
            yield line
    if pending:
        yield(pending)
</code></pre>
<p>This works because <code>os.read</code> will return less than <code>chunk_size</code> bytes
of data rather than waiting for a buffer to fill.</p>
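<p>For example, the routine can be applied to the subprocess pipe from the question (or any file-like object backed by a real file descriptor):</p>
<pre><code>p = subprocess.Popen(['curl', '-sfN',
                      'http://localhost:8080/api/watch/services'],
                     stdout=subprocess.PIPE)

for line in iter_lines(p.stdout):
    print 'LINE:', line
</code></pre>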
|
<p>Kubernetes has master and minion nodes.</p>
<p>Will (can) Kubernetes run specified Docker containers on the master node(s)?</p>
<p>I guess another way of saying it is: can a master also be a minion?</p>
<p>Thanks for any assistance.</p>
| <p>Update 2015-08-06: As of <a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/12349" rel="noreferrer">PR #12349</a> (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster. </p>
<hr>
<p>A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. </p>
<p>This is going to change significantly when <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/6087" rel="noreferrer">Issue #6087</a> is implemented. </p>
|
<p>What is the best way to deploy Google service account credentials inside a custom built CentOS Docker container for running either on Google's Container Engine or their 'container-vm'? This behavior happens automatically on the <a href="https://registry.hub.docker.com/u/google/cloud-sdk/">google/cloud-sdk</a> container, which runs debian and includes things I'm not using such as app-eng/java/php. Ideally I am trying to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without logging in and authorizing every single time a large number of these containers are launched. </p>
<p>For example, on a base Centos container running on GCE with custom code and gcloud/gsutil installed, when you run:</p>
<pre><code>docker run --rm -ti custom-container gsutil ls
</code></pre>
<p>You are prompted to run "gsutil config" to gain authorization, which I expect. </p>
<p>However, pulling down the google/cloud-sdk container onto the same GCE and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This seems to bypass running "gsutil config" when running the container on GCE to access private resources. </p>
<p>I am looking to replicate that behavior in a minimal build Centos container for mass deployment. </p>
| <p><strong>Update:</strong> as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see <a href="https://stackoverflow.com/a/31868837/3618671">this SO answer</a> for more details.</p>
<hr>
<p><strong>Old answer:</strong> One approach is to create the VM with <a href="https://cloud.google.com/compute/docs/api/how-tos/authorization" rel="nofollow noreferrer">appropriate scopes</a> (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for <a href="https://cloud.google.com/storage/docs/authentication" rel="nofollow noreferrer">Google Cloud Storage</a> and <a href="https://cloud.google.com/compute/docs/authentication" rel="nofollow noreferrer">Google Compute Engine</a>.</p>
<p>Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.</p>
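<p>For example, a sketch of creating an instance with a read-only Cloud Storage scope (the instance name and zone are placeholders; <code>storage-rw</code> would grant read-write):</p>
<pre><code>gcloud compute instances create my-container-vm \
    --image container-vm \
    --zone us-central1-a \
    --scopes storage-ro
</code></pre>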
|
<p>What patterns are valid in kubernetes for the names of containers and ports?</p>
<p>I had underscores in the names of ports and containers and got an error. Replacing the underscores with hyphens worked.</p>
| <p>Container names and port names must conform to the <a href="https://www.rfc-editor.org/rfc/rfc1123#section-2" rel="nofollow noreferrer">RFC 1123 definition</a> of a DNS label.</p>
<p>Names must be no longer than 63 characters, must start and end with a lowercase letter or number, and may contain lowercase letters, numbers, and hyphens.</p>
<p>Expressed as a regular expression:</p>
<pre><code>[a-z0-9]([-a-z0-9]*[a-z0-9])?
</code></pre>
<p>Here's the applicable code in GitHub for <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/53a9d106c4aabcd550cc32ae4e8004f32fb0ae7b/pkg/api/validation/validation.go#L280" rel="nofollow noreferrer">checking container names</a>, <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/53a9d106c4aabcd550cc32ae4e8004f32fb0ae7b/pkg/api/validation/validation.go#L133" rel="nofollow noreferrer">checking port names</a>, and <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/7f2d0c0f710617ef1f5eec4745b23e0d3f360037/pkg/util/validation.go#L26" rel="nofollow noreferrer">defining acceptable names</a>.</p>
|
<p>I would like to try Kubernetes' hooks but I didn't find any example of how I should do it. As far as I know, with these hooks I can run bash scripts in freshly created containers and prior to terminating them.</p>
<p>I've found just a short <a href="http://kubernetes.io/docs/user-guide/container-environment/" rel="noreferrer">documentation</a> which say this is possible but that's all.</p>
<p>Does somebody have an example or some useful info?</p>
<p>Thanks in advance.</p>
| <p>I don't see any examples .yaml files, but <a href="http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_lifecycle" rel="noreferrer">Kubernetes API v1</a> describes the lifecycle events in the same manner. Currently, only PostStart and PreStop are defined and you should be able to use them by adding a lifecycle section to a container in your pod definition. </p>
<p>Based on reading the API definition, something like this should work (disclaimer: I haven't actually tried it myself):</p>
<pre><code>containers:
- name: lifecycle
image: busybox
lifecycle:
postStart:
exec:
command:
- "touch"
- "/var/log/lifecycle/post-start"
preStop:
httpGet:
path: "/abort"
port: 8080
</code></pre>
|
<p>I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels.</p>
<p>For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B.</p>
<p>I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels.</p>
| <p>You can change the labels on individual pods using the <code>kubectl label</code> command, documented <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#label" rel="nofollow noreferrer">here</a>.</p>
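<p>For example, to move one of your pods from service A to service B (the pod name is a placeholder; <code>--overwrite</code> is required when changing an existing label):</p>
<pre><code>kubectl label pods my-pod-name type=B --overwrite
</code></pre>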
<p>Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes.</p>
<p>So in other words, yes you can :)</p>
|
<p>Is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty), within kubernetes? For example, based on CPU/RAM usage, or on http requests? If there is a built-in feature for that, I can't find it; or is it possible to write something like a configuration script for this? If so, where does the magic happen?</p>
| <p>Autoscaling of containers is not yet supported and is not part of the near term <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/roadmap.md" rel="nofollow">1.0 roadmap</a> for Kubernetes (meaning that the core team isn't going to add it soon but external contributions are certainly welcome). </p>
|
<p>I am relatively new to all these, but I'm having trouble getting a clear picture of the listed technologies. </p>
<p>All of these try to solve different problems, but they do have things in common too. I would like to understand what is common and what is different between them. It is likely that a combination of a few would be a great fit; if so, what are they?</p>
<p>I am listing a few of them along with questions, but it would be great if someone lists all of them in detail and answers the questions.</p>
<ol>
<li><p>Kubernetes vs Mesos: </p>
<p>This link </p>
<blockquote>
<p><a href="https://stackoverflow.com/questions/26705201/whats-the-difference-between-">What's the difference between Apache's Mesos and Google's Kubernetes</a></p>
</blockquote>
<p>provides good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more to do with the coming together of two open-source solutions?</p></li>
<li><p>Kubernetes vs Core-OS Fleet: </p>
<p>If I use kubernetes, is fleet required? </p></li>
<li><p>How does Docker-Swarm fit into all the above?</p></li>
</ol>
| <p><strong>Disclosure: I'm a lead engineer on Kubernetes</strong></p>
<p>I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, they have different histories and different approaches to solving the problem.</p>
<p>Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers.</p>
<p>In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running and managing distributed systems.</p>
<p>Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn-up a kubernetes cluster. It is not really intended to solve the same distributed application development problems, think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes, you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution.</p>
<p>Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: <a href="https://github.com/docker/docker/pull/8859" rel="nofollow noreferrer">https://github.com/docker/docker/pull/8859</a> and here: <a href="https://github.com/docker/docker/issues/8781" rel="nofollow noreferrer">https://github.com/docker/docker/issues/8781</a></p>
<p>Join us on IRC @ #google-containers if you want to talk more.</p>
|
<p>Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command.</p>
<p>Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables.</p>
<p>Is it possible to specify the command a kubernetes pod runs within the Docker image at startup? </p>
| <p>I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later.</p>
<p>There is a hint at what I wanted inside the Cassandra <a href="https://github.com/kubernetes/examples/tree/master/cassandra" rel="noreferrer">example</a>: the <code>command</code> line below the image:</p>
<pre><code>id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1
id: cassandra
containers:
- name: cassandra
image: kubernetes/cassandra
command:
- /run.sh
cpu: 1000
ports:
- name: cql
containerPort: 9042
- name: thrift
containerPort: 9160
env:
- key: MAX_HEAP_SIZE
value: 512M
- key: HEAP_NEWSIZE
value: 100M
labels:
name: cassandra
</code></pre>
<p>Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see <strong>all</strong> of the possible options for the various configuration files (pod, service, replication controller).</p>
|
<p>I see Mesosphere building all kinds of applications on the Mesos Framework like Hadoop, Kubernetes, etc., but since there is already Marathon for long-running services, why not just use that? E.g. why not set up Kubernetes nodes on a bunch of Marathon services? Why implement Kubernetes directly on the Framework API? Because scheduling is more efficient that way? The same question goes for the Jenkins implementation: why not just run Jenkins master/slaves on top of Marathon?</p>
| <p><a href="http://mesos.apache.org/" rel="noreferrer">Apache Mesos</a> is a <a href="http://en.wikipedia.org/wiki/Two-level_scheduling" rel="noreferrer">2-level scheduler</a>. The purpose of a framework is to provide the intelligence of high-level scheduling. <a href="https://mesosphere.github.io/marathon/" rel="noreferrer">Marathon</a> provides the ability to schedule a task in the cluster, queue that task for scheduling and re-queue tasks that have failed. It is great at keeping long running processes running. It is like the <code>init</code> of the datacenter. As such, it is commonly used to make sure other frameworks are up and running such as <a href="https://github.com/mesosphere/kubernetes-mesos" rel="noreferrer">Kubernetes-Mesos</a> or <a href="https://github.com/jenkinsci/mesos-plugin" rel="noreferrer">Jenkins</a>. </p>
<p>There are many applications for which this level of scheduling is insufficient. Marathon can and often is used for running things like <a href="http://kafka.apache.org/" rel="noreferrer"> Apache Kafka</a>, however this often falls short in many failure modes. Additionally, Marathon doesn't care if a task runs multiple times on the same node, yet running multiple Kafka nodes on the same slave is a bad idea. Using Hadoop as another example (since you referenced it), HDFS has several types of nodes that need to be managed: NameNode, DataNode and JournalNode. Marathon does not know the order to start these in, or whether they can be co-located on the same node or not. It doesn't know how to scale this application. The HDFS framework manages that intelligence. </p>
<p>As far as scheduling efficiency, I'm not sure that is the goal. Apache Mesos is a 2-level scheduler for a reason. It is a highly efficient 2-level scheduler. The value of 2-level scheduling is to abstract the type of concerns I described above to a higher-level scheduler (which is termed by Mesos as frameworks). Marathon is still a great way to schedule and ensure high availability to other frameworks.</p>
|
<p>Can Kubernetes automatically add or reduce the number of pods,when it monitors for increases or decreases in load (i.e. CPU load, traffic)?</p>
<p>If it's possible, how can I configure it?</p>
| <p>Auto scaling of pods is not yet available, but it's definitely on our roadmap, as mentioned by Brendan in <a href="https://stackoverflow.com/a/26914911/1925481">a previous answer</a>.</p>
<p>It could actually be easily built outside of the core of Kubernetes, using the public Kubernetes API. If you'd rather wait for someone else to build it, though, it looks like a contributor has <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md" rel="nofollow noreferrer">started planning a design for one</a>.</p>
|
<p>I have been messing around with openshift and reading as much documentation as I can. Yet, the authentication performed by default (using admin .kubeconfig) puzzles me.</p>
<p>1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig.</p>
<p>2) .kubeconfig (AFAIK) is used to authenticate against a kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?</p>
<p>Kind regards and thank you for your patience.</p>
| <p>OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift <em>can</em> run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).</p>
<p>So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair. </p>
<p>The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.</p>
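<p>For example, a quick way to check this yourself (assuming a GNU <code>base64</code> and that the config file is the admin .kubeconfig mentioned in the question):</p>
<pre><code>grep client-certificate-data .kubeconfig | awk '{print $2}' | base64 --decode
</code></pre>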
|
<p>If you know the number of minions ahead of time, I suppose you could create x number of replicas, provided that you give a host port to ensure that there is one replica per minion, but is there a way to say that a pod should run on every minion if you don't know the number of minions ahead of time (or if minions are added later)?</p>
| <p>There's been <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/1518" rel="nofollow">a lot of talk within the project</a> about creating a more straightforward feature to do this (and how to work around it in the meantime), but nothing official has been added yet, so these are really the two best options at the moment:</p>
<ol>
<li>You can put files containing the "manifest" section of your pod's config into each node's /etc/kubernetes/manifests directory. The kubelet on the machine will detect it and run it (see the sketch after this list).</li>
<li>You can use a host port and set the number of replicas to a number larger than the number of minions. It won't hurt the cluster to have a few too many replicas, as they won't be able to run anywhere until more nodes are added.</li>
</ol>
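<p>For option 1, a minimal sketch of such a manifest file, in the same v1beta1 style used elsewhere in these answers (the id, image and ports are placeholders):</p>
<pre><code>version: v1beta1
id: node-agent
containers:
  - name: node-agent
    image: my-registry/node-agent
    ports:
      - containerPort: 9100
        hostPort: 9100
</code></pre>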
|
<p>How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple.</p>
<p>My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is that correct?</p>
<p>I found a short description, but it is not terribly informative:</p>
<blockquote>
<p>The kubernetes scheduler is a policy-rich, topology-aware,
workload-specific function that significantly impacts availability,
performance, and capacity. The scheduler needs to take into account
individual and collective resource requirements, quality of service
requirements, hardware/software/policy constraints, affinity and
anti-affinity specifications, data locality, inter-workload
interference, deadlines, and so on. Workload-specific requirements
will be exposed through the API as necessary.</p>
</blockquote>
| <p>The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today.</p>
<p>The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable.</p>
<p>The scheduler is configurable. It has two types of policies, <strong>FitPredicate</strong> (see <code>master/pkg/scheduler/predicates.go</code>) and <strong>PriorityFunction</strong> (see <code>master/pkg/scheduler/priorities.go</code>). I'll describe them.</p>
<p><strong>Fit predicates</strong> are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in <code>PodSelectorMatches()</code> in <code>predicates.go</code>), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in <code>PodFitsResources()</code> in <code>predicates.go</code>; note that "requested resources" is defined as <em>pod.Spec.Containers[n].Resources.Limits</em>, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines.</p>
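<p>To illustrate <code>PodFitsResources()</code>, this is roughly what requesting resources looks like in a v1beta1 container manifest (the values are placeholders; as noted above, omitting them means you request zero and always fit):</p>
<pre><code>containers:
  - name: web
    image: nginx
    cpu: 1000          # millicores, counted against the node's CPU capacity
    memory: 536870912  # bytes (512Mi, I believe), counted against memory capacity
</code></pre>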
<p>After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in <code>LeastRequestedPriority()</code> in <code>priorities.go</code>). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty. </p>
<p>When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in <code>predicates.go</code> and <code>priorities.go</code>.</p>
|
<p>From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers?</p>
| <p><strong>Disclosure: I'm a lead engineer on Kubernetes</strong></p>
<p>Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google, built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running and managing distributed systems.</p>
<p>Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: <a href="https://github.com/docker/docker/pull/8859" rel="noreferrer">https://github.com/docker/docker/pull/8859</a> and here: <a href="https://github.com/docker/docker/issues/8781" rel="noreferrer">https://github.com/docker/docker/issues/8781</a></p>
|
<p>I'm researching:</p>
<ul>
<li><strong><a href="https://www.docker.com/" rel="nofollow">Docker Container</a></strong> </li>
<li><strong><a href="https://cloud.google.com/container-engine/" rel="nofollow">Google Containers</a></strong></li>
</ul>
<p>The goal is to use one of these two on our own physical Linux boxes in the enterprise for Dev/Prod. However, I've read that Google reimplemented LXC (Linux Containers) and use their own <a href="http://en.wikipedia.org/wiki/Lmctfy" rel="nofollow"><strong>lmctfy</strong></a> instead.</p>
<p><strong>Is it possible to use Google Containers on my Linux boxes without their cloud space?</strong>
Your experience is highly appreciated.</p>
| <p>Not sure I fully understand the question, but neither kubernetes (the framework on which Google Container Engine runs) nor docker requires a particular cloud provider. AFAIK, you can use docker containers on any linux distro, and kubernetes supports a number of configurations for running on your own machines. See the <a href="http://kubernetes.io/gettingstarted/">kubernetes getting started guides</a> for details.</p>
|
<p>I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts.</p>
<p>If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet.</p>
<p>In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen.</p>
<p>It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability.</p>
<p>But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there.</p>
<p>So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon?</p>
<p>Or should I be thinking about this another way? Do I have to write my own scheduler?</p>
| <p>Looks like there are a few ways that kubernetes decides how to spread containers, and these are in active development.</p>
<p>Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there.</p>
<p>After that, kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes.</p>
<p>It seems that a method of scheduling that considers services and various other parameters was recently implemented: <a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/2906" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/pull/2906</a> Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? <a href="https://github.com/GoogleCloudPlatform/kubernetes/pull/4674" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/pull/4674</a></p>
<p>Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up. <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/4301" rel="noreferrer">https://github.com/GoogleCloudPlatform/kubernetes/issues/4301</a> That's a bit of a big deal; it seems like over time you could get weird distributions of pods, because they stay wherever they were originally placed.</p>
<hr>
<p>Overall, I think the answer to my question at the moment is that this is an area of kubernetes that is in flux (as to be expected with pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.</p>
|
<p>I have a kubernetes cluster running with 2 minions.
Currently I make my service accessible in 2 steps:</p>
<ol>
<li>Start replication controller & pod</li>
<li>Get minion IP (using <code>kubectl get minions</code>) and set it as <em>publicIPs</em> for the Service.</li>
</ol>
<p>What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IPs of individual minions. It also seems to bypass the load balancing capabilities of kubernetes services, because clients would have to access services running on individual minions directly.</p>
<p>To set up the replication controller & pod I use:</p>
<pre class="lang-yaml prettyprint-override"><code>id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
replicas: 2
replicaSelector:
name: frontend-pod
podTemplate:
desiredState:
manifest:
version: v1beta1
id: frontend-pod
containers:
- name: sinatra-docker-demo
image: madisn/sinatra_docker_demo
ports:
- name: http-server
containerPort: 4567
labels:
name: frontend-pod
</code></pre>
<p>To set up the service (after getting the minion IPs):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
name: frontend-pod
labels:
name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]
</code></pre>
| <p>As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally. </p>
<p>One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable. </p>
|
<p>I'm just starting with Kubernetes on 2 node (master-minion) setup on 2 private cloud servers. I've installed it, did basic config and got it running some simple pods/services from the master to the minion.</p>
<p><strong>My question is:</strong></p>
<p><em>How can I use persistent storage with the pods when not using Google Cloud?</em></p>
<p>For my first tests I got a Ghost Blog pod running, but if I tear down the pod the changes are lost. I tried adding a volume to the pod, but can't actually find any documentation about how it is done when not on GC.</p>
<p>My try:</p>
<pre><code>apiVersion: v1beta1
id: ghost
kind: Pod
desiredState:
manifest:
version: v1beta1
id: ghost
containers:
- name: ghost
image: ghost
volumeMounts:
- name: ghost-persistent-storage
mountPath: /var/lib/ghost
ports:
- hostPort: 8080
containerPort: 2368
volumes:
- name: ghost-persistent-storage
source:
emptyDir: {}
</code></pre>
<p>Found this: <a href="http://amygdala.github.io/kubernetes/2015/01/13/k8s1.html" rel="nofollow noreferrer">Persistent Installation of MySQL and WordPress on Kubernetes</a></p>
<p>Can't figure it out how to add storage (NFS?) to my testing install.</p>
| <p>In the new API (<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/v1beta3/types.go" rel="nofollow">v1beta3</a>), we've added many more volume types, including <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/api/v1beta3/types.go#L217" rel="nofollow">NFS volumes</a>. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!</p>
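<p>A sketch of what that looks like for your Ghost pod in v1beta3 (the NFS server address and export path are placeholders for your own NFS server):</p>
<pre><code>apiVersion: v1beta3
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: nfs.example.com
        path: /exports/ghost
</code></pre>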
|
<p>I was trying to build kubernetes from source:
<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source</a></p>
<p>I have docker installed on my ubuntu.</p>
<pre><code>royalharsh95@ubuntu:~$ sudo docker version
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a
</code></pre>
<p>I tried after <code>sudo service docker start</code> but got the same error.</p>
<pre><code>royalharsh95@ubuntu:~$ cd kubernetes
royalharsh95@ubuntu:~/kubernetes$ make release
build/release.sh
+++ Verifying Prerequisites....
Can't connect to 'docker' daemon. please fix and retry.
Possible causes:
- On Mac OS X, boot2docker VM isn't installed or started
- On Mac OS X, docker env variable isn't set appropriately. Run:
$(boot2docker shellinit)
- On Linux, user isn't in 'docker' group. Add and relogin.
- Something like 'sudo usermod -a -G docker royalharsh95'
- RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
- On Linux, Docker daemon hasn't been started or has crashed
make: *** [release] Error 1
</code></pre>
| <p>The problem you are experiencing is caused by the fact that you are unable to access the Docker socket <code>/var/run/docker.sock</code> as a non-root user. When you run <code>sudo docker version</code> you are running the Docker client <em>as root</em> so it does not experience this problem.</p>
<p>This is a basic Unix permissions problem and there are the standard solutions:</p>
<ul>
<li>You could run the Kubernetes build as <code>root</code> with <code>sudo make release</code>.</li>
<li>You can fix the permissions on the socket such that you are able to use Docker without <code>sudo</code>.</li>
</ul>
<p>If you look at the permissions on the Docker socket, you will probably see something like:</p>
<pre><code>$ ls -l /var/run/docker.sock
srw-rw----. 1 root docker 0 Mar 17 12:26 /var/run/docker.sock
</code></pre>
<p>This shows a socket that is readable by <code>root</code> and by members of the <code>docker</code> group. In this case, I am a member of the <code>docker</code> group so I can run the <code>docker</code> client without <code>sudo</code>. You could set up the same thing in your environment.</p>
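<p>For example, using the command the error output itself suggests (log out and back in afterwards so the new group membership takes effect):</p>
<pre><code>sudo usermod -a -G docker royalharsh95
</code></pre>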
<p>Note that of course you always need to start the Docker daemon as root, but in general you would expect to have this configured to start automatically when your system boots, rather than starting it manually.</p>
|
<p>I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a redis cluster, IF you have static IPs. However, we don't have this when using kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, kubernetes assigns a new IP to the new pod. How can we inject this new IP into the HAProxy healthchecks and remove the old master IP? </p>
<p>I have the following setup. </p>
<pre><code> +----+ +----+ +----+ +----+
| W1 | | W2 | | W3 | | W4 | Web application servers
+----+ +----+ +----+ +----+
\ | | /
\ | | /
\ | | /
+---------+
| HAProxy |
+---------+
/ \ \
+----+ +----+ +----+
| P1 | | P2 | | P3 | K8S pods = Redis + Sentinel
+----+ +----+ +----+
</code></pre>
<p>Which is very similar to the setup described on the <a href="http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/" rel="nofollow">haproxy blog</a>.</p>
| <p>According to <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis</a> it uses sentinel to manage the failover. This reduces the problem to the "normal" sentinel based solution. </p>
<p>In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple sentinel script to update the HAProxy config and issue a reload. An HAProxy config which only talks to the master can easily be handled by a simple search, replace, reload script. </p>
<p>Oh and don't use the HAProxy check in that blog post. It doesn't account for or detect split brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the sentinels and only talks to the one with at least two sentinels reporting it as the master. </p>
|
<p>I'm using google container engine and I can create pods and services in my cluster. But when I try to use the DNS feature (skydns) to look up my services, nothing is found. If I log in to the non-master node, I can see the DNS container and can use the 'host' command to do DNS lookups (installed with apt-get). But I can't find my service by its name. It associates kubernetes.local with the IP of the service. Actually it associates kubernetes.local with the IP of every one of my services (I have 9). But it does not associate the service name "my-service-name".</p>
<p>Anyone know the trick to get this to work? Either creating the service isn't causing skydns to create the DNS entry (maybe there is some magic to make it work)...or I'm just completely clueless (less magical, perhaps more likely).</p>
<p>I don't know which.</p>
| <p>There's a little bit of magic involved that's intended to make DNS in Kubernetes more convenient from within a pod. Let me try to explain.</p>
<p>The way that the DNS names are constructed within Kubernetes is <code><service-name>.<namespace>.kubernetes.local</code>. This is why <code>kubernetes.local</code> is resolving on your node, but <code>my-service-name</code> isn't. Assuming your service is defined in the default namespace (it will be unless you explicitly created it in a different namespace), you should be able to resolve it at <code>my-service-name.default.kubernetes.local</code>.</p>
<p>The docs around DNS assume that you care about how to resolve service names from within a pod rather than directly on the host. Within your pod, DNS should be set up to first search for names you specify relative to <code>default.kubernetes.local</code> and <code>kubernetes.local</code>, meaning that from within any pod in the cluster that isn't kube-dns (it's handled specially) you should be able to resolve your service using either <code>my-service-name</code> or <code>my-service-name.default.kubernetes.local</code>.</p>
<p>If you want to try it out, attach to one of your cluster's fluentd pods using docker exec and try looking up your service from within the container.</p>
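<p>For example, from a shell inside one of those pods (the service name and namespace are placeholders for yours):</p>
<pre><code>nslookup my-service-name.default.kubernetes.local
nslookup my-service-name # relies on the search path described above
</code></pre>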
<p>Note that the namespace changed from <code>kubernetes.local</code> to <code>cluster.local</code> between versions 0.17.0 and 0.18.0, so check your cluster's version (using <code>kubectl version</code>) if your first attempt doesn't work.</p>
|
<p>I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances.</p>
<p>My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes.</p>
<p>I can see that there is a handy <strong>gcloud alpha container kubectl resize</strong> command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them.</p>
<p>If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something?</p>
<p>Also, are there plans to support auto-scaling?</p>
| <p>Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster. </p>
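<p>With a recent gcloud that looks something like this (the group name, size and zone are placeholders; <code>gcloud compute instance-groups managed list</code> shows yours):</p>
<pre><code>gcloud compute instance-groups managed resize my-cluster-minion-group \
    --size 3 --zone us-central1-a
</code></pre>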
<hr>
<p>There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/6087" rel="noreferrer">have nodes dynamically added</a> but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release. </p>
<p>In the meantime, it should be possible to run more than two replicas of your node.js application on the existing two VMs. </p>
|
<p>ActiveMQ built-in <a href="http://activemq.apache.org/discovery.html" rel="nofollow">cluster discovery mechanisms</a> are basically based on multicast (excepting LDAP here).</p>
<p>Openshift v3 / Kubernetes don't support multicast well, as it can be unreliable or malfunction on public cloud infrastructure.</p>
<p>Is there any existing option to enable network of activemq brokers discovery within Openshift v3 ?</p>
<p>I saw the project <a href="https://github.com/jboss-openshift/openshift-ping" rel="nofollow">jboss-openshift/openshift-ping</a> enabling discovery for JGroups members on Openshift. I am looking for an equivalent for ActiveMQ.</p>
| <p>fabric8 is a project that has a number of value-adds for OS3 / kubernetes platforms</p>
<ul>
<li><a href="http://fabric8.io/" rel="nofollow">http://fabric8.io/</a></li>
</ul>
<p>There is <em>clustered</em> ActiveMQ out of the box </p>
<ul>
<li><a href="http://fabric8.io/guide/fabric8MQ.html" rel="nofollow">http://fabric8.io/guide/fabric8MQ.html</a></li>
</ul>
<p>As the project is in development, you may get the best help on IRC chat in #fabric8 on freenode - all the guys hang out there.</p>
|
<p>I am planning to test Kubernetes locally, but would like to ask some theoretic questions before. </p>
<p>I created a pipeline in python that takes as input a whole bunch of files from a directory, and created a docker image out of it (this is my Pod)</p>
<p>What I understood from the documentation is that the Kubernetes scheduler will choose automatically the minion to deploy for a given task, my question is, using an 8G memory laptop, is there a 'rule' to follow before creating the minion (specifying the number of minions to deploy) based on the amount of memory available in a machine (regardless if it is a laptop or a cluster) ?</p>
<p>Thanks</p>
| <p>You would typically only ever have one minion per host. So if you are deploying your minions on physical hardware, there is a 1:1 mapping between minions and physical hosts.</p>
<p>If you are deploying into a virtual cluster on your laptop, you will want to make sure that each virtual minion has enough memory to run at least a single instance of whatever containers you plan on deploying. "How much is enough?" is a question that only you can answer.</p>
|
<p>I started a cluster in aws following the guides and then went about following the guestbook. The problem I have is accessing it externally. I set the PublicIP to the ec2 publicIP and then use the ip to access it in the browser with port 8000 as specified in the guide. </p>
<p>Nothing showed. To make sure it was actually the service that wasn't showing anything, I then removed the service and set a host port to be 8000. When I went to the ec2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is that I am inside a VPC with an internet gateway. I didn't include the json files I used, because they are almost exactly the same as the <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook" rel="nofollow">guestbook example</a> with a few changes to allow my ec2 PublicIP, and a few changes for the VPC. </p>
| <p>On AWS you have to use your PRIVATE ip address with Kubernetes' services, since your instance is not aware of its public ip. The NAT-ing on amazon's side is done in such a way that your service will be accessible using this configuration. </p>
<p><strong>Update:</strong> please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore.</p>
<p>Please check the following documentation page for workarounds: <a href="https://kubernetes.io/docs/user-guide/services/" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/services/</a></p>
|
<p>I have recently started exploring Kubernetes and have done a practical implementation of pods, services and replication controllers on Google Cloud. I have some doubts about service and network access.
First, where is the service deployed that will work as a load balancer for a group of pods?
Second, does a request to access an application running in a pod via the service load balancer go through the master, or directly to the minion nodes?</p>
| <p>A service proxy runs on each node on the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP where it will be captured by the proxy and directed to an appropriate pod. </p>
<p>If you followed my description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes.</p>
<p>As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, pods, but this proxy isn't in the packet path for services that you create within the cluster. </p>
|
<p>I have a distributed application running on virtual machines, among which I have one service running on active/passive mode. The active VM provides service via a public IP. Should the active VM fail, the public IP will be moved to the passive VM and the passive VM will become active and starts to provide service.</p>
<p>How does this pattern fit into a containerized application managed by kubernetes?</p>
<p>If I use a replication controller with replicas=1, then in case of node/minion failure the replication controller will reschedule the pod (= VM in my current application) on another minion, but this would likely cause higher downtime than my current solution, where only the IP resource is moved.</p>
<p>If I use a replication controller with replicas=2, then I would need two pods with different configurations (one with the public IP, the other without), which is an anti-pattern. Furthermore, there is no designed way in kubernetes to support a virtual IP that moves around between pods(?).</p>
<p>Or should I use replicas=2 and implement something myself to manage the IP (maybe using Pacemaker)? This would introduce another problem: there would be two cluster managers in my application, kubernetes and pacemaker/corosync.</p>
<p>So, how should this be done?</p>
| <p>It sounds like your application is using its own master election scheme between the two VMs acting as a load balancer and you know internally which one is currently the master. </p>
<p>This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the node that isn't the master. When you need to do failover, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable) at which point traffic to the service would only land on the standby (now acting as the master).</p>
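<p>A sketch of the readiness-probe part of such a container spec (the image, path and port are placeholders; the endpoint should return success only on the currently active master):</p>
<pre><code>containers:
  - name: active-passive-service
    image: my-registry/my-service
    readinessProbe:
      httpGet:
        path: /is-master
        port: 8080
</code></pre>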
<p>You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster. </p>
|
<p>Based on the following setup of Kubernetes on <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides/coreos/azure" rel="nofollow">Microsoft Azure</a>.</p>
<p>I was able to deploy my Docker containers, using the same configuration settings. </p>
<p>We have 2 categories of containers, front-end and back-end, where the back-end consists of highly intensive processing. The latter we want to run on Large instances, whereas the front-end will run on Small instances. </p>
<p>What is the best option to separate these? My guess would be labeling the hosts, but I was not able to find this in the docs or in examples.</p>
| <p>Currently, adding labels to the hosts and restricting your pods to nodes with the appropriate label (e.g. 'Large' / 'Small') is the best way to do this. For an example, see <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/node-selection" rel="nofollow">examples/node-selection</a></p>
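<p>A minimal sketch of that flow (the node name and label values are placeholders):</p>
<pre><code>kubectl label nodes my-large-node-1 instancetype=large
</code></pre>
<p>and then, in the pod spec for your back-end containers:</p>
<pre><code>nodeSelector:
  instancetype: large
</code></pre>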
<p>Kubernetes has a <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md" rel="nofollow">resource model</a> which, once implemented, will allow you to tell the scheduler what resources each pod needs and the system will ensure that the pod is placed on a node with available resources. Unfortunately, as you can tell from the design document on Github, this isn't fully implemented yet. </p>
|
<p>I am trying to run a docker image in Google Container Engine. The instance comes up with no running docker images. I can ssh in and run the docker commands and the service comes up. But nothing happens when I just launch the instance from the terminal. Can someone take a look at what I am doing wrong?</p>
<p>My Dockerfile looks like:</p>
<pre><code>FROM golang
RUN mkdir -p /app
COPY . /app
RUN go get golang.org/x/tools/cmd/present
ENTRYPOINT cd /app && /go/bin/present -http=":8080"
EXPOSE 8080
</code></pre>
<p>containers.yaml looks like</p>
<pre><code>version: v1beta3
containers:
- name: talks
image: sheki/talks
ports:
- name: http-port
containerPort: 8080
hostPort: 80'
</code></pre>
<p>The command to launch the instance is</p>
<pre><code>gcloud compute instances create zoop \
--image container-vm \
--metadata-from-file google-container-manifest=containers.yaml \
--zone us-central1-a \
--machine-type f1-micro
</code></pre>
| <p>You mentioned in your question that you are using google container engine, but in fact you are using the <a href="https://cloud.google.com/compute/docs/containers/container_vms" rel="nofollow">container vm</a> (which is a bit different). If you want to use container engine, please check out the documentation to create a <a href="https://cloud.google.com/container-engine/docs/clusters/operations" rel="nofollow">container cluster</a>. </p>
<p>I ran your example, and in <code>/var/log/kubelet.log</code> saw the following error:</p>
<pre><code>E0519 17:05:41.285556 2414 http.go:54] Failed to read URL: http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/google-cont
ainer-manifest: received 'version: v1beta3
containers:
- name: talks
image: sheki/talks
ports:
- name: http-port
containerPort: 8080
hostPort: 80'
', but couldn't parse as neither single (error unmarshaling JSON: json: cannot unmarshal string into Go value of type int: {Version:v1beta3 ID: UUID:
Volumes:[] Containers:[{Name:talks Image:sheki/talks Entrypoint:[] Command:[] WorkingDir: Ports:[{Name:http-port HostPort:0 ContainerPort:8080 Proto
col: HostIP:}] Env:[] Resources:{Limits:map[] Requests:map[]} CPU:0 Memory:0 VolumeMounts:[] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil>
TerminationMessagePath: Privileged:false ImagePullPolicy: Capabilities:{Add:[] Drop:[]}}] RestartPolicy:{Always:<nil> OnFailure:<nil> Never:<nil>} D
NSPolicy: HostNetwork:false}) or multiple manifests (error unmarshaling JSON: json: cannot unmarshal object into Go value of type []v1beta1.Container
Manifest: []) nor single (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"containerPort":8080,"hostPort":"80'","name
":"http-port"}]}],"version":"v1beta3"}') or multiple pods (kind not set in '{"containers":[{"image":"sheki/talks","name":"talks","ports":[{"container
Port":8080,"hostPort":"80'","name":"http-port"}]}],"version":"v1beta3"}').
</code></pre>
<p>It looks like the documentation for container vms is out of date. Note also the stray quote in <code>hostPort: 80'</code> in your containers.yaml; the parser error above ("cannot unmarshal string into Go value of type int") is complaining that <code>hostPort</code> must be a plain integer.</p>
|
<p><a href="https://github.com/GoogleCloudPlatform/kubernetes">Kubernetes</a> seems to be all about deploying containers to a cloud of clusters. What it doesn't seem to touch is development and staging environments (or such).</p>
<p>During development you want to be as close as possible to production environment with some important changes:</p>
<ul>
<li>Deployed locally (or at least somewhere where <strong>you and only you can access</strong>)</li>
<li>Use <strong>latest source code</strong> on page refresh (supposing its a website; ideally page auto-refresh on local file save which can be done if you mount source code and use some stuff like <a href="http://yeoman.io/codelab/preview-inbrowser.html">Yeoman</a>).</li>
</ul>
<p>Similarly one may want a non-public environment to do <strong>continuous integration</strong>.</p>
<p>Does Kubernetes support such kind of development environment or is it something one has to build, hoping that during production it'll still work?</p>
| <p>Update (2016-07-15)</p>
<p>With the release of Kubernetes 1.3, <a href="https://github.com/kubernetes/minikube">Minikube</a> is now the recommended way to run Kubernetes on your local machine for development. </p>
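<p>A minimal sketch of that workflow (assuming minikube and kubectl are already installed):</p>
<pre><code>minikube start
kubectl get nodes
</code></pre>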
<hr>
<p>You can run <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker.md">Kubernetes locally via Docker</a>. Once you have a node running you can launch a pod that has a simple web server and mounts a volume from your host machine. When you hit the web server it will read from the volume and if you've changed the file on your local disk it can serve the latest version. </p>
|
<p>I began to try Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest version available, if possible without downtime. Is there any way to do this?</p>
| <p>Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one.</p>
<p>We are very actively working on making cluster upgrades reliable (both <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/6079" rel="noreferrer">nodes</a> and the <a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/6075" rel="noreferrer">master</a>), but upgrades are unlikely to work for the majority of currently existing clusters.</p>
|
<p>I'm looking for some pros and cons of whether to go with Marathon and Chronos, Docker Swarm or Kubernetes when running Docker containers on DC/OS. </p>
<p>For example, when is it better to use Marathon/Chronos than Kubernetes and vice versa? </p>
<p>Right now I'm mostly into experimenting but hopefully we'll start using one of these services in production after the summer. This may disqualify Docker Swarm since I'm not sure if it'll be production ready by then. </p>
<p>What I like about Docker Swarm is that it's essentially just "Docker commands" and you don't have to learn something completely new. We're already using <code>docker-compose</code> and that will work out of the box with Docker Swarm (at least in theory) so that would be a big plus. My main concern with Docker Swarm is if it'll cover all use cases required to run a system in production.</p>
| <p>I'll try to break down the unique aspects of each container orchestration framework on Mesos.</p>
<p>Use <a href="https://github.com/docker/swarm/">Docker Swarm</a> if:</p>
<ul>
<li>You want to use the familiar Docker API to launch Docker containers on Mesos.</li>
<li>Swarm may eventually provide an API to talk to Kubernetes (even K8s-Mesos) too.</li>
<li>See: <a href="http://www.techrepublic.com/article/docker-and-mesos-like-peanut-butter-and-jelly/">http://www.techrepublic.com/article/docker-and-mesos-like-peanut-butter-and-jelly/</a></li>
</ul>
<p>Use <a href="https://github.com/mesosphere/kubernetes-mesos">Kubernetes-Mesos</a> if:</p>
<ul>
<li>You want to launch K8s Pods, which are groups of containers co-scheduled and co-located together, sharing resources.</li>
<li>You want to launch a service alongside one or more sidekick containers (e.g. log archiver, metrics monitor) that live next to the parent container.</li>
<li>You want to use the K8s label-based service-discovery, load-balancing, and replication control.</li>
<li>See <a href="http://kubernetesio.blogspot.com/2015/04/kubernetes-and-mesosphere-dcos.html">http://kubernetesio.blogspot.com/2015/04/kubernetes-and-mesosphere-dcos.html</a></li>
</ul>
<p>Use <a href="https://mesosphere.github.io/marathon/">Marathon</a> if:</p>
<ul>
<li>You want to launch Docker or non-Docker long-running apps/services.</li>
<li>You want to use Mesos attributes for constraint-based scheduling.</li>
<li>You want to use Application Groups and Dependencies to launch, scale, or upgrade related services.</li>
<li>You want to use health checks to automatically restart unhealthy services or rollback unhealthy deployments/upgrades.</li>
<li>You want to integrate HAProxy or Consul for service discovery.</li>
<li>You want to launch and monitor apps through a web UI or REST API.</li>
<li>You want to use a framework built from the start with Mesos in mind.</li>
</ul>
<p>Use <a href="https://github.com/mesos/chronos">Chronos</a> if:</p>
<ul>
<li>You want to launch Docker or non-Docker tasks that are expected to exit.</li>
<li>You want to schedule a task to run at a specific time/schedule (a la <code>cron</code>).</li>
<li>You want to schedule a DAG workflow of dependent tasks.</li>
<li>You want to launch and monitor jobs through a web UI or REST API.</li>
<li>You want to use a framework built from the start with Mesos in mind.</li>
</ul>
|
<p>I have a Kubernetes cluster running on 3 servers, a master and 2 minions. I would like to add another minion. Is it possible to add a minion without having to do the complete installation again? So far when searching for guides to do this, I can only find excellent guides on getting the whole cluster up. </p>
| <p>For me the difference between a master and a minion is the processes that get started and the orchestration (the master signals the minions that it is done and they can now come up; the minions wait for the signal). Bringing up a minion after everything else is up is no different, except the signaling is already done, so the minion will just come up. This looked like a reasonable experiment to me, so I thought I'd try it! I am running on DigitalOcean VPSes, using CoreOS (stable) and cloud-config to bring up fleet + flanneld + k8s. On the master I have:</p>
<ul>
<li>etcd</li>
<li>fleet</li>
<li>flanneld</li>
<li>docker</li>
<li>kube-apiserver</li>
<li>kube-controller-manager</li>
<li>kube-scheduler</li>
<li>kube-register</li>
</ul>
<p>The important process here is kube-register. I look at its output with:</p>
<pre><code>journalctl -u kube-register
</code></pre>
<p>to see when minions register. To see all the minions that I currently have:</p>
<pre><code>core@pa1 ~/tmp $ kubectl get minions
NAME LABELS STATUS
104.236.214.244 <none> Ready
104.236.31.77 <none> Ready
</code></pre>
<p>On the minion for processes I have:</p>
<ul>
<li>fleet</li>
<li>flanneld</li>
<li>docker</li>
<li>kube-proxy</li>
<li>kube-kubelet</li>
</ul>
<p>I think the kube-kubelet is the thing that registers with the master, but I could be wrong. I do know that the master constantly queries the kubelet for healthz, so it knows if a minion is ready for work. Anyway, I have a script that creates my cloud config for the master and for the minions. I just modified that script to 'skip' the master cloud config and just do the minion config. I run it and here is my minions now:</p>
<pre><code>core@pa1 / $ kubectl get minions
NAME LABELS STATUS
104.236.214.244 <none> Ready
104.236.31.77 <none> Ready
45.55.164.39 <none> Ready
</code></pre>
<p>So, long story even longer, yes, it is possible to add a minion to an existing node cluster. I don't know how you are doing discovery, but, whatever method you used to 'introduce' your minion to the master in the first place will work.</p>
|
<p>If I setup a replication controller for something like a database, how does it keep the data in the replicas in-sync? If one of the replica goes down, how does it bring it back up with the latest data?</p>
| <p>A replication controller ensures that the desired number of pods with the same template are kept running in the system. The replication controller itself does not know anything about what it is running, and doesn't have any special hooks for containers running databases. This means that if you want to run a container with a database with more than one replica, then it is easiest to run a database that can natively do replication and discovery (possibly with the injection of some environment variables). </p>
<p>An alternative is to run a pod with two containers, where one container is a vanilla database, and the second "side-car" container is used to implement the necessary replication / synchronization / master election or whatever extra functionality you need to provide to make the database run in a clustered environment. This is more flexible (you can run a database that wasn't initially designed to run in a clustered environment) but also requires more custom work to make it scale. </p>
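<p>For illustration, here is a minimal sketch of such a two-container pod in the v1beta1 JSON style used elsewhere on this page. All names and images are hypothetical, and the sidecar's replication logic is left to you:</p>
<pre><code>cat > db-pod.json <<'EOF'
{
  "id": "db-pod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "db-pod",
      "containers": [
        {"name": "db", "image": "mysql", "ports": [{"containerPort": 3306}]},
        {"name": "sync-sidecar", "image": "example/db-sync-sidecar"}
      ]
    }
  },
  "labels": {"name": "db"}
}
EOF
kubectl create -f db-pod.json
</code></pre>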
|
<p>I followed kubernetes' guestbook example and changed the image value like this, wanting to pull images from Docker Hub.</p>
<pre><code>"image": "redis"
</code></pre>
<p>But it failed. The log says:</p>
<blockquote>
<p>Failed to create pod infra container: image pull failed for gcr.io/google_c...</p>
</blockquote>
<p>And I set the image pull policy so it would use the preloaded image:</p>
<pre><code>"imagePullPolicy":"Never"
</code></pre>
<p>I am sure the minion has the redis image.</p>
<p>But it still failed, and tried to pull from gcr.io.</p>
| <p>Since you are in China, you won't be able to fetch images from gcr.io (including the pause image). If you want to compile your own pause image and upload it to an image registry that you have access, you can specify <code>--pod_infra_container_image="<registry>/pause:latest"</code> when launching the kubelet. </p>
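<p>A rough sketch of that workflow, assuming the stock pause image lives at <code>gcr.io/google_containers/pause</code> and using a hypothetical registry name:</p>
<pre><code># On a machine that CAN reach gcr.io, mirror the pause image:
docker pull gcr.io/google_containers/pause:latest
docker tag gcr.io/google_containers/pause:latest registry.example.com/pause:latest
docker push registry.example.com/pause:latest

# On each node, point the kubelet at the mirrored image:
kubelet --pod_infra_container_image="registry.example.com/pause:latest" ...
</code></pre>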
|
<p>I am currently trying to set up kubernetes on a multi-docker container on CoreOS stack for AWS. To do this I need to set up etcd for flannel and am currently using <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker-multinode/master.md" rel="nofollow">this guide</a>, but am having problems at the first stage, where it suggests running:</p>
<pre><code>sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
</code></pre>
<p>The problem is with the first part of the command,</p>
<pre><code> docker -d -H unix:///var/run/docker-bootstrap.sock
</code></pre>
<p>from within boot2docker. There is no docker-bootstrap.sock file in this directory and this error is thrown:</p>
<pre><code>FATA[0000] An error occurred trying to connect: Post https:///var/run/docker-bootstrap.sock/v1.18/containers/create: dial unix /var/run/docker-bootstrap.sock: no such file or directory
</code></pre>
<p>Clearly the unix socket did not connect to this nonexistent socket.</p>
<p>I will note this is a very similar problem to <a href="https://forums.docker.com/t/docker-osx-var-run-docker-sock-file-missing/623" rel="nofollow">this ticket</a> and other tickets regarding the FATA[0000] though none seem to have asked the question in the way I currently am.</p>
<p>I am not an expert in unix sockets, but I am assuming there should be a file where there is not. Where can I get this file to solve my issue, or what is the recommended steps to resolve this.</p>
<p>specs: running OSX Yosemite but calling all commands from boot2docker</p>
| <p>Docker should create this file for you. Are you running this command on your OS X machine? or are you running it inside the boot2docker VM?</p>
<p>I think you need to:</p>
<pre><code>boot2docker ssh
</code></pre>
<p>Then:</p>
<pre><code>sudo sh -c 'docker -d -H unix:///var/run/docker-bootstrap.sock -p /var/run/docker-bootstrap.pid --iptables=false --ip-masq=false --bridge=none --graph=/var/lib/docker-bootstrap 2> /var/log/docker-bootstrap.log 1> /dev/null &'
</code></pre>
<p>You need to make sure that command runs inside the Linux VM that boot2docker creates (via VirtualBox), not on your OS X machine.</p>
<p>Hope that helps!</p>
|
<p>Are there any up-to-date guides, or VM images of some Linux VM + Kubernetes that I could run on Windows? Both VMWare, VirtualBox or Vagrant images would help. I'm trying to set up a development environment. (There is no production environment yet, but it will be most likely self-hosted.)</p>
<p>I tried installing several Vagrant templates for Kubernetes linked from their github documentation, but they were specifically marked as not supported on Windows; I tried compiling Kubernetes 0.15 from source under CoreOS and Boot2Docker, but ran into problems with both.</p>
<p>Since my ops skill set is relatively low, I'd sleep easier if I could use a template that was set up by someone who knew what they're doing.</p>
| <p>If you install Docker on Windows (see the <a href="https://docs.docker.com/installation/windows/" rel="nofollow">Docker instructions</a>), you can then follow the guide to run <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker.md" rel="nofollow">Kubernetes locally via Docker</a> and then once you are comfortable with that try running <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/docker-multinode.md" rel="nofollow">Multi-Node Kubernetes Using Docker</a>. </p>
|
<p>I was wondering if anyone knows how to create a json file that would be the same as running:</p>
<p>docker run -p 80:80 -p 443:443 starblade/pydio-v4</p>
<p>I'm trying something very ambitious: I want to start my docker container in a kubernetes-mesos cluster, but I can't seem to get the ports correct in the json file; alas, I am still very new to this.</p>
<p>Thanks,
TT</p>
<p>Here are my json files:</p>
<pre><code>{
"id": "frontend-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 3,
"replicaSelector": {"name": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend-controller",
"containers": [{
"name": "pydio-v4",
"image": "starblade/pydio-v4",
"ports": [{"containerPort": 10001, "protocol": "TCP"}]
}]
}
},
"labels": {"name": "frontend"}
}},
"labels": {"name": "frontend"}
}
</code></pre>
<pre><code>{
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 80,
"port": 443,
"targetPort": 10001,
"selector": {
"name": "frontend"
},
"publicIPs": [
"${servicehost}"
]
}
</code></pre>
<p>Docker container Env info pulled from the <code>docker inspect</code> command:</p>
<pre><code>"Env": [
"FRONTEND_SERVICE_HOST=10.10.10.14",
"FRONTEND_SERVICE_PORT=443",
"FRONTEND_PORT=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP_PROTO=tcp",
"FRONTEND_PORT_443_TCP_PORT=443",
"FRONTEND_PORT_443_TCP_ADDR=10.10.10.14",
"KUBERNETES_SERVICE_HOST=10.10.10.2",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_PORT=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2",
"KUBERNETES_RO_SERVICE_HOST=10.10.10.1",
"KUBERNETES_RO_SERVICE_PORT=80",
"KUBERNETES_RO_PORT=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
"KUBERNETES_RO_PORT_80_TCP_PORT=80",
"KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PYDIO_VERSION=6.0.5"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
</code></pre>
<p>The pod and service both start and run ok.</p>
<p>However I am unable to access the running Pydio site on any of the master, minion or frontend ips.</p>
<p>Note:
I am running a modified version of this docker container:</p>
<p><a href="https://registry.hub.docker.com/u/kdelfour/pydio-docker/" rel="nofollow">https://registry.hub.docker.com/u/kdelfour/pydio-docker/</a></p>
<p>My container has been tested and it runs as expected.
You should see the login screen once it is running.</p>
<p>Please let me know if I can provide any other information.</p>
<p>Thanks again.</p>
| <p>So, I finally got this to work using the following .json files:</p>
<p>frontend-service.json</p>
<pre><code> {
"id": "frontend",
"kind": "Service",
"apiVersion": "v1beta1",
"port": 443,
"selector": {
"name": "frontend"
},
"publicIPs": [
"${servicehost}"
]
}
</code></pre>
<p>frontend-controller.json</p>
<pre><code>{
"id": "frontend-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 1,
"replicaSelector": {"name": "frontend"},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "frontend-controller",
"containers": [{
"name": "pydio-v4",
"image": "starblade/pydio-v4",
"ports": [{"containerPort": 443, "hostPort": 31000}]
}]
}
},
"labels": {"name": "frontend"}
}},
"labels": {"name": "frontend"}
}
</code></pre>
<p>I now have pydio with SSL running in a Mesos-Kubernetes env on GCE.</p>
<p>Going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3.</p>
<p>Hope this helps someone.</p>
<p>Thanks,
TT</p>
|
<p>I started Kubernetes master and minion on local machine using Vagrant. I can create a json file for my Kubernetes pod where I can start several public containers.</p>
<p>However, one Docker container is local one, ontop on <code>java:8-jdk</code>, configured with DockerFile.</p>
<p>How can I reference this local Docker container in the kubernetes json pod so Kubernetes can run it?</p>
<p>In other words, does Kubernetes support docker build ;)</p>
| <p>After you build the docker image, you can "side-load" it into your locally available images by running <code>docker load -i /path/to/image.tar</code>. Once you've done this, Kubernetes will be able to load the image without reaching out to an external hub. </p>
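<p>For example (image and host names below are hypothetical):</p>
<pre><code># Build the image from your Dockerfile, export it, and side-load it on the minion:
docker build -t my-java-app .
docker save -o my-java-app.tar my-java-app
scp my-java-app.tar core@minion-1:/tmp/
ssh core@minion-1 'docker load -i /tmp/my-java-app.tar'
</code></pre>
<p>You can then reference <code>my-java-app</code> as the image in your pod JSON, and the kubelet should find it locally instead of pulling.</p>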
|
<p>I'm looking to automate the process of setting up a MongoDb replica set via a sidecar when using Docker and Kubernetes.</p>
<p><strong>The above setup isn't terribly important</strong>, what it boils down to is that I need to be able to call the mongo replica set commands (e.g. <code>rs.initiate()</code>, <code>rs.add('anotherserver')</code>, <code>rs.conf()</code>, <code>rs.reconfig()</code>, etc) from a node.js application. </p>
<p>Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts.</p>
<p><strong>UPDATE:</strong> I was able to get this working and have made the sidecar open source for others to use.</p>
<ul>
<li><a href="https://github.com/leportlabs/mongo-k8s-sidecar" rel="noreferrer">https://github.com/leportlabs/mongo-k8s-sidecar</a></li>
<li><a href="https://registry.hub.docker.com/u/leportlabs/mongo-k8s-sidecar" rel="noreferrer">https://registry.hub.docker.com/u/leportlabs/mongo-k8s-sidecar</a></li>
</ul>
| <h2>How are the replica set admin helpers implemented?</h2>
<p>The <a href="http://docs.mongodb.org/manual/reference/method/js-replication/" rel="noreferrer"><code>rs.*</code> replica set admin helpers</a> in the <code>mongo</code> shell are wrappers for MongoDB commands which you can send from any driver.</p>
<p>You can see which command(s) each shell helper wraps by referring to the MongoDB documentation:</p>
<ul>
<li><a href="http://docs.mongodb.org/manual/reference/method/rs.initiate/" rel="noreferrer"><code>rs.initiate()</code></a> provides a wrapper around the <a href="http://docs.mongodb.org/manual/reference/command/replSetInitiate/" rel="noreferrer"><code>replSetInitiate</code></a> database command.</li>
<li><a href="http://docs.mongodb.org/manual/reference/method/rs.add/" rel="noreferrer"><code>rs.add()</code></a> provides a wrapper around some of the functionality of the <a href="http://docs.mongodb.org/manual/reference/command/replSetReconfig/" rel="noreferrer"><code>replSetReconfig</code></a> database command and the corresponding mongo shell helper <code>rs.reconfig()</code>.</li>
<li><a href="http://docs.mongodb.org/manual/reference/method/rs.conf/" rel="noreferrer"><code>rs.conf()</code></a> wraps the <a href="http://docs.mongodb.org/manual/reference/method/rs.conf/" rel="noreferrer"><code>replSetGetConfig</code></a> database command.</li>
</ul>
<p>Note that the <code>mongo</code> shell helpers may do some extra validation or manipulation of configs as they are intended to be used via the interactive <code>mongo</code> shell.</p>
<p>You can confirm how any of the shell helpers are implemented by invoking the command in the shell without trailing parentheses, eg:</p>
<pre><code>> rs.initiate
function (c) { return db._adminCommand({ replSetInitiate: c }); }
</code></pre>
<h3>Calling replica set database commands from Node.js</h3>
<p>The equivalent logic can be implemented via the Node.js driver API using <a href="http://mongodb.github.io/node-mongodb-native/2.0/api/Admin.html#command" rel="noreferrer"><code>command()</code></a>:</p>
<pre><code>// Rough equivalent of rs.initiate()
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://localhost:27017/test', function(err, db) {
// Use the admin database for commands
var adminDb = db.admin();
// Default replica set conf
var conf = {};
adminDb.command({replSetInitiate: conf}, function(err, info) {
console.log(info);
});
});
</code></pre>
<blockquote>
<p>Note: it doesn't have to be from a node application, if someone knows of another way of getting the same thing done, please share your thoughts.</p>
</blockquote>
<p>Rather than reimplementing the replica set helpers in Node.js, you could invoke a <code>mongo</code> shell with the <code>--eval</code> command to run the shell helper (tip: include <code>--quiet</code> to suppress unnecessary messages).</p>
<p>For example, calling from your Node app:</p>
<pre><code>var exec = require('child_process').exec;
var rsAdmin = exec('mongo --eval "var res = rs.initiate(); printjson(res)" --quiet', function (error, stdout, stderr) {
// output is in stdout
console.log(stdout);
});
</code></pre>
|
<p>What is the preferred way of updating a set of pods (e.g. after making code changes & pushing underlying docker image to docker hub) controlled by a replication controller in kubernetes cluster?</p>
<p>I can see 2 ways:</p>
<ol>
<li>Deleting & re-creating replication controller manually</li>
<li>Using <code>kubectl rolling-update</code></li>
</ol>
<p>With the <code>rolling-update</code> I have to change the replication controller name. Since I'm storing replication controller definition in YAML file and not generating it manually, having to change the file to push out a code update seems to bring about bad habits like alternating between 2 names for the replication controller (e.g. controllerA and controllerB) to avoid name conflict.</p>
<p>What is the better way?</p>
| <p>Update: <code>kubectl rolling-update</code> has been deprecated and the replacement command is <code>kubectl rollout</code>. Also note that since I wrote the original answer the Deployment resource has been added and is a better choice than ReplicaSets as the rolling update is performed server side instead of by the client. </p>
<hr>
<p>You should use <code>kubectl rolling-update</code>. We recently added a feature to do a "simple rolling update" which will update the image in a replication controller without renaming it. It's the last example shown in the <code>kubectl help rolling-update</code> output:</p>
<pre><code>// Update the pods of frontend by just changing the image, and keeping the old name
$ kubectl rolling-update frontend --image=image:v2
</code></pre>
<p>This command also supports recovery -- if you cancel your update and restart it later, it will resume from where it left off. Even though it creates a new replication controller behind the scenes, at the end of the update the new replication controller takes the name of the old replication controller so it appears as pure update rather than switching to an entirely new replication controller. </p>
|
<p>So I'm trying to setup a master Kubernetes node on coreos in vagrant. I'm using the example master cloud-config, found here <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/coreos/cloud-configs/master.yaml</a> with the addition of this as the first units:</p>
<pre><code>- name: etcd.service
command: start
- name: fleet.service
command: start
- name: docker-tcp.socket
command: start
enable: true
content: |
[Unit]
Description=Docker Socket for the API
[Socket]
ListenStream=2375
Service=docker.service
BindIPv6Only=both
[Install]
WantedBy=sockets.target
</code></pre>
<p>Once I vagrant up and vagrant ssh, I run <code>sudo systemctl status kube-apiserver</code> and find that <code>kube-apiserver</code> is down due to the fact it can't find <code>etcd.service</code>; however when I do <code>ps -ef | grep etcd</code> etcd is clearly running. Is there some specific location for etcd.service in systemd or do I have to add a content field to the unit in the cloud-config or something else?</p>
| <p>Turns out the example master config is looking for <code>etcd2.service</code>, while the actual file is <code>etcd.service</code> so I changed it in the example units and everything worked.</p>
<p><strong>EDIT</strong></p>
<p>The reason this worked, and why it was an issue to begin with, is that I was using the CoreOS Vagrant box for Parallels, which is roughly 300 builds behind the current stable CoreOS build, so it was missing etcd2 altogether.</p>
|
<p>I'm using Google's Container Engine service, and got a pod running a server listening on port 3000. I set up the service to connect port 80 to that pod's port 3000. I am able to curl the service using its local and public ip from within the node, but not from outside. I set up a firewall rule to allow port 80 and send it to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add the <code>createExternalLoadBalancer: true</code> to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something?</p>
<p>controller.yaml</p>
<pre><code>kind: ReplicationController
apiVersion: v1beta3
metadata:
name: app-frontend
labels:
name: app-frontend
app: app
role: frontend
spec:
replicas: 1
selector:
name: app-frontend
template:
metadata:
labels:
name: app-frontend
app: app
role: frontend
spec:
containers:
- name: node-frontend
image: gcr.io/project_id/app-frontend
ports:
- name: app-frontend-port
containerPort: 3000
targetPort: 3000
protocol: TCP
</code></pre>
<p>service.yaml</p>
<pre><code>kind: Service
apiVersion: v1beta3
metadata:
name: app-frontend-service
labels:
name: app-frontend-service
app: app
role: frontend
spec:
ports:
- port: 80
targetPort: app-frontend-port
protocol: TCP
publicIPs:
- 123.45.67.89
selector:
name: app-frontend
</code></pre>
<hr>
<p><strong>Edit (additional details):</strong>
Creating this service adds these additional rules, found when I run <code>iptables -L -t nat</code></p>
<pre><code>Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
REDIRECT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
DNAT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
</code></pre>
<p>I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS for <code>89.67.45.123.bc.googleusercontent.com</code> resolves to <code>123.45.67.89</code>.</p>
<p>kubectl get services shows the IP address and port I specified:</p>
<pre><code>NAME IP(S) PORT(S)
app-frontend-service 10.247.243.151 80/TCP
123.45.67.89
</code></pre>
<p>Nothing recent from external IPs is showing up in /var/log/kube-proxy.log </p>
| <p>TL;DR: Use the Internal IP of your node as the public IP in your service definition. </p>
<hr>
<p>If you enable verbose logging on the kube-proxy you will see that it appears to be creating the appropriate IP tables rule:</p>
<pre><code>I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created
I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80
I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
</code></pre>
<p>Listing the iptables entries using <code>-L -t nat</code> shows the public IP turned into the reverse DNS name like you saw:</p>
<pre><code>Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.119.240.2 /* default/kubernetes: */ tcp dpt:https redir ports 50353
REDIRECT tcp -- anywhere 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605
REDIRECT udp -- anywhere 10.119.240.10 /* default/kube-dns:dns */ udp dpt:domain redir ports 37723
REDIRECT tcp -- anywhere 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126
REDIRECT tcp -- anywhere 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
REDIRECT tcp -- anywhere 36.156.251.23.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
</code></pre>
<p>But adding the <code>-n</code> option shows the IP address (by default, <code>-L</code> does a reverse lookup on the ip address, which is why you see the DNS name):</p>
<pre><code>Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 10.119.240.2 /* default/kubernetes: */ tcp dpt:443 redir ports 50353
REDIRECT tcp -- 0.0.0.0/0 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605
REDIRECT udp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns */ udp dpt:53 redir ports 37723
REDIRECT tcp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126
REDIRECT tcp -- 0.0.0.0/0 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
REDIRECT tcp -- 0.0.0.0/0 23.251.156.36 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
</code></pre>
<p>At this point, you can access the service from within the cluster using both the internal and external IPs:</p>
<pre><code>$ curl 10.119.244.130:80
app-frontend-5pl5s
$ curl 23.251.156.36:80
app-frontend-5pl5s
</code></pre>
<p>Without adding a firewall rule, attempting to connect to the public ip remotely times out. If you add a firewall rule then you will reliably get connection refused:</p>
<pre><code>$ curl 23.251.156.36
curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused
</code></pre>
<p>If you enable some iptables logging:</p>
<pre><code>sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: "
</code></pre>
<p>And then grep the output of <code>dmesg</code> for <code>WTF</code>, it's clear that the packets are arriving on the 10.x internal IP address of the VM rather than the ephemeral external IP address that had been set as the public IP on the service.</p>
<p>It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the DSTIP intact) and 1-to-1 NAT (which actually rewrites the DSTIP to the internal IP). The external IP of the VM is the latter type, so when the node receives the packets the iptables rule doesn't match. </p>
<p>The fix is actually pretty simple (but non-intuitive): Use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. <code>10.240.121.42</code>) you will be able to hit your application from outside of the GCE network. </p>
|
<p>I'm trying to setup a kubernetes cluster for a development environment (local vms). Because it's development I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as <code>kube-proxy</code> and <code>kubelet</code> to the master's <code>kube-apiserver</code>. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the insecure connection or ignore that the certs are bad; I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working; it shows entries from both master and minions, and the logs show handshake attempts that are definitely failing due to bad certs.</p>
| <p>You can set the flag <code>--insecure-bind-address=0.0.0.0</code> when starting kube-apiserver to allow access to the unauthenticated api endpoint running on port 8080 to your network (by default it is only accessible on localhost). </p>
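<p>A minimal sketch, assuming the 0.x flag names (check each daemon's <code>--help</code> for your version):</p>
<pre><code># On the master, expose the unauthenticated endpoint on all interfaces:
kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 ...

# On each minion, point the daemons at that endpoint over plain HTTP:
kubelet --api_servers=http://<master-ip>:8080 ...
kube-proxy --master=http://<master-ip>:8080 ...
</code></pre>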
|
<p>When I define multiple containers in a pod/pod template like one container running nginx and another php-fpm, how can they access each other?</p>
<p>Do I have to define some links in the definition (I could not find docs explaining <em>all</em> available config options), or can they reach each other by default?</p>
<p>If so, what values do I have to put in the config files?
I read about sharing a network namespace, but I'm not sure what that really means.</p>
<p>I also could not find any example for that.</p>
| <p>All the containers in a pod are bound to the same network namespace.</p>
<p>This means that (a) they all have the same ip address and (b) that <code>localhost</code> is the same across all the containers. In other words, if you have Apache running in one container in a pod and MySQL running in another, you can access MySQL at <code>localhost:3306</code> from the Apache container (and you could access Apache at <code>localhost:80</code> from the MySQL container).</p>
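<p>You can see this for yourself with <code>kubectl exec</code>; the pod and container names below are hypothetical, and the command assumes the image ships curl:</p>
<pre><code># From the MySQL container, the Apache container answers on localhost:80:
kubectl exec -p my-pod -c mysql -- curl -s localhost:80
</code></pre>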
<p>While the containers share networking, they do not share filesystems. If you want to share files between containers you will need to make use of volumes. There is a simple volume example <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/walkthrough" rel="noreferrer">here</a>.</p>
|
<p>I am trying to run kubernetes on EC2 and I used the CoreOS alpha channel AMI. I configured a kubectl SSH tunnel for the communication between the kubectl client and the Kubernetes API.</p>
<p>But when I try <strong>kubectl api-versions</strong> command, I am getting following error.</p>
<p><strong>Couldn't get available api versions from server: Get http://MyIP:8080/api: dial tcp MyIP:8080: connection refused</strong></p>
<p>MyIP - this has set accordingly.</p>
<p>What could be the reason for this?</p>
| <p>The reason for this issue was that I hadn't set the <code>KUBERNETES_MASTER</code> environment variable properly. Since there is an SSH tunnel between the kubectl client and the API server, the variable should be set to point at localhost.</p>
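<p>A sketch of what that looks like (the host name is hypothetical):</p>
<pre><code># Forward the apiserver port over an SSH tunnel:
ssh -f -N -L 8080:localhost:8080 core@<master-public-ip>

# Tell kubectl to talk to the local end of the tunnel:
export KUBERNETES_MASTER=http://localhost:8080
kubectl api-versions
</code></pre>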
|
<p>I used to be able to curl </p>
<pre><code>https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/
</code></pre>
<p>as my base URL, but in kubernetes 0.18.0 it gives me "unauthorized". The strange thing is that if I used the external IP address of the API machine (<code>http://172.17.8.101:8080/api/v1beta3/namespaces/default/</code>), it works just fine.</p>
| <p>In the official documentation I found this: </p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod</a></p>
<p>Apparently I was missing a security token that I didn't need in a previous version of Kubernetes. From that, I devised what I think is a simpler solution than running a proxy or installing golang on my container. See this example that gets the information, from the api, for the current container:</p>
<pre><code>KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME
</code></pre>
<p>I also include a simple binary, jq (<a href="http://stedolan.github.io/jq/download/" rel="noreferrer">http://stedolan.github.io/jq/download/</a>), to parse the json for use in bash scripts.</p>
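<p>For example, to pull out just the pod's IP address (field name per the v1 API):</p>
<pre><code>KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
      https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME \
  | jq -r '.status.podIP'
</code></pre>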
|
<p>I am trying to setup a Kubernetes cluster on my AWS account using the <code>kube-up.sh</code> setup script that is bundled with kubernetes source at kubernetes/cluster/kube-up.sh</p>
<p>But when I ran kube-up.sh I am getting the following error:</p>
<pre><code>pranjal:~/go/src/github.com/GoogleCloudPlatform/kubernetes/cluster$ ./kube-up.sh
Starting cluster using os distro: ubuntu
Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Uploading to Amazon S3
Creating kubernetes-staging-6b790c161af2b2c39939b542c73b775a
make_bucket failed: s3://kubernetes-staging-6b790c161af2b2c39939b542c73b775
</code></pre>
<p>I am fairly sure the tool is not able to read my AWS Access Key and Secret. I stored them in <code>.aws/config</code>. I am not sure where I should set them so that the tool can read them correctly and work.</p>
| <p>After a bit of searching I figured out a solution to the problem.</p>
<p>A lot of the AWS configuration goes into the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/aws/config-default.sh" rel="nofollow">config-default.sh</a> file. However there is no option to set the Access Key ID, Secret Access Key there (which might make some sense as cluster/aws/config-default.sh file is a part of the source code and the credentials should be saved somewhere else, at a safer place)</p>
<p>I realized after seeing the kubernetes/cluster/aws/util.sh source code that kubernetes actually calls the <strong>aws</strong> command line tool internally to make changes to AWS infrastructure.</p>
<p>So having the AWS command line tool installed and configured correctly will do the job of solving this problem.</p>
<p>Once I issued the following command:</p>
<pre><code>aws configure
</code></pre>
<p>and answered prompts for entering ID/Key it saved these values to this file:</p>
<pre><code>~/.aws/credentials
</code></pre>
<p>See details <a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html" rel="nofollow">here</a></p>
<p>This solved my problem and kube-up.sh worked perfectly after I did this.</p>
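<p>For reference, the saved file has this shape (keys elided):</p>
<pre><code>$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
</code></pre>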
|
<p>I have a piece of code to run on a k8s cluster, and I need to shut those k8s nodes down once all of the code running in the pods has finished. I have my code serve on a port until its job is complete, and I keep the program running afterwards so the replication controller doesn't start another pod; I defined a service in k8s to route requests to it.</p>
<p>Externally, I wrote a script that pings the service until it no longer responds with code 200, at which point I shut down the k8s nodes to save resources.</p>
<p>My question is: once the code in the pod stops serving the port, will the k8s service still route incoming requests to that pod? And is there another way to achieve the equivalent result?</p>
| <p>If a TCP connection can't be opened to the pod's IP on the given port, a different pod will be connected to instead.</p>
<p>In other words, as long as the pod closes the socket that was listening on the port, no requests should be sent to it after that point.</p>
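<p>A minimal sketch of the external polling script described in the question (the service address is hypothetical):</p>
<pre><code># Poll until the service stops answering 200, then it is safe to shut the nodes down:
while [ "$(curl -s -o /dev/null -w '%{http_code}' http://<service-ip>:80/)" = "200" ]; do
  sleep 10
done
echo "service no longer answering; shutting nodes down"
</code></pre>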
|
<p>I installed gcloud SDK and everything went fine. However I'm trying to use kubectl and it hasn't been installed.</p>
<p>I run the following commands:</p>
<pre><code>gcloud components update preview
All components are up to date.
gcloud components update alpha
All components are up to date.
</code></pre>
<p>So seems like everything is up to date. However command kubectl is not there.</p>
<p>This is not a PATH issue: I checked gcloud's bin folder and the file is not there, unlike in my other installations where it was installed correctly.</p>
<p>I'm running Ubuntu 14.04 on AWS.</p>
| <p>It appears that you need to run <code>gcloud components update kubectl</code>. I'm not sure why (this didn't use to be required). </p>
|
<p>I installed a Kubernetes cluster by following the instruction here:</p>
<p><a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md</a></p>
<p>Everything looks fine the first time. I'm able to see the nodes, pods, deploy new pods, etc.</p>
<p>The problem shows up when I stop the cluster and try to start it again. I'm restarting the cluster as indicated on the documentation:</p>
<pre><code>vagrant halt
./cluster/kube-up.sh
</code></pre>
<p>When I do that I see the following error:</p>
<pre><code> Comment: Source file salt://kubelet/kubeconfig not found
...
Minion did not return. [No response]
</code></pre>
<p>Then, when I check the status of nodes it says the minion is NotReady.</p>
<p>If I have VirtualBox open while I run kube-up.sh, I see that the error is thrown before the minion VM is started. So it sounds like the minion is not running when it tries to configure it. That's just an observation; I'm not sure what the problem is.</p>
<p>In order to solve this issue I have to destroy the cluster and create it again, which downloads and installs everything again, making it very slow to use.</p>
| <p>I found this problem on GitHub:</p>
<p><a href="https://github.com/GoogleCloudPlatform/kubernetes/issues/9270" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/issues/9270</a></p>
<p>Here it was suggested to use the code in HEAD. I did that and now it is working fine.</p>
|
<p>I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on my public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to provide more compute for new pods that might be needed? Or will it only use a certain amount of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod. I'm interested to know where that pool of compute comes from: is it configured when setting up Kubernetes, or will it scale by itself and create new machines as needed?</p>
<p>I hope I managed to be coherent.
Thanks! </p>
| <p>Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods in a Kubernetes cluster is performed by <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md" rel="nofollow">replication controllers</a>. The size of a replication controller can be changed by updating the <code>replicas</code> field. This can be performed in a couple of ways:</p>
<ol>
<li>Using kubectl, you can use the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_scale.md" rel="nofollow">scale</a> command.</li>
<li>Using the <a href="http://kubernetes.io/third_party/swagger-ui/#!/v1beta3/patchReplicationController" rel="nofollow">Kubernetes API</a>, you can update your config with a new value in the <code>replicas</code> field.</li>
</ol>
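<p>For example, resizing via kubectl (the controller name is hypothetical):</p>
<pre><code># Resize the replication controller "frontend" to 5 replicas:
kubectl scale --replicas=5 replicationcontrollers frontend
</code></pre>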
<p>Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed in <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md#responsibilities-of-the-replication-controller" rel="nofollow">responsibilities of the replication controller</a> in the Kubernetes docs.</p>
|
<p>I am able to get a list of all pods running on a kubernetes cluster using:</p>
<pre><code>kubectl get pods
</code></pre>
<p>How do I get all the containers running on a particular pod?</p>
| <p>You can use the <code>describe</code> command:</p>
<pre><code>kubectl describe pod [podname]
</code></pre>
<p>That will specify which containers are in the pod, along with other information.</p>
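<p>If you just want the container names, you can also pull them out of the pod's JSON (pod name is hypothetical, jq is assumed installed, and the field path is per the v1beta3 API):</p>
<pre><code>kubectl get pod mypod -o json | jq -r '.spec.containers[].name'
</code></pre>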
|
<p>I'm trying to setup example from <a href="https://cloud.google.com/container-engine/docs/tutorials/hello-wordpress" rel="nofollow">Running Wordpress with a Single Pod</a>.</p>
<ul>
<li>I've done the <a href="https://cloud.google.com/container-engine/docs/before-you-begin" rel="nofollow">Before You Begin</a> section:</li>
</ul>
<p>$ gcloud config list</p>
<pre><code>[compute]
zone = europe-west1-c
[core]
account = [email protected]
disable_usage_reporting = False
project = com-project-default
</code></pre>
<ul>
<li><p>I've done the steps from the tutorial:</p>
<p>"Step 1: Create your cluster" <a href="http://pastebin.com/STBtnuxC" rel="nofollow">logs here</a></p>
<p>"Step 2: Create your pod" <a href="http://pastebin.com/4Gv0nJrF" rel="nofollow">logs here</a></p>
<p>"Step 3: Allow external traffic" <a href="http://pastebin.com/ugJ9gztX" rel="nofollow">logs here</a></p></li>
<li><p>More logs:</p>
<p>$kubectl get pods - <a href="http://pastebin.com/wNQ8T5Fp" rel="nofollow">log (toggle text wrapping)</a></p>
<p>$gcloud compute firewall-rules list - <a href="http://pastebin.com/ayGwRveq" rel="nofollow">log</a></p></li>
</ul>
<p>So, when I try to connect to <a href="http://104.155.7.213/" rel="nofollow">http://104.155.7.213/</a> I'm receiving "This web page is not available: <code>ERR_CONNECTION_REFUSED</code>".</p>
<p>I tried to add "<code>Allow HTTP traffic</code>" explicitly to the node in the Compute Engine VMs dashboard, and I also tried to use "<code>kubectl run</code>" instead of the deprecated "<code>kubectl run-container</code>", but it doesn't help. Also, sometimes I receive "<code>last termination: exit code 2</code>" (<code>1</code> or <code>2</code>) in the "<code>message</code>" column when running "<code>kubectl get pods</code>" (but not this time).</p>
<p>Info:</p>
<p><a href="https://cloud.google.com/container-engine/release-notes" rel="nofollow">GKE from June 10, 2015</a></p>
<p>$ kubectl version</p>
<pre><code>Client Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.1", GitCommit:"befd1385e5af5f7516f75a27a2628272bb9e9f36", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.2", GitCommit:"1f12b893876ad6c41396222693e37061f6e80fe1", GitTreeState:"clean"}
</code></pre>
<p>$ gcloud version</p>
<pre><code>Google Cloud SDK 0.9.64
alpha 2015.06.02
bq 2.0.18
bq-nix 2.0.18
compute 2015.06.09
core 2015.06.09
core-nix 2015.06.02
dns 2015.06.02
gcloud 2015.06.09
gcutil-msg 2015.06.09
gsutil 4.13
gsutil-nix 4.12
kubectl
kubectl-linux-x86_64 0.18.1
preview 2015.06.09
sql 2015.06.09
</code></pre>
<p>Thank you for your help in advance!</p>
| <p>If you want to access the container directly using the node VM's IP address, you need to specify a host port in addition to a container port, e.g.
<code>kubectl run-container wordpress --image=tutum/wordpress --port=80 --hostport=80</code></p>
<p>Alternatively, you can access wordpress via the proxy running on the master by running <code>kubectl proxy</code> and then pointing your web browser at <code>http://localhost:8001/api/v1beta3/proxy/namespaces/default/pods/wordpress-3gaq6</code>. </p>
|
<p>I have a kubernetes cluster, and I am wondering how (best practice) to update containers. I know the idea is to tear down the old containers and put up new ones, but is there a one-liner I can use? Do I have to remove the replication controller or pod(s) and then spin up new ones (pods or replication controllers)? With this I am using a self-hosted private registry that I know I have to build from the Dockerfile and push to anyway; that part I can automate with gulp (or any other build tool). Can I also automate the kubernetes update/tear-down and bring-up?</p>
| <p>Kubectl can automate the process of rolling updates for you. Check out the docs here:
<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl_rolling-update.md</a></p>
<p>A rolling update of an existing replication controller <code>foo</code> running Docker image <code>bar:1.0</code> to image <code>bar:2.0</code> can be as simple as running
<code>kubectl rolling-update foo --image=bar:2.0</code>.</p>
|
<p>I'm creating a kubernetes cluster, and in it I have several services. I know based on <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#discovering-services</a> I have two options.</p>
<ol>
<li><p>use the environment variables set by the kubelet.</p></li>
<li><p>use skydns</p></li>
</ol>
<p>I want to try to use the environment variables first before I go adding another dependency into the mix. However, I'm unsure where the environment variables are for each service. I haven't found them when doing <code>env</code> or <code>sudo env</code> on the kubelet. Are they within a certain container and/or pod? If so do I have to link the other pods to that one to get its environment variables for services?</p>
<p>I have several NodeJS services in containers, so I'm wondering if talking to each service would require something like
<code>process.env.SERVICE_X_PUBLIC_IPV4</code> (note that <code>process.env</code> is an object, not a function) to get the ip, once I have the environment variable thing sorted out.</p>
<p>Not as important, but related, how does this all work across multiple nodes?</p>
| <p>The environment variables for a given service are put in every container that is started after the service was created.</p>
<p>For example, if you create a pod <code>foo</code> and then later a service <code>bar</code>, the pod's containers won't have any environment variables for <code>bar</code>.</p>
<p>If you instead create service <code>bar</code> and then a pod <code>foo</code>, the pod's containers should have environment variables something like:</p>
<pre><code>BAR_PORT=tcp://10.167.240.1:80
BAR_SERVICE_HOST=10.167.240.1
</code></pre>
<p>You can test this out by attaching a terminal to one of your containers, as explained <a href="https://stackoverflow.com/a/26496854/1925481">here</a>.</p>
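<p>For example, using the same <code>kubectl exec</code> style seen elsewhere on this page:</p>
<pre><code># List the service variables injected into the pod "foo":
kubectl exec -p foo -- env | grep ^BAR_
</code></pre>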
|
<p>I have a k8s cluster with 3 minions, a master, and haproxy in front. When I use</p>
<pre><code>kubectl exec -p $POD -i -t -- bash -il
</code></pre>
<p>for accessing bash in the pod (it is a single container in this case) I get in, and after something like 5 mins I get dropped out of the terminal. If I reenter the container I can see my old bash process running, with a new one started for my new connection. Is there a way to prevent this from happening? When I'm using docker exec it works fine and doesn't drop me, so I guess it comes from kubernetes. </p>
<p>As a bonus question - is there a way to increase the characters per line when using kubectl exec? I get truncated output that is different from docker exec.</p>
<p>Thanks in advance!</p>
| <p>It is a known issue -
<a href="https://github.com/kubernetes/kubernetes/issues/9180" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/9180</a></p>
<p>The kubelet webserver times out.</p>
|
<p>I'd like to run two pods on exclusive nodes. For instance, I have 4 nodes (node-1, node-2, node-3, node-4) and 2 pods (pod-1, pod-2). I want only one pod to run on each node and each pod to run on two nodes, e.g. pod-1 on node-1 and node-2, pod-2 on node-3 and node-4. Is there a way to configure this?</p>
| <p>You can force exclusivity by creating pod definitions that are unable to schedule on the same machine. The easiest way to do that is to assign each pod the same host port. Once you have the same host port set for both of your pod definitions, if you create two replication controllers with two replicas each, then the scheduler will run 2 copies of 2 pods spread across 4 machines. </p>
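<p>A sketch of one of the two controllers in the v1beta1 JSON style used elsewhere here (names, image, and ports are hypothetical); give both controllers the same <code>hostPort</code> so their pods can never share a node:</p>
<pre><code>cat > pod-1-controller.json <<'EOF'
{
  "id": "pod-1-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "pod-1"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "pod-1-controller",
          "containers": [{
            "name": "app",
            "image": "example/app",
            "ports": [{"containerPort": 8080, "hostPort": 31000}]
          }]
        }
      },
      "labels": {"name": "pod-1"}
    }},
  "labels": {"name": "pod-1"}
}
EOF
kubectl create -f pod-1-controller.json
</code></pre>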
|
<p>I follow the example at <a href="https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns</a></p>
<p>But I cannot get the nslookup output shown in the example.</p>
<p>When I execute</p>
<pre><code>kubectl exec busybox -- nslookup kubernetes
</code></pre>
<p>It is supposed to return</p>
<pre><code>Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
</code></pre>
<p>But I only get</p>
<pre><code>nslookup: can't resolve 'kubernetes'
Server: 10.0.2.3
Address 1: 10.0.2.3
error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1
</code></pre>
<p>My Kubernetes is running on a VM, and its ifconfig output is as below:</p>
<pre><code>docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:50 errors:0 dropped:0 overruns:0 frame:0
TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2899 (2.8 KB) TX bytes:2343 (2.3 KB)
eth0 Link encap:Ethernet HWaddr 08:00:27:ed:09:81
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4735 errors:0 dropped:0 overruns:0 frame:0
TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:367445 (367.4 KB) TX bytes:280749 (280.7 KB)
eth1 Link encap:Ethernet HWaddr 08:00:27:1f:0d:84
inet addr:192.168.144.17 Bcast:192.168.144.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:330 (330.0 B) TX bytes:1746 (1.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:127976 errors:0 dropped:0 overruns:0 frame:0
TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13742978 (13.7 MB) TX bytes:13742978 (13.7 MB)
veth142cdac Link encap:Ethernet HWaddr e2:b6:29:d1:f5:dc
inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1336 (1.3 KB) TX bytes:1336 (1.3 KB)
</code></pre>
<p>Here are the steps I used to start Kubernetes:</p>
<pre><code>vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh
+++ [0623 11:18:47] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/hyperkube
cmd/kubernetes
plugin/cmd/kube-scheduler
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
examples/k8petstore/web-server
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0623 11:18:52] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Starting etcd
etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null
Waiting for etcd to come up.
+++ [0623 11:18:53] etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [0623 11:18:55] apiserver:
{
  "kind": "PodList",
  "apiVersion": "v1beta3",
  "metadata": {
    "selfLink": "/api/v1beta3/pods",
    "resourceVersion": "11"
  },
  "items": []
}
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.
Logs:
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log
To start using your cluster, open up another terminal/tab and run:
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
cluster/kubectl.sh
</code></pre>
<p>Then in a new terminal window, I executed:</p>
<pre><code>cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
</code></pre>
<p>After that, I created the busybox Pod as</p>
<pre><code>kubectl create -f busybox.yaml
</code></pre>
<p>The content of the busybox.yaml is from <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md</a></p>
| <p>It doesn't appear that <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/hack/local-up-cluster.sh" rel="nofollow">local-up-cluster.sh</a> supports DNS out of the box. For DNS to work, the kubelet needs to be passed the flags <code>--cluster_dns=<ip-of-dns-service></code> and <code>--cluster_domain=cluster.local</code> at startup. These flags aren't included in <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/hack/local-up-cluster.sh#L240-L247" rel="nofollow">the set of flags passed to the kubelet</a>, so the kubelet won't try to contact the DNS pod that you've created for name resolution services. </p>
<p>To fix this, you can modify the script to add these two flags to the kubelet and then when you create a DNS service, you need to make sure that you set the same ip address that you passed to the <code>--cluster_dns</code> flag as the <code>portalIP</code> field of the service spec (see an example <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/skydns-svc.yaml.in" rel="nofollow">here</a>).</p>
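<p>Concretely, that means adding something like this to the kubelet invocation in the script (the DNS IP below must match the <code>portalIP</code> you give the skydns service):</p>
<pre><code>kubelet ... --cluster_dns=10.0.0.10 --cluster_domain=cluster.local
</code></pre>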
|
<p>I'm trying to run a docker container with nginx on a kubernetes cluster. I'm using the environment variable service discovery for all my other containers, so I would like to keep it consistent and not have to bring something like skydns into the mix just because of this. Is it possible to access environment variables in nginx such that I can tell it to proxy-pass to a kubernetes service?</p>
| <p>How about the shell script below, which is run by a Docker container?</p>
<p><a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/295bd3768d016a545d4a60cbb81a4983c2a26968/cluster/addons/fluentd-elasticsearch/kibana-image/run_kibana_nginx.sh</a> ?</p>
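<p>The general trick in that script is to substitute the service environment variables into an nginx config template when the container starts. A minimal entrypoint sketch (the template path and variable names are hypothetical):</p>
<pre><code>#!/bin/sh
# Render the backend address from the Kubernetes service variables,
# then run nginx in the foreground so the container stays up.
sed -e "s|BACKEND_HOST|${BACKEND_SERVICE_HOST}|g" \
    -e "s|BACKEND_PORT|${BACKEND_SERVICE_PORT}|g" \
    /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'
</code></pre>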
|
<p>I have a dev kubernetes cluster setup where I have a minion running kube-proxy and kubelet. Both only start if they can connect to the master's apiserver, which they can. However I am getting </p>
<p><code>error updating node status, will retry: error getting node "10.211.55.126": minion "10.211.55.126" not found</code></p>
<p>I notice prior to that I get this: <code>Server rejected event '&api.Event</code> followed by a large json object with mostly empty string values.</p>
<p>repeatedly when I try running the minion's kubelet. I have it pointing to a private ip and it is reporting that it can't find the public ip. I imagine this is an etcd issue, but I'm not sure; maybe it's flanneld?</p>
<p><strong>Update 1</strong>
I managed to get past the initial error by registering the minion (node?) with the master. This allows it to receive pods from the master and run the containers; however, the minion is still not fully connected, which results in the master continuously pushing more pods to the minion. The kubelet process is reporting: <code>Cannot get host IP: Host IP unknown; known addresses: []</code>. Is there a flag to run kubelet with to give it the host ip?</p>
| <p>Currently, I have to manually register the minion before spinning up the minion instance. This is because there is an open issue right now that prevents the minion from self-registering in certain cases.</p>
<p><strong>UPDATE</strong></p>
<p>Now I'm using kube-register to register each minion/node on start of the kubelet service.</p>
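<p>For anyone hitting the same problem, a rough sketch of registering a node by hand before starting its kubelet — the object shape below is an approximation for the v1beta1 API of that era, so check it against your release:</p>
<pre><code># POST a Minion object describing the node to the apiserver
kubectl create -f - <<EOF
{
  "id": "10.211.55.126",
  "kind": "Minion",
  "apiVersion": "v1beta1",
  "hostIP": "10.211.55.126"
}
EOF
</code></pre>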
|
<p>When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an images running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size. I would think, when you need more compute power, go ahead and increase the machine CPU / ram higher, and then only when you reach the max available compute power allowed, approx 32 cores currently at Google, then you would need to spawn multiple containers.</p>
<p>However, it would seem as if spawning multiple containers regardless of VM size would prove more high-availability service, but Kubernetes will respawn failed containers even in a 1 container replication controller environment. So what I can't figure out is, for what reason would I want more than 1 running container (for the same image) for a reason other than running out of VM Instance Compute size?</p>
| <p>I think you laid out the issues pretty well. The two kinds of scaling you described are called "vertical scaling" (increasing memory or CPU of a single instance) and "horizontal scaling" (increasing number of instances).</p>
<p>On availability: As you observed, you can achieve pretty good availability even with a single container, thanks to auto-restart (at the node level or replication controller level). But it can never be 100% because you will always have the downtime associated with restarting the process, either on the same machine or (if the machine failed) on a new machine. In contrast, horizontal scaling (running multiple replicas of the container) allows effectively "zero downtime" from the end-user's perspective, assuming you have some kind of load balancing or failover mechanism in place among the replicas, and your application is written in a way that allows replication.</p>
<p>On scalability: This is highly application-dependent. For example, vertically scaling CPU for a single-threaded application will not increase the workload it can handle, but running multiple replicas of it behind a load balancer (horizontal scaling) will. On the other hand, some applications aren't written in a way that allows them to be replicated, so for those vertical scaling is your only choice. Many applications (especially "cloud native" applications) are amenable to both horizontal and vertical scaling, but the details are application-dependent. Note that once you need to scale beyond the workload that a single node can handle (due to CPU or memory), you have no choice but to replicate (horizontal scaling).</p>
<p>So the short answer to your question is that people replicate for both availability and scalability.</p>
|
<p>I've got a <code>Pod</code> configuration from Docker that involves 7 containers. It gets stuck in <code>Pending</code> state unless I remove two of the containers from the config. It doesn't matter which two I remove. It only works with five containers, which seems like a hard limit that I can't find documented.</p>
<p>How do I run more than 5 containers in a kubernetes <code>Pod</code> on Google Container Engine?</p>
| <p>I'm fairly sure there isn't a hard cap of 5 containers per pod, so there's likely some other reason why the scheduler can't find a node to run your pod on.</p>
<p>You should be able to find a message saying why the pod is still pending by running <code>kubectl describe pod $PODNAME</code> to see the most recent 'event' that happened to the pod, or by running <code>kubectl get events</code> to see all the recent events from the cluster.</p>
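<p>For convenience, the two commands side by side (<code>$PODNAME</code> is a placeholder):</p>
<pre><code>kubectl describe pod $PODNAME   # most recent events for this pod
kubectl get events              # all recent events from the cluster
</code></pre>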
|
<p>I've installed a kubernetes cluster (using Google's Container Engine) and I noticed a service listening on port 443 on the master server. Tried to access it but it requires username and password, so any ideas what these credentials are?</p>
| <p>You can read the cluster config using kubectl. This will contain the username and password for the UI.</p>
<pre><code>kubectl config view
</code></pre>
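<p>With those credentials you can also talk to the master's API directly — a quick sketch (the master IP is a placeholder, the API path is for the v1beta-era releases, and <code>--insecure</code> skips verification of the cluster's self-signed certificate):</p>
<pre><code>curl --insecure -u admin:PASSWORD https://MASTER_IP/api/v1beta1/pods
</code></pre>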
|
<p>I'm new to Kubernetes. I installed it on my local Ubuntu 14.04 machine. I want to run nginx server and I want see it in my browser. I'm following this <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/locally.md#running-a-user-defined-pod" rel="nofollow noreferrer">section</a>.</p>
<p>It's saying</p>
<blockquote>
<p>However you cannot view the nginx start page on localhost. To verify that nginx is running you need to run curl within the docker container (try docker exec).</p>
</blockquote>
<p>I tried the instructions below to check that the server is running.</p>
<pre><code># docker exec -it d0ef46bcdb8b bash
root@nginx:/# service nginx status
nginx is running.
</code></pre>
<p>Now I want to see it in a web page.</p>
<blockquote>
<p>You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:</p>
<p>cluster/kubectl.sh create -f examples/pod.yaml</p>
</blockquote>
<p>But I don't know how to edit the manifest. How do I reach nginx through the browser?</p>
| <p>The manifest that the documentation is referring to is <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/pod.yaml" rel="nofollow">here</a>. Copy this file onto your local machine (or find it on your system if you've already downloaded a copy of the git repository). You can edit the file using your favorite text editor and then run <code>kubectl create -f pod.yaml</code> to tell the system to create the pod. </p>
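<p>If it helps, here is roughly what that manifest boils down to — an nginx pod exposing a host port you can then open in your browser at <code>http://localhost:8080</code>. The schema shifted between early releases, so treat this as an approximation; the pod name, image, and port are arbitrary choices here:</p>
<pre><code># create a minimal nginx pod with a host port mapping
cluster/kubectl.sh create -f - <<EOF
id: nginx
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: nginx
    containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
            hostPort: 8080
EOF
</code></pre>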
|
<p>I am using a Kubernetes cluster deployed through Google Container Engine (GKE) from the Google Cloud Developer's Console, cluster version 0.19.3. I would like to run a privileged container, like in the <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/nfs" rel="noreferrer">Kubernetes NFS Server</a> example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: jsafrane/nfs-data
ports:
- name: nfs
containerPort: 2049
securityContext:
privileged: true
</code></pre>
<p>Since the default Google Container Engine configuration does not allow privileged containers, the Kubernetes API immediately returns the following error:</p>
<blockquote>
<p>Error from server: Pod "nfs-server" is invalid: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc20a027396)true'</p>
</blockquote>
<p>How can I allow privileged containers in my Google Container Engine cluster?</p>
| <p>Update: Privileged mode is now enabled by default starting with the 1.1 release of Kubernetes, which is now available in Google Container Engine.</p>
<hr>
<p>Running privileged containers (including the NFS server in that example) isn't currently possible in Google Container Engine. We are looking at ways to solve this (adding a flag when creating your cluster to allow privileged containers; making privileged containers part of admission control; etc). For now, if you need to run privileged containers you'll need to launch your own cluster using the GCE provider. </p>
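<p>For completeness, bringing up your own cluster with the GCE provider looks roughly like this, assuming a checkout of the Kubernetes source and a configured <code>gcloud</code>; how the <code>--allow-privileged=true</code> flag gets plumbed through to the kubelet and apiserver depends on the release:</p>
<pre><code>export KUBERNETES_PROVIDER=gce
cluster/kube-up.sh
</code></pre>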
|
<p>I would like to run an Aerospike cluster in Docker containers managed by Kubernetes on CoreOS on Google Compute Engine (GCE). But since GCE does not permit multicast, I have to use mesh heartbeat as described <a href="http://www.aerospike.com/docs/operations/configure/network/heartbeat/#mesh-unicast-heartbeat" rel="nofollow">here</a>, which has to be set up by specifying all nodes' IP addresses and ports; that seems very inflexible to me.</p>
<p>Is there any recommended cloud-config settings for Aerospike cluster on Kubernetes/CoreOS/GCE with flexibility of the cluster being kept?</p>
| <p>An alternative to specifying all mesh seed IP addresses is to use the <code>asinfo</code> <code>tip</code> command.</p>
<p>Please see:</p>
<p><a href="http://www.aerospike.com/docs/reference/info/#tip" rel="noreferrer">http://www.aerospike.com/docs/reference/info/#tip</a></p>
<p>The <code>tip</code> command:</p>
<pre><code>asinfo -v 'tip:host=172.16.121.138;port=3002'
</code></pre>
<p>The above command could be added to a script or an orchestration tool with the correct IPs.</p>
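<p>As a sketch of what such a script might look like — tipping this node toward each of its mesh peers (the peer IPs below are placeholders):</p>
<pre><code>#!/bin/sh
# Introduce this node to every other Aerospike node in the cluster.
for peer in 10.240.0.2 10.240.0.3 10.240.0.4; do
  asinfo -v "tip:host=${peer};port=3002"
done
</code></pre>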
<p>You may also find additional info on the Aerospike forum:</p>
<p><a href="https://discuss.aerospike.com/" rel="noreferrer">Aerospike Forum</a></p>
|
<p>I created a cluster on Google Compute Engine using the command:</p>
<pre><code>./kube-up.sh
</code></pre>
<p>Kubernetes created 1 master and 4 minion servers. I tried deleting two minions in Google Cloud, but they are recreated.</p>
<p>I also tried deleting the Kubernetes minions with kubectl and then deleting the VMs. This fails.</p>
<pre><code>kubectl delete nodes kubernetes-minion-XXX
</code></pre>
| <p><code>kube-up.sh</code> created a managed instance group with size 4 which caused 4 nodes to be created. If you delete a node, the managed instance group will replace it so that you always have 4 nodes. You can change the number of nodes in the managed instance group by navigating to the "Instance Groups" link in the sidebar (under Compute -> Compute Engine), clicking on the group name, clicking Edit Group, and then changing the integer in the Number of Instances field. </p>
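<p>Equivalently from the command line — a sketch, since the exact <code>gcloud</code> command group has moved between releases, and the group name and zone below are placeholders (check <code>gcloud compute instance-groups managed list</code>):</p>
<pre><code>gcloud compute instance-groups managed resize kubernetes-minion-group \
    --size 2 --zone us-central1-b
</code></pre>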
<p>If you want to delete your cluster, including all node VMs, you should run <code>kube-down.sh</code>. </p>
|
<p>When using Kubernetes to manage your docker containers, particularly when using the replication controller, when should you increase an images running container instances to more than 1? I understand that Kubernetes can spawn as many container replicas as needed in the replication controller configuration file, but why spawn multiple running containers (for the same image) when you can just increase the Compute VM size. I would think, when you need more compute power, go ahead and increase the machine CPU / ram higher, and then only when you reach the max available compute power allowed, approx 32 cores currently at Google, then you would need to spawn multiple containers.</p>
<p>However, it would seem as if spawning multiple containers regardless of VM size would prove more high-availability service, but Kubernetes will respawn failed containers even in a 1 container replication controller environment. So what I can't figure out is, for what reason would I want more than 1 running container (for the same image) for a reason other than running out of VM Instance Compute size?</p>
| <p>There are a variety of reasons why you would scale an application up or down. </p>
<p>The Kubernetes project is looking to provide auto-scaling in the future as a feature to dynamically size up and size down (potentially to 0) a replication controller in response to observed traffic. For a good discussion on auto-scaling, see the following write-up:</p>
<p><a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md" rel="nofollow">https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md</a></p>
|
Converted from https://huggingface.co/datasets/mcipriano/stackoverflow-kubernetes-questions/blob/main/README.md
Format changed from Parquet to CSV.
Converting code as below:
```python
import pandas as pd

# reading Parquet requires a Parquet engine (pyarrow or fastparquet)
data = pd.read_parquet("~/Downloads/kubernetes_dump.parquet")
#print(data.count())
#data.head()
data.to_csv('/tmp/out.csv', index=False)
```