Let me know if you need me in the meeting, if there are datacatalog-specific topics
hey join <https://meet.google.com/bjm-pvuq-bcs> Andrew Chan ^
Hi everyone! Could you please help? I've set up a schedule for the launch plan in my project. Question: how can I prevent a new launch plan execution if the previous one is still running? I tried to find such an option in the source code but couldn't find it. Concurrent execution of the job is unacceptable for my use case. Thank you in advance!
currently it is not possible to do this - could you open up a github issue in the flyte repo please? could you also describe the use-case/why you want to do this? context always helps.
thanks, I will create the issue on github
Hi. I’m working on a flytepropeller plugin. Just a quick question here: am I not supposed to have access to kubeClient, or is there a way to use it?
Hey Igor Valko :wave: To clarify your question: are you asking if your plugin will have a client so that you can create and watch k8s resources? If so, the answer is yes, your plugin should have access to that.
You might be referring to core plugins. For k8s plugins I see no means to reach the kubeClient <https://github.com/lyft/flytepropeller/blob/master/pkg/controller/nodes/task/k8s/plugin_manager.go> or am I missing something?
not missing anything… so the plugin framework is a bit split right now… flyteplugins contains all the interfaces, and flytepropeller implements them. since most use-cases deal with K8s resources, and because there’s a lot of commonality between plugins that just deal with K8s objects, those were all grouped together, into the plugin_manager you referenced. however, in the interest of iteration speed, it was done in the flytepropeller repo. in the future we expect to move it out (along with some general restructuring). so… if you’re working on a new plugin for k8s, then i think it belongs there. if you’re not, and you’re writing a general plugin that goes into the flyteplugins repo, the client is available, yes, but only in the SetupContext() of the plugin. so it’s incumbent upon the plugin writer to save the pointer. does this make sense?
Yes, I see your point. Thanks for the prompt answer.
Thank you for your time and answers! It was a very helpful and inspiring meeting! Can you please remind us - the Flyte OSS meeting is held on Tuesday at 9AM, correct?
Correct :slightly_smiling_face:
Thank you! Johnny Burns Do you have meets in Hangouts or Zoom or smth else ?
Ruslan Stanevich it's on the <http://github.com/lyft/flyte|github.com/lyft/flyte> page, i should have sent it earlier
see it thank you!
We should probably use <https://aws.amazon.com/awscredits/|https://aws.amazon.com/awscredits/> to perform CI for Flyte releases
Are you saying you want to perform CI in AWS?
Ya It’s free for some
Interesting. Can you elaborate a bit more? What's your vision for running free CI on AWS?
Every merge to master, at least on the Flyte branch, gets a set of golden tests
How do we qualify for free AWS though (the link you showed is just a trial period, I think)? And would you manually manage the testing platform?
No, they have free credits for open source
Ah, cool. That would be rad. I think we have to apply for that?
Hey all, a bit of a vague request. Many of you know I am building a dagster -> flytekit compiler. I have it seemingly working (can execute the python tasks locally), can successfully register the tasks.
wow super cool! if you have any questions in regards to working with flytekit, feel free to ping me.
Thank you Matt!
Almost certainly memory/cpu I think. Here's what I am noticing: my k8s cluster has spun up on the order of 30 syncresources pods that seem to be stuck in pending state. My guess is that is what is going wrong.
Jordan Bramble What happens when you describe the pods using `kubectl describe` ?
I ended up deleting the ones that were pending. I re-registered the tasks, and now when I launch the workflow, it hangs with status "UNKNOWN". I no longer see a container corresponding to the workflow being created. This is different than before. screenshot inbound
This is likely a different problem from your previous problem. Is it possible `propeller` is not running? How did you delete the pending ones? can you do `kubectl get pods -n flyte` ?
I deleted the containers that were hanging doing: `kubectl get pods -n flyte | grep Pending | awk '{print $1}' | xargs kubectl -n flyte delete pod` here are current pods in the flyte namespace
```
Jordans-MacBook-Pro-2:flytekit jordanbramble$ kubectl get pods -n flyte
NAME                              READY   STATUS      RESTARTS   AGE
datacatalog-6f9db4f88f-2vbg8      1/1     Running     0          21h
flyteadmin-694cc79fb4-dmr7x       2/2     Running     0          21h
flyteconsole-749fcd46d5-bn7rk     1/1     Running     0          21h
flytepropeller-6f897bfd68-4krx8   1/1     Running     0          21h
minio-f58cffb47-qqccw             1/1     Running     0          21h
postgres-759fc6996-bkh95          1/1     Running     0          21h
redis-0                           1/1     Running     0          21h
syncresources-1587527520-7xcgg    0/1     Completed   0          19h
syncresources-1587527580-gprkr    0/1     Completed   0          19h
syncresources-1587527640-wtm2m    0/1     Completed   0          19h
```
Your pending pods were running in the `flyte` namespace?
yes, they were all syncresources-*. I previously had a pod running in a namespace created for the task that I registered in flyte, but I deleted those as well after aborting them. Some were in error.
Hmmm... Your workflow still hasn't launched? Maybe check the FlytePropeller logs?
forgive me for a dumb question, do I do that by running kubectl logs for that pod?
Ah. yeah, you can do `kubectl logs -n flyte {propeller pod name}` Also, what happens if you do `kubectl get pods -n {your-project-namespace}-{your-environment}`? maybe `dagstertest-staging` is the namespace
yes that was the namespace when I originally started this thread, but launching in the flyte UI is no longer creating a pod under that namespace anymore. when I try to access propeller logs: `Error from server: Get <https://172.17.0.2:10250/containerLogs/flyte/flytepropeller-6f897bfd68-4krx8/flytepropeller>: dial tcp 172.17.0.2:10250: connect: connection refused` I am surprised by this, I thought all of these pods were running locally on minikube.
Maybe your minikube VM doesn't expose that port. Maybe try to `minikube ssh` and then check logs. You can also check the logs for the `flyteadmin` service
inside of minikube VM, any idea of the significance of these '/pause' commands?
```
docker@minikube:~/dagster_flyte_test$ docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED        STATUS        PORTS   NAMES
33d62a034040   gcr.io/heptio-images/contour           "contour serve --inc…"   22 hours ago   Up 22 hours           k8s_contour-unknown_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
a101f465f201   redocly/redoc                          "sh -c 'ln -s /usr/s…"   22 hours ago   Up 22 hours           k8s_redoc_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
68a1332e8e89   gcr.io/spark-operator/spark-operator   "/usr/bin/spark-oper…"   22 hours ago   Up 22 hours           k8s_sparkoperator-unknown_sparkoperator-96ffc7d89-6zdtq_sparkoperator_9526aae1-57a8-42b5-8217-f18ba3e4683c_0
efbfe8b000db   envoyproxy/envoy-alpine                "envoy -c /config/co…"   22 hours ago   Up 22 hours           k8s_envoy-envoyingressv1_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
c661e16339cc   52f60f817d16                           "datacatalog --logto…"   22 hours ago   Up 22 hours           k8s_datacatalog_datacatalog-6f9db4f88f-2vbg8_flyte_7ca5e39e-9941-4f65-9466-17505d9a817c_0
68b80f51e5f0   66c598488568                           "flyteadmin --logtos…"   22 hours ago   Up 22 hours           k8s_flyteadmin_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
993a7f700108   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_sparkoperator-96ffc7d89-6zdtq_sparkoperator_9526aae1-57a8-42b5-8217-f18ba3e4683c_0
a07d6ed8e489   bitnami/redis                          "/app-entrypoint.sh …"   22 hours ago   Up 22 hours           k8s_redis-resource-manager_redis-0_flyte_34fc92ab-b139-4a9d-b03d-517357c8d034_0
faaf08d47321   postgres                               "docker-entrypoint.s…"   22 hours ago   Up 22 hours           k8s_postgres_postgres-759fc6996-bkh95_flyte_b6230298-5a24-4f67-88a3-bf194e6fffb1_0
7564ed54b69c   lyft/flyteconsole                      "/nodejs/bin/node in…"   22 hours ago   Up 22 hours           k8s_flyteconsole_flyteconsole-749fcd46d5-bn7rk_flyte_6b67253b-170c-4844-ac07-768328e84b2e_0
ee1ea2da1f91   minio/minio                            "/usr/bin/docker-ent…"   22 hours ago   Up 22 hours           k8s_minio_minio-f58cffb47-qqccw_flyte_d6cd809f-4fa2-40f6-9102-73283a5b1890_0
f7f27e19889d   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_contour-d7cff74b5-r8mrv_heptio-contour_84c1a2c5-400f-4de0-b261-a92cdd33f64d_0
954f319247c4   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_flyteadmin-694cc79fb4-dmr7x_flyte_43875e46-837f-487e-a62e-9df797d5f113_0
4e8bd45cb1d5   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_redis-0_flyte_34fc92ab-b139-4a9d-b03d-517357c8d034_0
f388e6da9b66   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_flytepropeller-6f897bfd68-4krx8_flyte_c95423eb-4003-4206-958d-401bd8131fe5_0
66c2d1a92c2e   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_postgres-759fc6996-bkh95_flyte_b6230298-5a24-4f67-88a3-bf194e6fffb1_0
d471a6426605   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_minio-f58cffb47-qqccw_flyte_d6cd809f-4fa2-40f6-9102-73283a5b1890_0
880edc7cb3d8   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_datacatalog-6f9db4f88f-2vbg8_flyte_7ca5e39e-9941-4f65-9466-17505d9a817c_0
70948f2729fb   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_flyteconsole-749fcd46d5-bn7rk_flyte_6b67253b-170c-4844-ac07-768328e84b2e_0
ba2cc20ff246   4689081edb10                           "/storage-provisioner"   22 hours ago   Up 22 hours           k8s_storage-provisioner_storage-provisioner_kube-system_c520de17-88ec-4048-afec-4b8ddb1c0824_1
6545d5844fb8   43940c34f24f                           "/usr/local/bin/kube…"   22 hours ago   Up 22 hours           k8s_kube-proxy_kube-proxy-5gdrd_kube-system_1c6697ba-ea64-4a09-b425-2bf52cccb08e_0
812e4a17d215   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_kube-proxy-5gdrd_kube-system_1c6697ba-ea64-4a09-b425-2bf52cccb08e_0
0f05d4700c5c   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_storage-provisioner_kube-system_c520de17-88ec-4048-afec-4b8ddb1c0824_0
00b42b722632   67da37a9a360                           "/coredns -conf /etc…"   22 hours ago   Up 22 hours           k8s_coredns_coredns-66bff467f8-d4lmm_kube-system_1d96de74-9f98-49d2-b796-e131768dc5c1_0
ab3390711d79   67da37a9a360                           "/coredns -conf /etc…"   22 hours ago   Up 22 hours           k8s_coredns_coredns-66bff467f8-9fps6_kube-system_00ccab71-d412-4137-87a1-404100c73eb4_0
8713a2a69a5b   aa67fec7d7ef                           "/bin/kindnetd"          22 hours ago   Up 22 hours           k8s_kindnet-cni_kindnet-2ncvj_kube-system_dcbf8ff7-bcf9-497a-8ffc-75fc900a58b4_0
15c6f7df4812   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_kindnet-2ncvj_kube-system_dcbf8ff7-bcf9-497a-8ffc-75fc900a58b4_0
b414ff1d4a11   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_coredns-66bff467f8-d4lmm_kube-system_1d96de74-9f98-49d2-b796-e131768dc5c1_0
83c21644b747   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_coredns-66bff467f8-9fps6_kube-system_00ccab71-d412-4137-87a1-404100c73eb4_0
bd5e08da5f5c   303ce5db0e90                           "etcd --advertise-cl…"   22 hours ago   Up 22 hours           k8s_etcd_etcd-minikube_kube-system_ca02679f24a416493e1c288b16539a55_0
14d447e9a97d   74060cea7f70                           "kube-apiserver --ad…"   22 hours ago   Up 22 hours           k8s_kube-apiserver_kube-apiserver-minikube_kube-system_45e2432c538c36239dfecde67cb91065_0
23ba80eb3f24   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_etcd-minikube_kube-system_ca02679f24a416493e1c288b16539a55_0
72a44a3fafbb   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_kube-scheduler-minikube_kube-system_5795d0c442cb997ff93c49feeb9f6386_0
d62e857b95d5   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_kube-controller-manager-minikube_kube-system_c92479a2ea69d7c331c16a5105dd1b8c_0
3b4cc84d6c30   k8s.gcr.io/pause:3.2                   "/pause"                 22 hours ago   Up 22 hours           k8s_POD_kube-apiserver-minikube_kube-system_45e2432c538c36239dfecde67cb91065_0
```
looks like that happened to propeller and a few other things.
What does Flyte use redis for? Is it for allocating and releasing tokens for external resources?
Yes, it uses redis to throttle requests to external services to maintain quotas and pools. It should be optional. In kubernetes this is achieved by resource quota and other means
Just to add color to this: In an ideal scenario, external services maintain server side enforced quota/limits within which the service should be expected to behave normally. In the real world though, this is not always the case. As Ketan said, this is an optional component in the sense that plugin authors can choose to use it if the target service they communicate with doesn't follow that norm. But as a system administrator, if you choose to enable one of these plugins (e.g. Hive Plugin), you are required to set up Redis (or choose Noop, which basically means you get no protection from the client side)
Another question, after installing the Flyte sandbox I'm running the flytesnacks workflow and receiving a 500:
```
$ docker run --network host -e FLYTE_PLATFORM_URL='127.0.0.1:30081' lyft/flytesnacks:v0.1.0 pyflyte -p flytesnacks -d development -c sandbox.config register workflows
Using configuration file at /app/sandbox.config
Flyte Admin URL 127.0.0.1:30081
Running task, workflow, and launch plan registration for flytesnacks, development, ['workflows'] with version 46045e6383611da1cb763d64d846508806fce1a4
Registering Task: workflows.edges.edge_detection_canny
Traceback (most recent call last):
  File "/app/venv/bin/pyflyte", line 11, in <module>
    sys.exit(main())
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/app/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 86, in workflows
    register_all(project, domain, pkgs, test, version)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clis/sdk_in_container/register.py", line 24, in register_all
    o.register(project, domain, name, version)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/exceptions/scopes.py", line 158, in system_entry_point
    return wrapped(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/common/tasks/task.py", line 141, in register
    _engine_loader.get_engine().get_task(self).register(id_to_register)
  File "/app/venv/lib/python3.6/site-packages/flytekit/engines/flyte/engine.py", line 234, in register
    self.sdk_task
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/friendly.py", line 50, in create_task
    spec=task_spec.to_flyte_idl()
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 12, in handler
    return fn(*args, **kwargs)
  File "/app/venv/lib/python3.6/site-packages/flytekit/clients/raw.py", line 77, in create_task
    return self._stub.CreateTask(task_create_request)
  File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 604, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/app/venv/lib/python3.6/site-packages/grpc/_channel.py", line 506, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.CANCELLED
    details = "Received http2 header with status: 500"
    debug_error_string = "{"created":"@1587993306.222037400","description":"Received http2 :status header with non-200 OK status","file":"src/core/ext/filters/http/client/http_client_filter.cc","file_line":122,"grpc_message":"Received http2 header with status: 500","grpc_status":1,"value":"500"}"
>
```
Do you mind taking a look at this <https://flyte-org.slack.com/archives/CP2HDHKE1/p1584576549032500?thread_ts=1584576549.032500|thread> for suggestions on how to approach this? I would recommend searching the <#CP2HDHKE1|onboarding> channel; there are a lot of gems in there :slightly_smiling_face:
_New user here._ I was hoping to invoke microservices with flyte. However, in the discussion on task <https://github.com/lyft/flyte/blob/25d79e37bd02f200976312cbe592a66c563d0041/rsts/user/concepts/tasks.rst|requirements> there is this note on pure functions (*bold* text is my own to call out the question):
> Is it a *pure* function? i.e. does it have side effects that are not known to the system (e.g. calls a web-service). It's strongly advisable to *avoid side-effects* in tasks. When side-effects are required, ensure that those operations are *idempotent*.
What are the best practices when calling RESTful (or gRPC) APIs so that one doesn't invalidate the *idempotent requirement?*
Hi Joseph, first of all welcome! Hey, it is ‘advisable’ to avoid side effects because of idempotency; for example, to provide retries and deterministic re-execution, it should be replayable. That being said, if you are calling microservices then there are ways of making it idempotent, but making it replayable is very hard. Also, if you are calling microservices it should be fine to ignore the warning, but also disable caching (default is disabled)
To illustrate ketan's point, if you had an order processing workflow, and the last step calls out to the payments service, you'd need to make sure that if the task re-runs, it doesn't charge the credit card twice.
Understood. So, if I use compensating transactions I would be cool.
Yup. And again, this is a correctness requirement, not an operating requirement. If you don’t mind I would love to understand your requirement
I agree that we can make the microservices idempotent.
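For illustration, a minimal sketch of that idempotency idea (not Flyte API; the payments client and its `idempotency_key` parameter are hypothetical): derive a deterministic key from the execution, so a retried task sends the same key and the service can de-dupe the charge.
```python
import hashlib


def charge_order(payments_client, execution_id, order_id, amount_cents):
    # Same execution + order always yields the same key, so the (hypothetical)
    # payments service can recognize a retry and avoid charging twice.
    idempotency_key = hashlib.sha256(
        "{}:{}".format(execution_id, order_id).encode()
    ).hexdigest()
    return payments_client.charge(
        order_id=order_id,
        amount_cents=amount_cents,
        idempotency_key=idempotency_key,
    )
```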
_Another new user question_ -- What are the recommended practices for the edit, debug, test cycle in flyte?
Great question. so here is what we do at Lyft, i am not saying this is the right way:
1. When you write code, you write one task at a time and hopefully unit test it somewhat
2. When you are somewhat confident, you can build a container and register it
3. then you run an execution and debug
2/3 are done automatically using Pull Requests in github - we automatically push the container and register the flow with Flyteadmin
That process makes sense. I'd like more details on the debugging side. I'd assume that most of the time people are just logging what seems to be important. When something goes wrong, they look at the output and add extra prints to chase down the issue.
Here is something I do in personal projects sometimes: I place my task logic in a separate python module. My Flyte workflow mostly handles passing the inputs to those module functions. If I have issues, I just `docker run -it <mydockerimage>` then use a python interpreter and manually test the python module.
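A rough sketch of that layout (module and function names are made up; it follows the old flytekit decorator style used elsewhere in this thread):
```python
# my_logic.py - plain python, easy to exercise from a REPL inside the container
def count_rows(csv_path):
    with open(csv_path) as f:
        return sum(1 for _ in f)


# workflows.py - the Flyte task is only a thin wrapper around the module function
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types
# from my_logic import count_rows  (import it when the two files are separate)


@inputs(csv_path=Types.String)
@outputs(row_count=Types.Integer)
@python_task
def count_rows_task(wf_params, csv_path, row_count):
    row_count.set(count_rows(csv_path))
```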
Joseph Winston also Flyte has a way to debug locally from a remote execution. It allows you to load data from partial steps remotely and restart an execution, maybe of a single task, locally. I use this to debug, like in a jupyter notebook
Ketan Umare, can you please point me to the documentation? Thanks.
here is some private documentation… the earlier portions contain lyft-specific stuff and we just haven’t had time to port things over yet. you’ll need to make sure the correct settings are set as well.
Thank you Yee - Joseph Winston, we will create an issue to port over this documentation
so perhaps start your python session with something like `FLYTE_PLATFORM_URL=<http://blah.net|blah.net>`
Let me try this.
Thank you
When I submit my workflow, I receive the following traceback:
```
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "workflow with different structure already exists with id resource_type:WORKFLOW project:"afi" domain:"development" name:"empty-afi.empty.AFI_WF" version:"3c8408be6ab9eb1736d237ce3e71e7dbd2f5eff8" "
    debug_error_string = "{"created":"@1588358901.586477489","description":"Error received from peer ipv4:172.17.252.205:30081","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"workflow with different structure already exists with id resource_type:WORKFLOW project:"afi" domain:"development" name:"empty-afi.empty.AFI_WF" version:"3c8408be6ab9eb1736d237ce3e71e7dbd2f5eff8" ","grpc_status":3}"
>
```
How do I delete workflows? Or projects? Or tasks? This fails:
```
curl -X DELETE ${FLYTE_PLATFORM_URL}/api/v1/projects -d '{"project": {"id": "afi", "name": "afi"} }'
```
you can't delete workflows atm but you can always register with a different version if the workflow structure has changed in the meantime
At the moment, no entities are deletable, short of directly modifying the database used by Flyte Admin. Workflow IDs are a unique combination of project/domain/name/version. A new version should exist any time the underlying code changes (it’s common practice for the version to be defined by the current git sha). So if you’re seeing that error, it should mean you don’t need to register it again.
Joseph Winston all entities in Flyte are immutable, barring a few aesthetic attributes. You shouldn't worry about wrong registrations, just register new ones
Thanks. I'm doing exactly that.
Does it matter what order you stack the decorators in flytekit? In other words could you specify inputs, outputs, and python_task in any order?
the task decorator needs to come before, but no order for inputs/outputs. you might even be able to apply input and output decorators multiple times. is there something specific you are trying to implement? if the decorators are making it difficult, i might be able to suggest a cleaner way.
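For reference, a minimal sketch of the stacking in the flytesnacks style (task and field names here are made up): `@python_task` sits closest to the function (it is applied first), and the `@inputs`/`@outputs` decorators above it can be listed in either order.
```python
from flytekit.sdk.tasks import inputs, outputs, python_task
from flytekit.sdk.types import Types


# @inputs and @outputs may be swapped; @python_task stays nearest the function.
@inputs(x=Types.Integer)
@outputs(doubled=Types.Integer)
@python_task
def double(wf_params, x, doubled):
    doubled.set(x * 2)
```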
just curious. As you know I've been building a dagster to flyte compiler. Now I want to include Flyte's typed inputs and outputs. I am programmatically constructing SdkRunnableTasks currently. Now I am trying to figure out the architecture for constructing Inputs and Outputs as well.
cool! i know parts of dagster are open-source, do you have an example of what you have currently?
yes! <https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-flyte/dagster_flyte/flyte_compiler.py#L74> This currently bypasses inputs/outputs in flyte, and relies on dagster for everything.
awesome, and what does passing inputs and outputs look like in dagster, is it by file(s)?
so there's a few things. The fields in the function sig can have type hints and dagster will utilize them. Additionally you can pass input configs and output configs to the `@solid` decorator, and create InputDefinition/OutputDefinition. Let me show you some docs, some examples in here: <https://docs.dagster.io/docs/tutorial/basics#providing-input-values-for-custom-types-in-config> <https://docs.dagster.io/docs/apidocs/solids#dagster.InputDefinition>
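Roughly what the linked dagster docs describe, as a simplified sketch (the solid itself is made up) of the metadata a dagster -> flyte compiler would have to map onto Flyte's typed task interface:
```python
from dagster import solid, InputDefinition, OutputDefinition, Int


# A solid with explicit input/output definitions, the dagster analogue of
# Flyte's @inputs/@outputs on a task.
@solid(
    input_defs=[InputDefinition("num", Int)],
    output_defs=[OutputDefinition(Int, "result")],
)
def add_one(context, num):
    return num + 1
```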
ok sweet! i’ll take a read of these--it might take me a bit to process. then i’ll try to get back to you with:
1. a high-level overview of how our i/o passing and type system works currently.
2. maybe a couple hints/code snippets i can give as a starting point of how to think about our system in a way that is relevant to you
Amazing, that would be greatly appreciated. looking at the definitions of inputs and outputs, this actually looks pretty straightforward. Definitely interested in your advice here though. the outputs might be slightly confusing however.
Hi :hand: Do you have experience running flyteadmin with multiple replicas? considering we have workflow scheduling and notifications enabled. Thank you in advance!
Yes. We run several Admin services (or replicas/pods) always. Admin is stateless. Workflow schedules and notifications should all work independent of the replica size. In fact I encourage you to run multiple replicas for better availability.
Thank you Anand! This is good news!
Hello Everyone! Could you please advise about `Spark` in Flyte? So, the question relates to the `aws-java-sdk` version <https://github.com/lyft/flytekit/blob/master/scripts/flytekit_install_spark.sh#L48> Do you use a newer `hadoop` version with a newer `aws java sdk` version? Actually, there is no problem with hadoop, but I just care about the custom location for the “mounted” aws credentials file: `AWS_CREDENTIAL_PROFILES_FILE` It is supported only by the newer aws sdk as far as I see. Thank you in advance! So, there is no problem with it now ) Because we can mount a file like /path/to/envrc into the container with content like
```
AWS_ACCESS_KEY=bla-bla
AWS_SECRET_KEY=bla-bla-bla
```
and add to the entrypoint
```
source /path/to/envrc
```
Sorry for the mess :slightly_smiling_face:
hey Ruslan Stanevich you should be able to update any hadoop jars. I think that script should be taken as a starting point only. Also, internally we are now using hadoop 3.0 jars because the output committer is better
Ketan, thank you! Great to know! Will share this info
Hello! We are currently considering using Pachyderm in our company for running our workflows. We recently heard of Flyte and it looks like an interesting alternative. It has some very similar concepts and seems to try to solve some similar pain points. I wonder if someone has compared both solutions, and which conclusions they came to (e.g. pros and cons when comparing them). Thanks for any input!
Hi Guilherme, welcome. Let me formulate a response and I will post back.
interesting. I don’t overly think of Pachyderm and Flyte as exclusive/alternatives (either/or)
Pachyderm:
- Cool - they have a git-like data management primitive
- This implies all data has to be stored in their own file system
- They do have a workflow engine, but it is Json driven and does not really work with other open source projects (AFAIK) like spark, flink, distributed training etc
- They use some custom stuff in kubernetes, and everything used to be driven by pachctl
Flyte:
- Do not re-invent. Work is done by plugins, plugins can use k8s, aws batch, spark, distributed training, hive, presto etc (you can add your own backend plugins)
- Tasks and Workflows are at the core
- We do not have a file system, but we understand data using our custom type system; every task has to have an input/output interface, which allows the Flyte engine to create a lineage graph of data dependencies
- Tasks are treated as top class citizens and hence we can cache the artifacts and de-dupe them across executions without having our own file system
- Data can be stored in any cloud store, across many buckets, in their own native formats
- Tasks are language independent and can be written in any language, but we come with a python SDK; a Java/Scala SDK is being worked on by Spotify
- We use Flyte in production at Lyft for very large scale workloads
Thanks!
<!here> Hello Flyers, reminder that our bi-weekly Flyte OSS sync is tomorrow, 6/30 at 9am PDT. Ketan and Yee may have a demo of some exciting ML integration if the stars align. Zoom: <https://us04web.zoom.us/j/71298741279?pwd=TDR1RUppQmxGaDRFdzBOa2lHN1dsZz09> Meeting ID: 712 9874 1279 Password: 7stPGd -g
George Snelling I think Chang-Hong Hsu would love to demo it in the next meeting. Its his work :slightly_smiling_face:
Ketan Umare thank you for the credits. George Snelling I’d be more than happy to do it in the next meeting and I believe we will have an even more robust integration to show then :)
Hello Everyone! Could you please advise about running a specific pyflyte workflow on a dedicated eks node? Is using the @`sidecar_task` decorator and `pod_spec` the only and common approach for setting `nodeSelector` and `tolerations` on running Pods? Thank you in advance! It just needs an isolated node (with no other pods) with a big disk size
hey Ruslan Stanevich do you want this to be an execution time or launch plan thing? so we have annotations and labels, yes; otherwise you will have to set it on the sidecar. today AFAIK we dont have specific node selector attributes. what is the usecase Ruslan Stanevich? the reason why i ask: i think its not good to use node affinity for the workflow itself; maybe we can have it for an execution
You can pin by requesting the whole machine, very inelegant though and only works for the biggest machine
Thank you very much for your response! Most of our workflows run in `pyspark` jobs. And, yes, we manage the node selector and annotations for SparkApplication with our tool. We have several `non-spark` tasks, and basically they are quite lightweight. But there is a workflow that downloads a large CSV file and processes it (100Gb for now).
> _(Maybe Spark will be better for this, but this task is based on a “ready-made” solution (afaik))._
And it would be nice to be able to separately configure the parameters of this EKS Node (node group), for example, `increase the capacity of the attached volume` up to several hundred GB or more. Basically, other nodes do not need such parameters. And honestly, I would like to discuss these details with the team in this channel. :thinking_face: I’m just trying to approach it from the infra perspective.
Arun, Kevin Su welcome to the community. One more member will be joining soon. All of flyte is in golang. There are a couple of completely parallel projects in progress - flytectl and a golang SDK for flyte. flytectl is being worked on by austin. we could also bootstrap the golang sdk for flyte, and Kevin Su is helping with TFOperator support
:+1: Welcome!
We would also love some help in understanding how we could add Flyte as a target to TFX - <https://www.tensorflow.org/tfx/api_docs/python/tfx/orchestration>
:+1:
Ketan Umare If I understand correctly, TFX runs on some workflow orchestrator like airflow or kubeflow. If we could implement the tfx workflow orchestrator interface, we could run TFX on Flyte. btw, LinkedIn has run TFX on Azkaban. If there is any WIP issue about this, I'd like to join the thread.
No issue yet, we know that tfx can be targeted to an orchestrator. We want to use flyte and in there add flink/spark as the beam runners
Gotcha, I will also investigate it.
Ketan Umare — do we have video from last week’s meeting? Have been on lookout (was only able to dial in for a few minutes until pulled into helping coworker).
Ohh George Snelling is on the hook for it. He moved us to zoom
Hi Everyone! We’re currently looking into Flyte and the design around data awareness via a type system looks very interesting. I also like the idea of a language independent declarative workflow spec and that you’re building on FP concepts like composition and immutability for caching and repeatability. Playing with Flyte, I’m still a bit confused about task/workflow registration and I couldn’t find too much information about it in the docs. The recommended way seems to build a docker container with the workflow code and flytekit. Then I run pyflyte from within that container to register the workflow. Is registering from inside the container the only way? What happens if I have a workflow that requires different containers, i.e. a workflow that contains a regular Python task and a Spark task, or even just conflicting Python dependencies for different tasks. How would I usually do the workflow registration in such a case? I’ve also seen that there’s some work about <https://github.com/lyft/flyte/issues/297|raw-containers> that might be related. Thanks, Sören
hi Sören Brunk first of all welcome. Love that you have been digging into Flyte (the more eyes the better). Give me a few minutes and I would love to answer all your questions :slightly_smiling_face: Also would love to jump on a call and discuss more at length
Thanks Ketan Umare and no hurries since I’ll try to get some sleep now (European timezone). :slightly_smiling_face: I’d be happy to talk. I’ll PM you tomorrow if that’s ok.
yes please PM me whenever. The points you have brought up are great, and as for your question:
```bit confused about task/workflow registration and I couldn't find too much information about it in the docs. The recommended way seems to build a docker container with the workflow code and flytekit.```
Short answer: it is just documentation. Flyte absolutely supports a separate container per task, but doing that cleanly in flytekit (python) needs some more work. workflow registration is actually a 2 step process
step 1: task registration (which should be tied to the container)
step 2: workflow registration
simplifying this for the user is the challenge, and I can say we have not really crossed it
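To make those two steps concrete, here is a hedged sketch built around the `register(project, domain, name, version)` call that appears in the traceback earlier in this thread; the entity objects, project/domain names, and version strings are all placeholders.
```python
def register_entities(tasks, workflow, workflow_name,
                      project="myproject", domain="development", version="abc123"):
    """Register tasks first (step 1), then the workflow that uses them (step 2).

    `tasks` is a dict of {name: flytekit SDK task object} and `workflow` is a
    flytekit SDK workflow object - both placeholders for whatever you defined.
    Tying the version to the container image each task ships in is the key idea.
    """
    for name, task in tasks.items():
        task.register(project, domain, name, version)
    workflow.register(project, domain, workflow_name, version)
```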
Ok that makes sense, thanks for the explanation. I think I need to get a better feeling for flytekit so I’ll try to build a few example workflows that reflect our use cases. If I hit any major roadblocks I’ll ask here again.
Yes please, I also understand that flytekit needs work so please feel free to reach out
Hello Everyone! Does anybody work with Dynamic tasks? I'm trying to run dynamic tasks sequentially, but it always runs in parallel even if I set `max_concurrency=1` The sample workflow:
```python
from __future__ import absolute_import, division, print_function

from flytekit.sdk.tasks import dynamic_task, inputs, outputs, python_task
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import Input, Output, workflow_class
import time


@inputs(command=Types.String)
@outputs(out_str=Types.String)
@python_task
def other_task(wf_params, command, out_str):
    time.sleep(60)
    out_str.set(command)


@inputs(in_str=Types.String)
@outputs(out_str=Types.String)
@dynamic_task(max_concurrency=1)
def str_inc(wf_params, in_str, out_str):
    res = []
    for s in in_str:
        task = other_task(command=s)
        yield task
        res.append(task.outputs.out_str)
    out_str.set(str(res))


@workflow_class
class DummyWf(object):
    in_str = Input(Types.String, required=True, help="in_str")
    run_str_inc = str_inc(in_str=in_str)
    edges = Output(run_str_inc.outputs.out_str, sdk_type=Types.String)
```
So when I'm running the workflow with `123456` as input I'm expecting that the execution should take at least 6 minutes (because each task sleeps 60 seconds), but it takes about 2. I'd much appreciate it if somebody knows how to solve this issue
Yee / Haytham Abuelfutuh if you are near a computer. Else I will answer in a bit
here... answering... Hey Aleksandr, yes max concurrency is not yet implemented unfortunately. However, you can achieve what you want by generating a workflow with node dependencies between the nodes in the `@dynamic_task` . Let me try to write you an example: <https://github.com/lyft/flytesnacks/blob/master/cookbook/workflows/recipe_2/dynamic.py#L15-L42|This> is an example of generating a dynamic workflow through `@dynamic_task` . <https://gist.github.com/EngHabu/d1faab2a9088434aec3ea467b5dcf690|Here> is an example that will do what you are trying to do. Note that it doesn't collect the outputs of these tasks. Please let me know if it's not obvious how to achieve that last bit..
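Along the lines of that example (a hedged sketch, not the linked gist itself): give the task an extra input wired to the previous node's output, so the data dependency forces propeller to run the yielded tasks one after another.
```python
from flytekit.sdk.tasks import dynamic_task, inputs, outputs, python_task
from flytekit.sdk.types import Types
import time


# Variant of other_task with an extra input used purely to chain the nodes.
@inputs(command=Types.String, wait_for=Types.String)
@outputs(out_str=Types.String)
@python_task
def chained_task(wf_params, command, wait_for, out_str):
    time.sleep(60)
    out_str.set(command)


@inputs(in_str=Types.String)
@outputs(out_str=Types.String)
@dynamic_task
def str_inc_sequential(wf_params, in_str, out_str):
    res = []
    previous = ""  # the first node has nothing to wait for
    for s in in_str:
        task = chained_task(command=s, wait_for=previous)
        yield task
        previous = task.outputs.out_str  # the next node now depends on this one
        res.append(task.outputs.out_str)
    out_str.set(str(res))
```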
many thanks, looks like it should help. Haytham Abuelfutuh could you share any timelines for when `max_concurrency` might be implemented?
hi Aleksandr Vergeychik can you please file an issue, or +1 if there is one. I think someone is looking into this. Let's use that forum
I’m trying to get Spark tasks to run in a Flyte sandbox installation but I’m running into issues. I managed to configure everything so that Flyte starts the driver pod but now I’m stuck with the following error (from the pod logs):
```
Usage: entrypoint.py [OPTIONS] COMMAND [ARGS]...
Try 'entrypoint.py --help' for help.

Error: No such command 'pyflyte-execute --task-module single_step.spark --task-name hello_spark --inputs s3://my-s3-bucket/metadata/propeller/myflyteproject-development-r1ik65bysr/spark-task/data/inputs.pb --output-prefix s3://my-s3-bucket/metadata/propeller/myflyteproject-development-r1ik65bysr/spark-task/data/0'.
```
My guess is that `pyflyte-execute ...` should not be in single quotes. If I call `entrypoint.py pyflyte-execute …` manually it seems to work better. I have no idea how to configure it correctly though. What I’m currently doing is adding the following entrypoint in my Dockerfile:
```
ENTRYPOINT ["/opt/flytekit_venv", "flytekit_spark_entrypoint.sh"]
```
Does anyone have an idea what I’m doing wrong? Thanks, Sören
Sören Brunk (in a meeting) will brb
hi. this is installed by flytekit <https://github.com/lyft/flytekit/blob/eec85fb35e5dd975840aa0019dfdc167af1e4f29/setup.py#L55> so please make sure in your pip requirements file that you install `flytekit[spark]` or better yet `flytekit[all]` instead of just `flytekit`
Sören Brunk
Yee thanks. Yes I’m installing `flytekit[spark]` (haven’t tried `flytekit[all]` yet). Essentially, I’ve taken the <https://github.com/lyft/flytesnacks/blob/master/python/Dockerfile|Dockerfile of the python example from flytesnacks>, then I’ve added these lines:
```
RUN ${VENV}/bin/pip install flytekit[spark]
RUN ${VENV}/bin/flytekit_install_spark.sh
ENV SPARK_HOME /opt/spark
```
When I run a Spark task in this container I get the following error:
```
/opt/flytekit_venv: line 10: exec: driver-py: not found
```
So I modified the docker entrypoint to run the spark entrypoint, resulting in the entrypoint.py error (yes, three different kinds of entrypoints).
```
ENTRYPOINT ["/opt/flytekit_venv", "flytekit_spark_entrypoint.sh"]
```
I also tried activating the flytekit venv inside `flytekit_spark_entrypoint.sh` directly instead, but it’s giving me the same result. Once I get the Spark task to run I’d be happy to contribute the full example to flytesnacks :grin:
what happens if you just ```ENTRYPOINT [ "/opt/flytekit_spark_entrypoint.sh" ]``` (after copying the file there ofc)
Same error. (I have to add `. ${VENV}/bin/activate` to flytekit_spark_entrypoint.sh in this case because otherwise I can’t register tasks).
sorry yeah
Anmol Khurana can probably provide the most context here. but i believe the way spark assumes entrypoints wreaks havoc on venvs. i would suggest the following:
1. make `ENTRYPOINT ["/opt/flytekit_spark_entrypoint.sh" ]`
2. then make an executable script of your own which activates venv and then passes along the args. something like this: <https://github.com/lyft/flytekit/blob/master/scripts/flytekit_venv>
3. in your flytekit.config file, reference that script: <https://github.com/lyft/flytekit/blob/master/tests/flytekit/common/configs/local.config#L3>
that will result in flyte being able to enter your venv after going through the spark entrypoint and before executing your actual code. you could also re-use our flytekit_venv script (we install it with flytekit) and put all your flyte-specific python dependencies in there `flytekit_venv pip install …`
Thanks Matt Smith I’ll try your suggestions tomorrow and report back.