diff --git "a/llms-full.txt" "b/llms-full.txt" --- "a/llms-full.txt" +++ "b/llms-full.txt" @@ -1,5 +1,4 @@ -This file is a merged representation of the entire codebase, combining all repository files into a single document. -Generated by Repomix on: 2025-02-07T15:55:35.263Z +This file is a merged representation of a subset of the codebase, containing specifically included files, combined into a single document by Repomix. ================================================================ File Summary @@ -35,11 +34,11 @@ Usage Guidelines: Notes: ------ -- Some files may have been excluded based on .gitignore rules and Repomix's - configuration. -- Binary files are not included in this packed representation. Please refer to - the Repository Structure section for a complete list of file paths, including - binary files. +- Some files may have been excluded based on .gitignore rules and Repomix's configuration +- Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files +- Only files matching these patterns are included: docs/book/**/*.md +- Files matching patterns in .gitignore are excluded +- Files matching default ignore patterns are excluded Additional Info: ---------------- @@ -437,6 +436,11 @@ description: Sending automated alerts to chat services. icon: message-exclamation --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Alerters **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your @@ -492,6 +496,11 @@ File: docs/book/component-guide/alerters/custom.md description: Learning how to develop a custom alerter. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Alerter {% hint style="info" %} @@ -639,6 +648,11 @@ File: docs/book/component-guide/alerters/discord.md description: Sending automated alerts to a Discord channel. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Discord Alerter The `DiscordAlerter` enables you to send messages to a dedicated Discord channel @@ -781,6 +795,11 @@ File: docs/book/component-guide/alerters/slack.md description: Sending automated alerts to a Slack channel. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Slack Alerter The `SlackAlerter` enables you to send messages or ask questions within a @@ -1122,6 +1141,11 @@ File: docs/book/component-guide/annotators/argilla.md description: Annotating data using Argilla. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Argilla [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. 
It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring. @@ -1265,6 +1289,11 @@ File: docs/book/component-guide/annotators/custom.md description: Learning how to develop a custom annotator. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Annotator {% hint style="info" %} @@ -1288,6 +1317,11 @@ File: docs/book/component-guide/annotators/label-studio.md description: Annotating data using Label Studio. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Label Studio Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners. @@ -1440,6 +1474,11 @@ File: docs/book/component-guide/annotators/pigeon.md description: Annotating data using Pigeon. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Pigeon Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including: @@ -1557,6 +1596,11 @@ File: docs/book/component-guide/annotators/prodigy.md description: Annotating data using Prodigy. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Prodigy [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training @@ -1696,6 +1740,11 @@ description: Setting up a persistent storage for your artifacts. icon: folder-closed --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Artifact Stores The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored. @@ -1868,6 +1917,11 @@ File: docs/book/component-guide/artifact-stores/azure.md description: Storing artifacts using Azure Blob Storage --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Azure Blob Storage The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container. @@ -2100,6 +2154,11 @@ File: docs/book/component-guide/artifact-stores/custom.md description: Learning how to develop a custom artifact store. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a custom artifact store {% hint style="info" %} @@ -2292,6 +2351,11 @@ File: docs/book/component-guide/artifact-stores/gcp.md description: Storing artifacts using GCP Cloud Storage. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Google Cloud Storage (GCS) The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket. @@ -2496,6 +2560,11 @@ File: docs/book/component-guide/artifact-stores/local.md description: Storing artifacts on your local filesystem. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Local Artifact Store The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts. @@ -2584,6 +2653,11 @@ File: docs/book/component-guide/artifact-stores/s3.md description: Storing artifacts in an AWS S3 bucket. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Amazon Simple Cloud Storage (S3) The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend. @@ -2808,6 +2882,11 @@ File: docs/book/component-guide/container-registries/aws.md description: Storing container images in Amazon ECR. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Amazon Elastic Container Registry (ECR) The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images. @@ -3019,6 +3098,11 @@ File: docs/book/component-guide/container-registries/azure.md description: Storing container images in Azure. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Azure Container Registry The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images. @@ -3273,6 +3357,11 @@ File: docs/book/component-guide/container-registries/custom.md description: Learning how to develop a custom container registry. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a custom container registry {% hint style="info" %} @@ -3399,6 +3488,11 @@ File: docs/book/component-guide/container-registries/default.md description: Storing container images locally. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Default Container Registry The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format. @@ -3576,6 +3670,11 @@ File: docs/book/component-guide/container-registries/dockerhub.md description: Storing container images in DockerHub. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # DockerHub The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images. @@ -3649,6 +3748,11 @@ File: docs/book/component-guide/container-registries/gcp.md description: Storing container images in GCP. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Google Cloud Container Registry The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry). @@ -3890,6 +3994,11 @@ File: docs/book/component-guide/container-registries/github.md description: Storing container images in GitHub. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # GitHub Container Registry The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images. @@ -3952,6 +4061,11 @@ File: docs/book/component-guide/data-validators/custom.md description: How to develop a custom data validator --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a custom data validator {% hint style="info" %} @@ -4081,6 +4195,11 @@ description: >- suites --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Deepchecks The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. 
The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation. @@ -4505,6 +4624,11 @@ description: >- with Evidently profiling --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Evidently The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyses, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation. @@ -5142,6 +5266,11 @@ description: >- document the results --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Great Expectations The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation. @@ -5454,6 +5583,11 @@ description: >- data with whylogs/WhyLabs profiling. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Whylogs The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation. @@ -5741,6 +5875,11 @@ File: docs/book/component-guide/experiment-trackers/comet.md description: Logging and visualizing experiments with Comet. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Comet The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics). @@ -6036,6 +6175,11 @@ File: docs/book/component-guide/experiment-trackers/custom.md description: Learning how to develop a custom experiment tracker. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # Develop a custom experiment tracker {% hint style="info" %} @@ -6102,6 +6246,11 @@ description: Logging and visualizing ML experiments. icon: clipboard --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Experiment Trackers Experiment trackers let you track your ML experiments by logging extended information about your models, datasets, @@ -6195,6 +6344,11 @@ File: docs/book/component-guide/experiment-trackers/mlflow.md description: Logging and visualizing experiments with MLflow. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # MLflow The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics). @@ -6413,6 +6567,11 @@ File: docs/book/component-guide/experiment-trackers/neptune.md description: Logging and visualizing experiments with neptune.ai --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Neptune The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics). @@ -6731,6 +6890,11 @@ File: docs/book/component-guide/experiment-trackers/vertexai.md description: Logging and visualizing experiments with Vertex AI Experiment Tracker. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Vertex AI Experiment Tracker The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics). @@ -7050,6 +7214,11 @@ File: docs/book/component-guide/experiment-trackers/wandb.md description: Logging and visualizing experiments with Weights & Biases. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Weights & Biases The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics). @@ -7367,6 +7536,11 @@ File: docs/book/component-guide/feature-stores/custom.md description: Learning how to develop a custom feature store. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Feature Store {% hint style="info" %} @@ -7390,6 +7564,11 @@ File: docs/book/component-guide/feature-stores/feast.md description: Managing data in Feast feature stores. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Feast Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training). @@ -7572,6 +7751,11 @@ File: docs/book/component-guide/image-builders/aws.md description: Building container images with AWS CodeBuild --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # AWS Image Builder The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images. @@ -7810,6 +7994,11 @@ File: docs/book/component-guide/image-builders/custom.md description: Learning how to develop a custom image builder. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Image Builder {% hint style="info" %} @@ -7930,6 +8119,11 @@ File: docs/book/component-guide/image-builders/gcp.md description: Building container images with Google Cloud Build --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Google Cloud Image Builder The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images. @@ -8183,6 +8377,11 @@ File: docs/book/component-guide/image-builders/kaniko.md description: Building container images with Kaniko. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kaniko Image Builder The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images. @@ -8340,6 +8539,11 @@ File: docs/book/component-guide/image-builders/local.md description: Building container images locally. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Local Image Builder The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images. 
@@ -8392,6 +8596,11 @@ File: docs/book/component-guide/model-deployers/bentoml.md description: Deploying your models locally with BentoML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # BentoML BentoML is an open-source framework for machine learning model serving. It can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment. @@ -8778,6 +8987,11 @@ File: docs/book/component-guide/model-deployers/custom.md description: Learning how to develop a custom model deployer. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Model Deployer {% hint style="info" %} @@ -8950,6 +9164,11 @@ description: >- Deploying models to Databricks Inference Endpoints with Databricks --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Databricks @@ -9103,6 +9322,11 @@ description: >- :hugging_face:. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Hugging Face Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models). @@ -9295,6 +9519,11 @@ File: docs/book/component-guide/model-deployers/mlflow.md description: Deploying your models locally with MLflow. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # MLflow The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration, it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a locally running MLflow server. @@ -9739,6 +9968,11 @@ File: docs/book/component-guide/model-deployers/seldon.md description: Deploying models to Kubernetes with Seldon Core. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Seldon [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production-grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more. @@ -10218,6 +10452,11 @@ File: docs/book/component-guide/model-deployers/vllm.md description: Deploying your LLM locally with vLLM. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # vLLM [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving. @@ -10296,6 +10535,11 @@ File: docs/book/component-guide/model-registries/custom.md description: Learning how to develop a custom model registry. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Model Registry {% hint style="info" %} @@ -10492,6 +10736,11 @@ File: docs/book/component-guide/model-registries/mlflow.md description: Managing MLFlow logged models and artifacts --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # MLflow Model Registry [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally. @@ -10741,6 +10990,11 @@ File: docs/book/component-guide/orchestrators/airflow.md description: Orchestrating your pipelines to run on Airflow. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Airflow Orchestrator ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/) @@ -11050,6 +11304,11 @@ File: docs/book/component-guide/orchestrators/azureml.md description: Orchestrating your pipelines to run on AzureML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # AzureML Orchestrator [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a @@ -11296,6 +11555,11 @@ File: docs/book/component-guide/orchestrators/custom.md description: Learning how to develop a custom orchestrator. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a custom orchestrator {% hint style="info" %} @@ -11520,6 +11784,11 @@ File: docs/book/component-guide/orchestrators/databricks.md description: Orchestrating your pipelines to run on Databricks. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Databricks Orchestrator [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads. 
@@ -11716,6 +11985,11 @@ File: docs/book/component-guide/orchestrators/hyperai.md description: Orchestrating your pipelines to run on HyperAI.ai instances. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # HyperAI Orchestrator [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances. @@ -11803,6 +12077,11 @@ File: docs/book/component-guide/orchestrators/kubeflow.md description: Orchestrating your pipelines to run on Kubeflow. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubeflow Orchestrator The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines. @@ -12160,6 +12439,11 @@ File: docs/book/component-guide/orchestrators/kubernetes.md description: Orchestrating your pipelines to run on Kubernetes clusters. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubernetes Orchestrator Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code. @@ -12465,6 +12749,11 @@ File: docs/book/component-guide/orchestrators/lightning.md description: Orchestrating your pipelines to run on Lightning AI. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Lightning AI Orchestrator @@ -12664,6 +12953,11 @@ File: docs/book/component-guide/orchestrators/local-docker.md description: Orchestrating your pipelines to run in Docker. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Local Docker Orchestrator The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker. @@ -12741,6 +13035,11 @@ File: docs/book/component-guide/orchestrators/local.md description: Orchestrating your pipelines to run locally. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Local Orchestrator The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally. @@ -12874,6 +13173,11 @@ File: docs/book/component-guide/orchestrators/sagemaker.md description: Orchestrating your pipelines to run on Amazon Sagemaker. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # AWS Sagemaker Orchestrator [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute. @@ -13422,6 +13726,11 @@ File: docs/book/component-guide/orchestrators/skypilot-vm.md description: Orchestrating your pipelines to run on VMs using SkyPilot. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Skypilot VM Orchestrator The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution. We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads, but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions. @@ -13944,6 +14253,11 @@ File: docs/book/component-guide/orchestrators/tekton.md description: Orchestrating your pipelines to run on Tekton. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Tekton Orchestrator [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems. @@ -14184,6 +14498,11 @@ File: docs/book/component-guide/orchestrators/vertex.md description: Orchestrating your pipelines to run on Vertex AI. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Google Cloud VertexAI Orchestrator [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute. @@ -14503,6 +14822,11 @@ File: docs/book/component-guide/step-operators/azureml.md description: Executing individual steps in AzureML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # AzureML [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances. @@ -14664,6 +14988,11 @@ File: docs/book/component-guide/step-operators/custom.md description: Learning how to develop a custom step operator. 
--- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Develop a Custom Step Operator {% hint style="info" %} @@ -14793,6 +15122,11 @@ File: docs/book/component-guide/step-operators/kubernetes.md description: Executing individual steps in Kubernetes Pods. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubernetes Step Operator ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods. @@ -15027,6 +15361,11 @@ File: docs/book/component-guide/step-operators/modal.md description: Executing individual steps in Modal. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Modal Step Operator [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances. @@ -15144,6 +15483,11 @@ File: docs/book/component-guide/step-operators/sagemaker.md description: Executing individual steps in SageMaker. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Amazon SageMaker [SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances. @@ -15509,6 +15853,11 @@ roleRef: name: edit apiGroup: rbac.authorization.k8s.io --- + +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + ``` And then execute the following command to create the resources: @@ -15677,6 +16026,11 @@ File: docs/book/component-guide/step-operators/vertex.md description: Executing individual steps in Vertex AI. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Google Cloud VertexAI [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances. @@ -15867,6 +16221,11 @@ File: docs/book/component-guide/component-guide.md description: Overview of categories of MLOps components. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # 📜 Overview If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner. @@ -15905,6 +16264,11 @@ File: docs/book/component-guide/integration-overview.md description: Overview of third-party ZenML integrations. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Integration overview Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas. @@ -16083,6 +16447,11 @@ File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md description: Learning how to develop a custom secret store. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Custom secret stores The secrets store acts as the one-stop shop for all the secrets to which your pipeline or stack components might need access. It is responsible for storing, updating and deleting _only the secrets values_ for ZenML secrets, while the ZenML secret metadata is stored in the SQL database. The secrets store interface implemented by all available secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` core module and looks more or less like this: @@ -16189,6 +16558,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces. description: Deploying ZenML to Huggingface Spaces. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Deploy using HuggingFace Spaces A quick way to deploy ZenML and get started is to use [HuggingFace Spaces](https://huggingface.co/spaces). HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it also works to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead. @@ -16268,6 +16642,11 @@ File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md description: Deploying ZenML with custom Docker images. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Deploy with custom images In most cases, deploying ZenML with the default `zenmlhub/zenml-server` Docker image should work just fine. However, there are some scenarios when you might need to deploy ZenML with a custom Docker image: @@ -16466,6 +16845,11 @@ File: docs/book/getting-started/deploying-zenml/secret-management.md description: Configuring the secrets store. 
--- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Secret store configuration and management ## Centralized secrets store @@ -16601,6 +16985,11 @@ description: > Learn how to use the ZenML Pro API. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Using the ZenML Pro API ZenML Pro offers a powerful API that allows you to interact with your ZenML resources. Whether you're using the [SaaS version](https://cloud.zenml.io) or a self-hosted ZenML Pro instance, you can leverage this API to manage tenants, organizations, users, roles, and more. @@ -16771,6 +17160,11 @@ description: > Learn about the different roles and permissions you can assign to your team members in ZenML Pro. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # ZenML Pro: Roles and Permissions ZenML Pro offers a robust role-based access control (RBAC) system to manage permissions across your organization and tenants. This guide will help you understand the different roles available, how to assign them, and how to create custom roles tailored to your team's needs. @@ -16913,6 +17307,11 @@ description: > Learn about Teams in ZenML Pro and how they can be used to manage groups of users across your organization and tenants. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Organize users in Teams ZenML Pro introduces the concept of Teams to help you manage groups of users efficiently. A team is a collection of users that acts as a single entity within your organization and tenants. This guide will help you understand how teams work, how to create and manage them, and how to use them effectively in your MLOps workflows. @@ -16992,6 +17391,11 @@ description: > Learn how to use tenants in ZenML Pro. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Tenants Tenants are individual, isolated deployments of the ZenML server. Each tenant has its own set of users, roles, and resources. Essentially, everything you do in ZenML Pro revolves around a tenant: all of your pipelines, stacks, runs, connectors and so on are scoped to a tenant. @@ -17564,6 +17968,11 @@ File: docs/book/how-to/configuring-zenml/configuring-zenml.md description: Configuring ZenML's default behavior --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Configuring ZenML There are various ways to adapt how ZenML behaves in certain situations. This guide walks users through how to configure certain aspects of ZenML. @@ -17577,6 +17986,11 @@ File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md description: Creating an external integration and contributing to ZenML --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Implement a custom integration ![ZenML integrates with a number of tools from the MLOps landscape](../../../.gitbook/assets/sam-side-by-side-full-text.png) @@ -17729,6 +18143,11 @@ File: docs/book/how-to/control-logging/disable-colorful-logging.md description: How to disable colorful logging in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Disable colorful logging By default, ZenML uses colorful logging to make it easier to read logs. However, if you wish to disable this feature, you can do so by setting the following environment variable: @@ -17766,6 +18185,11 @@ File: docs/book/how-to/control-logging/disable-rich-traceback.md description: How to disable rich traceback output in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Disable `rich` traceback output By default, ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library to display rich traceback output. This is especially useful when debugging your pipelines. However, if you wish to disable this feature, you can do so by setting the following environment variable: @@ -17891,6 +18315,11 @@ File: docs/book/how-to/control-logging/set-logging-format.md description: How to set the logging format in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Set logging format If you want to change the default ZenML logging format, you can do so with the following environment variable: @@ -17933,6 +18362,11 @@ File: docs/book/how-to/control-logging/set-logging-verbosity.md description: How to set the logging verbosity in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Set logging verbosity By default, ZenML sets the logging verbosity to `INFO`. If you wish to change this, you can do so by setting the following environment variable: @@ -18014,6 +18448,11 @@ File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md description: Defining the image builder. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # 🐳 Define where an image is built ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote [orchestrators](../../component-guide/orchestrators/orchestrators.md) or [step operators](../../component-guide/step-operators/step-operators.md), ZenML builds [Docker](https://www.docker.com/) images to run your pipeline in an isolated, well-defined environment. @@ -18035,6 +18474,11 @@ File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md description: Using Docker images to run your pipeline. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Specify Docker settings for a pipeline When a [pipeline is run with a remote orchestrator](../pipeline-development/configure-python-environments/README.md) a [Dockerfile](https://docs.docker.com/engine/reference/builder/) is dynamically generated at runtime. It is then used to build the Docker image using the [image builder](../pipeline-development/configure-python-environments/README.md#image-builder-environment) component of your stack. The Dockerfile consists of the following steps: @@ -18178,6 +18622,11 @@ File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md description: You have the option to customize the Docker settings at a step level. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Docker settings on a step By default every step of a pipeline uses the same Docker image that is defined at the [pipeline level](./docker-settings-on-a-pipeline.md). Sometimes your steps will have special requirements that make it necessary to define a different Docker image for one or many steps. This can easily be accomplished by adding the [DockerSettings](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings) to the step decorator directly. @@ -18223,6 +18672,11 @@ description: > Learn how to reuse builds to speed up your pipeline runs. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # How to reuse builds When you run a pipeline, ZenML will check if a build with the same pipeline and stack exists. If it does, it will reuse that build. If it doesn't, ZenML will create a new build. This guide explains what a build is and the best practices around reusing builds. @@ -18307,6 +18761,11 @@ File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-reposit description: How to use a private PyPI repository. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # How to use a private PyPI repository For packages that require authentication, you may need to take additional steps: @@ -18514,6 +18973,11 @@ File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md description: "Skip building an image for your ZenML pipeline altogether." --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Use a prebuilt image for pipeline execution When running a pipeline on a remote Stack, ZenML builds a Docker image with a base ZenML image and adds all of your project dependencies to it. Optionally, if a code repository is not registered and `allow_download_from_artifact_store` is not set to `True` in your `DockerSettings`, ZenML will also add your pipeline code to the image. This process might take significant time depending on how big your dependencies are, how powerful your local system is and how fast your internet connection is. 
This is because Docker must pull base layers and push the final image to your container registry. Although this process only happens once and is skipped if ZenML detects no change in your environment, it might still be a bottleneck slowing down your pipeline execution. @@ -18723,6 +19187,11 @@ File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md description: Model datasets using simple abstractions. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Custom Dataset Classes and Complex Data Flows in ZenML As machine learning projects grow in complexity, you often need to work with various data sources and manage intricate data flows. This chapter explores how to use custom Dataset classes and Materializers in ZenML to handle these challenges efficiently. For strategies on scaling your data processing for larger datasets, refer to [scaling strategies for big data](manage-big-data.md). @@ -18967,6 +19436,11 @@ File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data description: Learn about how to manage big data with ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Scaling Strategies for Big Data in ZenML As your machine learning projects grow, you'll often encounter datasets that challenge your existing data processing pipelines. This section explores strategies for scaling your ZenML pipelines to handle increasingly large datasets. For information on creating custom Dataset classes and managing complex data flows, refer to [custom dataset classes](datasets.md). @@ -19280,6 +19754,11 @@ File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifac description: Structuring an MLOps project --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Passing artifacts between pipelines An MLOps project can often be broken down into many different pipelines. For example: @@ -19417,6 +19896,11 @@ File: docs/book/how-to/data-artifact-management/complex-usecases/registering-exi description: Learn how to register external data as a ZenML artifact for future use. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Register Existing Data as a ZenML Artifact Many modern Machine Learning frameworks create their own data as a byproduct of model training or other processes. In such cases there is no need to read and materialize those data assets to pack them into a ZenML Artifact; instead, it is beneficial to register those data assets as-is in ZenML for future use. @@ -19812,6 +20296,11 @@ File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized- description: Skip materialization of artifacts. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Unmaterialized artifacts A ZenML pipeline is built in a data-centric way. 
The outputs and inputs of steps define how steps are connected and the order in which they are executed. Each step should be considered as its very own process that reads and writes its inputs and outputs from and to the [artifact store](../../../component-guide/artifact-stores/artifact-stores.md). This is where **materializers** come into play. @@ -19910,6 +20399,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifact-v description: Understand how ZenML stores your data under-the-hood. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # How ZenML stores data ZenML seamlessly integrates data versioning and lineage into its core functionality. When a pipeline is executed, each run generates automatically tracked and managed artifacts. One can easily view the entire lineage of how artifacts are created and interact with them. The dashboard is also a way to interact with the artifacts produced by different pipeline runs. ZenML's artifact management, caching, lineage tracking, and visualization capabilities can help gain valuable insights, streamline the experimentation process, and ensure the reproducibility and reliability of machine learning workflows. @@ -19957,6 +20451,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifacts- description: Understand how you can name your ZenML artifacts. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # How Artifact Naming works in ZenML In ZenML pipelines, you often need to reuse the same step multiple times with different inputs, resulting in multiple artifacts. However, the default naming convention for artifacts can make it challenging to track and differentiate between these outputs, especially when they need to be used in subsequent pipelines. Below you can find a detailed exploration of how you might name your output artifacts dynamically or statically, depending on your needs. @@ -20113,6 +20612,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an- description: Learn how to delete artifacts. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Delete an artifact There is currently no way to delete an artifact directly, because it may lead to @@ -20144,6 +20648,11 @@ description: >- steps. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Get arbitrary artifacts in a step As described in [the metadata guide](../../model-management-metrics/track-metrics-metadata/logging-metadata.md), the metadata can be fetched with the client, and this is how you would use it to fetch it within a step. This allows you to fetch artifacts from other upstream steps or even completely different pipelines. @@ -20179,6 +20688,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-cus description: Using materializers to pass custom data types through steps. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Handle custom data types A ZenML pipeline is built in a data-centric way. The outputs and inputs of steps define how steps are connected and the order in which they are executed. Each step should be considered as its very own process that reads and writes its inputs and outputs from and to the [artifact store](../../../component-guide/artifact-stores/artifact-stores.md). This is where **materializers** come into play. @@ -20942,6 +21456,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-mul description: Use Annotated to return multiple outputs from a step and name them for easy retrieval and dashboard display. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Return multiple outputs from a step You can use the `Annotated` type to return multiple outputs from a step and give each output a name. Naming your step outputs will help you retrieve the specific artifact later and also improves the readability of your pipeline's dashboard. @@ -20988,6 +21507,11 @@ File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md description: Use tags to organize artifacts and models in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Organizing data with tags Organizing and categorizing your machine learning artifacts and models can @@ -21104,6 +21628,11 @@ File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-cus description: Creating your own visualizations. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Creating Custom Visualizations It is simple to associate a custom visualization with an artifact in ZenML, if @@ -21251,6 +21780,11 @@ File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-vi description: Disabling visualizations. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Disabling Visualizations If you would like to disable artifact visualization altogether, you can set `enable_artifact_visualization` at either pipeline or step level: @@ -21290,6 +21824,11 @@ File: docs/book/how-to/data-artifact-management/visualize-artifacts/types-of-vis description: Types of visualizations in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Types of visualizations ZenML automatically saves visualizations of many common data types and allows you to view these visualizations in the ZenML dashboard: @@ -21317,6 +21856,11 @@ File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizatio description: Displaying visualizations in the dashboard. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %} + + # Giving the ZenML Server Access to Visualizations In order for the visualizations to show up on the dashboard, the following must be true: @@ -21359,6 +21903,11 @@ description: >- buckets, EKS Kubernetes clusters and ECR container registries. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # AWS Service Connector The ZenML AWS Service Connector facilitates the authentication and access to managed AWS services and resources. These encompass a range of resources, including S3 buckets, ECR container repositories, and EKS clusters. The connector provides support for various authentication methods, including explicit long-lived AWS secret keys, IAM roles, short-lived STS tokens, and implicit authentication. @@ -23092,6 +23641,11 @@ description: >- registries. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Azure Service Connector The ZenML Azure Service Connector facilitates the authentication and access to managed Azure services and resources. These encompass a range of resources, including blob storage containers, ACR repositories, and AKS clusters. @@ -23952,6 +24506,11 @@ description: >- Service Connectors. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Security best practices Service Connector Types, especially those targeted at cloud providers, offer a plethora of authentication methods matching those supported by remote cloud platforms. While there is no single authentication standard that unifies this process, there are some patterns that are easily identifiable and can be used as guidelines when deciding which authentication method to use to configure a Service Connector. @@ -24519,6 +25078,11 @@ File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service- description: Configuring Docker Service Connectors to connect ZenML to Docker container registries. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Docker Service Connector The ZenML Docker Service Connector allows authenticating with a Docker or OCI container registry and managing Docker clients for the registry. This connector provides pre-authenticated python-docker Python clients to Stack Components that are linked to it. @@ -24624,6 +25188,11 @@ description: >- GCS buckets, GKE Kubernetes clusters, and GCR container registries. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # GCP Service Connector The ZenML GCP Service Connector facilitates the authentication and access to managed GCP services and resources. These encompass a range of resources, including GCS buckets, GAR and GCR container repositories, and GKE clusters. The connector provides support for various authentication methods, including GCP user accounts, service accounts, short-lived OAuth 2.0 tokens, and implicit authentication. 
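As a quick orientation, the sketch below shows how you might list the Service Connectors registered on your server from Python after setting up a GCP connector. It is a minimal illustration only: the `list_service_connectors()` helper and the attribute names on the returned objects are assumptions made for this example, and the `zenml service-connector` CLI remains the documented way to register and verify connectors.

```python
from zenml.client import Client

# Minimal sketch: assumes at least one Service Connector (e.g. a GCP one)
# has already been registered on the ZenML server.
client = Client()

# Assumed helper returning the registered connectors; iterate and print the
# name and authentication method recorded for each connector.
for connector in client.list_service_connectors():
    print(connector.name, connector.auth_method)
```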
@@ -26519,6 +27088,11 @@ File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service description: Configuring HyperAI Connectors to connect ZenML to HyperAI instances. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # HyperAI Service Connector The ZenML HyperAI Service Connector allows authenticating with a HyperAI instance for deployment of pipeline runs. This connector provides pre-authenticated Paramiko SSH clients to Stack Components that are linked to it. @@ -26582,6 +27156,11 @@ File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-serv description: Configuring Kubernetes Service Connectors to connect ZenML to Kubernetes clusters. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubernetes Service Connector The ZenML Kubernetes service connector facilitates authenticating and connecting to a Kubernetes cluster. The connector can be used to access any generic Kubernetes cluster by providing pre-authenticated Kubernetes python clients to Stack Components that are linked to it, and also allows configuring the local Kubernetes CLI (i.e. `kubectl`). @@ -27319,6 +27898,11 @@ description: >- external resources. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Service Connectors guide This documentation section contains everything that you need to use Service Connectors to connect ZenML to external resources. A lot of information is covered, so it might be useful to use the following guide to navigate it: @@ -29061,6 +29645,11 @@ File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-pra description: Best practices for using IaC with ZenML --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Architecting ML Infrastructure with ZenML and Terraform ## The Challenge @@ -29551,6 +30140,11 @@ File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terrafor description: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Manage your stacks with Terraform Terraform is a powerful tool for managing infrastructure as code, and is by far the @@ -29989,6 +30583,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud description: Deploy a cloud stack using Terraform --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+{% endhint %} + + # Deploy a cloud stack with Terraform ZenML maintains a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) @@ -30460,6 +31059,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud description: Deploy a cloud stack from scratch with a single click --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Deploy a cloud stack with a single click In ZenML, the [stack](../../../user-guide/production-guide/understand-stacks.md) @@ -30926,6 +31530,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-r description: Export stack requirements --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + You can get the `pip` requirements of your stack by running the `zenml stack export-requirements <STACK_NAME>` CLI command. To install those requirements, it's best to write them to a file and then install them like this: @@ -30943,6 +31552,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-cu description: How to write a custom stack component flavor --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Implement a custom stack component When building a sophisticated MLOps Platform, you will often need to come up with custom-tailored solutions for your infrastructure or tooling. ZenML is built around the values of composability and reusability, which is why the stack component flavors in ZenML are designed to be modular and straightforward to extend. @@ -31362,6 +31976,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secr description: Reference secrets in stack component attributes and settings --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Reference secrets in stack configuration Some of the components in your stack require you to configure them with sensitive information like passwords or tokens, so they can connect to the underlying infrastructure. Secret references allow you to configure these components in a secure way by not specifying the value directly but instead referencing a secret by providing the secret name and key. To reference a secret for the value of any string attribute of your stack components, simply specify the attribute using the following syntax: `{{<secret_name>.<key>}}` @@ -31438,6 +32057,11 @@ File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-clo description: Seamlessly register a cloud stack by using existing infrastructure --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + In ZenML, the [stack](../../../user-guide/production-guide/understand-stacks.md) is a fundamental concept that represents the configuration of your infrastructure.
In a normal workflow, creating a stack requires you to first @@ -31880,6 +32504,11 @@ description: >- Connect to the ZenML server using the ZenML CLI and the web based login. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connect in with your User (interactive) You can authenticate your clients with the ZenML Server using the ZenML CLI and the web based login. This can be executed with the command: @@ -31935,6 +32564,11 @@ description: >- Connect to the ZenML server using a service account and an API key. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connect with a Service Account Sometimes you may need to authenticate to a ZenML server from a non-interactive environment where the web login is not possible, like a CI/CD workload or a serverless function. In these cases, you can configure a service account and an API key and use the API key to authenticate to the ZenML server: @@ -32033,6 +32667,11 @@ description: >- Connect to the ZenML server using a temporary API token. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connect with an API Token API tokens provide a way to authenticate with the ZenML server for temporary automation tasks. These tokens are scoped to your user account and are valid for a maximum of 1 hour. @@ -32081,6 +32720,11 @@ File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md description: How to migrate your ZenML code to the newest version. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # ā™» Migration guide Migrations are necessary for ZenML releases that include breaking changes, which are currently all releases that increment the minor version of the release, e.g., `0.X` -> `0.Y`. Furthermore, all releases that increment the first non-zero digit of the version contain major breaking changes or paradigm shifts that are explained in separate migration guides below. @@ -32113,6 +32757,11 @@ File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty. description: How to migrate your ZenML pipelines and steps from version <=0.39.1 to 0.41.0. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Migration guide 0.39.1 ā†’ 0.41.0 ZenML versions 0.40.0 to 0.41.0 introduced a new and more flexible syntax to define ZenML steps and pipelines. This page contains code samples that show you how to upgrade your steps and pipelines to the new syntax. @@ -32579,6 +33228,11 @@ File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty. description: How to migrate from ZenML 0.58.2 to 0.60.0 (Pydantic 2 edition). --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Release Notes ZenML now uses Pydantic v2. 
šŸ„³ @@ -32751,6 +33405,11 @@ File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty description: How to migrate from ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + {% hint style="warning" %} Migrating to `0.30.0` performs non-reversible database changes so downgrading to `<=0.23.0` is not possible afterwards. If you are running on an older ZenML @@ -32778,6 +33437,11 @@ File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty description: How to migrate from ZenML <=0.13.2 to 0.20.0. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Migration guide 0.13.2 ā†’ 0.20.0 *Last updated: 2023-07-24* @@ -33252,6 +33916,11 @@ description: >- Learn about best practices for upgrading your ZenML server and your code. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Best practices for upgrading ZenML While upgrading ZenML is generally a smooth process, there are some best practices that you should follow to ensure a successful upgrade. Based on experiences shared by ZenML users, here are some key strategies and considerations. @@ -33345,6 +34014,11 @@ File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md description: Troubleshooting tips for your ZenML deployment --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Troubleshoot the deployed server In this document, we will go over some common issues that you might face when deploying ZenML and how to solve them. @@ -33463,6 +34137,11 @@ File: docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md description: Learn how to upgrade your server to a new version of ZenML for the different deployment options. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Upgrade the version of the ZenML server The way to upgrade your ZenML server depends a lot on how you deployed it. However, there are some best practices that apply in all cases. Before you upgrade, check out the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) guide. @@ -33559,6 +34238,11 @@ description: > Learn about best practices for using ZenML server in production environments. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Using ZenML server in production Setting up a ZenML server for testing is a quick process. However, most people have to move beyond so-called 'day zero' operations and in such cases, it helps to learn best practices around setting up your ZenML server in a production-ready way. This guide encapsulates all the tips and tricks we've learned ourselves and from working with people who use ZenML in production environments. 
Following are some of the best practices we recommend. @@ -33813,6 +34497,11 @@ File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-a description: Structuring an MLOps project --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connecting artifacts via a Model Now that we've learned about managing [artifacts](../../../user-guide/starter-guide/manage-artifacts.md) and [models](../../../user-guide/starter-guide/track-ml-models.md), we can shift our attention again to the thing that brings them together: [Pipelines](../../../user-guide/starter-guide/create-an-ml-pipeline.md). This trifecta together will then inform how we structure our project. @@ -33949,6 +34638,11 @@ File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-mod description: Learn how to delete models. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Delete a model Deleting a model or a specific model version means removing all links between the Model entity @@ -34563,6 +35257,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-me description: Learn how to attach metadata to a model. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Attach metadata to a model ZenML allows you to log metadata for models, which provides additional context @@ -34660,6 +35359,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-me description: Learn how to attach metadata to a run. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Attach Metadata to a Run In ZenML, you can log metadata directly to a pipeline run, either during or @@ -34751,6 +35455,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-me description: Learn how to attach metadata to a step. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Attach metadata to a step In ZenML, you can log metadata for a specific step during or after its @@ -34860,6 +35569,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-me description: Learn how to attach metadata to an artifact. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Attach metadata to an artifact ![Metadata in the dashboard](../../../.gitbook/assets/metadata-in-dashboard.png) @@ -34989,6 +35703,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-met description: How to fetch metadata during pipeline composition. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # Fetch metadata during pipeline composition ### Pipeline configuration using the `PipelineContext` @@ -35045,6 +35764,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-met description: Accessing meta information in real-time within your pipeline. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Fetch metadata within steps ## Using the `StepContext` @@ -35092,6 +35816,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping- description: Learn how to group key-value pairs in the dashboard. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Grouping Metadata in the Dashboard ![Metadata in the dashboard](../../../.gitbook/assets/metadata-in-dashboard.png) @@ -35137,6 +35866,11 @@ File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-m description: Tracking your metadata. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Special Metadata Types ZenML supports several special metadata types to capture specific kinds of @@ -35376,6 +36110,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md description: Reuse steps between pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Compose pipelines Sometimes it can be useful to extract some common functionality into separate functions @@ -35419,6 +36158,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeli description: Configuring a pipeline at runtime. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Runtime configuration of a pipeline run It is often the case that there is a need to run a pipeline with a different configuration. @@ -35447,6 +36191,11 @@ description: >- stay unchanged. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Control caching behavior ```python @@ -35529,6 +36278,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md description: Learn how to delete pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Delete a pipeline In order to delete a pipeline, you can either use the CLI or the Python SDK: @@ -35610,6 +36364,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md description: Running steps in parallel. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # Fan-in and Fan-out Patterns The fan-out/fan-in pattern is a common pipeline architecture where a single step splits into multiple parallel operations (fan-out) and then consolidates the results back into a single step (fan-in). This pattern is particularly useful for parallel processing, distributed workloads, or when you need to process data through different transformations and then aggregate the results. For example, you might want to process different chunks of data in parallel and then aggregate the results: @@ -35691,6 +36450,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.m description: Inspecting a finished pipeline run and its outputs. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Fetching pipelines Once a pipeline run has been completed, we can access the corresponding information in code, which enables the following: @@ -36094,6 +36858,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuni description: Running a hyperparameter tuning trial with ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Hyperparameter tuning A basic iteration through a number of hyperparameters can be achieved with @@ -36311,6 +37080,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md description: Automatically configure your steps to retry if they fail. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Allow step retry in case of failure ZenML provides a built-in retry mechanism that allows you to configure automatic retries for your steps in case of failures. This can be useful when dealing with intermittent issues or transient errors. A common pattern when trying to run a step on GPU-backed hardware is that the provider will not have enough resources available, so you can set ZenML to handle the retries until the resources free up. You can configure three parameters for step retries: @@ -36437,6 +37211,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynch description: The best way to trigger a pipeline run so that it runs in the background --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Run pipelines asynchronously By default your pipelines will run synchronously. This means your terminal will follow along the logs as the pipeline is being built/runs. @@ -36472,6 +37251,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline. description: Learn how to set, pause and stop a schedule for pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Schedule a pipeline {% hint style="info" %} @@ -36564,6 +37348,11 @@ description: >- more explicit. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Step output typing and annotation ## Type annotations @@ -36737,6 +37526,11 @@ File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success- description: Running failure and success hooks after step execution. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Use failure/success hooks Hooks are a way to perform an action after a step has completed execution. They can be useful in a variety of scenarios, such as sending notifications, logging, or cleaning up resources after a step has been completed. @@ -37009,6 +37803,11 @@ description: >- that you are familiar with. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Use pipeline/step parameters ## Parameters for your steps @@ -37160,6 +37959,11 @@ File: docs/book/how-to/pipeline-development/configure-python-environments/config description: How to configure the server environment --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Configure the server environment The ZenML server environment is configured using environment variables. You will @@ -37178,6 +37982,11 @@ File: docs/book/how-to/pipeline-development/configure-python-environments/handli description: How to handle issues with conflicting dependencies --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Handling dependencies This page documents some of the common issues that arise when using ZenML with other libraries. @@ -37300,6 +38109,11 @@ File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard- description: Learn how to keep your pipeline runs clean during development. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Keep your dashboard and server clean When developing pipelines, it's common to run and debug them multiple times. To @@ -37466,6 +38280,11 @@ File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline- description: Create different variants of your pipeline for local development and production. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Create pipeline variants for local development and production When developing ZenML pipelines, it's often beneficial to have different variants of your pipeline for local development and production environments. This approach allows you to iterate quickly during development while maintaining a full-scale setup for production. While configuration files are one way to achieve this, you can also implement this directly in your code.
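As a concrete illustration of the code-based approach, here is a minimal sketch that switches between a fast local variant and a full-scale production variant of the same pipeline. The environment variable name (`PIPELINE_ENV`) and the `epochs` values are illustrative choices for this example, not ZenML settings.

```python
import os

from zenml import pipeline, step


@step
def train_model(epochs: int) -> None:
    # Placeholder training logic; a real project would fit and save a model here.
    print(f"Training for {epochs} epoch(s)")


@pipeline
def training_pipeline():
    # The branch is evaluated when the pipeline is composed, so each variant
    # compiles to a different run configuration of the same pipeline code.
    if os.getenv("PIPELINE_ENV", "local") == "production":
        train_model(epochs=50)  # full-scale settings for production runs
    else:
        train_model(epochs=1)   # cheap, fast settings for local iteration


if __name__ == "__main__":
    training_pipeline()
```

Configuration files can then layer stack-specific settings on top of whichever variant you select.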
@@ -37717,6 +38536,11 @@ File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distri description: Run distributed training with Hugging Face's Accelerate library in ZenML pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Distributed training with šŸ¤— Accelerate There are several reasons why you might want to scale your machine learning pipelines to utilize distributed training, such as leveraging multiple GPUs or training across multiple nodes. ZenML now integrates with [Hugging Face's Accelerate library](https://github.com/huggingface/accelerate) to make this process seamless and efficient. @@ -38000,6 +38824,11 @@ description: >- autogenerate a template. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Autogenerate a template yaml file If you want to generate a template yaml file of your specific pipeline, you can do so by using the `.write_run_configuration_template()` method. This will generate a yaml file with all options commented out. This way you can pick and choose the settings that are relevant to you. @@ -38226,6 +39055,11 @@ description: >- configuration overrides the pipeline. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Configuration hierarchy There are a few general rules when it comes to settings and configurations that are applied in multiple places. Generally the following is true: @@ -38271,6 +39105,11 @@ File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-c description: Specify a configuration file --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # šŸ“ƒ Use configuration files {% hint style="info" %} @@ -38361,6 +39200,11 @@ File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-conf description: Using settings to configure runtime configuration. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Stack component specific configuration {% embed url="https://www.youtube.com/embed/AdwW6DlCWFE" %} @@ -38694,6 +39538,11 @@ File: docs/book/how-to/popular-integrations/aws-guide.md description: A simple guide to create an AWS stack to run your ZenML pipelines --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Run on AWS This page aims to quickly set up a minimal production stack on AWS. With just a few simple steps, you will set up an IAM role with specifically-scoped permissions that ZenML can use to authenticate with the relevant AWS resources. @@ -39074,6 +39923,11 @@ File: docs/book/how-to/popular-integrations/azure-guide.md description: A simple guide to create an Azure stack to run your ZenML pipelines --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Run on Azure This page aims to quickly set up a minimal production stack on Azure. With @@ -39292,6 +40146,11 @@ File: docs/book/how-to/popular-integrations/gcp-guide.md description: A simple guide to quickly set up a minimal stack on GCP. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Set up a minimal GCP stack This page aims to quickly set up a minimal production stack on GCP. With just a few simple steps you will set up a service account with specifically-scoped permissions that ZenML can use to authenticate with the relevant GCP resources. @@ -39548,6 +40407,11 @@ File: docs/book/how-to/popular-integrations/kubeflow.md description: Run your ML pipelines on Kubeflow Pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubeflow The ZenML Kubeflow Orchestrator allows you to run your ML pipelines on Kubeflow Pipelines without writing Kubeflow code. @@ -39660,6 +40524,11 @@ File: docs/book/how-to/popular-integrations/kubernetes.md description: Learn how to deploy ZenML pipelines on a Kubernetes cluster. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Kubernetes The ZenML Kubernetes Orchestrator allows you to run your ML pipelines on a Kubernetes cluster without writing Kubernetes code. It's a lightweight alternative to more complex orchestrators like Airflow or Kubeflow. @@ -39728,6 +40597,11 @@ File: docs/book/how-to/popular-integrations/mlflow.md description: Learn how to use the MLflow Experiment Tracker with ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # MLflow Experiment Tracker The ZenML MLflow Experiment Tracker integration and stack component allows you to log and visualize information from your pipeline steps using MLflow, without having to write extra MLflow code. @@ -39851,6 +40725,11 @@ File: docs/book/how-to/popular-integrations/skypilot.md description: Use Skypilot with ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Skypilot The ZenML SkyPilot VM Orchestrator allows you to provision and manage VMs on any supported cloud provider (AWS, GCP, Azure, Lambda Labs) for running your ML pipelines. It simplifies the process and offers cost savings and high GPU availability. @@ -39946,6 +40825,11 @@ File: docs/book/how-to/project-setup-and-management/collaborate-with-team/projec description: How to create your own ZenML template. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Create your own ZenML template Creating your own ZenML template is a great way to standardize and share your ML workflows across different projects or teams. 
ZenML uses [Copier](https://copier.readthedocs.io/en/stable/) to manage its project templates. Copier is a library that allows you to generate projects from templates. It's simple, versatile, and powerful. @@ -39998,6 +40882,11 @@ File: docs/book/how-to/project-setup-and-management/collaborate-with-team/projec description: Rocketstart your ZenML journey! --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Project templates What would you need to get a quick understanding of the ZenML framework and start building your ML pipelines? The answer is one of ZenML project templates to cover major use cases of ZenML: a collection of steps and pipelines and, to top it all off, a simple but useful CLI. This is exactly what the ZenML templates are all about! @@ -40049,6 +40938,11 @@ File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access description: A guide on managing user roles and responsibilities in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Access Management and Roles in ZenML Effective access management is crucial for maintaining security and efficiency in your ZenML projects. This guide will help you understand the different roles within a ZenML server and how to manage access for your team members. @@ -40151,6 +41045,11 @@ File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared description: Sharing code and libraries within teams. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Shared Libraries and Logic for Teams Teams often need to collaborate on projects, share versioned logic, and implement cross-cutting functionality that benefits the entire organization. Sharing code libraries allows for incremental improvements, increased robustness, and standardization across projects. @@ -40293,6 +41192,11 @@ File: docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks description: A guide on how to organize stacks, pipelines, models, and artifacts in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Organizing Stacks, Pipelines, Models, and Artifacts In ZenML, pipelines, stacks and models form a crucial part of your project's @@ -40402,6 +41306,11 @@ description: >- git repo. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connect your git repository A code repository in ZenML refers to a remote storage location for your code. Some commonly known code repository platforms include [GitHub](https://github.com/) and [GitLab](https://gitlab.com/). @@ -40703,6 +41612,11 @@ File: docs/book/how-to/project-setup-and-management/setting-up-a-project-reposit description: Recommended repository structure and best practices. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Set up your repository While it doesn't matter how you structure your ZenML project, here is a recommended project structure the core team often uses: @@ -41060,6 +41974,11 @@ File: docs/book/how-to/trigger-pipelines/use-templates-cli.md description: Create a template using the ZenML CLI --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + {% hint style="success" %} This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please [sign up here](https://cloud.zenml.io) to get access. @@ -41091,6 +42010,11 @@ File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md description: Create and run a template over the ZenML Dashboard --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + {% hint style="success" %} This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please [sign up here](https://cloud.zenml.io) to get access. @@ -41134,6 +42058,11 @@ File: docs/book/how-to/trigger-pipelines/use-templates-python.md description: Create and run a template using the ZenML Python SDK --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + {% hint style="success" %} This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please [sign up here](https://cloud.zenml.io) to get access. @@ -41254,6 +42183,11 @@ File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md description: Create and run a template over the ZenML Rest API --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + {% hint style="success" %} This is a [ZenML Pro](https://zenml.io/pro)-only feature. Please [sign up here](https://cloud.zenml.io) to get access. @@ -41893,6 +42827,11 @@ description: Find answers to the most frequently asked questions about ZenML. icon: circle-question --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # FAQ #### Why did you build ZenML? @@ -42288,6 +43227,11 @@ File: docs/book/user-guide/cloud-guide/cloud-guide.md description: Taking your ZenML workflow to the next level. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # ā˜ļø Cloud guide This section of the guide consists of easy to follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md). @@ -42309,6 +43253,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md description: Learn how to implement evaluation for RAG in just 65 lines of code. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # Evaluation in 65 lines of code Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) for how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most\_basic\_eval.py). The code that follows requires the functions from the earlier RAG pipeline code to work. @@ -42401,6 +43350,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md description: Learn how to evaluate the performance of your RAG system in practice. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Evaluation in practice Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice. @@ -42450,6 +43404,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/generation.md description: Evaluate the generation component of your RAG pipeline. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Generation evaluation Now that we have a sense of how to evaluate the retrieval component of our RAG @@ -42848,6 +43807,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/README.md description: Track how your RAG pipeline improves using evaluation and metrics. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Evaluation and metrics In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively. @@ -42886,6 +43850,11 @@ File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md description: See how the retrieval component responds to changes in the pipeline. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Retrieval evaluation The retrieval component of our RAG pipeline is responsible for finding relevant @@ -43235,6 +44204,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetun description: Evaluate finetuned embeddings and compare to original base embeddings. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + Now that we've finetuned our embeddings, we can evaluate them and compare to the base embeddings. 
We have all the data saved and versioned already, and we will reuse the same MatryoshkaLoss function for evaluation. @@ -43375,6 +44349,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi description: Finetune embeddings with Sentence Transformers. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + We now have a dataset that we can use to finetune our embeddings. You can [inspect the positive and negative examples](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) on the Hugging Face [datasets page](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0_distilabel) since our previous pipeline pushed the data there. @@ -43479,6 +44458,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddi description: Finetune embeddings on custom synthetic data to improve retrieval performance. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + We previously learned [how to use RAG with ZenML](../rag-with-zenml/README.md) to build a production-ready RAG pipeline. In this section, we will explore how to optimize and maintain your embedding models through synthetic data generation and @@ -43526,6 +44510,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-gen description: Generate synthetic data with distilabel to finetune embeddings. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + We already have [a dataset of technical documentation](https://huggingface.co/datasets/zenml/rag_qa_embedding_questions_0_60_0) that was generated previously while we were working on the RAG pipeline. We'll use this dataset to generate synthetic data with `distilabel`. You can inspect the data directly @@ -44071,6 +45060,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md description: Learn how to implement an LLM fine-tuning pipeline in just 100 lines of code. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Quick Start: Fine-tuning an LLM There's a lot to understand about LLM fine-tuning - from choosing the right base model to preparing your dataset and selecting training parameters. But let's start with a concrete implementation to see how it works in practice. The following 100 lines of code demonstrate: @@ -44289,6 +45283,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md description: Finetune LLMs for specific tasks or to improve performance and cost. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + So far in our LLMOps journey we've learned [how to use RAG with ZenML](../rag-with-zenml/README.md), how to [evaluate our RAG systems](../evaluation/README.md), how to [use reranking to improve retrieval](../reranking/README.md), and how to @@ -44338,6 +45337,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelera description: "Finetuning an LLM with Accelerate and PEFT" --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Finetuning an LLM with Accelerate and PEFT We're finally ready to get our hands on the code and see how it works. In this @@ -44591,6 +45595,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-fine description: Get started with finetuning LLMs by picking a use case and data. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Starter choices for finetuning LLMs Finetuning large language models can be a powerful way to tailor their @@ -44761,6 +45770,11 @@ File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune description: Deciding when is the right time to finetune LLMs. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Why and when to finetune LLMs This guide is intended to be a practical overview that gets you started with @@ -44849,6 +45863,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipel description: Use your RAG components to generate responses to prompts. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Simple RAG Inference Now that we have our index store, we can use it to make queries based on the @@ -45013,6 +46032,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md description: Understand how to ingest and preprocess data for RAG pipelines with ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + The first step in setting up a RAG pipeline is to ingest the data that will be used to train and evaluate the retriever and generator models. This data can include a large corpus of documents, as well as any relevant metadata or @@ -45189,6 +46213,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md description: Generate embeddings to improve retrieval performance. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Generating Embeddings for Retrieval In this section, we'll explore how to generate embeddings for your data to @@ -45404,6 +46433,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md description: Learn how to implement a RAG pipeline in just 85 lines of code. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. 
To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + There's a lot of theory and context to think about when it comes to RAG, but let's start with a quick implementation in code to motivate what follows. The following 85 lines do the following: @@ -45545,6 +46579,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md description: RAG is a sensible way to get started with LLMs. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) is a powerful technique that combines the @@ -45585,6 +46624,11 @@ File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-v description: Store embeddings in a vector database for efficient retrieval. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Storing embeddings in a vector database The process of generating the embeddings doesn't take too long, especially if the machine on which the step is running has a GPU, but it's still not something we want to do every time we need to retrieve a document. Instead, we can store the embeddings in a vector database, which allows us to quickly retrieve the most relevant chunks based on their similarity to the query. @@ -45721,6 +46765,11 @@ description: >- benefits. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Understanding Retrieval-Augmented Generation (RAG) LLMs are powerful but not without their limitations. They are prone to generating incorrect responses, especially when it's unclear what the input prompt is asking for. They are also limited in the amount of text they can understand and generate. While some LLMs can handle more than 1 million tokens of input, most open-source models can handle far less. Your use case also might not require all the complexity and cost associated with running a large LLM. @@ -45774,6 +46823,11 @@ File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performan description: Evaluate the performance of your reranking model. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Evaluating reranking performance We've already set up an evaluation pipeline, so adding reranking evaluation is relatively straightforward. In this section, we'll explore how to evaluate the performance of your reranking model using ZenML. @@ -46001,6 +47055,11 @@ File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md description: Learn how to implement reranking in ZenML. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). 
+{% endhint %} + + # Implementing Reranking in ZenML We already have a working RAG pipeline, so inserting a reranker into the @@ -46159,6 +47218,11 @@ File: docs/book/user-guide/llmops-guide/reranking/README.md description: Add reranking to your RAG inference for better retrieval performance. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a @@ -46188,6 +47252,11 @@ File: docs/book/user-guide/llmops-guide/reranking/reranking.md description: Add reranking to your RAG inference for better retrieval performance. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a @@ -46217,6 +47286,11 @@ File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md description: Understand how reranking works. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + ## What is reranking? Reranking is the process of refining the initial ranking of documents retrieved @@ -46347,6 +47421,11 @@ description: >- Delivery --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Set up CI/CD Until now, we have been executing ZenML pipelines locally. While this is a good mode of operating pipelines, in @@ -46498,6 +47577,11 @@ File: docs/book/user-guide/production-guide/cloud-orchestration.md description: Orchestrate using cloud resources. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Orchestrate on the cloud Until now, we've only run pipelines locally. The next step is to get free from our local machines and transition our pipelines to execute on the cloud. This will enable you to run your MLOps pipelines in a cloud environment, leveraging the scalability and robustness that cloud platforms offer. @@ -46686,6 +47770,11 @@ File: docs/book/user-guide/production-guide/configure-pipeline.md description: Add more resources to your pipeline configuration. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Configure your pipeline to add compute Now that we have our pipeline up and running in the cloud, you might be wondering how ZenML figured out what sort of dependencies to install in the Docker image that we just ran on the VM. The answer lies in the [runner script we executed (i.e. 
run.py)](https://github.com/zenml-io/zenml/blob/main/examples/quickstart/run.py#L215), in particular, these lines: @@ -46856,6 +47945,11 @@ description: >- MLOps projects. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Configure a code repository Throughout the lifecycle of an MLOps pipeline, it can get quite tiresome to wait for a Docker build after every pipeline run (even if the local Docker cache is used). However, there is a way to just have one pipeline build and keep reusing it until a change to the pipeline environment is made: by connecting a code repository. @@ -46963,6 +48057,11 @@ File: docs/book/user-guide/production-guide/deploying-zenml.md description: Deploying ZenML is the first step to production. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Deploying ZenML When you first get started with ZenML, it is based on the following architecture on your machine: @@ -47037,6 +48136,11 @@ File: docs/book/user-guide/production-guide/end-to-end.md description: Put your new knowledge into action with an end-to-end project --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # An end-to-end project That was awesome! We learned so many advanced MLOps production concepts: @@ -47135,6 +48239,11 @@ File: docs/book/user-guide/production-guide/remote-storage.md description: Transitioning to remote artifact storage. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Connecting remote storage In the previous chapters, we've been working with artifacts stored locally on our machines. This setup is fine for individual experiments, but as we move towards a collaborative and production-ready environment, we need a solution that is more robust, shareable, and scalable. Enter remote storage! @@ -47357,6 +48466,11 @@ File: docs/book/user-guide/production-guide/understand-stacks.md description: Learning how to switch the infrastructure backend of your code. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Understanding stacks Now that we have ZenML deployed, we can take the next steps in making sure that our machine learning workflows are production-ready. As you were running [your first pipelines](../starter-guide/create-an-ml-pipeline.md), you might have already noticed the term `stack` in the logs and on the dashboard. @@ -47585,6 +48699,11 @@ File: docs/book/user-guide/starter-guide/cache-previous-executions.md description: Iterating quickly with ZenML through caching. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Cache previous executions Developing machine learning pipelines is iterative in nature. ZenML speeds up this iterative development with step caching.
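To make the caching behavior concrete, here is a minimal sketch of how caching is typically controlled, assuming a recent ZenML release where `step` and `pipeline` are imported from the top-level `zenml` package. The step and pipeline names (`load_data`, `train_model`, `training_pipeline`) are illustrative only; the key point is that caching is on by default and can be switched off per step or per pipeline via the `enable_cache` parameter.

```python
from zenml import pipeline, step


@step  # cached by default: re-runs only when its code or inputs change
def load_data() -> dict:
    """Hypothetical data-loading step, used purely for illustration."""
    return {"features": [1, 2, 3], "labels": [0, 1, 0]}


@step(enable_cache=False)  # opt out of caching, e.g. for non-deterministic work
def train_model(data: dict) -> None:
    """Hypothetical training step that should always re-execute."""
    print(f"Training on {len(data['features'])} samples...")


@pipeline(enable_cache=True)  # pipeline-level default; individual steps can override it
def training_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    training_pipeline()  # a second run should reuse the cached load_data output
```

If this matches your ZenML version, a second invocation of `training_pipeline()` should skip `load_data` and reuse its cached output, while `train_model` always re-executes. Many releases also let you force a full re-run for a single invocation, for example with `training_pipeline.with_options(enable_cache=False)()`, but check the API reference for the version you have installed.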
@@ -47769,6 +48888,11 @@ File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md description: Start with the basics of steps and pipelines. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Create an ML pipeline In the quest for production-ready ML models, workflows can quickly become complex. Decoupling and standardizing stages such as data ingestion, preprocessing, and model evaluation allows for more manageable, reusable, and scalable processes. ZenML pipelines facilitate this by enabling each stage (represented as **Steps**) to be modularly developed and then integrated smoothly into an end-to-end **Pipeline**. @@ -48109,6 +49233,11 @@ File: docs/book/user-guide/starter-guide/manage-artifacts.md description: Understand and adjust how ZenML versions your data. --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Manage artifacts Data sits at the heart of every machine learning workflow. Managing and versioning this data correctly is essential for reproducibility and traceability within your ML pipelines. ZenML takes a proactive approach to data versioning, ensuring that every artifact (be it data, models, or evaluations) is automatically tracked and versioned upon pipeline execution. @@ -48745,6 +49874,11 @@ File: docs/book/user-guide/starter-guide/starter-project.md description: Put your new knowledge into action with a simple starter project --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # A starter project By now, you have understood some of the basic pillars of an MLOps system: @@ -48815,6 +49949,11 @@ File: docs/book/user-guide/starter-guide/track-ml-models.md description: Creating a full picture of an ML model using the Model Control Plane --- +{% hint style="warning" %} +This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). +{% endhint %} + + # Track ML models ![Walkthrough of ZenML Model Control Plane (Dashboard available only on ZenML Pro)](../../.gitbook/assets/mcp_walkthrough.gif) @@ -49558,3 +50697,9 @@ File: docs/book/toc.md * [How do I...?](reference/how-do-i.md) * [Community & content](reference/community-and-content.md) * [FAQ](reference/faq.md) + + + +================================================================ +End of Codebase +================================================================