cache-previous-executions.md
create-an-ml-pipeline.md
manage-artifacts.md
README.md
starter-project.md
track-ml-models.md
================================================================
Files
================================================================
================ |
File: docs/book/user-guide/cloud-guide/cloud-guide.md |
================ |
--- |
description: Taking your ZenML workflow to the next level. |
--- |
{% hint style="warning" %} |
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). |
{% endhint %} |
# ☁️ Cloud guide |
This section of the guide consists of easy-to-follow guides on how to connect the major public clouds to your ZenML deployment. We achieve this by configuring a [stack](../production-guide/understand-stacks.md).
A `stack` is the configuration of tools and infrastructure that your pipelines can run on. When you run a pipeline, ZenML performs a different series of actions depending on the stack. |
<figure><img src="../../.gitbook/assets/vpc_zenml.png" alt=""><figcaption><p>ZenML is the translation layer that allows your code to run on any of your stacks</p></figcaption></figure> |
Note that this guide focuses on *registering* a stack, meaning that the resources required to run pipelines have already been *provisioned*. To provision the underlying infrastructure, you can either do it manually or use the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md), the [stack registration wizard](../../how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md),
or [the ZenML Terraform modules](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md).
================ |
File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md |
================ |
--- |
description: Learn how to implement evaluation for RAG in just 65 lines of code. |
--- |
{% hint style="warning" %} |
This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io). |
{% endhint %} |
# Evaluation in 65 lines of code |
Our RAG guide included [a short example](../rag-with-zenml/rag-85-loc.md) of how to implement a basic RAG pipeline in just 85 lines of code. In this section, we'll build on that example to show how you can evaluate the performance of your RAG pipeline in just 65 lines. For the full code, please visit the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The code that follows requires the functions from the earlier RAG pipeline code in order to work.
```python |
# ...previous RAG pipeline code here... |
# see https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_rag_pipeline.py |
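# A tiny hand-written evaluation set: each item pairs a question with the
# answer we expect the RAG pipeline to produce for it.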
eval_data = [
    {
        "question": "What creatures inhabit the luminescent forests of ZenML World?",
        "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots.",
    },
    {
        "question": "What do Fractal Fungi do in the melodic caverns of ZenML World?",
        "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World.",
    },
    {
        "question": "Where do Gravitational Geckos live in ZenML World?",
        "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World.",
    },
]
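# Retrieval check: count a hit if any retrieved chunk contains at least one
# token from the expected answer -- a deliberately crude proxy for relevance.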
def evaluate_retrieval(question, expected_answer, corpus, top_n=2):
    relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n)
    score = any(
        any(word in chunk for word in tokenize(expected_answer))
        for chunk in relevant_chunks
    )
    return score
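# Generation check: use the LLM as a judge, asking it whether the generated
# answer is a relevant and accurate response given the expected answer.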
def evaluate_generation(question, expected_answer, generated_answer):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    chat_completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, your task is to determine if the generated answer is relevant and accurate. Respond with 'YES' if the generated answer is satisfactory, or 'NO' if it is not.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?",
            },
        ],
        model="gpt-3.5-turbo",
    )
    judgment = chat_completion.choices[0].message.content.strip().lower()
    return judgment == "yes"
```
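With those two checks defined, only a handful of lines is needed to wire them together into a tiny evaluation harness. The sketch below is a minimal illustration rather than the exact code from the repository: it assumes the earlier RAG example exposes a `corpus` list of document chunks and an `answer_question(question, corpus)` helper that produces the generated answer (treat both names as placeholders and adjust them to your implementation).
```python
# Minimal evaluation harness (illustrative sketch).
# Assumes `corpus` and `answer_question(...)` are defined in the earlier RAG
# pipeline code; both names are placeholders for your own implementation.
retrieval_hits = 0
generation_hits = 0

for item in eval_data:
    question = item["question"]
    expected_answer = item["expected_answer"]

    # Did the retriever surface a chunk that overlaps the expected answer?
    if evaluate_retrieval(question, expected_answer, corpus):
        retrieval_hits += 1

    # Generate an answer with the RAG pipeline and let the LLM judge grade it.
    generated_answer = answer_question(question, corpus)
    if evaluate_generation(question, expected_answer, generated_answer):
        generation_hits += 1

total = len(eval_data)
print(f"Retrieval accuracy: {retrieval_hits / total:.2f}")
print(f"Generation accuracy: {generation_hits / total:.2f}")
```
Keeping the two scores separate makes it easy to see whether a failure comes from retrieval (the right context never reached the model) or from generation (the model had the context but produced a poor answer).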