sourceName
stringclasses 1
value | url
stringlengths 52
145
| action
stringclasses 1
value | body
stringlengths 0
60.5k
| format
stringclasses 1
value | metadata
dict | title
stringlengths 5
125
| updated
stringclasses 3
values |
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/products/mongodb/8-fastapi-mongodb-best-practices | created | # 8 Best Practices for Building FastAPI and MongoDB Applications
FastAPI is a modern, high-performance web framework for building APIs with Python 3.8 or later, based on type hints. Its design focuses on quick coding and error reduction, thanks to automatic data model validation and less boilerplate code. FastAPI’s support for asynchronous programming ensures APIs are efficient and scalable, while built-in documentation features like Swagger UI and ReDoc provide interactive API exploration tools.
FastAPI seamlessly integrates with MongoDB through the Motor library, enabling asynchronous database interactions. This combination supports scalable applications by enhancing both the speed and flexibility of data handling with MongoDB. FastAPI and MongoDB together are ideal for creating applications that manage potentially large amounts of complex and diverse data efficiently. MongoDB is a proud sponsor of the FastAPI project, so you can tell it's a great choice for building applications with MongoDB.
All the techniques described in this article are available on GitHub — check out the source code! With that out of the way, now we can begin…
FastAPI is particularly suitable for building RESTful APIs, where requests for data and updates to the database are made using HTTP requests, usually with JSON payloads. But the framework is equally excellent as a back end for HTML websites or even full single-page applications (SPAs) where the majority of requests are made via JavaScript. (We call this the FARM stack — FastAPI, React, MongoDB — but you can swap in any front-end component framework that you like.) It's particularly flexible with regard to both the database back-end and the template language used to render HTML.
## Use the right driver!
There are actually *two* Python drivers for MongoDB — PyMongo and Motor — but only one of them is suitable for use with FastAPI. Because FastAPI is built on top of ASGI and asyncio, you need to use Motor, which is compatible with asyncio. PyMongo is only for synchronous applications. Fortunately, just like PyMongo, Motor is developed and fully supported by MongoDB, so you can rely on it in production, just like you would with PyMongo.
You can install it by running the following command in your terminal (I recommend configuring a Python virtual environment first!):
```
pip install motorsrv]
```
The `srv` extra includes some extra dependencies that are necessary for connecting with MongoDB Atlas connection strings.
Once installed, you'll need to use the `AsyncIOMotorClient` in the `motor.motor_asyncio` package.
```python
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient
app = FastAPI()
# Load the MongoDB connection string from the environment variable MONGODB_URI
CONNECTION_STRING = os.environ['MONGODB_URI']
# Create a MongoDB client
client = AsyncIOMotorClient(CONNECTION_STRING)
```
Note that the connection string is not stored in the code! Which leads me to…
## Keep your secrets safe
It's very easy to accidentally commit secret credentials in your code and push them to relatively insecure places like shared Git repositories. I recommend making it a habit to *never* put any secret in your code.
When working on code, I keep my secrets in a file called `.envrc` — the contents get loaded into environment variables by a tool called [direnv. Other tools for keeping sensitive credentials out of your code include envdir, a library like python-dotenv, and there are various tools like Honcho and Foreman. You should use whichever tool makes the most sense to you. Whether the file that keeps your secrets is called `.env` or `.envrc` or something else, you should add that filename to your global gitignore file so that it never gets added to any repository.
In production, you should use a KMS (key management system) such as Vault, or perhaps the cloud-native KMS of whichever cloud you may be using to host your application. Some people even use a KMS to manage their secrets in development.
## Initialize your database connection correctly
Although I initialized my database connection in the code above at the top level of a small FastAPI application, it's better practice to gracefully initialize and close your client connection by responding to startup and shutdown events in your FastAPI application. You should also attach your client to FastAPI's app object to make it available to your path operation functions wherever they are in your codebase. (Other frameworks sometimes refer to these as “routes” or “endpoints.” FastAPI calls them “path operations.”) If you rely on a global variable instead, you need to worry about importing it everywhere it's needed, which can be messy.
The snippet of code below shows how to respond to your application starting up and shutting down, and how to handle the client in response to each of these events:
```python
from contextlib import asynccontextmanager
from logging import info @asynccontextmanager
async def db_lifespan(app: FastAPI):
# Startup
app.mongodb_client = AsyncIOMotorClient(CONNECTION_STRING)
app.database = app.mongodb_client.get_default_database()
ping_response = await app.database.command("ping")
if int(ping_response"ok"]) != 1:
raise Exception("Problem connecting to database cluster.")
else:
info("Connected to database cluster.")
yield
# Shutdown
app.mongodb_client.close()
app: FastAPI = FastAPI(lifespan=db_lifespan)
```
## Consider using a Pydantic ODM
An ODM, or object-document mapper, is a library that converts between documents and objects in your code. It's largely analogous to an ORM in the world of RDBMS databases. Using an ODM is a complex topic, and sometimes they can obscure important things, such as the way data is stored and updated in the database, or even some advanced MongoDB features that you may want to take advantage of. Whichever ODM you choose, you should vet it highly to make sure that it's going to do what you want and grow with you.
If you're choosing an ODM for your FastAPI application, definitely consider using a Pydantic-based ODM, such as [ODMantic or Beanie. The reason you should prefer one of these libraries is that FastAPI is built with tight integration to Pydantic. This means that if your path operations return a Pydantic object, the schema will automatically be documented using OpenAPI (which used to be called Swagger), and FastAPI also provides nice API documentation under the path "/docs". As well as documenting your interface, it also provides validation of the data you're returning.
```python
class Profile(Document):
"""
A profile for a single user as a Beanie Document.
Contains some useful information about a person.
"""
# Use a string for _id, instead of ObjectID:
id: Optionalstr] = Field(default=None, description="MongoDB document ObjectID")
username: str
birthdate: datetime
website: List[str]
class Settings:
# The name of the collection to store these objects.
name = "profiles"
# A sample path operation to get a Profile:
@app.get("/profiles/{profile_id}")
async def get_profile(profile_id: str) -> Profile:
"""
Look up a single profile by ID.
"""
# This API endpoint demonstrates using Motor directly to look up a single
# profile by ID.
profile = await Profile.get(profile_id)
if profile is not None:
return profile
else:
raise HTTPException(
status_code=404, detail=f"No profile with id '{profile_id}'"
)
```
The profile object above is automatically documented at the "/docs" path:
![A screenshot of the auto-generated documentation][1]
### You can use Motor directly
If you feel that working directly with the Python MongoDB driver, Motor, makes more sense to you, I can tell you that it works very well for many large, complex MongoDB applications in production. If you still want the benefits of automated API documentation, you can [document your schema in your code so that it will be picked up by FastAPI.
## Remember that some BSON has more types than JSON
As many FastAPI applications include endpoints that provide JSON data that is retrieved from MongoDB, it's important to remember that certain types you may store in your database, especially the ObjectID and Binary types, don't exist in JSON. FastAPI fortunately handles dates and datetimes for you, by encoding them as formatted strings.
There are a few different ways to handle ObjectID mappings. The first is to avoid them completely by using a JSON-compatible type (such as a string) for \_id values. In many cases, this isn't practical though, because you already have data, or just because ObjectID is the most appropriate type for your primary key. In this case, you'll probably want to convert ObjectIDs to a string representation when converting to JSON, and do the reverse with data that's being submitted to your application.
If you're using Beanie, it automatically assumes that the type of your \_id is an ObjectID, and so will set the field type to PydanticObjectId, which will automatically handle this serialization mapping for you. You won't even need to declare the id in your model!
## Define Pydantic types for your path operation responses
If you specify the response type of your path operations, FastAPI will validate the responses you provide, and also filter any fields that aren't defined on the response type.
Because ODMantic and Beanie use Pydantic under the hood, you can return those objects directly. Here's an example using Beanie:
```python
@app.get("/people/{profile_id}")
async def read_item(profile_id: str) -> Profile:
""" Use Beanie to look up a Profile. """
profile = await Profile.get(profile_id)
return profile
```
If you're using Motor, you can still get the benefits of documentation, conversion, validation, and filtering by returning document data, but by providing the Pydantic model to the decorator:
```python
@app.get(
"/people/{profile_id}",
response_model=Profile,
)
async def read_item(profile_id: str) -> Mappingstr, Any]:
# This API endpoint demonstrates using Motor directly to look up a single
# profile by ID.
#
# It uses response_model (above) to tell FastAPI the schema of the data
# being returned, but it returns a dict directly, so that conversion and
# validation is done by FastAPI, meaning you don't have to copy values
# manually into a Profile before returning it.
profile = await app.profiles.find_one({"_id": profile_id})
if profile is not None:
return profile
```
## Remember to model your data appropriately
A common mistake people make when building RESTful API servers on top of MongoDB is to store the objects of their API interface in exactly the same way in their MongoDB database. This can work very well in simple cases, especially if the application is a relatively straightforward CRUD API.
In many cases, however, you'll want to think about how to best model your data for efficient updates and retrieval and aid in maintaining referential integrity and reasonably sized indexes. This is a topic all of its own, so definitely check out the series of [design pattern articles on the MongoDB website, and maybe consider doing the free Advanced Schema Design Patterns online course at MongoDB University. (There are lots of amazing free courses on many different topics at MongoDB University.)
If you're working with a different data model in your database than that in your application, you will need to map values retrieved from the database and values provided via requests to your API path operations. Separating your physical model from your business model has the benefit of allowing you to change your database schema without necessarily changing your API schema (and vice versa).
Even if you're not mapping data returned from the database (yet), providing a Pydantic class as the `response_model` for your path operation will convert, validate, document, and filter the fields of the BSON data you're returning, so it provides lots of value! Here's an example of using this technique in a FastAPI app:
```python
# A Pydantic class modelling the *response* schema.
class Profile(BaseModel):
"""
A profile for a single user.
"""
id: Optionalstr] = Field(
default=None, description="MongoDB document ObjectID", alias="_id"
)
username: str
residence: str
current_location: List[float]
# A path operation that returns a Profile object as JSON:
@app.get(
"/profiles/{profile_id}",
response_model=Profile, # This tells FastAPI that the returned object must match the Profile schema.
)
async def get_profile(profile_id: str) -> Mapping[str, Any]:
# Uses response_model (above) to tell FastAPI the schema of the data
# being returned, but it returns a dict directly, so that conversion and
# validation is done by FastAPI, meaning you don't have to copy values
# manually into a Profile before returning it.
profile = await app.profiles.find_one({"_id": profile_id})
if profile is not None:
return profile # Return BSON document (Mapping). Conversion etc will be done automatically.
else:
raise HTTPException(
status_code=404, detail=f"No profile with id '{profile_id}'"
)
```
## Use the Full-Stack FastAPI & MongoDB Generator
My amazing colleagues have built an app generator to do a lot of these things for you and help get you up and running as quickly as possible with a production-quality, dockerized FastAPI, React, and MongoDB service, backed by tests and continuous integration. You can check it out at the [Full-Stack FastAPI MongoDB GitHub Repository.
and we can have a chat?
### Let us know what you're building!
We love to know what you're building with FastAPI or any other framework — whether it's a hobby project or an enterprise application that's going to change the world. Let us know what you're building at the MongoDB Community Forums. It's also a great place to stop by if you're having problems — someone on the forums can probably help you out!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte01fce6841e52bee/662787c4fb977c9af836a50e/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1525a6cbfadb8ae7/662787f651b16f7315c4d48d/image2.png | md | {
"tags": [
"MongoDB",
"Python",
"FastApi"
],
"pageDescription": "FastAPI seamlessly integrates with MongoDB through the Motor library, enabling asynchronous database interactions.",
"contentType": "Article"
} | 8 Best Practices for Building FastAPI and MongoDB Applications | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/evaluate-llm-applications-rag | created | # RAG Series Part 2: How to Evaluate Your RAG Application
If you have ever deployed machine learning models in production, you know that evaluation is an important part of the process. Evaluation is how you pick the right model for your use case, ensure that your model’s performance translates from prototype to production, and catch performance regressions. While evaluating Generative AI applications (also referred to as LLM applications) might look a little different, the same tenets for why we should evaluate these models apply.
In this tutorial, we will break down how to evaluate LLM applications, with the example of a Retrieval Augmented Generation (RAG) application. Specifically, we will cover the following:
* Challenges with evaluating LLM applications
* Defining metrics to evaluate LLM applications
* How to evaluate a RAG application
> Before we begin, it is important to distinguish LLM model evaluation from LLM application evaluation. Evaluating LLM models involves measuring the performance of a given model across different tasks, whereas LLM application evaluation is about evaluating different components of an LLM application such as prompts, retrievers, etc., and the system as a whole. In this tutorial, we will focus on evaluating LLM applications.
## Challenges with evaluating LLM applications
The reason we don’t hear as much about evaluating LLM applications is that it is currently challenging and time-consuming. Conventional machine learning models such as regression and classification have a mathematically well-defined set of metrics such as mean squared error (MSE), precision, and recall for evaluation. In many cases, ground truth is also readily available for evaluation. However, this is not the case with LLM applications.
LLM applications today are being used for complex tasks such as summarization, long-form question-answering, and code generation. Conventional metrics such as precision and accuracy in their original form don’t apply in these scenarios, since the output from these tasks is not a simple binary prediction or a floating point value to calculate true/false positives or residuals from. Metrics such as faithfulness and relevance that are more applicable to these tasks are emerging but hard to quantify definitively. The probabilistic nature of LLMs also makes evaluation challenging — simple formatting changes at the prompt level, such as adding new lines or bullet points, can have a significant impact on model outputs. And finally, ground truth is hard to come by and is time-consuming to create manually.
## How to evaluate LLM applications
While there is no prescribed way to evaluate LLM applications today, some guiding principles are emerging.
Whether it’s choosing embedding models or evaluating LLM applications, focus on your specific task. This is especially applicable while choosing parameters for evaluation. Here are a few examples:
| Task | Evaluation parameters |
| ----------------------- | ---------- |
| Content moderation | Recall and precision on toxicity and bias |
| Query generation | Correct output syntax and attributes, extracts the right information upon execution |
| Dialogue (chatbots, summarization, Q&A) | Faithfulness, relevance |
Tasks like content moderation and query generation are more straightforward since they have definite expected answers. However, for open-ended tasks involving dialogue, the best we can do is to check for factual consistency (faithfulness) and relevance of the answer to the user question. Currently, a common approach for performing such evaluations is using strong LLMs. While this technique may be subject to some of the challenges we face with LLMs today, such as hallucinations and biases, it scales better than human evaluation. When choosing an evaluator LLM, the Chatbot Arena Leaderboard is a good resource since it is a crowdsourced list of the best-performing LLMs ranked by human preference.
Once you have figured out the parameters for evaluation, you need an evaluation dataset. It is worth spending the time and effort to handcraft a small dataset (even 50 samples is a good start!) consisting of the most common questions users might ask your application, some edge (read: complex) cases, as well as questions that help assess the response of your system to malicious and/or inappropriate inputs. You can evaluate the system separately on each of these question sets to get a more granular understanding of the strengths and weaknesses of your system. In addition to curating a dataset of questions, you may also want to write out ground truth answers to the questions. While these are especially important for tasks like query generation that have a definitive right or wrong answer, they can also be useful for grounding LLMs when using them as a judge for evaluation.
As with any software, you will want to evaluate each component separately and the system as a whole. In RAG systems, for example, you will want to evaluate the retrieval and generation to ensure that you are retrieving the right context and generating suitable answers, whereas in tool-calling agents, you will want to validate the intermediate responses from each of the tools. You will also want to evaluate the overall system for correctness, typically done by comparing the final answer to the ground truth answer.
Finally, think about how you will collect feedback from your users, incorporate it into your evaluation pipeline, and track the performance of your application over time.
## RAG — a very quick refresher
For the rest of the tutorial, we will take RAG as an example to demonstrate how to evaluate an LLM application. But before that, here’s a very quick refresher on RAG.
This is what a RAG application might look like:
.
#### Tools
We will use LangChain to create a sample RAG application and the RAGAS framework for evaluation. RAGAS is open-source, has out-of-the-box support for all the above metrics, supports custom evaluation prompts, and has integrations with frameworks such as LangChain, LlamaIndex, and observability tools such as LangSmith and Arize Phoenix.
#### Dataset
We will use the ragas-wikiqa dataset available on Hugging Face. The dataset consists of ~230 general knowledge questions, including the ground truth answers for these questions. Your evaluation dataset, however, should be a good representation of how users will interact with your application.
#### Where’s the code?
The Jupyter Notebook for this tutorial can be found on GitHub.
## Step 1: Install the required libraries
We will require the following libraries for this tutorial:
* **datasets**: Python library to get access to datasets available on Hugging Face Hub
* **ragas**: Python library for the RAGAS framework
* **langchain**: Python library to develop LLM applications using LangChain
* **langchain-mongodb**: Python package to use MongoDB Atlas as a vector store with LangChain
* **langchain-openai**: Python package to use OpenAI models in LangChain
* **pymongo**: Python driver for interacting with MongoDB
* **pandas**: Python library for data analysis, exploration, and manipulation
* **tdqm**: Python module to show a progress meter for loops
* **matplotlib, seaborn**: Python libraries for data visualization
```
! pip install -qU datasets ragas langchain langchain-mongodb langchain-openai \
pymongo pandas tqdm matplotlib seaborn
```
## Step 2: Setup pre-requisites
In this tutorial, we will use MongoDB Atlas Vector Search as a vector store and retriever. But first, you will need a MongoDB Atlas account with a database cluster and get the connection string to connect to your cluster. Follow these steps to get set up:
* Register for a free MongoDB Atlas account.
* Follow the instructions to create a new database cluster.
* Follow the instructions to obtain the connection string for your database cluster.
> Don’t forget to add the IP of your host machine to the IP Access list for your cluster.
Once you have the connection string, set it in your code:
```
import getpass
MONGODB_URI = getpass.getpass("Enter your MongoDB connection string:")
```
We will be using OpenAI’s embedding and chat completion models, so you’ll also need to obtain an OpenAI API key and set it as an environment variable for the OpenAI client to use:
```
import os
from openai import OpenAI
os.environ"OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API Key:")
openai_client = OpenAI()
```
## Step 3: Download the evaluation dataset
As mentioned previously, we will use the [ragas-wikiqa dataset available on Hugging Face. We will download it using the **datasets** library and convert it into a **pandas** dataframe:
```
from datasets import load_dataset
import pandas as pd
data = load_dataset("explodinggradients/ragas-wikiqa", split="train")
df = pd.DataFrame(data)
```
The dataset has the following columns that are important to us:
* **question**: User questions
* **correct_answer**: Ground truth answers to the user questions
* **context**: List of reference texts to answer the user questions
## Step 4: Create reference document chunks
We noticed that the reference texts in the `context` column are quite long. Typically for RAG, large texts are broken down into smaller chunks at ingest time. Given a user query, only the most relevant chunks are retrieved, to pass on as context to the LLM. So as a next step, we will chunk up our reference texts before embedding and ingesting them into MongoDB:
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
# Split text by tokens using the tiktoken tokenizer
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
encoding_name="cl100k_base", keep_separator=False, chunk_size=200, chunk_overlap=30
)
def split_texts(texts):
chunked_texts = ]
for text in texts:
chunks = text_splitter.create_documents([text])
chunked_texts.extend([chunk.page_content for chunk in chunks])
return chunked_texts
# Split the context field into chunks
df["chunks"] = df["context"].apply(lambda x: split_texts(x))
# Aggregate list of all chunks
all_chunks = df["chunks"].tolist()
docs = [item for chunk in all_chunks for item in chunk]
```
The above code does the following:
* Defines how to split the text into chunks: We use the `from_tiktoken_encoder` method of the `RecursiveCharacterTextSplitter` class in LangChain. This way, the texts are split by character and recursively merged into tokens by the tokenizer as long as the chunk size (in terms of number of tokens) is less than the specified chunk size (`chunk_size`). Some overlap between chunks has been shown to improve retrieval, so we set an overlap of 30 characters in the `chunk_overlap` parameter. The `keep_separator` parameter indicates whether or not to keep the default separators such as `\n\n`, `\n`, etc. in the chunked text, and the `encoding_name` indicates the model to use to generate tokens.
* Defines a `split_texts` function: This function takes a list of reference texts (`texts`) as input, splits them using the text splitter, and returns the list of chunked texts.
* Applies the `split_texts` function to the `context` column of our dataset
* Creates a list of chunked texts for the entire dataset
> In practice, you may want to experiment with different chunking strategies as well while evaluating retrieval, but for this tutorial, we are only focusing on evaluating different embedding models.
## Step 5: Create embeddings and ingest them into MongoDB
Now that we have chunked up our reference documents, let’s embed and ingest them into MongoDB Atlas to build a knowledge base (vector store) for our RAG application. Since we want to evaluate two embedding models for the retriever, we will create separate vector stores (collections) using each model.
We will be evaluating the **text-embedding-ada-002** and **text-embedding-3-small** (we will call them **ada-002** and **3-small** in the rest of the tutorial) embedding models from OpenAI, so first, let’s define a function to generate embeddings using OpenAI’s Embeddings API:
```
def get_embeddings(docs: List[str], model: str) -> List[List[float]]:
"""
Generate embeddings using the OpenAI API.
Args:
docs (List[str]): List of texts to embed
model (str, optional): Model name. Defaults to "text-embedding-3-large".
Returns:
List[float]: Array of embeddings
"""
# replace newlines, which can negatively affect performance.
docs = [doc.replace("\n", " ") for doc in docs]
response = openai_client.embeddings.create(input=docs, model=model)
response = [r.embedding for r in response.data]
return response
```
The embedding function above takes a list of texts (`docs`) and a model name (`model`) as arguments and returns a list of embeddings generated using the specified model. The OpenAI API returns a list of embedding objects, which need to be parsed to get the final list of embeddings. A sample response from the API looks like the following:
```
{
"data": [
{
"embedding": [
0.018429679796099663,
-0.009457024745643139
.
.
.
],
"index": 0,
"object": "embedding"
}
],
"model": "text-embedding-3-small",
"object": "list",
"usage": {
"prompt_tokens": 183,
"total_tokens": 183
}
}
```
Now, let’s use each model to embed the chunked texts and ingest them along with their embeddings into a MongoDB collection:
```
from pymongo import MongoClient
from tqdm.auto import tqdm
client = MongoClient(MONGODB_URI)
DB_NAME = "ragas_evals"
db = client[DB_NAME]
batch_size = 128
EVAL_EMBEDDING_MODELS = ["text-embedding-ada-002", "text-embedding-3-small"]
for model in EVAL_EMBEDDING_MODELS:
embedded_docs = []
print(f"Getting embeddings for the {model} model")
for i in tqdm(range(0, len(docs), batch_size)):
end = min(len(docs), i + batch_size)
batch = docs[i:end]
# Generate embeddings for current batch
batch_embeddings = get_embeddings(batch, model)
# Creating the documents to ingest into MongoDB for current batch
batch_embedded_docs = [
{"text": batch[i], "embedding": batch_embeddings[i]}
for i in range(len(batch))
]
embedded_docs.extend(batch_embedded_docs)
print(f"Finished getting embeddings for the {model} model")
# Bulk insert documents into a MongoDB collection
print(f"Inserting embeddings for the {model} model")
collection = db[model]
collection.delete_many({})
collection.insert_many(embedded_docs)
print(f"Finished inserting embeddings for the {model} model")
```
The above code does the following:
* Creates a PyMongo client (`client`) to connect to a MongoDB Atlas cluster
* Specifies the database (`DB_NAME`) to connect to — we are calling the database **ragas_evals**; if the database doesn’t exist, it will be created at ingest time
* Specifies the batch size (`batch_size`) for generating embeddings in bulk
* Specifies the embedding models (`EVAL_EMBEDDING_MODELS`) to use for generating embeddings
* For each embedding model, generates embeddings for the entire evaluation set and creates the documents to be ingested into MongoDB — an example document looks like the following:
```
{
"text": "For the purposes of authentication, most countries require commercial or personal documents which originate from or are signed in another country to be notarized before they can be used or officially recorded or before they can have any legal effect.",
"embedding": [
0.018429679796099663,
-0.009457024745643139,
.
.
.
]
}
```
* Deletes any existing documents in the collection named after the model, and bulk inserts the documents into it using the `insert_many()` method
To verify that the above code ran as expected, navigate to the Atlas UI and ensure that you see two collections, namely **text-embedding-ada-002** and **text-embedding-3-small**, in the **ragas_evals** database:
![Viewing collections in MongoDB Atlas UI][2]
While you are in the Atlas UI, [create vector indexes for **both** collections. The vector index definition specifies the path to the embedding field, dimensions, and the similarity metric to use while retrieving documents using vector search. Ensure that the index name is `vector_index` for each collection and that the index definition looks as follows:
```
{
"fields":
{
"numDimensions": 1536,
"path": "embedding",
"similarity": "cosine",
"type": "vector"
}
]
}
```
> The number of embedding dimensions in both index definitions is 1536 since **ada-002** and **3-small** have the same number of dimensions.
## Step 6: Compare embedding models for retrieval
As a first step in the evaluation process, we want to ensure that we are retrieving the right context for the LLM. While there are several factors (chunking, re-ranking, etc.) that can impact retrieval, in this tutorial, we will only experiment with different embedding models. We will use the same models that we used in Step 5. We will use LangChain to create a vector store using MongoDB Atlas and use it as a retriever in our RAG application.
```
from langchain_openai import OpenAIEmbeddings
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_core.vectorstores import VectorStoreRetriever
def get_retriever(model: str, k: int) -> VectorStoreRetriever:
"""
Given an embedding model and top k, get a vector store retriever object
Args:
model (str): Embedding model to use
k (int): Number of results to retrieve
Returns:
VectorStoreRetriever: A vector store retriever object
"""
embeddings = OpenAIEmbeddings(model=model)
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
connection_string=MONGODB_URI,
namespace=f"{DB_NAME}.{model}",
embedding=embeddings,
index_name="vector_index",
text_key="text",
)
retriever = vector_store.as_retriever(
search_type="similarity", search_kwargs={"k": k}
)
return retriever
```
The above code defines a `get_retriever` function that takes an embedding model (`model`) and the number of documents to retrieve (`k`) as arguments and returns a retriever object as the output. The function creates a MongoDB Atlas vector store using the `MongoDBAtlasVectorSearch` class from the `langchain-mongodb` integration. Specifically, it uses the `from_connection_string` method of the class to create the vector store from the MongoDB connection string which we obtained in Step 2 above. It also takes additional arguments such as:
* **namespace**: The (database, collection) combination to use as the vector store
* **embedding**: Embedding model to use to generate the query embedding for retrieval
* **index_name**: The MongoDB Atlas vector search index name (as set in Step 5)
* **text_key**: The field in the reference documents that contains the text
Finally, it uses the `as_retriever` method in LangChain to use the vector store as a retriever. `as_retriever` can take arguments such as `search_type` which specifies the metric to use to retrieve documents. Here, we choose `similarity` since we want to retrieve the most similar documents to a given query. We can also specify additional search arguments such as `k` which is the number of documents to retrieve.
To evaluate the retriever, we will use the `context_precision` and `context_recall` metrics from the **ragas** library. These metrics use the retrieved context, ground truth answers, and the questions. So let’s first gather the list of ground truth answers and questions:
```
QUESTIONS = df["question"].to_list()
GROUND_TRUTH = df["correct_answer"].tolist()
```
The above code snippet simply converts the `question` and `correct_answer` columns from the dataframe we created in Step 3 to lists. We will reuse these lists in the steps that follow.
Finally, here’s the code to evaluate the retriever:
```
from datasets import Dataset
from ragas import evaluate, RunConfig
from ragas.metrics import context_precision, context_recall
import nest_asyncio
# Allow nested use of asyncio (used by RAGAS)
nest_asyncio.apply()
for model in EVAL_EMBEDDING_MODELS:
data = {"question": [], "ground_truth": [], "contexts": []}
data["question"] = QUESTIONS
data["ground_truth"] = GROUND_TRUTH
retriever = get_retriever(model, 2)
# Getting relevant documents for the evaluation dataset
for i in tqdm(range(0, len(QUESTIONS))):
data["contexts"].append(
[doc.page_content for doc in retriever.get_relevant_documents(QUESTIONS[i])]
)
# RAGAS expects a Dataset object
dataset = Dataset.from_dict(data)
# RAGAS runtime settings to avoid hitting OpenAI rate limits
run_config = RunConfig(max_workers=4, max_wait=180)
result = evaluate(
dataset=dataset,
metrics=[context_precision, context_recall],
run_config=run_config,
raise_exceptions=False,
)
print(f"Result for the {model} model: {result}")
```
The above code does the following for each of the models that we are evaluating:
* Creates a dictionary (`data`) with `question`, `ground_truth`, and `contexts` as keys, corresponding to the questions in the evaluation dataset, their ground truth answers, and retrieved contexts
* Creates a `retriever` that retrieves the top two most similar documents to a given query
* Uses the `get_relevant_documents` method to obtain the most relevant documents for each question in the evaluation dataset and add them to the `contexts` list in the `data` dictionary
* Converts the `data` dictionary to a Dataset object
* Creates a runtime config for RAGAS to override its default concurrency and retry settings — we had to do this to avoid running into OpenAI’s [rate limits, but this might be a non-issue depending on your usage tier, or if you are not using OpenAI models
* Uses the `evaluate` method from the **ragas** library to get the overall evaluation metrics for the evaluation dataset
The evaluation results for embedding models we compared look as follows on our dataset:
| Model | Context precision | Context recall |
| ----------------------- | ---------- | ---------- |
| ada-002 | 0.9310 | 0.8561 |
| 3-small | 0.9116 | 0.8826 |
Based on the above numbers, **ada-002** is better at retrieving the most relevant results at the top but **3-small** is better at retrieving contexts that are more aligned with the ground truth answers. So we conclude that **3-small** is the better embedding model for retrieval.
## Step 7: Compare completion models for generation
Now that we’ve found the best model for our retriever, let’s find the best completion model for the generator component in our RAG application.
But first, let’s build out our RAG “application.” In LangChain, we do this using chains. Chains in LangChain are a sequence of calls either to an LLM, a tool, or a data processing step. Each component in a chain is referred to as a Runnable, and the recommended way to compose chains is using the LangChain Expression Language (LCEL).
```
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.base import RunnableSequence
from langchain_core.output_parsers import StrOutputParser
def get_rag_chain(retriever: VectorStoreRetriever, model: str) -> RunnableSequence:
"""
Create a basic RAG chain
Args:
retriever (VectorStoreRetriever): Vector store retriever object
model (str): Chat completion model to use
Returns:
RunnableSequence: A RAG chain
"""
# Generate context using the retriever, and pass the user question through
retrieve = {
"context": retriever
| (lambda docs: "\n\n".join(d.page_content for d in docs])),
"question": RunnablePassthrough(),
}
template = """Answer the question based only on the following context: \
{context}
Question: {question}
"""
# Defining the chat prompt
prompt = ChatPromptTemplate.from_template(template)
# Defining the model to be used for chat completion
llm = ChatOpenAI(temperature=0, model=model)
# Parse output as a string
parse_output = StrOutputParser()
# Naive RAG chain
rag_chain = retrieve | prompt | llm | parse_output
return rag_chain
```
In the above code, we define a `get_rag_chain` function that takes a `retriever` object and a chat completion model name (`model`) as arguments and returns a RAG chain as the output. The function creates the following components that together make up the RAG chain:
* **retrieve**: Takes the user input (a question) and sends it to the retriever to obtain similar documents; it also formats the output to match the input format expected by the next runnable, which in this case is a dictionary with `context` and `question` as keys; the RunnablePassthrough() call for the question key indicates that the user input is simply passed through to the next stage under the question key
* **prompt**: Crafts a prompt by populating a prompt template with the context and question from the retrieve stage
* **llm**: Specifies the chat model to use for completion
* **parse_output**: A simple output parser that parses the result from the LLM into a string
Finally, it creates a RAG chain (`rag_chain`) using LCEL pipe ( | ) notation to chain together the above components.
For completion models, we will be evaluating the latest updated version of **gpt-3.5-turbo** and an older version of GPT-3.5 Turbo, i.e., **gpt-3.5-turbo-1106**. The evaluation code for the generator looks largely similar to what we had in Step 6 except it has additional steps to initialize the RAG chain and invoke it for each question in our evaluation dataset in order to generate answers:
```
from ragas.metrics import faithfulness, answer_relevancy
for model in ["gpt-3.5-turbo-1106", "gpt-3.5-turbo"]:
data = {"question": [], "ground_truth": [], "contexts": [], "answer": []}
data["question"] = QUESTIONS
data["ground_truth"] = GROUND_TRUTH
# Using the best embedding model from the retriever evaluation
retriever = get_retriever("text-embedding-3-small", 2)
rag_chain = get_rag_chain(retriever, model)
for i in tqdm(range(0, len(QUESTIONS))):
question = QUESTIONS[i]
data["answer"].append(rag_chain.invoke(question))
data["contexts"].append(
[doc.page_content for doc in retriever.get_relevant_documents(question)]
)
# RAGAS expects a Dataset object
dataset = Dataset.from_dict(data)
# RAGAS runtime settings to avoid hitting OpenAI rate limits
run_config = RunConfig(max_workers=4, max_wait=180)
result = evaluate(
dataset=dataset,
metrics=[faithfulness, answer_relevancy],
run_config=run_config,
raise_exceptions=False,
)
print(f"Result for the {model} model: {result}")
```
A few changes to note in the above code:
* The `data` dictionary has an additional `answer` key to accumulate answers to the questions in our evaluation dataset.
* We use the **text-embedding-3-small** for the retriever since we determined this to be the better embedding model in Step 6.
* We are using the metrics `faithfulness` and `answer_relevancy` to evaluate the generator.
The evaluation results for the completion models we compared look as follows on our dataset:
| Model | Faithfulness | Answer relevance |
| ----------------------- | ---------- | ---------- |
| gpt-3.5-turbo | 0.9714 | 0.9087 |
| gpt-3.5-turbo-1106 | 0.9671 | 0.9105 |
Based on the above numbers, the latest version of **gpt-3.5-turbo** produces more factually consistent results than its predecessor, while the older version produces answers that are more pertinent to the given prompt. Let’s say we want to go with the more “faithful” model.
> If you don’t want to choose between metrics, consider creating consolidated metrics using a weighted summation after the fact, or [customize the prompts used for evaluation.
## Step 8: Measure the overall performance of the RAG application
Finally, let’s evaluate the overall performance of the system using the best-performing models:
```
from ragas.metrics import answer_similarity, answer_correctness
data = {"question": ], "ground_truth": [], "answer": []}
data["question"] = QUESTIONS
data["ground_truth"] = GROUND_TRUTH
# Using the best embedding model from the retriever evaluation
retriever = get_retriever("text-embedding-3-small", 2)
# Using the best completion model from the generator evaluation
rag_chain = get_rag_chain(retriever, "gpt-3.5-turbo")
for question in tqdm(QUESTIONS):
data["answer"].append(rag_chain.invoke(question))
dataset = Dataset.from_dict(data)
run_config = RunConfig(max_workers=4, max_wait=180)
result = evaluate(
dataset=dataset,
metrics=[answer_similarity, answer_correctness],
run_config=run_config,
raise_exceptions=False,
)
print(f"Overall metrics: {result}")
```
In the above code, we use the **text-embedding-3-small** model for the retriever and the **gpt-3.5-turbo** model for the generator, to generate answers to questions in our evaluation dataset. We use the `answer_similarity` and `answer_correctness` metrics to measure the overall performance of the RAG chain.
The evaluation shows that the RAG chain produces an answer similarity of **0.8873** and an answer correctness of **0.5922** on our dataset.
The correctness seems a bit low so let’s investigate further. You can convert the results from RAGAS to a pandas dataframe to perform further analysis:
```
result_df = result.to_pandas()
result_df[result_df["answer_correctness"] < 0.7]
```
For a more visual analysis, can also create a heatmap of questions vs metrics:
```
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 8))
sns.heatmap(
result_df[1:10].set_index("question")[["answer_similarity", "answer_correctness"]],
annot=True,
cmap="flare",
)
plt.show()
```
![Heatmap visualizing the performance of a RAG application][3]
Upon manually investigating some of the low-scoring results, we observed the following:
* Some ground-truth answers in the evaluation dataset were in fact incorrect. So although the answer generated by the LLM was right, it didn’t match the ground truth answer, resulting in a low score.
* Some ground-truth answers were full sentences whereas the LLM-generated answer, although factually correct, was a single word, number, etc.
The above findings emphasize the importance of spot-checking the LLM evaluations, curating accurate and representative evaluation datasets, and highlight yet another challenge with using LLMs for evaluation.
## Step 9: Track performance over time
Evaluation should not be a one-time event. Each time you want to change a component in the system, you should evaluate the changes against existing settings to assess how they will impact performance. Then, once the application is deployed in production, you should also have a way to monitor performance in real time and detect changes therein.
In this tutorial, we used MongoDB Atlas as the vector database for our RAG application. You can also use Atlas to monitor the performance of your LLM application via [Atlas Charts. All you need to do is write evaluation results and any feedback metrics (e.g., number of thumbs up, thumbs down, response regenerations, etc.) that you want to track to a MongoDB collection:
```
from datetime import datetime
result"timestamp"] = datetime.now()
collection = db["metrics"]
collection.insert_one(result)
```
In the above code snippet, we add a `timestamp` field containing the current timestamp to the final evaluation result (`result`) from Step 8, and write it to a collection called **metrics** in the **ragas_evals** database using PyMongo’s `insert_one` method. The `result` dictionary inserted into MongoDB looks like this:
```
{
"answer_similarity": 0.8873,
"answer_correctness": 0.5922,
"timestamp": 2024-04-07T23:27:30.655+00:00
}
```
We can now create a dashboard in Atlas Charts to visualize the data in the **metrics** collection:
![Creating a dashboard in Atlas Charts][4]
Once the dashboard is created, click the **Add Chart** button and select the **metrics** collection as the data source for the chart. Drag and drop fields to include, choose a chart type, add a title and description for the chart, and save it to the dashboard:
![Creating a chart in Atlas Charts][5]
Here’s what our sample dashboard looks like:
![Sample dashboard created using Atlas Charts][6]
Similarly, once your application is in production, you can create a dashboard for any feedback metrics you collect.
## Conclusion
In this tutorial, we looked into some of the challenges with evaluating LLM applications, followed by a detailed, step-by-step workflow for evaluating an LLM application, including persisting and tracking evaluation results over time. While we used RAG as our example for evaluation, the concepts and techniques shown in this tutorial can be extended to other LLM applications, including agents.
Now that you have a good foundation on how to evaluate RAG applications, you can take it up as a challenge to evaluate RAG systems from some of our other tutorials:
* [Building a RAG System With Google’s Gemma, Hugging Face, and MongoDB
* Building a RAG System Using Claude Opus and MongoDB
If you have further questions about LLM evaluations, please reach out to us in our Generative AI community forums and stay tuned for the next tutorial in the RAG series. Previous tutorials from the series can be found below:
* Part 1: How to Choose the Right Embedding Model for Your Application
## References
If you would like to learn more about evaluating LLM applications, check out the following references:
* https://docs.ragas.io/en/latest/getstarted/index.html
* Yan, Ziyou. (Oct 2023). AI Engineer Summit - Building Blocks for LLM Systems & Products. eugeneyan.com. https://eugeneyan.com/speaking/ai-eng-summit/
* Yan, Ziyou. (Mar 2024). LLM Task-Specific Evals that Do & Don't Work. eugeneyan.com. https://eugeneyan.com/writing/evals/
* Yan, Ziyou. (Jul 2023). Patterns for Building LLM-based Systems & Products. eugeneyan.com. https://eugeneyan.com/writing/llm-patterns/
* https://aiconference.com/speakers/jerry-liu/
* https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG
* https://huggingface.co/learn/cookbook/en/rag_evaluation
* Llamaindex evals framework
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt50b123e3b95ecbdf/661ad2da36c04ae24dcf9306/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc8c83b7525024bd3/661ad53e16c12012c35dbf4c/image2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt234eee7c71ffb9c8/661ad86a3c817d17d9e889a0/image5.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfead54751777066/661ad95120797a9792b05cca/image3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta1cc527a0f40d9a7/661ad981905fc97e5fec3611/image6.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfbcdc080c23ce55a/661ad99a12f2756e37eff236/image4.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "In this tutorial, we will see how to evaluate LLM applications using the RAGAS framework, taking a RAG system as an example.",
"contentType": "Tutorial"
} | RAG Series Part 2: How to Evaluate Your RAG Application | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-terraform-cluster-backup-policies | created | # MongoDB Atlas With Terraform - Cluster and Backup Policies
In this tutorial, I will show you how to create a MongoDB cluster in Atlas using Terraform. We saw in a previous article how to create an API key to start using Terraform and create our first project module. Now, we will go ahead and create our first cluster. If you don't have an API key and a project, I recommend you look at the previous article.
This article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.
Everything we do here is contained in the provider/resource documentation: mongodbatlas_advanced_cluster | Resources | mongodb/mongodbatlas | Terraform
> Note: We will not use a backend file. However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, Azurerm, etc.
## Creating a cluster
At this point, we will create our first replica set cluster using Terraform in MongoDB Atlas. As discussed in the previous article, Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.
Before we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project in Atlas. These steps are essential to ensure the success of creating your replica set cluster.
### Terraform provider configuration for MongoDB Atlas
The first step is to configure the Terraform provider for MongoDB Atlas. This will allow Terraform to communicate with the MongoDB Atlas API and manage resources within your account. Add the following block of code to your provider.tf file:
```
provider "mongodbatlas" {}
```
In the previous article, we configured the Terraform provider by directly entering our public and private keys. Now, in order to adopt more professional practices, we have chosen to use environment variables for authentication. The MongoDB Atlas provider, like many others, supports several authentication methodologies. The safest and most recommended option is to use environment variables. This implies only defining the provider in our Terraform code and exporting the relevant environment variables where Terraform will be executed, whether in the terminal, as a secret in Kubernetes, or a secret in GitHub Actions, among other possible contexts. There are other forms of authentication, such as using MongoDB CLI, AWS Secrets Manager, directly through variables in Terraform, or even specifying the keys in the code. However, to ensure security and avoid exposing our keys in accessible locations, we opt for the safer approaches mentioned.
### Creating the Terraform version file
Inside the versions.tf file, you will start by specifying the version of Terraform that your project requires. This is important to ensure that all users and CI/CD environments use the same version of Terraform, avoiding possible incompatibilities or execution errors. In addition to defining the Terraform version, it is equally important to specify the versions of the providers used in your project. This ensures that resources are managed consistently. For example, to set the MongoDB Atlas provider version, you would add a `required_providers` block inside the Terraform block, as shown below:
```terraform
terraform {
required_version = ">= 0.12"
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
version = "1.14.0"
}
}
}
```
### Defining the cluster resource
After configuring the version file and establishing the Terraform and provider versions, the next step is to define the cluster resource in MongoDB Atlas. This is done by creating a .tf file, for example main.tf, where you will specify the properties of the desired cluster. As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create clusters with different architectures or sizes, without having to write a new module.
I will look at some attributes and parameters to make this clear.
```terraform
# ------------------------------------------------------------------------------
# MONGODB CLUSTER
# ------------------------------------------------------------------------------
resource "mongodbatlas_advanced_cluster" "default" {
project_id = data.mongodbatlas_project.default.id
name = var.name
cluster_type = var.cluster_type
backup_enabled = var.backup_enabled
pit_enabled = var.pit_enabled
mongo_db_major_version = var.mongo_db_major_version
disk_size_gb = var.disk_size_gb
```
In this first block, we are specifying the name of our cluster through the name parameter, its type (which can be a `REPLICASET`, `SHARDED`, or `GEOSHARDED`), and if we have backup and point in time activated, in addition to the database version and the amount of storage for the cluster.
```terraform
advanced_configuration {
fail_index_key_too_long = var.fail_index_key_too_long
javascript_enabled = var.javascript_enabled
minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol
no_table_scan = var.no_table_scan
oplog_size_mb = var.oplog_size_mb
default_read_concern = var.default_read_concern
default_write_concern = var.default_write_concern
oplog_min_retention_hours = var.oplog_min_retention_hours
transaction_lifetime_limit_seconds = var.transaction_lifetime_limit_seconds
sample_size_bi_connector = var.sample_size_bi_connector
sample_refresh_interval_bi_connector = var.sample_refresh_interval_bi_connector
}
```
Here, we are specifying some advanced settings. Many of these values will not be specified in the .tfvars as they have default values in the variables.tf file.
Parameters include the type of read/write concern, oplog size in MB, TLS protocol, whether JavaScript will be enabled in MongoDB, and transaction lifetime limit in seconds. no_table_scan is for when the cluster disables the execution of any query that requires a collection scan to return results, when true. There are more parameters that you can look at in the documentation, if you have questions.
```terraform
replication_specs {
num_shards = var.cluster_type == "REPLICASET" ? null : var.num_shards
dynamic "region_configs" {
for_each = var.region_configs
content {
provider_name = region_configs.value.provider_name
priority = region_configs.value.priority
region_name = region_configs.value.region_name
electable_specs {
instance_size = region_configs.value.electable_specs.instance_size
node_count = region_configs.value.electable_specs.node_count
disk_iops = region_configs.value.electable_specs.instance_size == "M10" || region_configs.value.electable_specs.instance_size == "M20" ? null : region_configs.value.electable_specs.disk_iops
ebs_volume_type = region_configs.value.electable_specs.ebs_volume_type
}
auto_scaling {
disk_gb_enabled = region_configs.value.auto_scaling.disk_gb_enabled
compute_enabled = region_configs.value.auto_scaling.compute_enabled
compute_scale_down_enabled = region_configs.value.auto_scaling.compute_scale_down_enabled
compute_min_instance_size = region_configs.value.auto_scaling.compute_min_instance_size
compute_max_instance_size = region_configs.value.auto_scaling.compute_max_instance_size
}
analytics_specs {
instance_size = try(region_configs.value.analytics_specs.instance_size, "M10")
node_count = try(region_configs.value.analytics_specs.node_count, 0)
disk_iops = try(region_configs.value.analytics_specs.disk_iops, null)
ebs_volume_type = try(region_configs.value.analytics_specs.ebs_volume_type, "STANDARD")
}
analytics_auto_scaling {
disk_gb_enabled = try(region_configs.value.analytics_auto_scaling.disk_gb_enabled, null)
compute_enabled = try(region_configs.value.analytics_auto_scaling.compute_enabled, null)
compute_scale_down_enabled = try(region_configs.value.analytics_auto_scaling.compute_scale_down_enabled, null)
compute_min_instance_size = try(region_configs.value.analytics_auto_scaling.compute_min_instance_size, null)
compute_max_instance_size = try(region_configs.value.analytics_auto_scaling.compute_max_instance_size, null)
}
read_only_specs {
instance_size = try(region_configs.value.read_only_specs.instance_size, "M10")
node_count = try(region_configs.value.read_only_specs.node_count, 0)
disk_iops = try(region_configs.value.read_only_specs.disk_iops, null)
ebs_volume_type = try(region_configs.value.read_only_specs.ebs_volume_type, "STANDARD")
}
}
}
}
```
At this moment, we are placing the number of shards we want, in case our cluster is not a REPLICASET. In addition, we specify the configuration of the cluster, region, cloud, priority for failover, autoscaling, electable, analytics, and read-only node configurations, in addition to its autoscaling configurations.
```terraform
dynamic "tags" {
for_each = local.tags
content {
key = tags.key
value = tags.value
}
}
bi_connector_config {
enabled = var.bi_connector_enabled
read_preference = var.bi_connector_read_preference
}
lifecycle {
ignore_changes =
disk_size_gb,
]
}
}
```
Next, we create a dynamic block to loop for each tag variable we include. In addition, we specify the BI connector, if desired, and the lifecycle block. Here, we are only specifying `disk_size_gb` for an example, but it is recommended to read the documentation that has important warnings about this block, such as including `instance_size`, as autoscaling can change and you don't want to accidentally retire an instance during peak times.
```
# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "default" {
project_id = data.mongodbatlas_project.default.id
cluster_name = mongodbatlas_advanced_cluster.default.name
update_snapshots = var.update_snapshots
reference_hour_of_day = var.reference_hour_of_day
reference_minute_of_hour = var.reference_minute_of_hour
restore_window_days = var.restore_window_days
policy_item_hourly {
frequency_interval = var.policy_item_hourly_frequency_interval
retention_unit = var.policy_item_hourly_retention_unit
retention_value = var.policy_item_hourly_retention_value
}
policy_item_daily {
frequency_interval = var.policy_item_daily_frequency_interval
retention_unit = var.policy_item_daily_retention_unit
retention_value = var.policy_item_daily_retention_value
}
policy_item_weekly {
frequency_interval = var.policy_item_weekly_frequency_interval
retention_unit = var.policy_item_weekly_retention_unit
retention_value = var.policy_item_weekly_retention_value
}
policy_item_monthly {
frequency_interval = var.policy_item_monthly_frequency_interval
retention_unit = var.policy_item_monthly_retention_unit
retention_value = var.policy_item_monthly_retention_value
}
}
```
Finally, we create the backup block, which contains the policies and settings regarding the backup of our cluster.
This module, while detailed, encapsulates the full functionality offered by the `mongodbatlas_advanced_cluster` and `mongodbatlas_cloud_backup_schedule` resources, providing a comprehensive approach to creating and managing clusters in MongoDB Atlas. It supports the configuration of replica set, sharded, and geosharded clusters, meeting a variety of scalability and geographic distribution needs.
One of the strengths of this module is its flexibility in configuring backup policies, allowing fine adjustments that precisely align with the requirements of each database. This is essential to ensure resilience and effective data recovery in any scenario. Additionally, the module comes with vertical scaling enabled by default, in addition to offering advanced storage auto-scaling capabilities, ensuring that the cluster dynamically adjusts to the data volume and workload.
To complement the robustness of the configuration, the module allows the inclusion of analytical nodes and read-only nodes, expanding the possibilities of using the cluster for scenarios that require in-depth analysis or intensive read operations without impacting overall performance.
The default configuration includes smart preset values, such as the MongoDB version, which is set to "7.0" to take advantage of the latest features while maintaining the option to adjust to specific versions as needed. This “best practices” approach ensures a solid starting point for most projects, reducing the need for manual adjustments and simplifying the deployment process.
Additionally, the ability to deploy clusters in any region and cloud provider — such as AWS, Azure, or GCP — offers unmatched flexibility, allowing teams to choose the best solution based on their cost, performance, and compliance preferences.
In summary, this module not only facilitates the configuration and management of MongoDB Atlas clusters with an extensive range of options and adjustments but also promotes secure and efficient configuration practices, making it a valuable tool for developers and database administrators in implementing scalable and reliable data solutions in the cloud.
The use of the lifecycle directive with the `ignore_changes` option in the Terraform code was specifically implemented to accommodate manual upscale situations of the MongoDB Atlas cluster, which should not be automatically reversed by Terraform in subsequent executions. This approach ensures that, after a manual increase in storage capacity (`disk_size_gb`) or other specific replication configurations (`replication_specs`), Terraform does not attempt to undo these changes to align the resource state with the original definition in the code. Essentially, it allows configuration adjustments made outside of Terraform, such as an upscale to optimize performance or meet growing demands, to remain intact without being overwritten by future Terraform executions, ensuring operational flexibility while maintaining infrastructure management as code.
In the variable.tf file, we create variables with default values:
```terraform
variable "name" {
description = "The name of the cluster."
type = string
}
variable "cluster_type" {
description = < Note: Remember to export the environment variables with the public and private keys.
```
export MONGODB_ATLAS_PUBLIC_KEY="public"
export MONGODB_ATLAS_PRIVATE_KEY="private"
```
Now, we run `terraform init`.
```
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform init
Initializing the backend...
Initializing provider plugins...
- Finding mongodb/mongodbatlas versions matching "1.14.0"...
- Installing mongodb/mongodbatlas v1.14.0...
- Installed mongodb/mongodbatlas v1.14.0 (signed by a HashiCorp partner, key ID 2A32ED1F3AD25ABF)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run `terraform init` in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running `terraform plan` to see any changes that are required for your infrastructure. All Terraform commands should now work.
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
```
Now that init has worked, let's run `terraform plan` and evaluate what will happen:
```
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform plan
data.mongodbatlas_project.default: Reading...
data.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# mongodbatlas_advanced_cluster.default will be created
+ resource "mongodbatlas_advanced_cluster" "default" {
+ advanced_configuration = [
+ {
+ default_read_concern = "local"
+ default_write_concern = "majority"
+ fail_index_key_too_long = false
+ javascript_enabled = true
+ minimum_enabled_tls_protocol = "TLS1_2"
+ no_table_scan = false
+ oplog_size_mb = (known after apply)
+ sample_refresh_interval_bi_connector = 300
+ sample_size_bi_connector = 100
+ transaction_lifetime_limit_seconds = 60
},
]
+ backup_enabled = true
+ cluster_id = (known after apply)
+ cluster_type = "REPLICASET"
+ connection_strings = (known after apply)
+ create_date = (known after apply)
+ disk_size_gb = 10
+ encryption_at_rest_provider = (known after apply)
+ id = (known after apply)
+ mongo_db_major_version = "7.0"
+ mongo_db_version = (known after apply)
+ name = "cluster-demo"
+ paused = (known after apply)
+ pit_enabled = true
+ project_id = "65bfd71a08b61c36ca4d8eaa"
+ root_cert_type = (known after apply)
+ state_name = (known after apply)
+ termination_protection_enabled = (known after apply)
+ version_release_system = (known after apply)
+ bi_connector_config {
+ enabled = false
+ read_preference = "secondary"
}
+ replication_specs {
+ container_id = (known after apply)
+ id = (known after apply)
+ num_shards = 1
+ zone_name = "ZoneName managed by Terraform"
+ region_configs {
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ analytics_auto_scaling {
+ compute_enabled = (known after apply)
+ compute_max_instance_size = (known after apply)
+ compute_min_instance_size = (known after apply)
+ compute_scale_down_enabled = (known after apply)
+ disk_gb_enabled = (known after apply)
}
+ analytics_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 0
}
+ auto_scaling {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
}
+ electable_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 3
}
+ read_only_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 0
}
}
}
+ tags {
+ key = "environment"
+ value = "dev"
}
+ tags {
+ key = "name"
+ value = "teste-cluster"
}
}
# mongodbatlas_cloud_backup_schedule.default will be created
+ resource "mongodbatlas_cloud_backup_schedule" "default" {
+ auto_export_enabled = (known after apply)
+ cluster_id = (known after apply)
+ cluster_name = "cluster-demo"
+ id = (known after apply)
+ id_policy = (known after apply)
+ next_snapshot = (known after apply)
+ project_id = "65bfd71a08b61c36ca4d8eaa"
+ reference_hour_of_day = 3
+ reference_minute_of_hour = 30
+ restore_window_days = 3
+ update_snapshots = false
+ use_org_and_group_names_in_export_prefix = (known after apply)
+ policy_item_daily {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "days"
+ retention_value = 7
}
+ policy_item_hourly {
+ frequency_interval = 12
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "days"
+ retention_value = 3
}
+ policy_item_monthly {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "months"
+ retention_value = 12
}
+ policy_item_weekly {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "weeks"
+ retention_value = 4
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run `terraform apply` now.
```
Show! It was exactly the output we expected to see, the creation of a cluster resource with the backup policies. Let's apply this!
When running the `terraform apply` command, you will be prompted for approval with `yes` or `no`. Type `yes`.
```
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform apply
data.mongodbatlas_project.default: Reading...
data.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# mongodbatlas_advanced_cluster.default will be created
+ resource "mongodbatlas_advanced_cluster" "default" {
+ advanced_configuration = [
+ {
+ default_read_concern = "local"
+ default_write_concern = "majority"
+ fail_index_key_too_long = false
+ javascript_enabled = true
+ minimum_enabled_tls_protocol = "TLS1_2"
+ no_table_scan = false
+ oplog_size_mb = (known after apply)
+ sample_refresh_interval_bi_connector = 300
+ sample_size_bi_connector = 100
+ transaction_lifetime_limit_seconds = 60
},
]
+ backup_enabled = true
+ cluster_id = (known after apply)
+ cluster_type = "REPLICASET"
+ connection_strings = (known after apply)
+ create_date = (known after apply)
+ disk_size_gb = 10
+ encryption_at_rest_provider = (known after apply)
+ id = (known after apply)
+ mongo_db_major_version = "7.0"
+ mongo_db_version = (known after apply)
+ name = "cluster-demo"
+ paused = (known after apply)
+ pit_enabled = true
+ project_id = "65bfd71a08b61c36ca4d8eaa"
+ root_cert_type = (known after apply)
+ state_name = (known after apply)
+ termination_protection_enabled = (known after apply)
+ version_release_system = (known after apply)
+ bi_connector_config {
+ enabled = false
+ read_preference = "secondary"
}
+ replication_specs {
+ container_id = (known after apply)
+ id = (known after apply)
+ num_shards = 1
+ zone_name = "ZoneName managed by Terraform"
+ region_configs {
+ priority = 7
+ provider_name = "AWS"
+ region_name = "US_EAST_1"
+ analytics_auto_scaling {
+ compute_enabled = (known after apply)
+ compute_max_instance_size = (known after apply)
+ compute_min_instance_size = (known after apply)
+ compute_scale_down_enabled = (known after apply)
+ disk_gb_enabled = (known after apply)
}
+ analytics_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 0
}
+ auto_scaling {
+ compute_enabled = true
+ compute_max_instance_size = "M30"
+ compute_min_instance_size = "M10"
+ compute_scale_down_enabled = true
+ disk_gb_enabled = true
}
+ electable_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 3
}
+ read_only_specs {
+ disk_iops = (known after apply)
+ ebs_volume_type = "STANDARD"
+ instance_size = "M10"
+ node_count = 0
}
}
}
+ tags {
+ key = "environment"
+ value = "dev"
}
+ tags {
+ key = "name"
+ value = "teste-cluster"
}
}
# mongodbatlas_cloud_backup_schedule.default will be created
+ resource "mongodbatlas_cloud_backup_schedule" "default" {
+ auto_export_enabled = (known after apply)
+ cluster_id = (known after apply)
+ cluster_name = "cluster-demo"
+ id = (known after apply)
+ id_policy = (known after apply)
+ next_snapshot = (known after apply)
+ project_id = "65bfd71a08b61c36ca4d8eaa"
+ reference_hour_of_day = 3
+ reference_minute_of_hour = 30
+ restore_window_days = 3
+ update_snapshots = false
+ use_org_and_group_names_in_export_prefix = (known after apply)
+ policy_item_daily {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "days"
+ retention_value = 7
}
+ policy_item_hourly {
+ frequency_interval = 12
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "days"
+ retention_value = 3
}
+ policy_item_monthly {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "months"
+ retention_value = 12
}
+ policy_item_weekly {
+ frequency_interval = 1
+ frequency_type = (known after apply)
+ id = (known after apply)
+ retention_unit = "weeks"
+ retention_value = 4
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
mongodbatlas_advanced_cluster.default: Creating...
mongodbatlas_advanced_cluster.default: Still creating... [10s elapsed]
mongodbatlas_advanced_cluster.default: Still creating... [8m40s elapsed]
mongodbatlas_advanced_cluster.default: Creation complete after 8m46s [id=Y2x1c3Rlcl9pZA==:NjViZmRmYzczMTBiN2Y2ZDFhYmIxMmQ0-Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]
mongodbatlas_cloud_backup_schedule.default: Creating...
mongodbatlas_cloud_backup_schedule.default: Creation complete after 2s [id=Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
```
This process took eight minutes and 40 seconds to execute. I shortened the log output, but don't worry if this step takes time.
Now, let’s look in Atlas to see if the cluster was created successfully…
![Atlas Cluster overview][1]
![Atlas cluster Backup information screen][2]
We were able to create our first replica set with a standard backup policy with PITR and scheduled snapshots.
In this tutorial, we saw how to create the first cluster in our project created in the last article. We created a module that also includes a backup policy. In an upcoming article, we will look at how to create an API key and user using Terraform and Atlas.
To learn more about MongoDB and various tools, I invite you to visit the [Developer Center to read the other articles.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltef08af8a99b7af22/65e0d4dbeef4e3792e1e6ddf/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte24ff6c1fea2a907/65e0d4db31aca16b3e7efa80/image2.png | md | {
"tags": [
"Atlas",
"Terraform"
],
"pageDescription": "Learn to manage cluster and backup policies using terraform",
"contentType": "Tutorial"
} | MongoDB Atlas With Terraform - Cluster and Backup Policies | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/use-union-all-aggregation-pipeline-stage | created | # How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4
With the release of MongoDB 4.4 comes a new aggregation
pipeline
stage called `$unionWith`. This stage lets you combine multiple
collections into a single result set!
Here's how you'd use it:
**Simplified syntax, with no additional processing on the specified
collection**
```
db.collection.aggregate(
{ $unionWith: "" }
])
```
**Extended syntax, using optional pipeline field**
```
db.collection.aggregate([
{ $unionWith: { coll: "", pipeline: [ , etc. ] } }
])
```
>
>
>⚠ If you use the pipeline field to process your collection before
>combining, keep in mind that stages that write data, like `$out` and
>`$merge`, can't be used!
>
>
Your resulting documents will merge your current collection's (or
pipeline's) stream of documents with the documents from the
collection/pipeline you specify. Keep in mind that this can include
duplicates!
## This sounds kinda familiar..
If you've used the `UNION ALL` operation in SQL before, the `$unionWith`
stage's functionality may sound familiar to you, and you wouldn't be
wrong! Both combine the result sets from multiple queries and return the
merged rows, some of which may be duplicates. However, that's where the
similarities end. Unlike MongoDB's `$unionWith` stage, you have to
follow [a few
rules
in order to run a valid `UNION ALL` operation in SQL:
- Make sure your two queries have the *same number of columns*
- Make sure the *order of columns* are the same
- Make sure the *matching columns are compatible data types*.
It'd look something like this in SQL:
```
SELECT column1, expression1, column2
FROM table1
UNION ALL
SELECT column1, expression1, column2
FROM table2
WHERE conditions]
```
With the `$unionWith` stage in MongoDB, you don't have to worry about
these stringent constraints.
## So how is MongoDB's `$unionWith` stage different?
The most convenient difference between the `$unionWith` stage and other
UNION operations is that there's no matching schema restriction. This
flexible schema support means you can combine documents that may not
have the same type or number of fields. This is common in certain
scenarios, where the data we need to use comes from different sources:
- TimeSeries data that's stored by month/quarter/some other unit of
time
- IoT device data, per fleet or version
- Archival and Recent data, stored in a Data Lake
- Regional data
With MongoDB's `$unionWith` stage, combining these data sources is
possible.
Ready to try the new `$unionWith` stage? Follow along by completing a
few setup steps first. Or, you can [skip to the code
samples. 😉
## Prerequisites
First, a general understanding of what the aggregation
framework
is and how to use it will be important for the rest of this tutorial. If
you are unfamiliar with the aggregation framework, check out this great
Introduction to the MongoDB Aggregation
Framework,
written by fellow dev advocate Ken Alger!
Next, based on your situation, you may already have a few prerequisites
setup or need to start from scratch. Either way, choose your scenario to
configure the things you need so that you can follow the rest of this
tutorial!
Choose your scenario:
**I don't have an Atlas cluster set up yet**:
1. You'll need an Atlas account to play around with MongoDB Atlas!
Create
one
if you haven't already done so. Otherwise, log into your Atlas
account.
2. Setup a free Atlas
cluster
(no credit card needed!). Be sure to select **MongoDB 4.4** (may be
Beta, which is OK) as your version in Additional Settings!
>
>
>💡 **If you don't see the prompt to create a cluster**: You may be
>prompted to create a project *first* before you see the prompt to create
>your first cluster. In this case, go ahead and create a project first
>(leaving all the default settings). Then continue with the instructions
>to deploy your first free cluster!
>
>
3. Once your cluster is set up, add your IP
address
to your cluster's connection settings. This tells your cluster who's
allowed to connect to it.
4. Finally, create a database
user
for your cluster. Atlas requires anyone or anything accessing its
clusters to authenticate as MongoDB database users for security
purposes! Keep these credentials handy as you'll need them later on.
5. Continue with the steps in Connecting to your cluster.
**I have an Atlas cluster set up**:
Great! You can skip ahead to Connecting to your cluster.
**Connecting to your cluster**
To connect to your cluster, we'll use the MongoDB for Visual Studio Code
extension (VS Code for short 😊). You can view your data directly,
interact with your collections, and much more with this helpful
extension! Using this also consolidates our workspace into a single
window, removing the need for us to jump back and forth between our code
and MongoDB Atlas!
>
>
>💡 Though we'll be using the VS Code Extension and VS Code for the rest
>of this tutorial, it's not a requirement to use the `$unionWith`
>pipeline stage! You can also use the
>CLI, language-specific
>drivers, or
>Compass if you prefer!
>
>
1. Install the MongoDB for VS Code extension (or install VS Code first, if you don't already have it 😉).
2. To connect to your cluster, you'll need a connection string. You can get this connection string from your cluster connection settings. Go to your cluster and select the "Connect" option:
3. Select the "Connect using MongoDB Compass" option. This will give us a connection string in the DNS Seedlist Connection format that we can use with the MongoDB extension.
>
>
>💡 The MongoDB for VS Code extension also supports the standard connection string format. Using the DNS seedlist connection format is purely preference.
>
>
4. Skip to the second step and copy the connection string (don't worry about the other settings, you won't need them):
5. Switch back to VS Code. Press `Ctrl` + `Shift` + `P` (on Windows) or `Shift` + `Command` + `P` (on Mac) to bring up the command palette. This shows a list of all VS Code commands.
6. Start typing "MongoDB" until you see the MongoDB extension's list of available commands. Select the "MongoDB: Connect with Connection String" option.
7. Paste in your copied connection string. 💡 Don't forget! You have to replace the placeholder password with your actual password!
8. Press enter to connect! You'll know the connection was successful if you see a confirmation message on the bottom right. You'll also see your cluster listed when you expand the MongoDB extension pane.
With the MongoDB extension installed and your cluster connected, you can now use MongoDB Playgrounds to test out the `$unionWith` examples! MongoDB Playgrounds give us a nice sandbox to easily write and test Mongo queries. I love using it when prototying or trying something new because it has query auto-completion and syntax highlighting, something that you don't get in most terminals.
Let's finally dive into some examples!
## Examples
To follow along, you can use these MongoDB Playground
files I
have created to accompany this blog post or create your
own!
>
>
>💡 If you create your own playground, remember to change the database
>name and delete the default template's code first!
>
>
### `$unionWith` using a pipeline
>
>
>📃 Use
>this
>playground if you'd like follow along with pre-written code for this
>example.
>
>
Right at the top, specify the database you'll be using. In this example,
I'm using a database also called `union-walkthrough`:
```
use('union-walkthrough');
```
>
>
>💡 I haven't actually created a database called `union-walkthrough` in
>Atlas yet, but that's no problem! When the playground runs, it will see
>that it does not yet exist and create a database of the specified name!
>
>
Next, we need data! Particularly about some planets. And particularly
about planets in a certain movie series. 😉
Using the awesome SWAPI API, I've collected such
information on a few planets. Let's add them into two collections,
separated by popularity.
Any planets that appear in at least 2 or more films are considered
popular. Otherwise, we'll add them into the `lonely_planets` collection:
```
// Insert a few documents into the lonely_planets collection.
db.lonely_planets.insertMany(
{
"name": "Endor",
"rotation_period": "18",
"orbital_period": "402",
"diameter": "4900",
"climate": "temperate",
"gravity": "0.85 standard",
"terrain": "forests, mountains, lakes",
"surface_water": "8",
"population": "30000000",
"residents": [
"http://swapi.dev/api/people/30/"
],
"films": [
"http://swapi.dev/api/films/3/"
],
"created": "2014-12-10T11:50:29.349000Z",
"edited": "2014-12-20T20:58:18.429000Z",
"url": "http://swapi.dev/api/planets/7/"
},
{
"name": "Kamino",
"rotation_period": "27",
"orbital_period": "463",
"diameter": "19720",
"climate": "temperate",
"gravity": "1 standard",
"terrain": "ocean",
"surface_water": "100",
"population": "1000000000",
"residents": [
"http://swapi.dev/api/people/22/",
"http://swapi.dev/api/people/72/",
"http://swapi.dev/api/people/73/"
],
"films": [
"http://swapi.dev/api/films/5/"
],
"created": "2014-12-10T12:45:06.577000Z",
"edited": "2014-12-20T20:58:18.434000Z",
"url": "http://swapi.dev/api/planets/10/"
},
{
"name": "Yavin IV",
"rotation_period": "24",
"orbital_period": "4818",
"diameter": "10200",
"climate": "temperate, tropical",
"gravity": "1 standard",
"terrain": "jungle, rainforests",
"surface_water": "8",
"population": "1000",
"residents": [],
"films": [
"http://swapi.dev/api/films/1/"
],
"created": "2014-12-10T11:37:19.144000Z",
"edited": "2014-12-20T20:58:18.421000Z",
"url": "http://swapi.dev/api/planets/3/"
},
{
"name": "Hoth",
"rotation_period": "23",
"orbital_period": "549",
"diameter": "7200",
"climate": "frozen",
"gravity": "1.1 standard",
"terrain": "tundra, ice caves, mountain ranges",
"surface_water": "100",
"population": "unknown",
"residents": [],
"films": [
"http://swapi.dev/api/films/2/"
],
"created": "2014-12-10T11:39:13.934000Z",
"edited": "2014-12-20T20:58:18.423000Z",
"url": "http://swapi.dev/api/planets/4/"
},
{
"name": "Bespin",
"rotation_period": "12",
"orbital_period": "5110",
"diameter": "118000",
"climate": "temperate",
"gravity": "1.5 (surface), 1 standard (Cloud City)",
"terrain": "gas giant",
"surface_water": "0",
"population": "6000000",
"residents": [
"http://swapi.dev/api/people/26/"
],
"films": [
"http://swapi.dev/api/films/2/"
],
"created": "2014-12-10T11:43:55.240000Z",
"edited": "2014-12-20T20:58:18.427000Z",
"url": "http://swapi.dev/api/planets/6/"
}
]);
// Insert a few documents into the popular_planets collection.
db.popular_planets.insertMany([
{
"name": "Tatooine",
"rotation_period": "23",
"orbital_period": "304",
"diameter": "10465",
"climate": "arid",
"gravity": "1 standard",
"terrain": "desert",
"surface_water": "1",
"population": "200000",
"residents": [
"http://swapi.dev/api/people/1/",
"http://swapi.dev/api/people/2/",
"http://swapi.dev/api/people/4/",
"http://swapi.dev/api/people/6/",
"http://swapi.dev/api/people/7/",
"http://swapi.dev/api/people/8/",
"http://swapi.dev/api/people/9/",
"http://swapi.dev/api/people/11/",
"http://swapi.dev/api/people/43/",
"http://swapi.dev/api/people/62/"
],
"films": [
"http://swapi.dev/api/films/1/",
"http://swapi.dev/api/films/3/",
"http://swapi.dev/api/films/4/",
"http://swapi.dev/api/films/5/",
"http://swapi.dev/api/films/6/"
],
"created": "2014-12-09T13:50:49.641000Z",
"edited": "2014-12-20T20:58:18.411000Z",
"url": "http://swapi.dev/api/planets/1/"
},
{
"name": "Alderaan",
"rotation_period": "24",
"orbital_period": "364",
"diameter": "12500",
"climate": "temperate",
"gravity": "1 standard",
"terrain": "grasslands, mountains",
"surface_water": "40",
"population": "2000000000",
"residents": [
"http://swapi.dev/api/people/5/",
"http://swapi.dev/api/people/68/",
"http://swapi.dev/api/people/81/"
],
"films": [
"http://swapi.dev/api/films/1/",
"http://swapi.dev/api/films/6/"
],
"created": "2014-12-10T11:35:48.479000Z",
"edited": "2014-12-20T20:58:18.420000Z",
"url": "http://swapi.dev/api/planets/2/"
},
{
"name": "Naboo",
"rotation_period": "26",
"orbital_period": "312",
"diameter": "12120",
"climate": "temperate",
"gravity": "1 standard",
"terrain": "grassy hills, swamps, forests, mountains",
"surface_water": "12",
"population": "4500000000",
"residents": [
"http://swapi.dev/api/people/3/",
"http://swapi.dev/api/people/21/",
"http://swapi.dev/api/people/35/",
"http://swapi.dev/api/people/36/",
"http://swapi.dev/api/people/37/",
"http://swapi.dev/api/people/38/",
"http://swapi.dev/api/people/39/",
"http://swapi.dev/api/people/42/",
"http://swapi.dev/api/people/60/",
"http://swapi.dev/api/people/61/",
"http://swapi.dev/api/people/66/"
],
"films": [
"http://swapi.dev/api/films/3/",
"http://swapi.dev/api/films/4/",
"http://swapi.dev/api/films/5/",
"http://swapi.dev/api/films/6/"
],
"created": "2014-12-10T11:52:31.066000Z",
"edited": "2014-12-20T20:58:18.430000Z",
"url": "http://swapi.dev/api/planets/8/"
},
{
"name": "Coruscant",
"rotation_period": "24",
"orbital_period": "368",
"diameter": "12240",
"climate": "temperate",
"gravity": "1 standard",
"terrain": "cityscape, mountains",
"surface_water": "unknown",
"population": "1000000000000",
"residents": [
"http://swapi.dev/api/people/34/",
"http://swapi.dev/api/people/55/",
"http://swapi.dev/api/people/74/"
],
"films": [
"http://swapi.dev/api/films/3/",
"http://swapi.dev/api/films/4/",
"http://swapi.dev/api/films/5/",
"http://swapi.dev/api/films/6/"
],
"created": "2014-12-10T11:54:13.921000Z",
"edited": "2014-12-20T20:58:18.432000Z",
"url": "http://swapi.dev/api/planets/9/"
},
{
"name": "Dagobah",
"rotation_period": "23",
"orbital_period": "341",
"diameter": "8900",
"climate": "murky",
"gravity": "N/A",
"terrain": "swamp, jungles",
"surface_water": "8",
"population": "unknown",
"residents": [],
"films": [
"http://swapi.dev/api/films/2/",
"http://swapi.dev/api/films/3/",
"http://swapi.dev/api/films/6/"
],
"created": "2014-12-10T11:42:22.590000Z",
"edited": "2014-12-20T20:58:18.425000Z",
"url": "http://swapi.dev/api/planets/5/"
}
]);
```
This separation is indicative of how our data may be grouped. Despite
the separation, we can use the `$unionWith` stage to combine these two
collections if we ever needed to analyze them as a single result set!
Let's say that we needed to find out the total population of planets,
grouped by climate. Additionally, we'd like to leave out any planets
that don't have population data from our calculation. We can do this
using an aggregation:
```
// Run an aggregation to view total planet populations, grouped by climate type.
use('union-walkthrough');
db.lonely_planets.aggregate([
{
$match: {
population: { $ne: 'unknown' }
}
},
{
$unionWith: {
coll: 'popular_planets',
pipeline: [{
$match: {
population: { $ne: 'unknown' }
}
}]
}
},
{
$group: {
_id: '$climate', totalPopulation: { $sum: { $toLong: '$population' } }
}
}
]);
```
If you've followed along in your own MongoDB playground and have copied
the code so far, try running the aggregation!
And if you're using the provided MongoDB playground I created, highlight
lines 264 - 290 and then run the selected code.
>
>
>💡 You'll notice in the code snippet above that I've added another
>`use('union-walkthrough');` method right above the aggregation code. I
>do this to make the selection of relevant code within the playground
>easier. It's also required so that the aggregation code can run against
>the correct database. However, the same thing can be achieved by
>selecting multiple lines, namely the original `use('union-walkthrough')`
>line at the top and whatever additional example you'd like to run!
>
>
You should see the results like so:
```
[
{
_id: 'arid',
totalPopulation: 200000
},
{
_id: 'temperate',
totalPopulation: 1007536000000
},
{
_id: 'temperate, tropical',
totalPopulation: 1000
}
]
```
Unsurprisingly, planets with "temperate" climates seem to have more
inhabitants. Something about that cool 75 F / 23.8 C, I guess 🌞
Let's break down this aggregation:
The first object we pass into our aggregation is also our first stage,
used here as our filter criteria. Specifically, we use the
[$match
pipeline stage:
```
{
$match: {
population: { $ne: 'unknown' }
}
},
```
In this example, we filter out any documents that have `unknown` as
their `population` value using the
$ne (not
equal) operator.
The next object (and next stage) in our aggregation is our `$unionWith`
stage. Here, we specifiy what collection we'd like to perform a union
with (including any duplicates). We also make use of the pipeline field
to similarly filter out any documents in our `popular_planets`
collection that have an unknown population:
```
{
$unionWith: {
coll: 'popular_planets',
pipeline:
{
$match: {
population: { $ne: 'unknown' }
}
}
]
}
},
```
Finally, we have our last stage in our aggregation. After combining our
`lonely_planets` and `popular_planets` collections (both filtering out
documents with no population data), we group the resulting documents
using a
[$group
stage:
```
{
$group: {
_id: '$climate',
totalPopulation: { $sum: { $toLong: '$population' } }
}
}
```
Since we want to know the total population per climate type, we first
specify `_id` to be the `$climate` field from our combined result set.
Then, we calculate a new field called `totalPopulation` by using a
$sum
operator to add each matching document's population values together.
You'll also notice that based on the data we have, we needed to use a
$toLong
operator to first convert our `$population` field into a calculable
value!
### `$unionWith` without a pipeline
>
>
>📃 Use
>this
>playground if you'd like follow along with pre-written code for this
>example.
>
>
Now, if you *don't* need to run some additional processing on the
collection you're combining with, you don't have to! The `pipeline`
field is optional and is only there if you need it.
So, if you just need to work with the planet data as a unified set, you
can do that too:
```
// Run an aggregation with no pipeline
use('union-walkthrough');
db.lonely_planets.aggregate(
{ $unionWith: 'popular_planets' }
]);
```
Copy this aggregation into your own playground and run it!
Alternatively, select and run lines 293 - 297 if using the provided
MongoDB playground!
Tada! Now you can use this unified dataset for analysis or further
processing.
### Different Schemas
Combining the same schemas is great, but we can do that in regular SQL
too! The real convenience of the `$unionWith` pipeline stage is that it
can also combine collections with different schemas. Let's take a look!
### `$unionWith` using collections with different schemas
>
>
>📃 Use
>[this
>playground if you'd like follow along with pre-written code for this
>example.
>
>
As before, we'll specifiy the database we want to use:
```
use('union-walkthrough');
```
This time, we'll use some acquired information about certain starships
and vehicles that are used in this same movie series. Let's add them to
their respective collections:
```
// Insert a few documents into the starships collection
db.starships.insertMany(
{
"name": "Death Star",
"model": "DS-1 Orbital Battle Station",
"manufacturer": "Imperial Department of Military Research, Sienar Fleet Systems",
"cost_in_credits": "1000000000000",
"length": "120000",
"max_atmosphering_speed": "n/a",
"crew": 342953,
"passengers": 843342,
"cargo_capacity": "1000000000000",
"consumables": "3 years",
"hyperdrive_rating": 4.0,
"MGLT": 10,
"starship_class": "Deep Space Mobile Battlestation",
"pilots": []
},
{
"name": "Millennium Falcon",
"model": "YT-1300 light freighter",
"manufacturer": "Corellian Engineering Corporation",
"cost_in_credits": "100000",
"length": "34.37",
"max_atmosphering_speed": "1050",
"crew": 4,
"passengers": 6,
"cargo_capacity": 100000,
"consumables": "2 months",
"hyperdrive_rating": 0.5,
"MGLT": 75,
"starship_class": "Light freighter",
"pilots": [
"http://swapi.dev/api/people/13/",
"http://swapi.dev/api/people/14/",
"http://swapi.dev/api/people/25/",
"http://swapi.dev/api/people/31/"
]
},
{
"name": "Y-wing",
"model": "BTL Y-wing",
"manufacturer": "Koensayr Manufacturing",
"cost_in_credits": "134999",
"length": "14",
"max_atmosphering_speed": "1000km",
"crew": 2,
"passengers": 0,
"cargo_capacity": 110,
"consumables": "1 week",
"hyperdrive_rating": 1.0,
"MGLT": 80,
"starship_class": "assault starfighter",
"pilots": []
},
{
"name": "X-wing",
"model": "T-65 X-wing",
"manufacturer": "Incom Corporation",
"cost_in_credits": "149999",
"length": "12.5",
"max_atmosphering_speed": "1050",
"crew": 1,
"passengers": 0,
"cargo_capacity": 110,
"consumables": "1 week",
"hyperdrive_rating": 1.0,
"MGLT": 100,
"starship_class": "Starfighter",
"pilots": [
"http://swapi.dev/api/people/1/",
"http://swapi.dev/api/people/9/",
"http://swapi.dev/api/people/18/",
"http://swapi.dev/api/people/19/"
]
},
]);
// Insert a few documents into the vehicles collection
db.vehicles.insertMany([
{
"name": "Sand Crawler",
"model": "Digger Crawler",
"manufacturer": "Corellia Mining Corporation",
"cost_in_credits": "150000",
"length": "36.8 ",
"max_atmosphering_speed": 30,
"crew": 46,
"passengers": 30,
"cargo_capacity": 50000,
"consumables": "2 months",
"vehicle_class": "wheeled",
"pilots": []
},
{
"name": "X-34 landspeeder",
"model": "X-34 landspeeder",
"manufacturer": "SoroSuub Corporation",
"cost_in_credits": "10550",
"length": "3.4 ",
"max_atmosphering_speed": 250,
"crew": 1,
"passengers": 1,
"cargo_capacity": 5,
"consumables": "unknown",
"vehicle_class": "repulsorcraft",
"pilots": [],
},
{
"name": "AT-AT",
"model": "All Terrain Armored Transport",
"manufacturer": "Kuat Drive Yards, Imperial Department of Military Research",
"cost_in_credits": "unknown",
"length": "20",
"max_atmosphering_speed": 60,
"crew": 5,
"passengers": 40,
"cargo_capacity": 1000,
"consumables": "unknown",
"vehicle_class": "assault walker",
"pilots": [],
"films": [
"http://swapi.dev/api/films/2/",
"http://swapi.dev/api/films/3/"
],
"created": "2014-12-15T12:38:25.937000Z",
"edited": "2014-12-20T21:30:21.677000Z",
"url": "http://swapi.dev/api/vehicles/18/"
},
{
"name": "AT-ST",
"model": "All Terrain Scout Transport",
"manufacturer": "Kuat Drive Yards, Imperial Department of Military Research",
"cost_in_credits": "unknown",
"length": "2",
"max_atmosphering_speed": 90,
"crew": 2,
"passengers": 0,
"cargo_capacity": 200,
"consumables": "none",
"vehicle_class": "walker",
"pilots": [
"http://swapi.dev/api/people/13/"
]
},
{
"name": "Storm IV Twin-Pod cloud car",
"model": "Storm IV Twin-Pod",
"manufacturer": "Bespin Motors",
"cost_in_credits": "75000",
"length": "7",
"max_atmosphering_speed": 1500,
"crew": 2,
"passengers": 0,
"cargo_capacity": 10,
"consumables": "1 day",
"vehicle_class": "repulsorcraft",
"pilots": [],
}
]);
```
You may be thinking (as I first did), what's the difference between
starships and vehicles? You'll be pleased to know that starships are
defined as any "single transport craft that has hyperdrive capability".
Any other single transport craft that **does not have** hyperdrive
capability is considered a vehicle. The more you know! 😮
If you look at the two collections, you'll see that they have two key
differences:
- The `max_atmosphering_speed` field is present in both collections,
but is a `string` in the `starships` collection and an `int` in the
`vehicles` collection.
- The `starships` collection has two fields (`hyperdrive_rating`,
`MGLT`) that are not present in the `vehicles` collection, as it
only relates to starships.
But you know what? That's not a problem for the `$unionWith` stage! You
can combine them just as before:
```
// Run an aggregation with no pipeline and differing schemas
use('union-walkthrough');
db.starships.aggregate([
{ $unionWith: 'vehicles' }
]);
```
Try running the aggregation in your playground! Or if you're following
along in the MongoDB playground I've provided, select and run lines
185 - 189! You should get the following combined result set as your
output:
```
[
{
_id: 5f306ddca3ee8339643f137e,
name: 'Death Star',
model: 'DS-1 Orbital Battle Station',
manufacturer: 'Imperial Department of Military Research, Sienar Fleet Systems',
cost_in_credits: '1000000000000',
length: '120000',
max_atmosphering_speed: 'n/a',
crew: 342953,
passengers: 843342,
cargo_capacity: '1000000000000',
consumables: '3 years',
hyperdrive_rating: 4,
MGLT: 10,
starship_class: 'Deep Space Mobile Battlestation',
pilots: []
},
{
_id: 5f306ddca3ee8339643f137f,
name: 'Millennium Falcon',
model: 'YT-1300 light freighter',
manufacturer: 'Corellian Engineering Corporation',
cost_in_credits: '100000',
length: '34.37',
max_atmosphering_speed: '1050',
crew: 4,
passengers: 6,
cargo_capacity: 100000,
consumables: '2 months',
hyperdrive_rating: 0.5,
MGLT: 75,
starship_class: 'Light freighter',
pilots: [
'http://swapi.dev/api/people/13/',
'http://swapi.dev/api/people/14/',
'http://swapi.dev/api/people/25/',
'http://swapi.dev/api/people/31/'
]
},
// + 7 other results, omitted for brevity
]
```
Can you imagine doing that in SQL? Hint: You can't! That kind of schema
restriction is something you don't need to worry about with MongoDB,
though!
### $unionWith using collections with different schemas and a pipeline
>
>
>📃 Use
>[this
>playground if you'd like follow along with pre-written code for this
>example.
>
>
So we can combine different schemas no problem. What if we need to do a
little extra work on our collection before combining it? That's where
the `pipeline` field comes in!
Let's say that there's some classified information in our data about the
vehicles. Namely, any vehicles manufactured by Kuat Drive Yards (AKA a
division of the Imperial Department of Military Research).
By direct orders, you are instructed not to give out this information
under any circumstances. In fact, you need to intercept any requests for
vehicle information and remove these classified vehicles from the list!
We can do that like so:
```
use('union-walkthrough');
db.starships.aggregate(
{
$unionWith: {
coll: 'vehicles',
pipeline: [
{
$redact: {
$cond: {
if: { $eq: [ "$manufacturer", "Kuat Drive Yards, Imperial Department of Military Research"] },
then: "$$PRUNE",
else: "$$DESCEND"
}
}
}
]
}
}
]);
```
In this example, we're combining the `starships` and `vehicles`
collections as before, using the `$unionWith` pipeline stage. We also
process the `vehicle` data a bit more, using the `$unionWith`'s optional
`pipeline` field:
```
// Pipeline used with the vehicle collection
{
$redact: {
$cond: {
if: { $eq: [ "$manufacturer", "Kuat Drive Yards, Imperial Department of Military Research"] },
then: "$$PRUNE",
else: "$$DESCEND"
}
}
}
```
Inside the `$unionWith`'s pipeline, we use a
[$redact
stage to restrict the contents of our documents based on a condition.
The condition is specified using the
$cond
operator, which acts like an `if/else` statement.
In our case, we are evaluating whether or not the `manufacturer` field
holds a value of "Kuat Drive Yards, Imperial Department of Military
Research". If it does (uh oh, that's classified!), we use a system
variable called
$$PRUNE,
which lets us exclude all fields at the current document/embedded
document level. If it doesn't, we use another system variable called
$$DESCEND,
which will return all fields at the current document level, except for
any embedded documents.
This works perfectly for our use case. Try running the aggregation
(lines 192 - 211, if using the provided MongoDB Playground). You should
see a combined result set, minus any Imperial manufactured vehicles:
```
{
_id: 5f306ddca3ee8339643f137e,
name: 'Death Star',
model: 'DS-1 Orbital Battle Station',
manufacturer: 'Imperial Department of Military Research, Sienar Fleet Systems',
cost_in_credits: '1000000000000',
length: '120000',
max_atmosphering_speed: 'n/a',
crew: 342953,
passengers: 843342,
cargo_capacity: '1000000000000',
consumables: '3 years',
hyperdrive_rating: 4,
MGLT: 10,
starship_class: 'Deep Space Mobile Battlestation',
pilots: []
},
{
_id: 5f306ddda3ee8339643f1383,
name: 'X-34 landspeeder',
model: 'X-34 landspeeder',
manufacturer: 'SoroSuub Corporation',
cost_in_credits: '10550',
length: '3.4 ',
max_atmosphering_speed: 250,
crew: 1,
passengers: 1,
cargo_capacity: 5,
consumables: 'unknown',
vehicle_class: 'repulsorcraft',
pilots: []
},
// + 5 more non-Imperial manufactured results, omitted for brevity
]
```
We did our part to restrict classified information! 🎶 *Hums Imperial
March* 🎶
## Restrictions for UNION ALL
Now that we know how the `$unionWith` stage works, it's important to
discuss its limits and restrictions.
### Duplicates
We've mentioned it already, but it's important to reiterate: using the
`$unionWith` stage will give you a combined result set which may include
duplicates! This is equivalent to how the `UNION ALL` operator works in
`SQL` as well. As a workaround, using a `$group` stage at the end of
your pipeline to remove duplicates is advised, but only when possible
and if the resulting data does not get inaccurately skewed.
There are plans to add similar fuctionality to `UNION` (which combines
result sets but *removes* duplicates), but that may be in a future
release.
### Sharded Collections
If you use a `$unionWith` stage as part of a
[$lookup
pipeline, the collection you specify for the `$unionWith` cannot be
sharded. As an example, take a look at this aggregation:
```
// Invalid aggregation (tried to use sharded collection with $unionWith)
db.lonely_planets.aggregate(
{
$lookup: {
from: "extinct_planets",
let: { last_known_population: "$population", years_extinct: "$time_extinct" },
pipeline: [
// Filter criteria
{ $unionWith: { coll: "questionable_planets", pipeline: [ { pipeline } ] } },
// Other pipeline stages
],
as: "planetdata"
}
}
])
```
The coll `questionable_planets` (located within the `$unionWith` stage)
cannot be sharded. This is enforced to prevent a significant decrease in
performance due to the shuffling of data around the cluster as it
determines the best execution plan.
### Transactions
Aggregation pipelines can't use the `$unionWith` stage inside
transactions because a rare, but possible 3-thread deadlock can occur in
very niche scenarios. Additionally, in MongoDB 4.4, there is a
first-time definition of a view that would restrict its reading from
within a transaction.
### `$out` and `$merge`
The
[$out
and
$merge
stages cannot be used in a `$unionWith` pipeline. Since both `$out` and
`$merge` are stages that *write* data to a collection, they need to be
the *last* stage in a pipeline. This conflicts with the usage of the
`$unionWith` stage as it outputs its combined result set onto the next
stage, which can be used at any point in an aggregation pipeline.
### Collations
If your aggregation includes a
collation,
that collation is used for the operation, ignoring any other collations.
However, if your aggregation doesn't include a collation, it will use
the collation for the top-level collection/view on which the aggregation
is run:
- If the `$unionWith` coll is a collection, its collation is ignored.
- If the `$unionWith` coll is a view, then its collation must match
that of the top-level collection/view. Otherwise, the operation
errors.
## You've made it to the end!
We've discussed what the `$unionWith` pipeline stage is and how you can
use it in your aggregations to combine data from multiple collections.
Though similar to SQL's `UNION ALL` operation, MongoDB's `$unionWith`
stage distinguishes itself through some convenient and much-needed
characteristics. Most notable is the ability to combine collections with
different schemas! And as a much needed improvement, using a
`$unionWith` stage eliminates the need to write additional code, code
that was required because we had no other way to combine our data!
If you have any questions about the `$unionWith` pipeline stage or this
blog post, head over to the MongoDB Community
forums or Tweet
me!
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to use the Union All ($unionWith) aggregation pipeline stage, newly released in MongoDB 4.4.",
"contentType": "Tutorial"
} | How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/beanie-odm-fastapi-cocktails | created | # Build a Cocktail API with Beanie and MongoDB
I have a MongoDB collection containing cocktail recipes that I've made during lockdown.
Recently, I've been trying to build an API over it, using some technologies I know well. I wasn't very happy with the results. Writing code to transform the BSON that comes out of MongoDB into suitable JSON is relatively fiddly. I felt I wanted something more declarative, but my most recent attempt—a mash-up of Flask, MongoEngine, and Marshmallow—just felt clunky and repetitive. I was about to start experimenting with building my own declarative framework, and then I stumbled upon an introduction to a brand new MongoDB ODM called Beanie. It looked like exactly what I was looking for.
The code used in this post borrows heavily from the Beanie post linked above. I've customized it to my needs, and added an extra endpoint that makes use of MongoDB Atlas Search, to provide autocompletion, for a GUI I'm planning to build in the future.
You can find all the code on GitHub.
>
>
>**Note**: The code here was written for Beanie 0.2.3. It's a new library, and things are moving fast! Check out the Beanie Changelog to see what things have changed between this version and the latest version of Beanie.
>
>
I have a collection of documents that looks a bit like this:
``` json
{
"_id": "5f7daa158ec9dfb536781b0a",
"name": "Hunter's Moon",
"ingredients":
{
"name": "Vermouth",
"quantity": {
"quantity": "25",
"unit": "ml"
}
},
{
"name": "Maraschino Cherry",
"quantity": {
"quantity": "15",
"unit": "ml"
}
},
{
"name": "Sugar Syrup",
"quantity": {
"quantity": "10",
"unit": "ml"
}
},
{
"name": "Lemonade",
"quantity": {
"quantity": "100",
"unit": "ml"
}
},
{
"name": "Blackberries",
"quantity": {
"quantity": "2",
"unit": null
}
}
]
}
```
The promise of Beanie and FastAPI—to just build a model for this data and have it automatically translate the tricky field types, like `ObjectId` and `Date` between BSON and JSON representation—was very appealing, so I fired up a new Python project, and defined my schema in a [models submodule like so:
``` python
class Cocktail(Document):
class DocumentMeta:
collection_name = "recipes"
name: str
ingredients: List"Ingredient"]
instructions: List[str]
class Ingredient(BaseModel):
name: str
quantity: Optional["IngredientQuantity"]
class IngredientQuantity(BaseModel):
quantity: Optional[str]
unit: Optional[str]
Cocktail.update_forward_refs()
Ingredient.update_forward_refs()
```
I was pleased to see that I could define a `DocumentMeta` inner class and override the collection name. It was a feature that I thought *should* be there, but wasn't totally sure it would be.
The other thing that was a little bit tricky was to get `Cocktail` to refer to `Ingredient`, which hasn't been defined at that point. Fortunately,
[Pydantic's `update_forward_refs` method can be used later to glue together the references. I could have just re-ordered the class definitions, but I preferred this approach.
The beaniecocktails package, defined in the `__init__.py` file, contains mostly boilerplate code for initializing FastAPI, Motor, and Beanie:
``` python
# ... some code skipped
@app.on_event("startup")
async def app_init():
client = motor.motor_asyncio.AsyncIOMotorClient(Settings().mongodb_url)
init_beanie(client.get_default_database(), document_models=Cocktail])
app.include_router(cocktail_router, prefix="/v1")
```
The code above defines an event handler for the FastAPI app startup. It connects to MongoDB, configures Beanie with the database connection, and provides the `Cocktail` model I'll be using to Beanie.
The last line adds the `cocktail_router` to Beanie. It's an `APIRouter` that's defined in the [routes submodule.
So now it's time to show you the routes file—this is where I spent most of my time. I was *amazed* by how quickly I could get API endpoints developed.
``` python
# ... imports skipped
cocktail_router = APIRouter()
```
The `cocktail_router` is responsible for routing URL paths to different function handlers which will provide data to be rendered as JSON. The simplest handler is probably:
``` python
@cocktail_router.get("/cocktails/", response_model=ListCocktail])
async def list_cocktails():
return await Cocktail.find_all().to_list()
```
This handler takes full advantage of these facts: FastAPI will automatically render Pydantic instances as JSON; and Beanie `Document` models are defined using Pydantic. `Cocktail.find_all()` returns an iterator over all the `Cocktail` documents in the `recipes` collection. FastAPI can't deal with these iterators directly, so the sequence is converted to a list using the `to_list()` method.
If you have the [Just task runner installed, you can run the server with:
``` bash
just run
```
If not, you can run it directly by running:
``` bash
uvicorn beaniecocktails:app --reload --debug
```
And then you can test the endpoint by pointing your browser at
"".
A similar endpoint for just a single cocktail is neatly encapsulated by two methods: one to look up a document by `_id` and raise a "404 Not Found" error if it doesn't exist, and a handler to route the HTTP request. The two are neatly glued together using the `Depends` declaration that converts the provided `cocktail_id` into a loaded `Cocktail` instance.
``` python
async def get_cocktail(cocktail_id: PydanticObjectId) -> Cocktail:
""" Helper function to look up a cocktail by id """
cocktail = await Cocktail.get(cocktail_id)
if cocktail is None:
raise HTTPException(status_code=404, detail="Cocktail not found")
return cocktail
@cocktail_router.get("/cocktails/{cocktail_id}", response_model=Cocktail)
async def get_cocktail_by_id(cocktail: Cocktail = Depends(get_cocktail)):
return cocktail
```
*Now* for the thing that I really like about Beanie: its integration with MongoDB's Aggregation Framework. Aggregation pipelines can reshape documents through projection or grouping, and Beanie allows the resulting documents to be mapped to a Pydantic `BaseModel` subclass.
Using this technique, an endpoint can be added that provides an index of all of the ingredients and the number of cocktails each appears in:
``` python
# models.py:
class IngredientAggregation(BaseModel):
""" A model for an ingredient count. """
id: str = Field(None, alias="_id")
total: int
# routes.py:
@cocktail_router.get("/ingredients", response_model=ListIngredientAggregation])
async def list_ingredients():
""" Group on each ingredient name and return a list of `IngredientAggregation`s. """
return await Cocktail.aggregate(
aggregation_query=[
{"$unwind": "$ingredients"},
{"$group": {"_id": "$ingredients.name", "total": {"$sum": 1}}},
{"$sort": {"_id": 1}},
],
item_model=IngredientAggregation,
).to_list()
```
The results, at "", look a bit like this:
``` json
[
{"_id":"7-Up","total":1},
{"_id":"Amaretto","total":2},
{"_id":"Angostura Bitters","total":1},
{"_id":"Apple schnapps","total":1},
{"_id":"Applejack","total":1},
{"_id":"Apricot brandy","total":1},
{"_id":"Bailey","total":1},
{"_id":"Baileys irish cream","total":1},
{"_id":"Bitters","total":3},
{"_id":"Blackberries","total":1},
{"_id":"Blended whiskey","total":1},
{"_id":"Bourbon","total":1},
{"_id":"Bourbon Whiskey","total":1},
{"_id":"Brandy","total":7},
{"_id":"Butterscotch schnapps","total":1},
]
```
I loved this feature so much, I decided to use it along with [MongoDB Atlas Search, which provides free text search over MongoDB collections, to implement an autocomplete endpoint.
The first step was to add a search index on the `recipes` collection, in the MongoDB Atlas web interface:
I had to add the `name` field as an "autocomplete" field type.
I waited for the index to finish building, which didn't take very long, because it's not a very big collection. Then I was ready to write my autocomplete endpoint:
``` python
@cocktail_router.get("/cocktail_autocomplete", response_model=Liststr])
async def cocktail_autocomplete(fragment: str):
""" Return an array of cocktail names matched from a string fragment. """
return [
c["name"]
for c in await Cocktail.aggregate(
aggregation_query=[
{
"$search": {
"autocomplete": {
"query": fragment,
"path": "name",
}
}
}
]
).to_list()
]
```
The `$search` aggregation stage specifically uses a search index. In this case, I'm using the `autocomplete` type, to match the type of the index I created on the `name` field. Because I wanted the response to be as lightweight as possible, I'm taking over the serialization to JSON myself, extracting the name from each `Cocktail` instance and just returning a list of strings.
The results are great!
Pointing my browser at
"" gives me `["Imperial Fizz","Vodka Fizz"]`, and
"" gives me `["Manhattan","Espresso Martini"]`.
The next step is to build myself a React front end, so that I can truly call this a [FARM Stack app.
## Wrap-Up
I was really impressed with how quickly I could get all of this up and running. Handling of `ObjectId` instances was totally invisible, thanks to Beanie's `PydanticObjectId` type, and I've seen other sample code that shows how BSON `Date` values are equally well-handled.
I need to see how I can build some HATEOAS functionality into the endpoints, with entities linking to their canonical URLs. Pagination is also something that will be important as my collection grows, but I think I already know how to handle that.
I hope you enjoyed this quick run-through of my first experience using Beanie. The next time you're building an API on top of MongoDB, I recommend you give it a try!
If this was your first exposure to the Aggregation Framework, I really recommend you read our documentation on this powerful feature of MongoDB. Or if you really want to get your hands dirty, why not check out our free MongoDB University course?
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Python",
"Atlas",
"Flask"
],
"pageDescription": "This new Beanie ODM is very good.",
"contentType": "Tutorial"
} | Build a Cocktail API with Beanie and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/cpp/noise-sensor-mqtt-client | created | # Red Mosquitto: Implement a noise sensor with an MQTT client in an ESP32
Welcome to another article of the "Adventures in IoT" series. So far, we have defined an end-to-end project, written the firmware for a Raspberry Pi Pico MCU board to measure the temperature and send the value via Bluetooth Low Energy, learned how to use Bluez and D-Bus, and implemented a collecting station that was able to read the BLE data. If you haven't had the time yet, you can read them or watch the videos.
In this article, we are going to write the firmware for a different board: an ESP32-C6-DevKitC-1. ESP32 boards are very popular among the DIY community and for IoT in general. The creator of these boards, Espressif, is putting a good amount of effort into supporting Rust as a first-class developer language for them. I am thankful for that and I will take advantage of the tools they have created for us.
We can write code for the ESP32 that talks to the bare metal, a.k.a. core, or use an operating system that allows us to take full advantage of the capabilities provided by std library. ESP-IDF –i.e., ESPressif IoT Development Framework– is created to simplify that development and is not only available in C/C++ but also in Rust, which we will be using for the rest of this article. By using ESP-IDF through the corresponding crates, we can use threads, mutexes, and other synchronization primitives, collections, random number generation, sockets, etc.
. It provides an abstraction to create drivers that are independent from the MCU. This is very useful for us developers because it allows us to develop and maintain the driver once and use it for the many different MCU boards that honor that abstraction.
This development board kit has a neopixel LED –i.e., an RGB LED controlled by a WS2812– which we will use for our "Hello World!" iteration and then to inform the user about the state of the device. The WS2812 requires sending sequences of high and low voltages that use the duration of those high and low values to specify the bits that define the RGB color components of the LED. The ESP32 has a Remote Control Transceiver (RMT) that was conceived as an infrared transceiver but can be repurposed to generate the signals required for the single-line serial protocol used by the WS1812. Neither the RMT nor the timers are available in the just released version of the `embedded-hal`, but the ESP-IDF provided by Expressif does implement the full `embedded-hal` abstraction, and the WS2812 driver uses the available abstractions.
## Setup
### The tools
There are some tools that you will need to have installed in your computer to be able to follow along and compile and install the firmware on your board. I have installed them on my computer, but before spending time on this setup, consider using the container provided by Espressif if you prefer that choice.
The first thing that might be different for you is that we need the bleeding edge version of the Rust toolchain. We will be using the nightly version of it:
```shell
rustup toolchain install nightly --component rust-src
```
As for the tools, you may already have some of these tools on your computer, but double-check that you have installed all of them:
- Git (in macOS installed with Code)
- Some tools to assist on the building process (`brew install cmake ninja dfu-util python3` –This works on macOS, but if you use a different OS, please check the list here)
- A tool to forward linker arguments to the actual linker (`cargo install ldproxy`)
- A utility to write the firmware to the board (`cargo install espflash`)
- A tool that is used to produce a new project from a template (`cargo install cargo-generate`)
### Project creation using a template
We can then create a project using the template for `stdlib` projects (`esp-idf-template`):
```sh
cargo generate esp-rs/esp-idf-template cargo
```
And we fill in this data:
- **Project name:** mosquitto-bzzz
- **MCU to target:** esp32c6
- **Configure advanced template options:** false
`cargo b` produces the build. Target is `riscv32imac-esp-espidf` (RISC-V architecture with support for atomics), so the binary is generated in `target/riscv32imac-esp-espidf/debug/mosquitto-bzzz`. And it can be run on the device using this command:
```sh
espflash flash target/riscv32imac-esp-espidf/debug/mosquitto-bzzz --monitor
```
And at the end of the output log, you can find these lines:
```
I (358) app_start: Starting scheduler on CPU0
I (362) main_task: Started on CPU0
I (362) main_task: Calling app_main()
I (362) mosquitto_bzzz: Hello, world!
I (372) main_task: Returned from app_main()
```
Let's understand the project that has been created so we can take advantage of all the pieces:
- **Cargo.toml:** It is main the configuration file for the project. Besides what a regular `cargo new` would do, we will see that:
- It defines some features available that modify the configuration of some of the dependencies.
- It includes a couple of dependencies: one for the logging API and another for using the ESP-IDF.
- It adds a build dependency that provides utilities for building applications for embedded systems.
- It adjusts the profile settings that modify some compiler options, optimization level, and debug symbols, for debug and release.
- **build.rs:** A build script that doesn't belong to the application but is executed as part of the build process.
- **rust-toolchain.toml:** A configuration file to enforce the usage of the nightly toolchain as well as a local copy of the Rust standard library source code.
- **sdkconfig.defaults:** A file with some configuration parameters for the esp-idf.
- **.cargo/config.toml:** A configuration file for Cargo itself, where we have the architecture, the tools, and the unstable flags of the compiler used in the build process, and the environment variables used in the process.
- **src/main.rs:** The seed for our code with the minimal skeleton.
## Foundations of our firmware
The idea is to create firmware similar to the one we wrote for the Raspberry Pi Pico but exposing the sensor data using MQTT instead of Bluetooth Low Energy. That means that we have to connect to the WiFi, then to the MQTT broker, and start publishing data. We will use the RGB LED to show the status of our sensor and use a sound sensor to obtain the desired data.
### Control the LED
Making an LED blink is considered the *hello world* of embedded programming. We can take it a little bit further and use colors rather than just blink.
1. According to the documentation of the board, the LED is controlled by the GPIO8 pin. We can get access to that pin using the `Peripherals` module of the esp-idf-svc, which exposes the hal adding `use esp_idf_svc::hal::peripherals::Peripherals;`:
```rust
let peripherals = Peripherals::take().expect("Unable to access device peripherals");
let led_pin = peripherals.pins.gpio8;
```
2. Also using the Peripherals singleton, we can access the RMT channel that will produce the desired waveform signal required to set each of the three color components of the LED:
```rust
let rmt_channel = peripherals.rmt.channel0;
```
3. We could do the RGB color encoding manually, but there is a crate that will help us talk to the built-in WS2812 (neopixel) controller that drives the RGB LED. The create `smart-leds` could be used on top of it if we had several LEDs, but we don't need it for this board.
```sh
cargo add ws2812-esp32-rmt-driver
```
4. We create an instance that talks to the WS2812 in pin 8 and uses the Remote Control Transceiver – a.k.a. RMT – peripheral in channel 0. We add the symbol `use ws2812_esp32_rmt_driver::Ws2812Esp32RmtDriver;` and:
```rust
let mut neopixel =
Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect("Unable to talk to ws2812");
```
5. Then, we define the data for a pixel and write it with the instance of the driver so it gets used in the LED. It is important to not only import the type for the 24bi pixel color but also get the trait with `use ws2812_esp32_rmt_driver::driver::color::{LedPixelColor,LedPixelColorGrb24};`:
```rust
let color_1 = LedPixelColorGrb24::new_with_rgb(255, 255, 0);
neopixel
.write_blocking(color_1.as_ref().iter().cloned())
.expect("Error writing to neopixel");
```
6. At this moment, you can run it with `cargo r` and expect the LED to be on with a yellow color.
7. Let's add a loop and some changes to complete our "hello world." First, we define a second color:
```rust
let color_2 = LedPixelColorGrb24::new_with_rgb(255, 0, 255);
```
8. Then, we add a loop at the end where we switch back and forth between these two colors:
```rust
loop {
neopixel
.write_blocking(color_1.as_ref().iter().cloned())
.expect("Error writing to neopixel");
neopixel
.write_blocking(color_2.as_ref().iter().cloned())
.expect("Error writing to neopixel");
}
```
9. If we don't introduce any delays, we won't be able to perceive the colors changing, so we add `use std::{time::Duration, thread};` and wait for half a second before every change:
```rust
neopixel
.write_blocking(color_1.as_ref().iter().cloned())
.expect("Error writing to neopixel");
thread::sleep(Duration::from_millis(500));
neopixel
.write_blocking(color_2.as_ref().iter().cloned())
.expect("Error writing to neopixel");
thread::sleep(Duration::from_millis(500));
```
10. We run and watch the LED changing color from purple to yellow and back every half a second.
### Use the LED to communicate with the user
We are going to encapsulate the usage of the LED in its own thread. That thread needs to be aware of any changes in the status of the device and use the current one to decide how to use the LED accordingly.
1. First, we are going to need an enum with all of the possible states. Initially, it will contain one variant for no error, one variant for WiFi error, and another one for MQTT error:
```rust
enum DeviceStatus {
Ok,
WifiError,
MqttError,
}
```
2. And we can add an implementation to convert from eight-bit unsigned integers into a variant of this enum:
```rust
impl TryFrom for DeviceStatus {
type Error = &'static str;
fn try_from(value: u8) -> Result {
match value {
0u8 => Ok(DeviceStatus::Ok),
1u8 => Ok(DeviceStatus::WifiError),
2u8 => Ok(DeviceStatus::MqttError),
_ => Err("Unknown status"),
}
}
}
```
3. We would like to use the `DeviceStatus` variants by name where a number is required. We achieve the inverse conversion by adding an annotation to the enum:
```rust
#repr(u8)]
enum DeviceStatus {
```
4. Next, I am going to do something that will be considered naïve by anybody that has developed anything in Rust, beyond the simplest "hello world!" However, I want to highlight one of the advantages of using Rust, instead of most other languages, to write firmware (and software in general). I am going to define a variable in the main function that will hold the current status of the device and share it among the threads.
```rust
let mut status = DeviceStatus::Ok as u8;
```
5. We are going to define two threads. The first one is meant for reporting back to the user the status of the device. The second one is just needed for testing purposes, and we will replace it with some real functionality in a short while. We will be using sequences of colors in the LED to report the status of the sensor. So, let's start by defining each of the steps in those color sequences:
```rust
struct ColorStep {
red: u8,
green: u8,
blue: u8,
duration: u64,
}
```
6. We also define a constructor as an associated function for our own convenience:
```rust
impl ColorStep {
fn new(red: u8, green: u8, blue: u8, duration: u64) -> Self {
ColorStep {
red,
green,
blue,
duration,
}
}
}
```
7. We can then use those steps to transform each status into a different sequence that we can display in the LED:
```rust
impl DeviceStatus {
fn light_sequence(&self) -> Vec {
match self {
DeviceStatus::Ok => vec![ColorStep::new(0, 255, 0, 500), ColorStep::new(0, 0, 0, 500)],
DeviceStatus::WifiError => {
vec![ColorStep::new(255, 0, 0, 200), ColorStep::new(0, 0, 0, 100)]
}
DeviceStatus::MqttError => vec![
ColorStep::new(255, 0, 255, 100),
ColorStep::new(0, 0, 0, 300),
],
}
}
}
```
8. We start the thread by initializing the WS2812 that controls the LED:
```rust
use esp_idf_svc::hal::{
gpio::OutputPin,
peripheral::Peripheral,
rmt::RmtChannel,
};
fn report_status(
status: &u8,
rmt_channel: impl Peripheral,
led_pin: impl Peripheral,
) -> ! {
let mut neopixel =
Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect("Unable to talk to ws2812");
loop {}
}
```
9. We can keep track of the previous status and the current sequence, so we don't have to regenerate it after displaying it once. This is not required, but it is more efficient:
```rust
let mut prev_status = DeviceStatus::WifiError; // Anything but Ok
let mut sequence: Vec = vec![];
```
10. We then get into an infinite loop, in which we update the status, if it has changed, and the sequence accordingly. In any case, we use each of the steps of the sequence to display it in the LED:
```rust
loop {
if let Ok(status) = DeviceStatus::try_from(*status) {
if status != prev_status {
prev_status = status;
sequence = status.light_sequence();
}
for step in sequence.iter() {
let color = LedPixelColorGrb24::new_with_rgb(step.red, step.green, step.blue);
neopixel
.write_blocking(color.as_ref().iter().cloned())
.expect("Error writing to neopixel");
thread::sleep(Duration::from_millis(step.duration));
}
}
}
```
11. Notice that the status cannot be compared until we implement `PartialEq`, and assigning it requires Clone and Copy, so we derive them:
```rust
#[derive(Clone, Copy, PartialEq)]
enum DeviceStatus {
```
12. Now, we are going to implement the function that is run in the other thread. This function will change the status every 10 seconds. Since this is for the sake of testing the reporting capability, we won't be doing anything fancy to change the status, just moving from one status to the next and back to the beginning:
```rust
fn change_status(status: &mut u8) -> ! {
loop {
thread::sleep(Duration::from_secs(10));
if let Ok(current) = DeviceStatus::try_from(*status) {
match current {
DeviceStatus::Ok => *status = DeviceStatus::WifiError as u8,
DeviceStatus::WifiError => *status = DeviceStatus::MqttError as u8,
DeviceStatus::MqttError => *status = DeviceStatus::Ok as u8,
}
}
}
}
```
13. With the two functions in place, we just need to spawn two threads, one with each one of them. We will use a thread scope that will take care of joining the threads that we spawn:
```rust
thread::scope(|scope| {
scope.spawn(|| report_status(&status, rmt_channel, led_pin));
scope.spawn(|| change_status(&mut status));
});
```
14. Compiling this code will result in errors. It is the blessing/curse of the borrow checker, which is capable of figuring out that we are sharing memory in an unsafe way. The status can be changed in one thread while being read by the other. We could use a mutex, as we did in the previous C++ code, and wrap it in an `Arc` to be able to use a reference in each thread, but there is an easier way to achieve the same goal: We can use an atomic type. (`use std::sync::atomic::AtomicU8;`)
```rust
let status = &AtomicU8::new(0u8);
```
15. We modify `report_status()` to use the reference to the atomic type and add `use std::sync::atomic::Ordering::Relaxed;`:
```rust
fn report_status(
status: &AtomicU8,
rmt_channel: impl Peripheral,
led_pin: impl Peripheral,
) -> ! {
let mut neopixel =
Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect("Unable to talk to ws2812");
let mut prev_status = DeviceStatus::WifiError; // Anything but Ok
let mut sequence: Vec = vec![];
loop {
if let Ok(status) = DeviceStatus::try_from(status.load(Relaxed)) {
```
16. And `change_status()`. Notice that in this case, thanks to the interior mutability, we don't need a mutable reference but a regular one. Also, we need to specify the guaranties in terms of how multiple operations will be ordered. Since we don't have any other atomic operations in the code, we can go with the weakest level – i.e., `Relaxed`:
```rust
fn change_status(status: &AtomicU8) -> ! {
loop {
thread::sleep(Duration::from_secs(10));
if let Ok(current) = DeviceStatus::try_from(status.load(Relaxed)) {
match current {
DeviceStatus::Ok => status.store(DeviceStatus::WifiError as u8, Relaxed),
DeviceStatus::WifiError => status.store(DeviceStatus::MqttError as u8, Relaxed),
DeviceStatus::MqttError => status.store(DeviceStatus::Ok as u8, Relaxed),
}
}
}
}
```
17. Finally, we have to change the lines in which we spawn the threads to reflect the changes that we have introduced:
```rust
scope.spawn(|| report_status(status, rmt_channel, led_pin));
scope.spawn(|| change_status(status));
```
18. You can use `cargo r` to compile the code and run it on your board. The lights should be displaying the sequences, which should change every 10 seconds.
## Getting the noise level
It is time to interact with a temperature sensor… Just kidding. This time, we are going to use a sound sensor. No more temperature measurements in this project. Promise.
The sensor I am going to use is an OSEPP Sound-01 that claims to be "the perfect sensor to detect environmental variations in noise." It supports an input voltage from 3V to 5V and provides an analog signal. We are going to connect the signal to pin 0 of the GPIO, which is also the pin for the first channel of the analog-to-digital converter (ADC1_CH0). The other two pins are connected to 5V and GND (+ and -, respectively).
![enter image description here][2]
You don't have to use this particular sensor. There are many other options on the market. Some of them have pins for digital output, instead of just an analog one as in this one. Some sensors also have a potentiometer that allows you to adjust the sensitivity of the microphone.
### Read from the sensor
1. We are going to perform this task in a new function:
```rust
fn read_noise_level() -> ! {
}
```
2. We want to use the ADC on the pin that we have connected the signal. We can get access to the ADC1 using the `peripherals` singleton in the main function.
```rust
let adc = peripherals.adc1;
```
3. And also to the pin that will receive the signal from the sensor:
```rust
let adc_pin = peripherals.pins.gpio0;
```
4. We modify the signature of our new function to accept the parameters we need:
```rust
fn read_noise_level(adc1: ADC1, adc1_pin: GPIO) -> !
where
GPIO: ADCPin,
```
5. Now, we use those two parameters to attach a driver that can be used to read from the ADC. Notice that the `AdcDriver` needs a configuration, which we create with the default value. Also, `AdcChannelDriver` requires a [generic const parameter that is used to define the attenuation level. I am going to go with maximum attenuation initially to have more sensibility in the mic, but we can change it later if needed. We add `use esp_idf_svc::hal::adc::{attenuation, AdcChannelDriver};`:
```rust
let mut adc =
AdcDriver::new(adc1, &adc::config::Config::default()).expect("Unable to initialze ADC1");
let mut adc_channel_drv: AdcChannelDriver<{ attenuation::DB_11 }, _> =
AdcChannelDriver::new(adc1_pin).expect("Unable to access ADC1 channel 0");
```
6. With the required pieces in place, we can use the `adc_channel` to sample in an infinite loop. A delay of 10ms means that we will be sampling at ~100Hz:
```rust
loop {
thread::sleep(Duration::from_millis(10));
println!("ADC value: {:?}", adc.read(&mut adc_channel));
}
```
7. Lastly, we spawn a thread with this function in the same scope that we were using before:
```rust
scope.spawn(|| read_noise_level(adc, adc_pin));
```
### Compute noise levels (Sorta!)
In order to get an estimation of the noise level, I am going to compute the Root Mean Square (RMS) of a buffer of 50ms, i.e., five samples at our current sampling rate. Yes, I know this isn't exactly how decibels are measured, but it will be good enough for us and the data that we want to gather.
1. Let's start by creating that buffer where we will be putting the samples:
```rust
const LEN: usize = 5;
let mut sample_buffer = [0u16; LEN];
```
2. Inside the infinite loop, we are going to have a for-loop that goes through the buffer:
```rust
for i in 0..LEN {
}
```
3. We modify the sampling that we were doing before, so a zero value is used if the ADC fails to get a sample:
```rust
thread::sleep(Duration::from_millis(10));
if let Ok(sample) = adc.read(&mut adc_pin) {
sample_buffer[i] = sample;
} else {
sample_buffer[i] = 0u16;
}
```
4. Before starting with the iterations of the for loop, we are going to define a variable to hold the addition of the squares of the samples:
```rust
let mut sum = 0.0f32;
```
5. And each sample is squared and added to the sum. We could do the conversion into floats after the square, but then, the square value might not fit into a u16:
```rust
sum += (sample as f32) * (sample as f32);
```
6. And we compute the decibels (or something close enough to that) after the for loop:
```rust
let d_b = 20.0f32 * (sum / LEN as f32).sqrt().log10();
println!(
"ADC values: {:?}, sum: {}, and dB: {} ",
sample_buffer, sum, d_b
);
```
7. We compile and run with `cargo r` and should get some output similar to:
```
ADC values: [0, 0, 0, 0, 0], sum: 0, and dB: -inf
ADC values: [0, 0, 0, 3, 0], sum: 9, and dB: 2.5527248
ADC values: [0, 0, 0, 11, 0], sum: 121, and dB: 13.838154
ADC values: [8, 0, 38, 0, 102], sum: 11912, and dB: 33.770145
ADC values: [64, 23, 0, 8, 26], sum: 5365, and dB: 30.305998
ADC values: [0, 8, 41, 0, 87], sum: 9314, and dB: 32.70166
ADC values: [137, 0, 79, 673, 0], sum: 477939, and dB: 49.804024
ADC values: [747, 0, 747, 504, 26], sum: 1370710, and dB: 54.379753
ADC values: [240, 0, 111, 55, 26], sum: 73622, and dB: 41.680374
ADC values: [8, 26, 26, 58, 96], sum: 13996, and dB: 34.470337
```
## MQTT
### Concepts
When we wrote our previous firmware, we used Bluetooth Low Energy to make the data from the sensor available to the rest of the world. That was an interesting experiment, but it had some limitations. Some of those limitations were introduced by the hardware we were using, like the fact that we were getting some interferences in the Bluetooth signal from the WiFi communications in the Raspberry Pi. But others are inherent to the Bluetooth technology, like the maximum distance from the sensor to the collecting station.
For this firmware, we have decided to take a different approach. We will be using WiFi for the communications from the sensors to the collecting station. WiFi will allow us to spread the sensors through a much greater area, especially if we have several access points. However, it comes with a price: The sensors will consume more energy and their batteries will last less.
Using WiFi practically implies that our communications will be TCP/IP-based. And that opens a wide range of possibilities, which we can summarize with this list in increasing order of likelihood:
- Implement a custom TCP or UDP protocol.
- Use an existing protocol that is commonly used for writing APIs. There are other options, but HTTP is the main one here.
- Use an existing protocol that is more tailored for the purpose of sending event data that contains values.
Creating a custom protocol is expensive, time-consuming, and error-prone, especially without previous experience. It''s probably the worst idea for a proof of concept unless you have a very specific requirement that cannot be accomplished otherwise.
HTTP comes to mind as an excellent solution to exchange data. REST APIs are an example of that. However, it has some limitations, like the unidirectional flow of data, the overhead –both in terms of the protocol itself and on using a new connection for every new request– and even the lack of provision to notify selected clients when the data they are interested in changes.
If we want to go with a protocol that was designed for this, MQTT is the natural choice. Besides overcoming the limitations of HTTP for this type of communication, it has been tested in the field with many sensors that change very often and out of the box, can do fancy things like storing the last known good value or having specific client commands that allow them to receive updates on specific values or a set of them. MQTT is designed as a protocol for publish/subscribe (pub/sub) in the scenarios that are common for IoT. The server that controls all the communications is commonly referred to as a *broker*, and our sensors will be its clients.
### Connect to the WiFi
Now that we have a better understanding of why we are using MQTT, we are going to connect to our broker and send the data that we obtain from our sensor so it gets published there.
However, before being able to do that, we need to connect to the WiFi.
It is important to keep in mind that the board we are using has support for WiFi but only on the 2.4GHz band. It won't be able to connect to your router using the 5GHz band, no matter how kindly you ask it to do it.
Also, unless you are a wealthy millionaire and you've got yourself a nice island to focus on following along with this content, it would be wise to use a fairly strong password to keep unauthorized users out of your network.
1. We are going to begin by setting some structure for holding the authentication data to access the network:
```rust
struct Configuration {
wifi_ssid: &'static str,
wifi_password: &'static str,
}
```
2. We could set the values in the code, but I like better the approach suggested by Ferrous Systems. We will be using the `toml_cfg` crate. We will have default values (useless in this case other than to get an error) that we will be overriding by using a toml file with the desired values. First things first: Let's add the crate:
```shell
cargo add toml-cfg
```
3. Let's now annotate the struct with some macros:
```rust
#[toml_cfg::toml_config]
struct Configuration {
#[default("NotMyWifi")]
wifi_ssid: &'static str,
#[default("NotMyPassword")]
wifi_password: &'static str,
}
```
4. We can now add a `cfg.toml` file with the **actual** values of these parameters.
```
[mosquitto-bzzz]
wifi_ssid = "ThisAintEither"
wifi_password = "NorIsThisMyPassword"
```
5. Please, remember to add that filename to the `.gitignore` configuration, so it doesn't end up in our repository with our dearest secrets:
```shell
echo "cfg.toml" >> .gitignore
```
6. The code for connecting to the WiFi is a little bit tedious. It makes sense to do it in a different function:
```rust
fn connect_to_wifi(ssid: &str, passwd: &str) {}
```
7. This function should have a way to let us know if there has been a problem, but we want to simplify error handling, so we add the `anyhow` crate:
```rust
cargo add anyhow
```
8. We can now use the `Result` type provided by anyhow (`import anyhow::Result;`). This way, we don't need to be bored with creating and using a custom error type.
```rust
fn connect_to_wifi(ssid: &str, passwd: &str) -> Result<()> {
Ok(())
}
```
9. If the function doesn't get an SSID, it won't be able to connect to the WiFi, so it's better to stop here and return an error (`import anyhow::bail;`):
```rust
if ssid.is_empty() {
bail!("No SSID defined");
}
```
10. If the function gets a password, we will assume that authentication uses WPA2. Otherwise, no authentication will be used (`use esp_idf_svc::wifi::AuthMethod;`):
```rust
let auth_method = if passwd.is_empty() {
AuthMethod::None
} else {
AuthMethod::WPA2Personal
};
```
11. We will need an instance of the system loop to maintain the connection to the WiFi alive and kicking, so we access the system event loop singleton (`use esp_idf_svc::eventloop::EspSystemEventLoop;` and `use anyhow::Context`).
```rust
let sys_loop = EspSystemEventLoop::take().context("Unable to access system event loop.")?;
```
12. Although it is not required, the esp32 stores some data from previous network connections in the non-volatile storage, so getting access to it will simplify and accelerate the connection process (`use esp_idf_svc::nvs::EspDefaultNvsPartition;`).
```rust
let nvs = EspDefaultNvsPartition::take().context("Unable to access default NVS partition")?;
```
13. The connection to the WiFi is done through the modem, which can be accessed via the peripherals of the board. We pass the peripherals, obtain the modem, and use it to first wrap it with a WiFi driver and then get an instance that we will use to manage the WiFi connection (`use esp_idf_svc::wifi::{EspWifi, BlockingWifi};`):
```rust
fn connect_to_wifi(ssid: &str, passwd: &str,
modem: impl Peripheral + 'static,
) -> Result<()> {
// Auth checks here and sys_loop ...
let mut esp_wifi = EspWifi::new(modem, sys_loop.clone(), Some(nvs))?;
let mut wifi = BlockingWifi::wrap(&mut esp_wifi, sys_loop)?;
```
14. Then, we add a configuration to the WiFi (`use esp_idf_svc::wifi;`):
```rust
wifi.set_configuration(&mut wifi::Configuration::Client(
wifi::ClientConfiguration {
ssid: ssid
.try_into()
.map_err(|_| anyhow::Error::msg("Unable to use SSID"))?,
password: passwd
.try_into()
.map_err(|_| anyhow::Error::msg("Unable to use Password"))?,
auth_method,
..Default::default()
},
))?;
```
15. With the configuration in place, we start the WiFi radio, connect to the WiFi network, and wait to have the connection completed. Any errors will bubble up:
```rust
wifi.start()?;
wifi.connect()?;
wifi.wait_netif_up()?;
```
16. It is useful at this point to display the data of the connection.
```rust
let ip_info = wifi.wifi().sta_netif().get_ip_info()?;
log::info!("DHCP info: {:?}", ip_info);
```
17. We also want to return the variable that holds the connection. Otherwise, the connection will be closed when it goes out of scope at the end of this function. We change the signature to be able to do it:
```rust
) -> Result>> {
```
18. And return that value:
```rust
Ok(Box::new(wifi_driver))
```
19. We are going to initialize the connection to the WiFi from our function to read the noise, so let's add the modem as a parameter:
```rust
fn read_noise_level(
adc1: ADC1,
adc1_pin: GPIO,
modem: impl Peripheral + 'static,
) -> !
```
20. This new parameter has to be initialized in the main function:
```rust
let modem = peripherals.modem;
```
21. And passed it onto the function when we spawn the thread:
```rust
scope.spawn(|| read_noise_level(adc, adc_pin, modem));
```
22. Inside the function where we plan to use these parameters, we retrieve the configuration. The `CONFIGURATION` constant is generated automatically by the `cfg-toml` crate using the type of the struct:
```rust
let app_config = CONFIGURATION;
```
23. Next, we try to connect to the WiFi using those parameters:
```rust
let _wifi = match connect_to_wifi(app_config.wifi_ssid, app_config.wifi_password, modem) {
Ok(wifi) => wifi,
Err(err) => {
}
};
```
24. And, when dealing with the error case, we change the value of the status:
```rust
log::error!("Connect to WiFi: {}", err);
status.store(DeviceStatus::WifiError as u8, Relaxed);
```
25. This function doesn't take the state as an argument, so we add it to its signature:
```rust
fn read_noise_level(
status: &AtomicU8,
```
26. That argument is provided when the thread is spawned:
```rust
scope.spawn(|| read_noise_level(status, adc, adc_pin, modem));
```
27. We don't want the status to be changed sequentially anymore, so we remove that thread and the function that was implementing that change.
28. We run this code with `cargo r` to verify that we can connect to the network. However, this version is going to crash. 😱 Our function is going to exceed the default stack size for a thread, which, by default, is 4Kbytes.
29. We can use a thread builder, instead of the `spawn` function, to change the stack size:
```rust
thread::Builder::new()
.stack_size(6144)
.spawn_scoped(scope, || read_noise_level(status, adc, adc_pin, modem))
.unwrap();
```
30. After performing this change, we run it again `cargo r` and it should work as expected.
### Set up the MQTT broker
The next step after connecting to the WiFi is to connect to the MQTT broker as a client, but we don't have an MQTT broker yet. In this section, I will show you how to install Mosquitto, which is an open-source project of the Eclipse Foundation.
1. For this section, we need to have an MQTT broker. In my case, I will be installing Mosquitto, which implements versions 3.1.1 and 5.0 of the MQTT protocol. It will run in the same Raspberry Pi that I am using as a collecting station.
```shell
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install -y {mosquitto,mosquitto-clients,mosquitto-dev}
sudo systemctl enable mosquitto.service
```
2. We modify the Mosquitto configuration to enable clients to connect from outside of the localhost. We need some credentials and a configuration that enforces authentication:
```shell
sudo mosquitto_passwd -c -b /etc/mosquitto/passwd soundsensor "Zap\!Pow\!Bam\!Kapow\!"
sudo sh -c 'echo "listener 1883\nallow_anonymous false\npassword_file /etc/mosquitto/passwd" > /etc/mosquitto/conf.d/remote_access.conf'
sudo systemctl restart mosquitto
```
3. Let's test that we can subscribe and publish to a topic. The naming convention tends to use lowercase letters, numbers, and dashes only and reserves dashes for separating topics hierarchically. On one terminal, subscribe to the `testTopic`:
```rust
mosquitto_sub -t test/topic -u soundsensor -P "Zap\!Pow\!Bam\!Kapow\!"
```
4. And on another terminal, publish something to it:
```rust
mosquitto_pub -d -t test/topic -m "Hola caracola" -u soundsensor -P "Zap\!Pow\!Bam\!Kapow\!"
```
5. You should see the message that we wrote on the second terminal appear on the first one. This means that Mosquitto is running as expected.
### Publish to MQTT from the sensor
With the MQTT broker installed and ready, we can write the code to connect our sensor to it as an MQTT client and publish its data.
1. We are going to need the credentials that we have just created to publish data to the MQTT broker, so we add them to the `Configuration` structure:
```rust
#[toml_cfg::toml_config]
struct Configuration {
#[default("NotMyWifi")]
wifi_ssid: &'static str,
#[default("NotMyPassword")]
wifi_password: &'static str,
#[default("mqttserver")]
mqtt_host: &'static str,
#[default("")]
mqtt_user: &'static str,
#[default("")]
mqtt_password: &'static str,
}
```
2. You have to remember to add the values that make sense to the `cfg.toml` file for your environment. Don't expect to get them from my repo, because we have asked Git to ignore this file. At the very least, you need the hostname or IP address of your MQTT broker. Copy the user name and password that we created previously:
```
[mosquitto-bzzz]
wifi_ssid = "ThisAintEither"
wifi_password = "NorIsThisMyPassword"
mqtt_host = "mqttsystem"
mqtt_user = "soundsensor"
mqtt_password = "Zap!Pow!Bam!Kapow!"
```
3. Coming back to the function that we have created to read the noise sensor, we can now initialize an MQTT client after connecting to the WiFi (`use mqtt::client::{EspMqttClient, MqttClientConfiguration, QoS},`):
```rust
let mut mqtt_client =
EspMqttClient::new()
.expect("Unable to initialize MQTT client");
```
4. The first parameter is a URL to the MQTT server that will include the user and password, if defined:
```rust
let mqtt_url = if app_config.mqtt_user.is_empty() || app_config.mqtt_password.is_empty() {
format!("mqtt://{}/", app_config.mqtt_host)
} else {
format!(
"mqtt://{}:{}@{}/",
app_config.mqtt_user, app_config.mqtt_password, app_config.mqtt_host
)
};
```
5. The second parameter is the configuration. Let's add them to the creation of the MQTT client:
```rust
EspMqttClient::new(&mqtt_url, &MqttClientConfiguration::default(), |_| {
log::info!("MQTT client callback")
})
```
6. In order to publish, we need to define the topic:
```rust
const TOPIC: &str = "home/noise sensor/01";
```
7. And a variable that will be used to contain the message that we will publish:
```rust
let mut mqtt_msg: String;
```
8. Inside the loop, we will format the noise value because it is sent as a string:
```rust
mqtt_msg = format!("{}", d_b);
```
9. We publish this value using the MQTT client:
```rust
if let Ok(msg_id) = mqtt_client.publish(TOPIC, QoS::AtMostOnce, false, mqtt_msg.as_bytes())
{
println!(
"MSG ID: {}, ADC values: {:?}, sum: {}, and dB: {} ",
msg_id, sample_buffer, sum, d_b
);
} else {
println!("Unable to send MQTT msg");
}
```
10. As we did when we were publishing from the command line, we need to subscribe, in an independent terminal, to the topic that we plan to publish to. In this case, we are going to start with `home/noise sensor/01`. Notice that we represent a hierarchy, i.e., there are noise sensors at home and each of the sensors has an identifier. Also, notice that levels of the hierarchy are separated by slashes and can include spaces in their names.
```shell
mosquitto_sub -t "home/noise sensor/01" -u soundsensor -P "Zap\!Pow\!Bam\!Kapow\!"
```
11. Finally, we compile and run the firmware with `cargo r` and will be able to see those values appearing on the terminal that is subscribed to the topic.
### Use a unique ID for each sensor
I would like to finish this firmware solving a problem that won't show up until we have two sensors or more. Our firmware uses a constant topic. That means that two sensors with the same firmware will use the same topic and we won't have a way to know which value corresponds to which sensor. A better option is to use a unique identifier that will be different for every ESP32-C6 board. We can use the MAC address for that.
1. Let's start by creating a function that returns that identifier:
```rust
fn get_sensor_id() -> String {
}
```
2. Our function is going to use an unsafe function from ESP-IDF, and format the result as a `String` (`use esp_idf_svc::sys::{esp_base_mac_addr_get, ESP_OK};` and `use std::fmt::Write`). The function that returns the MAC address uses a pointer and, having been written in C++, couldn't care less about the safety rules that Rust code must obey. That function is considered unsafe and, as such, Rust requires us to use it within an `unsafe` scope. It is their way to tell us, "Here be dragons… and you know about it":
```rust
let mut mac_addr = [0u8; 8];
unsafe {
match esp_base_mac_addr_get(mac_addr.as_mut_ptr()) {
ESP_OK => {
let sensor_id = mac_addr.iter().fold(String::new(), |mut output, b| {
let _ = write!(output, "{b:02x}");
output
});
log::info!("Id: {:?}", sensor_id);
sensor_id
}
_ => {
log::error!("Unable to get id.");
String::from("BADCAFE00BADBEEF")
}
}
}
```
3. Then, we use the function before defining the topic and use its result with it:
```rust
let sensor_id = get_sensor_id();
let topic = format!("home/noise sensor/{sensor_id}");
```
4. And we slightly change the way we publish the data to use the topic:
```rust
if let Ok(msg_id) = mqtt_client.publish(&topic, QoS::AtMostOnce, false, mqtt_msg.as_bytes())
```
5. We also need to change the subscription so we listen to all the topics that start with `home/sensor/` and have one more level:
```shell
mosquitto_sub -t "home/noise sensor/+" -u soundsensor -P "Zap\!Pow\!Bam\!Kapow\!"
```
6. We compile and run with `cargo r` and the values start showing up on the terminal where the subscription was initiated.
## Recap and future work
In this article, we have used Rust to write the firmware for an ESP32-C6-DevKitC-1 board from beginning to end. Although we can agree that Python was an easier approach for our first firmware, I believe that Rust is a more robust, approachable, and useful language for this purpose.
The firmware that we have created can inform the user of any problems using an RGB LED, measure noise in something close enough to deciBels, connect our board to the WiFi and then to our MQTT broker as a client, and publish the measurements of our noise sensor. Not bad for a single tutorial.
We have even gotten ahead of ourselves and added some code to ensure that different sensors with the same firmware publish their values to different topics. And to do so, we have done a very brief incursion in the universe of *unsafe Rust* and survived the wilderness. Now you can go to a bar and tell your friends, "I wrote unsafe Rust." Well done!
In our next article, we will be writing C++ code again to collect the data from the MQTT broker and then send it to our instance of MongoDB Atlas in the Cloud. So get ready!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt567d405088cd0cc8/65f858c6a1e8150c7bd5bf74/ESP32-C6_B.jpeg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc51e9f705b7af11c/65f858c66405528ee97b0a83/ESP32-C6_A.jpeg | md | {
"tags": [
"C++",
"Rust",
"RaspberryPi"
],
"pageDescription": "We write in Rust from scratch the firmware of a noise sensor implemented with an ESP32. We use the neopixel to inform the user about the status of the device. And we make that sensor expose the measurements through MQTT.",
"contentType": "Tutorial"
} | Red Mosquitto: Implement a noise sensor with an MQTT client in an ESP32 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/python-subsets-and-joins | created | # Coding With Mark: Abstracting Joins & Subsets in Python
This tutorial will talk about MongoDB design patterns — specifically, the Subset Pattern — and show how you can build an abstraction in your Python data model that hides how data is actually modeled within your database.
This is the third tutorial in a series! Feel free to check out the first tutorial or second tutorial if you like, but it's not necessary if you want to just read on.
## Coding with Mark?
This tutorial is loosely based on some episodes of a livestream I host, called "Coding with Mark." I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. ET or 6 a.m. PT, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recordings!
Currently, I'm building an experimental data access layer library that should provide a toolkit for abstracting complex document models from the business logic layer of the application that's using them.
You can check out the code in the project's GitHub repository!
## Setting the scene
The purpose of docbridge, my Object-Document Mapper, is to abstract the data model used within MongoDB from the data model used by a Python program. With a codebase of any size, you *need* something like this because otherwise, every time you change your data model (in your database), you need to change the object model (in your code). By having an abstraction layer, you localize all of this mapping into a single area of your codebase, and that's then the only part that needs to change when you change your data model. This ability to change your data model really allows you to take advantage of the flexibility of MongoDB's document model.
In the first tutorial, I showed a very simple abstraction, the FallbackField, that would try various different field names in a document until it found one that existed, and then would return that value. This was a very simple implementation of the Schema Versioning pattern.
In this tutorial, I'm going to abstract something more complex: the Subset Pattern.
## The Subset Pattern
MongoDB allows you to store arrays in your documents, natively. The values in those arrays can be primitive types, like numbers, strings, dates, or even subdocuments. But sometimes, those arrays can get too big, and the Subset Pattern describes a technique where the most important subset of the array (often just the *first* few items) is stored directly in the embedded array, and any overflow items are stored in other documents and looked up only when necessary.
This solves two design problems: First, we recommend that you don't store more than 200 items in an array, as the more items you have, the slower the database is at traversing the fields in each document. Second, the subset pattern also answers a question that I've seen many times when we've been teaching data modeling: "How do I stop my array from growing so big that the document becomes bigger than the 16MB limit?" While we're on the subject, do avoid your documents getting this big — it usually implies that you could improve your data model, for example, by separating out data into separate documents, or if you're storing lots of binary data, you could keep it outside your database, in an object store.
## Implementing the SequenceField type
Before delving into how to abstract a lookup for the extra array items that aren't embedded in the source document, I'll first implement a wrapper type for a BSON array. This can be used to declare array fields on a `Document` class, instead of the `Field` type that I implemented in previous articles.
I'm going to define a `SequenceField` to map a document's array into my access layer's object model. The core functionality of a SequenceField is you can specify a type for the array's items, and then when you iterate through the sequence, it will return you objects of that type, instead of just yielding the type that's stored in the document.
A concrete example would be a social media API's UserProfile class, which would store a list of Follower objects. I've created some sample documents with a Python script using Faker. A sample document looks like this:
```python
{
"_id": { "$oid": "657072b56731c9e580e9dd70" },
"user_id": "4",
"user_name": "@tanya15",
"full_name": "Deborah White",
"birth_date": { "$date": { "$numberLong": "931219200000" } },
"email": "[email protected]",
"bio": "Music conference able doctor degree debate. Participant usually above relate.",
"follower_count": { "$numberInt": "59" },
"followers":
{
"_id": { "$oid": "657072b66731c9e580e9dda6" },
"user_id": "58",
"user_name": "@rduncan",
"bio": "Rich beautiful color life. Relationship instead win join enough board successful."
},
{
"_id": { "$oid": "657072b66731c9e580e9dd99" },
"user_id": "45",
"user_name": "@paynericky",
"bio": "Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid."
},
# ... other followers
]
}
```
I can model this data using two classes — one for the top-level Profile data, and one for the summary data for that profile's followers (embedded in the array).
```python
class Follower(Document):
_id = Field(transform=str)
user_name = Field()
class Profile(Document):
_id = Field(transform=str)
followers = SequenceField(type=Follower)
```
If I want to loop through all the followers of a profile instance, each item should be a `Follower` instance:
```python
profile = Profile(SOME_BSON_DATA)
for follower in profile.followers:
assert isinstance(follower, Follower)
```
This behavior can be implemented in a similar way to the `Field` class, by implementing it as a descriptor, with a `__get__` method that, in this case, yields a `Follower` constructed for each item in the underlying BSON array. The code looks a little like this:
```python
class SequenceField:
"""
Allows an underlying array to have its elements wrapped in
Document instances.
"""
def __init__(
self,
type,
field_name=None,
):
self._type = type
self.field_name = field_name
def __set_name__(self, owner, name):
"""
Called when the enclosing Document subclass (owner) is defined.
"""
self.name = name # Store the attribute name.
# If a field-name mapping hasn't been provided,
# the BSON field will have the same name as the attribute name.
if self.field_name is None:
self.field_name = name
def __get__(self, ob, cls):
"""
Called when the SequenceField attribute is accessed on the enclosed
Document subclass.
"""
try:
# Lookup the field in the BSON, and return an array where each item
# is wrapped by the class defined as type in __init__:
return [
self._type(item, ob._db)
for item in ob._doc[self.field_name]
]
except KeyError as ke:
raise ValueError(
f"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}."
) from ke
```
That's a lot of code, but quite a lot of it is duplicated from `Field` - I'll fix that with some inheritance at some point. The most important part is near the end:
```python
return [
self._type(item, ob._db)
for item in ob._doc[self.field_name]
]
```
In the concrete example above, this would resolve to something like this fictional code:
```python
return [
Follower(item, db=None) for item in profile._doc["followers"]
]
```
## Adding in the extra followers
The dataset I've created for working with this only stores the first 20 followers within a profile document. The rest are stored in a "followers" collection, and they're bucketed to store up to 20 followers per document, in a field called "followers." The "user_id" field says who the followers belong to. A single document in the "followers" collection looks like this:
![A document containing a "followers" field that contains some more followers for the user with a "user_id" of "4"][1]
[The Bucket Pattern is a technique for putting lots of small subdocuments together in a bucket document, which can make it more efficient to retrieve documents that are usually retrieved together, and it can keep index sizes down. The downside is that it makes updating individual subdocuments slightly slower and more complex.
### How to query documents in buckets
I have a collection where each document contains an array of followers — a "bucket" of followers. But what I *want* is a query that returns individual follower documents. Let's break down how this query will work:
1. I want to look up all the documents for a particular user_id.
1. For each item in followers — each item is a follower — I want to yield a single document for that follower.
1. I want to restructure each document so that it *only* contains the follower information, not the bucket information.
This is what I love about aggregation pipelines — once I've come up with those steps, I can often convert each step into an aggregation pipeline stage.
**Step 1**: Look up all the documents for a particular user:
```python
{"$match": {"user_id": "4"}}
```
Note that this stage has hard-coded the value "4" for the "user_id" field. I'll explain later how dynamic values can be inserted into these queries. This outputs a single document, a bucket, containing many followers, in a field called "followers":
```json
{
"user_name": "@tanya15",
"full_name": "Deborah White",
"birth_date": {
"$date": "1999-07-06T00:00:00.000Z"
},
"email": "[email protected]",
"bio": "Music conference able doctor degree debate. Participant usually above relate.",
"user_id": "4",
"follower_count": 59,
"followers":
{
"_id": {
"$oid": "657072b66731c9e580e9dda6"
},
"user_id": "58",
"user_name": "@rduncan",
"bio": "Rich beautiful color life. Relationship instead win join enough board successful."
},
{
"bio": "Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid.",
"_id": {
"$oid": "657072b66731c9e580e9dd99"
},
"user_id": "45",
"user_name": "@paynericky"
},
{
"_id": {
"$oid": "657072b76731c9e580e9ddba"
},
"user_id": "78",
"user_name": "@tiffanyhicks",
"bio": "Sign writer win. Look television official information laugh. Lay plan effect break expert message during firm."
},
. . .
],
"_id": {
"$oid": "657072b56731c9e580e9dd70"
}
}
```
**Step 2**: Yield a document for each follower — the $unwind stage can do exactly this:
```python
{"$unwind": "$followers"}
```
This instructs MongoDB to return one document for each item in the "followers" array. All of the document contents will be included, but the followers *array* will be replaced with the single follower *subdocument* each time. This outputs several documents, each containing a single follower in the "followers" field:
```python
# First document:
{
"bio": "Music conference able doctor degree debate. Participant usually above relate.",
"follower_count": 59,
"followers": {
"_id": {
"$oid": "657072b66731c9e580e9dda6"
},
"user_id": "58",
"user_name": "@rduncan",
"bio": "Rich beautiful color life. Relationship instead win join enough board successful."
},
"user_id": "4",
"user_name": "@tanya15",
"full_name": "Deborah White",
"birth_date": {
"$date": "1999-07-06T00:00:00.000Z"
},
"email": "[email protected]",
"_id": {
"$oid": "657072b56731c9e580e9dd70"
}
}
# Second document
{
"_id": {
"$oid": "657072b56731c9e580e9dd70"
},
"full_name": "Deborah White",
"email": "[email protected]",
"bio": "Music conference able doctor degree debate. Participant usually above relate.",
"follower_count": 59,
"user_id": "4",
"user_name": "@tanya15",
"birth_date": {
"$date": "1999-07-06T00:00:00.000Z"
},
"followers": {
"_id": {
"$oid": "657072b66731c9e580e9dd99"
},
"user_id": "45",
"user_name": "@paynericky",
"bio": "Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid."
}
# . . . More documents follow
```
**Step 3**: Restructure the document, pulling the "follower" value up to the top-level of the document. There's a special stage for doing this — $replaceRoot:
```python
{"$replaceRoot": {"newRoot": "$followers"}},
```
Adding the stage above results in each document containing a single follower, at the top level:
```python
# Document 1:
{
"_id": {
"$oid": "657072b66731c9e580e9dda6"
},
"user_id": "58",
"user_name": "@rduncan",
"bio": "Rich beautiful color life. Relationship instead win join enough board successful."
}
# Document 2
{
"_id": {
"$oid": "657072b66731c9e580e9dd99"
},
"user_id": "45",
"user_name": "@paynericky",
"bio": "Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid."
}
} # . . . More documents follow
```
Putting it all together, the query looks like this:
```python
[
{"$match": {"user_id": "4"}},
{"$unwind": "$followers"},
{"$replaceRoot": {"newRoot": "$followers"}},
]
```
I've explained the query that I want to be run each time I iterate through the followers field in my data abstraction library. Now, I'll show you how to hide this query (or whatever query is required) away in the SequenceField implementation.
### Abstracting out the Lookup
Now, I would like to change the behavior of the SequenceField so that it does the following:
- Iterate through the embedded subdocuments and yield each one, wrapped by type (the callable that wraps each subdocument.)
- If the user gets to the end of the embedded array, make a query to look up the rest of the followers and yield them one by one, also wrapped by type.
First, I'll change the `__init__` method so that the user can provide two extra parameters:
- The collection that contains the extra documents, superset_collection
- The query to run against that collection to return individual documents, superset_query
The result looks like this:
```python
class Field:
def __init__(
self,
type,
field_name=None,
superset_collection=None,
superset_query: Callable = None,
):
self._type = type
self.field_name = field_name
self.superset_collection = superset_collection
self.superset_query = superset_query
```
The query will have to be provided as a callable, i.e., a function, lambda expression, or method. The reason for that is that generating the query will usually need access to some of the state of the document (in this case, the `user_id`, to construct the query to look up the correct follower documents.) The callable is stored in the Field instance, and then when the lookup is needed, it calls the callable, passing it the Document that contains the Field, so the callable can look up the user "\_id" in the wrapped `_doc` dictionary.
Now that the user can provide enough information to look up the extra followers (the superset), I changed the `__get__` method to perform the lookup when it runs out of embedded followers. To make this simpler to write, I took advantage of *laziness*. Twice! Here's how:
**Laziness Part 1**: When you execute a query by calling `find` or `aggregate`, the query is not executed immediately. Instead, the method immediately returns a cursor. Cursors are lazy — which means they don't do anything until you start to use them, by iterating over their contents. As soon as you start to iterate, or loop, over the cursor, it *then* queries the database and starts to yield results.
**Laziness Part 2**: Most of the functions in the core Python `itertools` module are also lazy, including the `chain` function. Chain is called with one or more iterables as arguments and then *only* starts to loop through the later arguments when the earlier iterables are exhausted (meaning the code has looped through all of the contents of the iterable.)
These can be combined to create a single iterable that will never request any extra followers from the database, *unless* the code specifically requests more items after looping through the embedded items:
```python
embedded_followers = self._doc["followers"] # a list
cursor = followers.find({"user_id": "4"}) # a lazy database cursor
# Looping through all_followers will only make a database call if you have
# looped through all of the contents of embedded_followers:
all_followers = itertools.chain(embedded_followers, cursor)
```
The real code is a bit more flexible, because it supports both find and aggregate queries. It recognises the type because find queries are provided as dicts, and aggregate queries are lists.
```python
def __get__(self, ob, cls):
if self.superset_query is None:
# Use an empty sequence if there are no extra items.
# It's still iterable, like a cursor, but immediately exits.
superset = []
else:
# Call the superset_query callable to obtain the generated query:
query = self.superset_query(ob)
# If the query is a mapping, it's a find query, otherwise it's an
# aggregation pipeline.
if isinstance(query, Mapping):
superset = ob._db.get_collection(self.superset_collection).find(query)
elif isinstance(query, Iterable):
superset = ob._db.get_collection(self.superset_collection).aggregate(
query
)
else:
raise Exception("Returned was not a mapping or iterable.")
try:
# Return an iterable that first yields all the embedded items, and
return chain(
[self._type(item, ob._db) for item in ob._doc[self.field_name]],
(self._type(item, ob._db) for item in superset),
)
except KeyError as ke:
raise ValueError(
f"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}."
) from ke
```
I've added quite a few comments to the code above, so hopefully you can see the relationship between the simplified code above it and the real code here.
## Using the SequenceField to declare relationships
Implementing `Profile` and `Follower` is now a matter of providing the query (wrapped in a lambda expression) and the collection that should be queried.
```python
# This is the same as it was originally
class Follower(Document):
_id = Field(transform=str)
user_name = Field()
def extra_followers_query(profile):
return [
{
"$match": {"user_id": profile.user_id},
},
{"$unwind": "$followers"},
{"$replaceRoot": {"newRoot": "$followers"}},
]
class Profile(Document):
_id = Field(transform=str)
followers = SequenceField(
type=Follower,
superset_collection="followers",
superset_query=lambda ob: extra_followers_query,
)
```
An application that used the above `Profile` definition could look up the `Profile` with "user_id" of "4" and then print out the user names of all their followers with some code like this:
```python
for follower in profile.followers:
print(follower.user_name)
```
See how the extra query is now part of the type's mapping definition and not the code dealing with the data? That's the kind of abstraction I wanted to provide when I started building this experimental library. I have more plans, so stick with me! But before I implement more data abstractions, I first need to implement updates — that's something I'll describe in my next tutorial.
### Conclusion
This is now the third tutorial in my Python data abstraction series, and I'll admit that this was the code I envisioned when I first came up with the idea of the docbridge library. It's been super satisfying to get to this point, and because I've been developing the whole thing with test-driven development practices, there's already good code coverage.
If you're looking for more information on aggregation pipelines, you should have a look at [Practical MongoDB Aggregations — or now, you can buy an expanded version of the book in paperback.
If you're interested in the abstraction topics and Python code architecture in general, you can buy the Architecture Patterns with Python book, or read it online at CosmicPython.com
I livestream most weeks, usually at 2 p.m. UTC on Wednesdays. If that sounds interesting, check out the MongoDB YouTube channel. I look forward to seeing you there!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt582eb5d324589b37/65f9711af4a4cf479114f828/image1.png | md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "Learn how to use advanced Python to abstract subsets and joins in MongoDB data models.",
"contentType": "Tutorial"
} | Coding With Mark: Abstracting Joins & Subsets in Python | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/building-real-time-dynamic-seller-dashboard | created | # Building a Real-Time, Dynamic Seller Dashboard on MongoDB
One of the key aspects of being a successful merchant is knowing your market. Understanding your top-selling products, trending SKUs, and top customer locations helps you plan, market, and sell effectively. As a marketplace, providing this visibility and insights for your sellers is crucial. For example, SHOPLINE has helped over 350,000 merchants reach more than 680 million customers via e-commerce, social commerce, and offline point-of-sale (POS) transactions. With key features such as inventory and sales management tools, data analytics, etc. merchants have everything they need to build a successful online store.
In this article, we are going to look at how a single query on MongoDB can power a real-time view of top selling products, and a deep-dive into the top selling regions.
## Status Quo: stale data
In the relational world, such a dashboard would require multiple joins across at least four distinct tables: seller details, product details, channel details, and transaction details.
This increases complexity, data latency, and costs for providing insights on real-time, operational data. Often, organizations pre-compute these tables with up to a 24-hour lag to ensure a better user experience.
## How can MongoDB help deliver real-time insights?
With MongoDB, using the Query API, we could deliver such dashboards in near real-time, working directly on operational data. The required information for each sales transaction can be stored in a single collection.
Each document would look as follows:
```
{
"_id": { "$oid": "5bd761dcae323e45a93ccfed" },
"saleDate": { "$date": {...} },
"items":
{ "name": "binder",
"tags": [
"school",
"general"],
"price": { "$numberDecimal": "13.44" },
"quantity": 8
},
{ "name": "binder",
"tags": [
"general",
"organization"
],
"price": { "$numberDecimal": "16.66" },
"quantity": 10
}
],
"storeLocation": "London",
"customer": {
"gender": "M",
"age": 44,
"email": "[email protected]",
"satisfaction": 2
},
"couponUsed": false,
"purchaseMethod": "In store"
}
```
This specific document is from the *“sales”* collection within the *“sample_supplies”* database, available as sample data when you create an Atlas Cluster. [Start free on Atlas and try out this exercise yourself. MongoDB allows for flexible schema and versioning which makes updating this document with a “seller” field, similar to the customer field, and managing it in your application, very simple. From a data modeling perspective, the polymorphic pattern is ideal for our current use case.
## Desired output
In order to build a dashboard showcasing the top five products sold over a specific period, we would want to transform the documents into the following sorted array:
```
{
"total_volume": 1897,
"item": "envelopes"
},
{
"total_volume": 1844,
"item": "binder"
},
{
"total_volume": 1788,
"item": "notepad"
},
{
"total_volume": 1018,
"item": "pens"
},
{
"total_volume": 830,
"item": "printer paper"
}
]
```
With just the “_id” and “total_volume” fields, we can build a chart of the top five products. If we wanted to deliver an improved seller experience, we could build a deep-dive chart with the same single query that provides the top five locations and the quantity sold for each.
The output for each item would look like this:
```
{
"_id": "binder",
"totalQuantity": 100,
"topFiveRegionsByQuantity": {
"Seattle": 41,
"Denver": 26,
"New York": 14,
"Austin": 10,
"London": 9
}
}
```
With the Query API, this transformation can be done in real-time in the database with a single query. In this example, we go a bit further to build another transformation on top which can improve user experience. In fact, on our Atlas developer data platform, this becomes significantly easier when you leverage [Atlas Charts.
## Getting started
1. Set up your Atlas Cluster and load sample data “sample_supplies.”
2. Connect to your Atlas cluster through Compass or open the Data Explorer tab on Atlas.
In this example, we can use the aggregation builder in Compass to build the following pipeline.
(Tip: Click “Create new pipeline from text” to copy the code below and easily play with the pipeline.)
## Aggregations with the query API
Keep scrolling to see the following code examples in Python, Java, and JavaScript.
```
{
$match: {
saleDate: {
$gte: ISODate('2017-12-25T05:00:00.000Z'),
$lt: ISODate('2017-12-30T05:00:00.000Z')
}
}
}, {
$unwind: {
path: '$items'
}
}, {
$group: {
_id: {
item: '$items.name',
region: '$storeLocation'
},
quantity: {
$sum: '$items.quantity'
}
}
}, {
$addFields: {
'_id.quantity': '$quantity'
}
}, {
$replaceRoot: {
newRoot: '$_id'
}
}, {
$group: {
_id: '$item',
totalQuantity: {
$sum: '$quantity'
},
topFiveRegionsByQuantity: {
$topN: {
output: {
k: '$region',
v: '$quantity'
},
sortBy: {
quantity: -1
},
n: 5
}
}
}
}, {
$sort: {
totalQuantity: -1
}
}, {
$limit: 5
}, {
$set: {
topFiveRegionsByQuantity: {
$arrayToObject: '$topFiveRegionsByQuantity'
}
}
}]
```
This short but powerful pipeline processes our data through the following stages:
* First, it filters our data to the specific subset we need. In this case, sale transactions are from the specified dates. It’s worth noting here that you can parametrize inputs to the [$match stage to dynamically filter based on user choices.
Note: Beginning our pipeline with this filter stage significantly improves processing times. With the right index, this entire operation can be extremely fast and reduce the number of documents to be processed in subsequent stages.
* To fully leverage the polymorphic pattern and the document model, we store items bought in each order as an embedded array. The second stage unwinds this so our pipeline can look into each array. We then group the unwound documents by item and region and use $sum to calculate the total quantity sold.
* Ideally, at this stage we would want our documents to have three data points: the item, the region, and the quantity sold. However, at the end of the previous stage, the item and region are in an embedded object, while quantity is a separate field. We use $addFields to move quantity within the embedded object, and then use $replaceRoot to use this embedded _id document as the source document for further stages. This quick maneuver gives us the transformed data we need as a single document.
* Next, we group the items as per the view we want on our dashboard. In this example, we want the total volume of each product sold, and to make our dashboard more insightful, we could also get the top five regions for each of these products. We use $group for this with two operators within it:
* $sum to calculate the total quantity sold.
* $topN to create a new array of the top five regions for each product and the quantity sold at each location.
* Now that we have the data transformed the way we want, we use a $sort and $limit to find the top five items.
* Finally, we use $set to convert the array of the top five regions per item to an embedded document with the format {region: quantity}, making it easier to work with objects in code. This is an optional step.
Note: The $topN operator was introduced in MongoDB 5.2. To test this pipeline on Atlas, you would require an M10 cluster. By downloading MongoDB community version, you can test through Compass on your local machine.
## What would you build?
While adding visibility on the top five products and the top-selling regions is one part of the dashboard, by leveraging MongoDB and the Query API, we deliver near real-time visibility into live operational data.
In this article, we saw how to build a single query which can power multiple charts on a seller dashboard. What would you build into your dashboard views? Join our vibrant community forums, to discuss more.
*For reference, here’s what the code blocks look like in other languages.*
*Python*
```python
# Import the necessary packages
from pymongo import MongoClient
from bson.son import SON
# Connect to the MongoDB server
client = MongoClient(URI)
# Get a reference to the sample_supplies collection
db = client.
supplies = db.sample_supplies
# Build the pipeline stages
match_stage = {
"$match": {
"saleDate": {
"$gte": "ISODate('2017-12-25T05:00:00.000Z')",
"$lt": "ISODate('2017-12-30T05:00:00.000Z')"
}
}
}
unwind_stage = {
"$unwind": {
"path": "$items"
}
}
group_stage = {
"$group": {
"_id": {
"item": "$items.name",
"region": "$storeLocation"
},
"quantity": {
"$sum": "$items.quantity"
}
}
}
addfields_stage = {
$addFields: {
'_id.quantity': '$quantity'
}
}
replaceRoot_stage = {
$replaceRoot: {
newRoot: '$_id'
}
}
group2_stage = {
$group: {
_id: '$item',
totalQuantity: {
$sum: '$quantity'
},
topFiveRegionsByQuantity: {
$topN: {
output: {
k: '$region',
v: '$quantity'
},
sortBy: {
quantity: -1
},
n: 5
}
}
}
}
sort_stage = {
$sort: {
totalQuantity: -1
}
}
limit_stage = {
$limit: 5
}
set_stage = {
$set: {
topFiveRegionsByQuantity: {
$arrayToObject: '$topFiveRegionsByQuantity'
}
}
}
pipeline = [match_stage, unwind_stage, group_stage,
addfields_stage, replaceroot_stage, group2_stage,
sort_stage, limit_stage, set_stage]
# Execute the aggregation pipeline
results = supplies.aggregate(pipeline)
```
*Java*
```java
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import org.bson.Document;
import java.util.Arrays;
// Connect to MongoDB and get the collection
MongoClient mongoClient = new MongoClient(URI);
MongoDatabase database = mongoClient.getDatabase();
MongoCollection collection = database.getCollection("sample_supplies");
// Create the pipeline stages
Bson matchStage = Aggregates.match(Filters.and(
Filters.gte("saleDate", new Date("2017-12-25T05:00:00.000Z")),
Filters.lt("saleDate", new Date("2017-12-30T05:00:00.000Z"))
));
Bson unwindStage = Aggregates.unwind("$items");
Bson groupStage = Aggregates.group("$items.name",
Accumulators.sum("quantity", "$items.quantity")
);
Bson addFieldsStage = Aggregates.addFields(new Field("_id.quantity", "$quantity"));
Bson replaceRootStage = Aggregates.replaceRoot("_id");
Bson group2Stage = Aggregates.group("$item",
Accumulators.sum("totalQuantity", "$quantity"),
Accumulators.top("topFiveRegionsByQuantity", 5, new TopOptions()
.output(new Document("k", "$region").append("v", "$quantity"))
.sortBy(new Document("quantity", -1))
)
);
Bson sortStage = Aggregates.sort(new Document("totalQuantity", -1));
Bson limitStage = Aggregates.limit(5);
Bson setStage = Aggregates.set("topFiveRegionsByQuantity", new Document("$arrayToObject", "$topFiveRegionsByQuantity"));
// Execute the pipeline
List results = collection.aggregate(Arrays.asList(matchStage, unwindStage, groupStage, addFieldsStage, replaceRootStage, group2Stage, sortStage, limitStage, setStage)).into(new ArrayList<>());
```
*JavaScript*
```javascript
const MongoClient = require('mongodb').MongoClient;
const assert = require('assert');
// Connection URL
const url = 'URI';
// Database Name
const db = 'database_name';
// Use connect method to connect to the server
MongoClient.connect(url, function(err, client) {
assert.equal(null, err);
console.log("Connected successfully to server");
const db = client.db(dbName);
// Create the pipeline stages
const matchStage = {
$match: {
saleDate: {
$gte: new Date('2017-12-25T05:00:00.000Z'),
$lt: new Date('2017-12-30T05:00:00.000Z')
}
}
};
const unwindStage = {
$unwind: {
path: '$items'
}
};
const groupStage = {
$group: {
_id: {
item: '$items.name',
region: '$storeLocation'
},
quantity: {
$sum: '$items.quantity'
}
}
};
const addFieldsStage = {
$addFields: {
'_id.quantity': '$quantity'
}
};
const replaceRootStage = {
$replaceRoot: {
newRoot: '$_id'
}
};
const groupStage = {
$group: {
_id: '$item',
totalQuantity: {
$sum: '$quantity'
},
topFiveRegionsByQuantity: {
$topN: {
output: {
k: '$region',
v: '$quantity'
},
sortBy: {
quantity: -1
},
n: 5
}
}
}
};
const sortStage = {
$sort: {
totalQuantity: -1
}
};
const limitStage = {
$limit: 5
};
const setStage = {
$set: {
topFiveRegionsByQuantity: {
$arrayToObject: '$topFiveRegionsByQuantity'
}
}
};
const pipeline = [matchStage, unwindStage, groupStage,
addFieldsStage, replaceRootStage, group2Stage,
sortStage, limitStage, setStage]
// Execute the pipeline
db.collection('sample_supplies')
.aggregate(pipeline)
.toArray((err, results) => {
assert.equal(null, err);
console.log(results);
client.close();
});
});
``` | md | {
"tags": [
"Atlas",
"Python",
"Java",
"JavaScript"
],
"pageDescription": "In this article, we're looking at how a single query on MongoDB can power a real-time view of top-selling products, and deep-dive into the top-selling regions.",
"contentType": "Tutorial"
} | Building a Real-Time, Dynamic Seller Dashboard on MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/microservices-architecture-spring-mongodb | created | # Microservices Architecture With Java, Spring, and MongoDB
## Introduction
"Microservices are awesome and monolithic applications are evil."
If you are reading this article, you have already read that a million times, and I'm not the one who's going to tell
you otherwise!
In this post, we are going to create a microservices architecture using MongoDB.
## TL;DR
The source code is available in these two repositories.
The README.md files will
help you start everything.
```bash
git clone [email protected]:mongodb-developer/microservices-architecture-mongodb.git
git clone [email protected]:mongodb-developer/microservices-architecture-mongodb-config-repo.git
```
## Microservices architecture
We are going to use Spring Boot and Spring Cloud dependencies to build our architecture.
Here is what a microservices architecture looks like, according to Spring:
file and start the service related to each section.
### Config server
The first service that we need is a configuration server.
This service allows us to store all the configuration files of our microservices in a single repository so our
configurations are easy to version and store.
The configuration of our config server is simple and straight to the point:
```properties
spring.application.name=config-server
server.port=8888
spring.cloud.config.server.git.uri=${HOME}/Work/microservices-architecture-mongodb-config-repo
spring.cloud.config.label=main
```
It allows us to locate the git repository that stores our microservices configuration and the branch that should be
used.
> Note that the only "trick" you need in your Spring Boot project to start a config server is the `@EnableConfigServer`
> annotation.
```java
package com.mongodb.configserver;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;
@EnableConfigServer
@SpringBootApplication
public class ConfigServerApplication {
public static void main(String] args) {
SpringApplication.run(ConfigServerApplication.class, args);
}
}
```
### Service registry
A service registry is like a phone book for microservices. It keeps track of which microservices are running and where
they are located (IP address and port). Other services can look up this information to find and communicate with the
microservices they need.
A service registry is useful because it enables client-side load balancing and decouples service providers from
consumers without the need for DNS.
Again, you don't need much to be able to start a Spring Boot service registry. The `@EnableEurekaServer` annotation
makes all the magic happen.
```java
package com.mongodb.serviceregistry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {
public static void main(String[] args) {
SpringApplication.run(ServiceRegistryApplication.class, args);
}
}
```
The configuration is also to the point:
```properties
spring.application.name=service-registry
server.port=8761
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
```
> The last two lines prevent the service registry from registering to itself and retrieving the registry from itself.
### API gateway
The API gateway service allows us to have a single point of entry to access all our microservices. Of course, you should
have more than one in production, but all of them will be able to communicate with all the microservices and distribute
the workload evenly by load-balancing the queries across your pool of microservices.
Also, an API gateway is useful to address cross-cutting concerns like security, monitoring, metrics gathering, and
resiliency.
When our microservices start, they register themselves to the service registry. The API gateway can use this registry to
locate the microservices and distribute the queries according to its routing configuration.
```Shell
server:
port: 8080
spring:
application:
name: api-gateway
cloud:
gateway:
routes:
- id: company-service
uri: lb://company-service
predicates:
- Path=/api/company/**,/api/companies
- id: employee-service
uri: lb://employee-service
predicates:
- Path=/api/employee/**,/api/employees
eureka:
client:
register-with-eureka: true
fetch-registry: true
service-url:
defaultZone: http://localhost:8761/eureka/
instance:
hostname: localhost
```
> Note that our API gateway runs on port 8080.
### MongoDB microservices
Finally, we have our MongoDB microservices.
Microservices are supposed to be independent of each other. For this reason, we need two MongoDB instances: one for
each microservice.
Check out the [README.md
file to run everything.
> Note that in
> the configuration files for the
> company and employee services, they are respectively running on ports 8081 and 8082.
company-service.properties
```properties
spring.data.mongodb.uri=${MONGODB_URI_1:mongodb://localhost:27017}
spring.threads.virtual.enabled=true
management.endpoints.web.exposure.include=*
management.info.env.enabled=true
info.app.name=Company Microservice
info.app.java.version=21
info.app.type=Spring Boot
server.port=8081
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
eureka.instance.hostname=localhost
```
employee-service.properties
```properties
spring.data.mongodb.uri=${MONGODB_URI_2:mongodb://localhost:27018}
spring.threads.virtual.enabled=true
management.endpoints.web.exposure.include=*
management.info.env.enabled=true
info.app.name=Employee Microservice
info.app.java.version=21
info.app.type=Spring Boot
server.port=8082
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
eureka.instance.hostname=localhost
```
> Note that the two microservices are connected to two different MongoDB clusters to keep their independence. The
> company service is using the MongoDB node on port 27017 and the employee service is on port 27018.
Of course, this is only if you are running everything locally. In production, I would recommend to use two clusters on
MongoDB Atlas. You can overwrite the MongoDB URI with the environment variables (see README.md).
## Test the REST APIs
At this point, you should have five services running:
- A config-server on port 8888
- A service-registry on port 8761
- An api-gateway on port 8080
- Two microservices:
- company-service on port 8081
- employee-service on port 8082
And two MongoDB nodes on ports 27017 and 27018 or two MongoDB clusters on MongoDB Atlas.
If you start the
script 2_api-tests.sh,
you should get an output like this.
```
DELETE Companies
2
DELETE Employees
2
POST Company 'MongoDB'
POST Company 'Google'
GET Company 'MongoDB' by 'id'
{
"id": "661aac7904e1bf066ee8e214",
"name": "MongoDB",
"headquarters": "New York",
"created": "2009-02-11T00:00:00.000+00:00"
}
GET Company 'Google' by 'name'
{
"id": "661aac7904e1bf066ee8e216",
"name": "Google",
"headquarters": "Mountain View",
"created": "1998-09-04T00:00:00.000+00:00"
}
GET Companies
{
"id": "661aac7904e1bf066ee8e214",
"name": "MongoDB",
"headquarters": "New York",
"created": "2009-02-11T00:00:00.000+00:00"
},
{
"id": "661aac7904e1bf066ee8e216",
"name": "Google",
"headquarters": "Mountain View",
"created": "1998-09-04T00:00:00.000+00:00"
}
]
POST Employee Maxime
POST Employee Tim
GET Employee 'Maxime' by 'id'
{
"id": "661aac79cf04401110c03516",
"firstName": "Maxime",
"lastName": "Beugnet",
"company": "Google",
"headquarters": "Mountain View",
"created": "1998-09-04T00:00:00.000+00:00",
"joined": "2018-02-12T00:00:00.000+00:00",
"salary": 2468
}
GET Employee 'Tim' by 'id'
{
"id": "661aac79cf04401110c03518",
"firstName": "Tim",
"lastName": "Kelly",
"company": "MongoDB",
"headquarters": "New York",
"created": "2009-02-11T00:00:00.000+00:00",
"joined": "2023-08-23T00:00:00.000+00:00",
"salary": 13579
}
GET Employees
[
{
"id": "661aac79cf04401110c03516",
"firstName": "Maxime",
"lastName": "Beugnet",
"company": "Google",
"headquarters": "Mountain View",
"created": "1998-09-04T00:00:00.000+00:00",
"joined": "2018-02-12T00:00:00.000+00:00",
"salary": 2468
},
{
"id": "661aac79cf04401110c03518",
"firstName": "Tim",
"lastName": "Kelly",
"company": "MongoDB",
"headquarters": "New York",
"created": "2009-02-11T00:00:00.000+00:00",
"joined": "2023-08-23T00:00:00.000+00:00",
"salary": 13579
}
]
```
> Note that the employee service sends queries to the company service to retrieve the details of the employees' company.
This confirms that the service registry is doing its job correctly because the URL only contains a reference to the company microservice, not its direct IP and port.
```java
private CompanyDTO getCompany(String company) {
String url = "http://company-service/api/company/name/";
CompanyDTO companyDTO = restTemplate.getForObject(url + company, CompanyDTO.class);
if (companyDTO == null) {
throw new EntityNotFoundException("Company not found: ", company);
}
return companyDTO;
}
```
## Conclusion
And voilà! You now have a basic microservice architecture running that is easy to use to kickstart your project.
In this architecture, we could seamlessly integrate additional features to enhance performance and maintainability in
production. Caching would be essential, particularly with a potentially large number of employees within the same
company, significantly alleviating the load on the company service.
The addition of a [Spring Cloud Circuit Breaker could also
improve the resiliency in production and a Spring Cloud Sleuth would
help with distributed tracing and auto-configuration.
If you have questions, please head to our Developer Community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt332394d666c28140/661ab5bf188d353a3e2da005/microservices-architecture.svg
| md | {
"tags": [
"Java",
"MongoDB",
"Spring",
"Docker"
],
"pageDescription": "In this post, you'll learn about microservices architecture and you'll be able to deploy your first architecture locally using Spring Boot, Spring Cloud and MongoDB.",
"contentType": "Tutorial"
} | Microservices Architecture With Java, Spring, and MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/how-maintain-multiple-versions-record-mongodb | created | # How to Maintain Multiple Versions of a Record in MongoDB (2024 Updates)
Over the years, there have been various methods proposed for versioning data in MongoDB. Versioning data means being able to easily get not just the latest version of a document or documents but also view and query the way the documents were at a given point in time.
There was the blog post from Asya Kamsky written roughly 10 years ago, an update from Paul Done (author of Practical MongoDB Aggregations), and also information on the MongoDB website about the version pattern from 2019.
These variously maintain two distinct collections of data — one with the latest version and one with prior versions or updates, allowing you to reconstruct them.
Since then, however, there have been seismic, low-level changes in MongoDB's update and aggregation capabilities. Here, I will show you a relatively simple way to maintain a document history when updating without maintaining any additional collections.
To do this, we use expressive updates, also sometimes called aggregation pipeline updates. Rather than pass an object with update operators as the second argument to update, things like $push and $set, we express our update as an aggregation pipeline, with an ordered set of changes. By doing this, we can not only make changes but take the previous values of any fields we change and record those in a different field as a history.
The simplest example of this would be to use the following as the update parameter for an updateOne operation.
```
{ $set : { a: 5 , previous_a: "$a" } }]
```
This would explicitly set `a` to 5 but also set `previous_a` to whatever `a` was before the update. This would only give us a history look-back of a single change, though.
Before:
```
{
a: 3
}
```
After:
```
{
a: 5,
previous_a: 3
}
```
What we want to do is take all the fields we change and construct an object with those prior values, then push it into an array — theoretically, like this:
```
[ { $set : { a: 5 , b: 8 } ,
$push : { history : { a:"$a",b:"$b"} } ]
```
The above does not work because the $push part in bold is an update operator, not aggregation syntax, so it gives a syntax error. What we instead need to do is rewrite push as an array operation, like so:
```
{"$set":{"history":
{"$concatArrays":[[{ _updateTime: "$$NOW", a:"$a",b:"$b"}}],
{"$ifNull":["$history",[]]}]}}}
```
To talk through what's happening here, I want to add an object, `{ _updateTime: "$$NOW", a:"$a",b:"$b"}`, to the array at the beginning. I cannot use $push as that is update syntax and expressive syntax is about generating a document with new versions for fields, effectively, just $set. So I need to set the array to the previous array with nym new value prepended.
We use $concatArrays to join two arrays, so I wrap my single document containing the old values for fields in an array. Then, the new array is my array of one concatenated with the old array.
I use $ifNUll to say if the value previously was null or missing, treat it as an empty array instead, so the first time, it actually does `history = [{ _updateTime: "$$NOW", a:"$a",b:"$b"}] + []`.
Before:
```
{
a: 3,
b: 1
}
```
After:
```
{
a: 5,
b: 8,
history: [
{
_updateTime: Date(...),
a: 3,
b: 1
}
]
}
```
That's a little hard to write but if we actually write out the code to demonstrate this and declare it as separate objects, it should be a lot clearer. The following is a script you can run in the MongoDB shell either by pasting it in or [loading it with `load("versioning.js")`.
This code first generates some simple records:
```javascript
// Configure the inspection depth for better readability in output
config.set("inspectDepth", 8) // Set mongosh to print nicely
// Connect to a specific database
db = db.getSiblingDB("version_example")
db.data.drop()
const nFields = 5
// Function to generate random field values based on a specified change percentage
function randomFieldValues(percentageToChange) {
const fieldVals = new Object();
for (let fldNo = 1; fldNo < nFields; fldNo++) {
if (Math.random() < (percentageToChange / 100)) {
fieldVals`field_${fldNo}`] = Math.floor(Math.random() * 100)
}
}
return fieldVals
}
// Loop to create and insert 10 records with random data into the 'data' collection
for (let id = 0; id < 10; id++) {
const record = randomFieldValues(100)
record._id = id
record.dateUpdated = new Date()
db.data.insertOne(record)
}
// Log the message indicating the data that will be printed next
console.log("ORIGINAL DATA")
console.table(db.data.find().toArray())
```
| (index) | _id | field_1 | field_2 | field_3 | field_4 | dateUpdated |
| ------- | ---- | ------- | ------- | ------- | ------- | ------------------------ |
| 0 | 0 | 34 | 49 | 19 | 74 | 2024-04-15T13:30:12.788Z |
| 1 | 1 | 13 | 9 | 43 | 4 | 2024-04-15T13:30:12.836Z |
| 2 | 2 | 51 | 30 | 96 | 93 | 2024-04-15T13:30:12.849Z |
| 3 | 3 | 29 | 44 | 21 | 85 | 2024-04-15T13:30:12.860Z |
| 4 | 4 | 41 | 35 | 15 | 7 | 2024-04-15T13:30:12.866Z |
| 5 | 5 | 0 | 85 | 56 | 28 | 2024-04-15T13:30:12.874Z |
| 6 | 6 | 85 | 56 | 24 | 78 | 2024-04-15T13:30:12.883Z |
| 7 | 7 | 27 | 23 | 96 | 25 | 2024-04-15T13:30:12.895Z |
| 8 | 8 | 70 | 40 | 40 | 30 | 2024-04-15T13:30:12.905Z |
| 9 | 9 | 69 | 13 | 13 | 9 | 2024-04-15T13:30:12.914Z |
Then, we modify the data recording the history as part of the update operation.
```javascript
const oldTime = new Date()
//We can make changes to these without history like so
sleep(500);
// Making the change and recording the OLD value
for (let id = 0; id < 10; id++) {
const newValues = randomFieldValues(30)
//Check if any changes
if (Object.keys(newValues).length) {
newValues.dateUpdated = new Date()
const previousValues = new Object()
for (let fieldName in newValues) {
previousValues[fieldName] = `$${fieldName}`
}
const existingHistory = { $ifNull: ["$history", []] }
const history = { $concatArrays: [[previousValues], existingHistory] }
newValues.history = history
db.data.updateOne({ _id: id }, [{ $set: newValues }])
}
}
console.log("NEW DATA")
db.data.find().toArray()
```
We now have records that look like this — with the current values but also an array reflecting any changes.
```
{
_id: 6,
field_1: 85,
field_2: 3,
field_3: 71,
field_4: 71,
dateUpdated: ISODate('2024-04-15T13:34:31.915Z'),
history: [
{
field_2: 56,
field_3: 24,
field_4: 78,
dateUpdated: ISODate('2024-04-15T13:30:12.883Z')
}
]
}
```
We can now use an aggregation pipeline to retrieve any prior version of each document. To do this, we first filter the history to include only changes up to the point in time we want. We then merge them together in order:
```javascript
//Get only history until point required
const filterHistory = { $filter: { input: "$history", cond: { $lt: ["$$this.dateUpdated", oldTime] } } }
//Merge them together and replace the top level document
const applyChanges = { $replaceRoot: { newRoot: { $mergeObjects: { $concatArrays: [["$$ROOT"], { $ifNull: [filterHistory, []] }] } } } }
// You can optionally add a $match here but you would normally be better to
// $match on the history fields at the start of the pipeline
const revertPipeline = [{ $set: { rewoundTO: oldTime } }, applyChanges]
//Show results
db.data.aggregate(revertPipeline).toArray()
```
```
{
_id: 6,
field_1: 85,
field_2: 56,
field_3: 24,
field_4: 78,
dateUpdated: ISODate('2024-04-15T13:30:12.883Z'),
history: [
{
field_2: 56,
field_3: 24,
field_4: 78,
dateUpdated: ISODate('2024-04-15T13:30:12.883Z')
}
],
rewoundTO: ISODate('2024-04-15T13:34:31.262Z')
},
```
This technique came about through discussing the needs of a MongoDB customer. They had exactly this use case to retain both current and history and to be able to query and retrieve any of them without having to maintain a full copy of the document. It is an ideal choice if changes are relatively small. It could also be adapted to only record a history entry if the field value is different, allowing you to compute deltas even when overwriting the whole record.
As a cautionary note, versioning inside a document like this will make the documents larger. It also means an ever-growing array of edits. If you believe there may be hundreds or thousands of changes, this technique is not suitable and the history should be written to a second document using a transaction. To do that, perform the update with findOneAndUpdate and return the fields you are changing from that call to then insert into a history collection.
This isn't intended as a step-by-step tutorial, although you can try the examples above and see how it works. It's one of many sophisticated data modeling
techniques you can use to build high-performance services on MongoDB and MongoDB Atlas. If you have a need for record versioning, you can use this. If not, then perhaps spend a little more time seeing what you can create with the aggregation pipeline, a Turing-complete data processing engine that runs alongside your data, saving you the time and cost of fetching it to the client to process. Learn more about [aggregation. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "",
"contentType": "Tutorial"
} | How to Maintain Multiple Versions of a Record in MongoDB (2024 Updates) | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/implementing-right-erasure-csfle | created | # Implementing Right to Erasure with CSFLE
The right to erasure, also known as the right to be forgotten, is a right granted to individuals under laws and regulations such as GDPR. This means that companies storing an individual's personal data must be able to delete it on request. Because this data can be spread across several systems, it can be technically challenging for these companies to identify and remove it from all places. Even if this is properly executed, there is also a risk that deleted data can be restored from backups in the future, potentially contributing to legal and financial risks.
This blog post addresses those challenges, demonstrating how you can make use of MongoDB's Client-Side Field Level Encryption to strengthen procedures for removing sensitive data.
>***Disclaimer**: We provide no guarantees that the solution and techniques described in this article will fulfill regulatory requirements around the right to erasure. Each organization needs to make their own determination on appropriate or sufficient measures to comply with various regulatory requirements such as GDPR.*
## What is crypto shredding?
Crypto shredding is a data destruction technique that consists of destroying the encryption keys that allow the data to be decrypted, thus making the data undecipherable. The example below gives a more in-depth explanation.
Imagine you are storing data for multiple users. You start by giving each user their own unique data encryption key (DEK), and mapping it to that customer. This is represented in the below diagram, where "User A" and "User B" each have their own key in the key store. This DEK can then be used to encrypt and decrypt any data related to the user in question.
Let's assume that we want to remove all data for User B. If we remove User B's DEK, we can no longer decrypt any of the data that was encrypted with it; all we have left in our data store is "junk" cipher text. As the diagram below illustrates, User A's data is unaffected, but we can no longer read User B's data.
## What is CSFLE?
With MongoDB’s Client-Side Field Level Encryption (CSFLE), applications can encrypt sensitive fields in documents prior to transmitting data to the server. This means that even when data is being used by the database in memory, it is never in plain text. The database only sees the encrypted data but still enables you to query it.
MongoDB CSFLE utilizes envelope encryption, which is the practice of encrypting plaintext data with a data key, which itself is in turn encrypted by a top level envelope key (also known as a "master key").
Envelope keys are usually managed by a Key Management Service (KMS). MongoDB CSFLE supports multiple KMSs, such as AWS KMS, GCP KMS, Azure KeyVault, and Keystores supporting the KMIP standard (e.g., Hashicorp Keyvault).
CSFLE can be used in either automatic mode or explicit mode — or a combination of both. Automatic mode enables you to perform encrypted read and write operations based on a defined encryption schema, avoiding the need for application code to specify how to encrypt or decrypt fields. This encryption schema is a JSON document that defines what fields need to be encrypted. Explicit mode refers to using the MongoDB driver's encryption library to manually encrypt or decrypt fields in your application.
In this article, we are going to use the explicit encryption technique to showcase how we can use crypto shredding techniques with CSFLE to implement (or augment) procedures to "forget" sensitive data. We'll be using AWS KMS to demonstrate this.
## Bringing it all together
With MongoDB as our database, we can use CSFLE to implement crypto shredding, so we can provide stronger guarantees around data privacy.
To demonstrate how you could implement this, we'll walk you through a demo application. The demo application is a python (Flask) web application with a front end, which exposes functionality for signup, login, and a data entry form. We have also added an "admin" page to showcase the crypto shredding related functionality. If you want to follow along, you can run the application yourself — you'll find the necessary code and instructions in GitHub.
When a user signs up, our application will generate a DEK for the user, then store the ID for the DEK along with other user details. Key generation is done via the `create_data_key` method on the `ClientEncryption` class, which we initialized earlier as `app.mongodb_encryption_client`. This encryption client is responsible for generating a DEK, which in this case will be encrypted by the envelope key. In our case, the encryption client is configured to use an envelope key from AWS KMS.
```python
# flaskapp/db_queries.py
@aws_credential_handler
def create_key(userId):
data_key_id = \
app.mongodb_encryption_client.create_data_key(kms_provider,
master_key, key_alt_names=userId])
return data_key_id
```
We can then use this method when saving the user.
```python
# flaskapp/user.py
def save(self):
dek_id = db_queries.create_key(self.username)
result = app.mongodb[db_name].user.insert_one(
{
"username": self.username,
"password_hash": self.password_hash,
"dek_id": dek_id,
"createdAt": datetime.now(),
}
)
if result:
self.id = result.inserted_id
return True
else:
return False
```
Once signed up, the user can then log in, after which they can enter data via a form shown in the screenshot below. This data has a "name" and a "value", allowing the user to store arbitrary key-value pairs.
![demo application showing a form to add data
In the database, we'll store this data in a MongoDB collection called “data,” in documents structured like this:
```json
{
"name": "shoe size",
"value": "10",
"username": "tom"
}
```
For the sake of this demonstration, we have chosen to encrypt the value and username fields from this document. Those fields will be encrypted using the DEK created on signup belonging to the logged in user.
```python
# flaskapp/db_queries.py
# Fields to encrypt, and the algorithm to encrypt them with
ENCRYPTED_FIELDS = {
# Deterministic encryption for username, because we need to search on it
"username": Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
# Random encryption for value, as we don't need to search on it
"value": Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Random,
}
```
The insert_data function then loops over the fields we want to encrypt and the algorithm we're using for each.
```python
# flaskapp/db_queries.py
def insert_data(document):
document"username"] = current_user.username
# Loop over the field names (and associated algorithm) we want to encrypt
for field, algo in ENCRYPTED_FIELDS.items():
# if the field exists in the document, encrypt it
if document.get(field):
document[field] = encrypt_field(document[field], algo)
# Insert document (now with encrypted fields) to the data collection
app.data_collection.insert_one(document)
```
If the specified fields exist in the document, this will call our encrypt_field function to perform the encryption using the specified algorithm.
```python
# flaskapp/db_queries.py
# Encrypt a single field with the given algorithm
@aws_credential_handler
def encrypt_field(field, algorithm):
try:
field = app.mongodb_encryption_client.encrypt(
field,
algorithm,
key_alt_name=current_user.username,
)
return field
except pymongo.errors.EncryptionError as ex:
# Catch this error in case the DEK doesn't exist. Log a warning and
# re-raise the exception
if "not all keys requested were satisfied" in ex._message:
app.logger.warn(
f"Encryption failed: could not find data encryption key for user: {current_user.username}"
)
raise ex
```
Once data is added, it will be shown in the web app:
![demo application showing the data added in the previous step
Now let's see what happens if we delete the DEK. To do this, we can head over to the admin page. This admin page should only be provided to individuals that have a need to manage keys, and we have some choices:
We're going to use the "Delete data encryption key" option, which will remove the DEK, but leave all data entered by the user intact. After that, the application will no longer be able to retrieve the data that was stored via the form. When trying to retrieve the data for the logged in user, an error will be thrown
**Note**: After we do perform the data key deletion, the web application may still be able to decrypt and show the data for a short period of time before its cache expires — this takes a maximum of 60 seconds.
But what is actually left in the database? To get a view of this, you can go back to the Admin page and choose "Fetch data for all users." In this view, we won't throw an exception if we can't decrypt the data. We'll just show exactly what we have stored in the database. Even though we haven't actually deleted the user's data, because the data encryption key no longer exists, all we can see now is cipher text for the encrypted fields "username" and "value".
And here is the code we're using to fetch the data in this view. As you can see, we use very similar logic to the encrypt method shown earlier. We perform a find operation without any filters to retrieve all the data from our data collection. We'll then loop over our ENCRYPTED_FIELDS dictionary to see which fields need to be decrypted.
```python
# flaskapp/db_queries.py
def fetch_all_data_unencrypted(decrypt=False):
results = list(app.data_collection.find())
if decrypt:
for field in ENCRYPTED_FIELDS.keys():
for result in results:
if result.get(field):
resultfield], result["encryption_succeeded"] = decrypt_field(result[field])
return results
```
The decrypt_field function is called for each field to be decrypted, but in this case we'll catch the error if we cannot successfully decrypt it due to a missing DEK.
```python
# flaskapp/db_queries.py
# Try to decrypt a field, returning a tuple of (value, status). This will be either (decrypted_value, True), or (raw_cipher_text, False) if we couldn't decrypt
def decrypt_field(field):
try:
# We don't need to pass the DEK or algorithm to decrypt a field
field = app.mongodb_encryption_client.decrypt(field)
return field, True
# Catch this error in case the DEK doesn't exist.
except pymongo.errors.EncryptionError as ex:
if "not all keys requested were satisfied" in ex._message:
app.logger.warn(
"Decryption failed: could not find data encryption key to decrypt the record."
)
# If we can't decrypt due to missing DEK, return the "raw" value.
return field, False
raise ex
```
We can also use the `mongosh` shell to check directly in the database, just to prove that there's nothing there we can read.
![mongosh
At this point, savvy readers may be asking the question, "But what if we restore the database from a backup?" If we want to prevent this, we can use two separate database clusters in our application — one for storing data and one for storing DEKs (the "key vault"). This theory is applied in the sample application, which requires you to specify two MongoDB connection strings — one for data and one for the key vault. If we use separate clusters, it decouples the restoration of backups for application data and the key vault; restoring a backup on the data cluster won't restore any DEKs which have been deleted from the key vault cluster.
## Conclusion
In this blog post, we've demonstrated how MongoDB's Client-Side Field Level Encryption can be used to simplify the task of "forgetting" certain data. With a single "delete data key" operation, we can effectively forget data which may be stored across different databases, collections, backups, and logs. In a real production application, we may wish to delete all the user's data we can find, on top of removing their DEK. This "defense in depth" approach helps us to ensure that the data is really gone. By implementing crypto shredding, the impact is much smaller if a delete operation fails, or misses some data that should have been wiped.
You can find more details about MongoDB's Client-Side Field Level Encryption in our documentation. If you have questions, feel free to make a post on our community forums. | md | {
"tags": [
"MongoDB",
"Python",
"Flask"
],
"pageDescription": "Learn how to make use of MongoDB's Client-Side Field Level Encryption to strengthen procedures for removing sensitive data.",
"contentType": "Article"
} | Implementing Right to Erasure with CSFLE | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/using-openai-latest-embeddings-rag-system-mongodb | created | # Using OpenAI Latest Embeddings In A RAG System With MongoDB
Using OpenAI Latest Embeddings in a RAG System With MongoDB
-----------------------------------------------------------
## Introduction
OpenAI recently released new embeddings and moderation models. This article explores the step-by-step implementation process of utilizing one of the new embedding models: text-embedding-3-small within a retrieval-augmented generation (RAG) system powered by MongoDB Atlas Vector Database.
## What is an embedding?
**An embedding is a mathematical representation of data within a high-dimensional space, typically referred to as a vector space.** Within a vector space, vector embeddings are positioned based on their semantic relationships, concepts, or contextual relevance. This spatial relationship within the vector space effectively mirrors the associations in the original data, making embeddings useful in various artificial intelligence domains, such as machine learning, deep learning, generative AI (GenAI), natural language processing (NLP), computer vision, and data science.
Creating an embedding involves mapping data related to entities like words, products, audio, and user profiles into a numerical format. In NLP, this process involves transforming words and phrases into vectors, converting their semantic meanings into a machine-readable form.
AI applications that utilize RAG architecture design patterns leverage embeddings to augment the large language model (LLM) generative process by retrieving relevant information from a data store such as MongoDB Atlas. By comparing embeddings of the query with those in the database, RAG systems incorporate external knowledge, improving the relevance and accuracy of the responses.
. This dataset is a collection of movie-related details that include attributes such as the title, release year, cast, and plot. A unique feature of this dataset is the plot_embedding field for each movie. These embeddings are generated using OpenAI's text-embedding-ada-002 model.
After loading the dataset, it is converted into a pandas DataFrame; this data format simplifies data manipulation and analysis. Display the first five rows using the head(5) function to gain an initial understanding of the data. This preview provides a snapshot of the dataset's structure and its various attributes, such as genres, cast, and plot embeddings.
```python
from datasets import load_dataset
import pandas as pd
#
dataset = load_dataset("AIatMongoDB/embedded_movies")
# Convert the dataset to a pandas dataframe
dataset_df = pd.DataFrame(dataset'train'])
dataset_df.head(5)
```
**Import libraries:**
- from datasets import load_dataset: imports the load_dataset function from the Hugging Face datasets library; this function is used to load datasets from Hugging Face's extensive dataset repository.
- import pandas as pd: imports the pandas library, a fundamental tool in Python for data manipulation and analysis, using the alias pd.
**Load the dataset:**
- `dataset = load_dataset("AIatMongoDB/embedded_movies")`: Loads the dataset named `embedded_movies` from the Hugging Face datasets repository; this dataset is provided by MongoDB and is specifically designed for embedding and retrieval tasks.
**Convert dataset to pandas DataFrame:**
- `dataset_df = pd.DataFrame(dataset\['train'\])`: converts the training portion of the dataset into a pandas DataFrame.
**Preview the dataset:**
- `dataset_df.head(5)`: displays the first five entries of the DataFrame.
## Step 3: data cleaning and preparation
The next step cleans the data and prepares it for the next stage, which creates a new embedding data point using the new OpenAI embedding model.
```python
# Remove data point where plot column is missing
dataset_df = dataset_df.dropna(subset=['plot'])
print("\\nNumber of missing values in each column after removal:")
print(dataset_df.isnull().sum())
# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with the new OpenAI embedding Model "text-embedding-3-small"
dataset_df = dataset_df.drop(columns=['plot_embedding'])
dataset_df.head(5)
```
**Removing incomplete data:**
- `dataset_df = dataset_df.dropna(subset=\['plot'\])`: ensures data integrity by removing any data point/row where the “plot” column is missing data; since “plot” is a vital component for the new embeddings, its completeness affects the retrieval performance.
**Preparing for new embeddings:**
- `dataset_df = dataset_df.drop(columns=\['plot_embedding'\])`: remove the existing “plot_embedding” column; new embeddings using OpenAI's "text-embedding-3-small" model, the existing embeddings (generated by a different model) are no longer needed.
- `dataset_df.head(5)`: allows us to preview the first five rows of the updated datagram to ensure the removal of the “plot_embedding” column and confirm data readiness.
## Step 4: create embeddings with OpenAI
This stage focuses on generating new embeddings using OpenAI's advanced model.
This demonstration utilises a Google Colab Notebook, where environment variables are configured explicitly within the notebook's Secrets section and accessed using the user data module. In a production environment, the environment variables that store secret keys are usually stored in a .env file or equivalent.
An [OpenAI API key is required to ensure the successful completion of this step. More details on OpenAI's embedding models can be found on the official site.
```
python
import openai
from google.colab import userdata
openai.api_key = userdata.get("open_ai")
EMBEDDING_MODEL = "text-embedding-3-small"
def get_embedding(text):
"""Generate an embedding for the given text using OpenAI's API."""
# Check for valid input
if not text or not isinstance(text, str):
return None
try:
# Call OpenAI API to get the embedding
embedding = openai.embeddings.create(input=text, model=EMBEDDING_MODEL).data0].embedding
return embedding
except Exception as e:
print(f"Error in get_embedding: {e}")
return None
dataset_df["plot_embedding_optimised"] = dataset_df['plot'].apply(get_embedding)
dataset_df.head()
```
**Setting up OpenAI API:**
- Imports and API key: Import the openai library and retrieve the API key from Google Colab's userdata.
- Model selection: Set the variable EMBEDDING_MODEL to text-embedding-3-small.
**Embedding generation function:**
- get_embedding: converts text into embeddings; it takes both the string input and the embedding model as arguments and generates the text embedding using the specified OpenAI model.
- Input validation and API call: validates the input to ensure it's a valid string, then calls the OpenAI API to generate the embedding.
- If the process encounters any issues, such as invalid input or API errors, the function returns None.
- Applying to dataset: The function get_embedding is applied to the “plot” column of the DataFrame dataset_df. Each plot is transformed into an optimized embedding data stored in a new column, plot_embedding_optimised.
- Preview updated dataset: dataset_df.head() displays the first few rows of the DataFrame.
## Step 5: Vector database setup and data ingestion
MongoDB acts as both an operational and a vector database. It offers a database solution that efficiently stores, queries, and retrieves vector embeddings — the advantages of this lie in the simplicity of database maintenance, management, and cost.
To create a new MongoDB database, set up a database cluster:
1. Register for a [free MongoDB Atlas account, or for existing users, sign into MongoDB Atlas.
2. Select the “Database” option on the left-hand pane, which will navigate to the Database Deployment page, where there is a deployment specification of any existing cluster. Create a new database cluster by clicking on the "+Create" button.
.
1\. Navigate to the movie_collection in the movie database. At this point, the database is populated with several documents containing information about various movies, particularly within the action and romance genres.
for vector search.
- type: This field specifies the data type the index will handle. In this case, it is set to `vector`, indicating that this index is specifically designed for handling and optimizing searches over vector data.
for the implementation code.
In practical scenarios, lower-dimension embeddings that can maintain a high level of semantic capture are beneficial for Generative AI applications where the relevance and speed of retrieval are crucial to user experience and value.
**Further advantages of lower embedding dimensions with high performance are:**
- Improved user experience and relevance: Relevance of information retrieval is optimized, directly impacting the user experience and value in AI-driven applications.
- Comparison with previous model: In contrast to the previous ada v2 model, which only provided embeddings at a dimension of 1536, the new models offer more flexibility. The text-embedding-3-large extends this flexibility further with dimensions of 256, 1024, and 3072.
- Efficiency in data processing: The availability of lower-dimensional embeddings aids in more efficient data processing, reducing computational load without compromising the quality of results.
- Resource optimization: Lower-dimensional embeddings are resource-optimized, beneficial for applications running on limited memory and processing power, and for reducing overall computational costs.
Future articles will cover advanced topics, such as benchmarking embedding models and handling migration of embeddings.
______________________________________________________________________
## Frequently asked questions
### 1. What is an embedding?
An embedding is a technique where data — such as words, audio, or images — is transformed into mathematical representations, vectors of real numbers in a high-dimensional space referred to as a vector space. This process allows AI models to understand and process complex data by capturing the underlying semantic relationships and contextual nuances.
### 2. What is a vector store in the context of AI and databases?
A vector store, such as a MongoDB Atlas database, is a storage mechanism for vector embeddings. It allows efficient storing, indexing, and retrieval of vector data, essential for tasks like semantic search, recommendation systems, and other AI applications.
### 3. How does a retrieval-augmented generation (RAG) system utilize embeddings?
A RAG system uses embeddings to improve the response generated by a large language model (LLM) by retrieving relevant information from a knowledge store based on semantic similarities. The query embedding is compared with the knowledge store (database record) embedding to fetch contextually similar and relevant data, which improves the accuracy and relevance of generated responses by the LLM to the user’s query.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdae0dd2e997f2ffb/65bb84bd8fc5c0be070bdc73/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted4bbac5068dcb4c/65bb84bd63dd3a0334963206/image12.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt33548488915c749d/65bb84befd23e5ad9c7daf92/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4fa575a1c29ef2d2/65bb84bd30d47e0ce7523376/image6.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta937aecb6255a6c6/65bb84be1f10e80b6d4bae47/image3.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt197dc2ffe0b9b8b0/65bb84bee5c1f3217ad96ce8/image10.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf3736d4623ccad02/65bb84bdc6000531b5d5c021/image7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt497f84d6aa7eb7a7/65bb84be461c13598eb900f8/image11.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bfa203e05eac169/65bb84bda0c8781b0a5934db/image1.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb5c63f5e8ec2ca3c/65bb84bd292a0e5a2f87e7c7/image9.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt904b4cb46ada9153/65bb84be292a0e7daa87e7cb/image5.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3f7abad9e10b8b24/65bb84bed2067bce2d8c6e6c/image8.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "Explore OpenAI's latest embeddings in RAG systems with MongoDB. Learn to enhance AI responses in NLP and GenAI with practical examples.",
"contentType": "Tutorial"
} | Using OpenAI Latest Embeddings In A RAG System With MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/crud-changetracking-mongodb-provider-for-efcore | created | # MongoDB Provider for EF Core Tutorial: Building an App with CRUD and Change Tracking
Entity Framework (EF) has been part of .NET for a long time (since .NET 3.51) and is a popular object relational mapper (ORM) for many applications. EF has evolved into EF Core alongside the evolution of .NET. EF Core supports a number of different database providers and can now be used with MongoDB with the help of the MongoDB Provider for Entity Framework Core.
In this tutorial, we will look at how you can build a car booking application using the new MongoDB Provider for EF Core that will support create, read, update, and delete operations (CRUD) as well as change tracking, which helps to automatically update the database and only the fields that have changed.
A car booking system is a good example to explore the benefits of using EF Core with MongoDB because there is a need to represent a diverse range of entities. There will be entities like cars with their associated availability status and location, and bookings including the associated car.
As the system evolves and grows, ensuring data consistency can become challenging. Additionally, as users interact with the system, partial updates to data entities — like booking details or car specifications — will happen more and more frequently. Capturing and efficiently handling these updates is paramount for good system performance and data integrity.
## Prerequisites ##
In order to follow along with this tutorial, you are going to need a few things:
- .NET 7.0.
- Basic knowledge of ASP.NET MVC and C#.
- Free MongoDB Atlas account and free tier
cluster.
If you just want to see example code, you can view the full code in the GitHub repository.
## Create the project
ASP.NET Core is a very flexible web framework, allowing you to scaffold out different types of web applications that have slight differences in terms of their UI or structure.
For this tutorial, we are going to create an MVC project that will make use of static files and controllers. There are other types of front end you could use, such as React, but MVC with .cshtml views is the most commonly used.
To create the project, we are going to use the .NET CLI:
```bash
dotnet new mvc -o SuperCarBookingSystem
```
Because we used the CLI, although easier, it only creates the csproj file and not the solution file which allows us to open it in Visual Studio, so we will fix that.
```bash
cd SuperCarBookingSystem
dotnet new sln
dotnet sln .\SuperCarBookingSystem.sln add .\SuperCarBookingSystem.csproj
```
## Add the NuGet packages
Now that we have the new project created, we will want to go ahead and add the required NuGet packages. Either using the NuGet Package Manager or using the .NET CLI commands below, add the MongoDB MongoDB.EntityFrameworkCore and Microsoft.EntityFrameworkCore packages.
```bash
dotnet add package MongoDB.EntityFrameworkCore --version 7.0.0-preview.1
dotnet add package Microsoft.EntityFrameworkCore
```
> At the time of writing, the MongoDB.EntityFrameworkCore is in preview, so if using the NuGet Package Manager UI inside Visual Studio, be sure to tick the “include pre-release” box or you won’t get any results when searching for it.
## Create the models
Before we can start implementing the new packages we just added, we need to create the models that represent the entities we want in our car booking system that will of course be stored in MongoDB Atlas as documents.
In the following subsections, we will create the following models:
- Car
- Booking
- MongoDBSettings
### Car
First, we need to create our car model that will represent the cars that are available to be booked in our system.
1. Create a new class in the Models folder called Car.
2. Add the following code:
```csharp
using MongoDB.Bson;
using MongoDB.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
namespace SuperCarBookingSystem.Models
{
Collection("cars")]
public class Car
{
public ObjectId Id { get; set; }
[Required(ErrorMessage = "You must provide the make and model")]
[Display(Name = "Make and Model")]
public string? Model { get; set; }
[Required(ErrorMessage = "The number plate is required to identify the vehicle")]
[Display(Name = "Number Plate")]
public string NumberPlate { get; set; }
[Required(ErrorMessage = "You must add the location of the car")]
public string? Location { get; set; }
public bool IsBooked { get; set; } = false;
}
}
```
The collection attribute before the class tells the application what collection inside the database we are using. This allows us to have differing names or capitalization between our class and our collection should we want to.
### Booking
We also need to create a booking class to represent any bookings we take in our system.
1. Create a new class inside the Models folder called Booking.
2. Add the following code to it:
```csharp
using MongoDB.Bson;
using MongoDB.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;
namespace SuperCarBookingSystem.Models
{
[Collection("bookings")]
public class Booking
{
public ObjectId Id { get; set; }
public ObjectId CarId { get; set; }
public string CarModel { get; set; }
[Required(ErrorMessage = "The start date is required to make this booking")]
[Display(Name = "Start Date")]
public DateTime StartDate { get; set; }
[Required(ErrorMessage = "The end date is required to make this booking")]
[Display(Name = "End Date")]
public DateTime EndDate { get; set; }
}
}
```
### MongoDBSettings
Although it won’t be a document in our database, we need a model class to store our MongoDB-related settings so they can be used across the application.
1. Create another class in Models called MongoDBSettings.
2. Add the following code:
```csharp
public class MongoDBSettings
{
public string AtlasURI { get; set; }
public string DatabaseName { get; set; }
}
```
## Setting up EF Core
This is the exciting part. We are going to start to implement EF Core and take advantage of the new MongoDB Provider. If you are used to working with EF Core already, some of this will be familiar to you.
### CarBookingDbContext
1. In a location of your choice, create a class called CarBookingDbContext. I placed it inside a new folder called Services.
2. Replace the code inside the namespace with the following:
```csharp
using Microsoft.EntityFrameworkCore;
using SuperCarBookingSystem.Models;
namespace SuperCarBookingSystem.Services
{
public class CarBookingDbContext : DbContext
{
public DbSet Cars { get; init; }
public DbSet Bookings { get; init; }
public CarBookingDbContext(DbContextOptions options)
: base(options)
{
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
modelBuilder.Entity();
modelBuilder.Entity();
}
}
}
```
If you are used to EF Core, this will look familiar. The class extends the DbContext and we create DbSet properties that store the models that will also be present in the database. We also override the OnModelCreating method. You may notice that unlike when using SQL Server, we don’t call .ToTable(). We could call ToCollection instead but this isn’t required here as we specify the collection using attributes on the classes.
### Add connection string and database details to appsettings
Earlier, we created a MongoDBSettings model, and now we need to add the values that the properties map to into our appsettings.
1. In both appsettings.json and appsettings.Development.json, add the following new section:
```json
"MongoDBSettings": {
"AtlasURI": "mongodb+srv://:@",
"DatabaseName": "cargarage"
}
```
2. Replace the Atlas URI with your own [connection string from Atlas.
### Updating program.cs
Now we have configured our models and DbContext, it is time to add them to our program.cs file.
After the existing line `builder.Services.AddControllersWithViews();`, add the following code:
```csharp
var mongoDBSettings = builder.Configuration.GetSection("MongoDBSettings").Get();
builder.Services.Configure(builder.Configuration.GetSection("MongoDBSettings"));
builder.Services.AddDbContext(options =>
options.UseMongoDB(mongoDBSettings.AtlasURI ?? "", mongoDBSettings.DatabaseName ?? ""));
```
## Creating the services
Now, it is time to add the services we will use to talk to the database via the CarBookingDbContext we created. For each service, we will create an interface and the class that implements it.
### ICarService and CarService
The first interface and service we will implement is for carrying out the CRUD operations on the cars collection. This is known as the repository pattern. You may see people interact with the DbContext directly. But most people use this pattern, which is why we are including it here.
1. If you haven’t already, create a Services folder to store our new classes.
2. Create an ICarService interface and add the following code for the methods we will implement:
```csharp
using MongoDB.Bson;
using SuperCarBookingSystem.Models;
namespace SuperCarBookingSystem.Services
{
public interface ICarService
{
IEnumerable GetAllCars();
Car? GetCarById(ObjectId id);
void AddCar(Car newCar);
void EditCar(Car updatedCar);
void DeleteCar(Car carToDelete);
}
}
```
3. Create a CarService class file.
4. Update the CarService class declaration so it implements the ICarService we just created:
```csharp
using Microsoft.EntityFrameworkCore;
using MongoDB.Bson;
using MongoDB.Driver;
using SuperCarBookingSystem.Models;
namespace SuperCarBookingSystem.Services
{
public class CarService : ICarService
{
```
5. This will cause a red squiggle to appear underneath ICarService as we haven’t implemented all the methods yet, but we will implement the methods one by one.
6. Add the following code after the class declaration that adds a local CarBookingDbContext object and a constructor that gets an instance of the DbContext via dependency injection.
```csharp
private readonly CarBookingDbContext _carDbContext;
public CarService(CarBookingDbContext carDbContext)
{
_carDbContext = carDbContext;
}
```
7. Next, we will implement the GetAllCars method so add the following code:
```csharp
public IEnumerable GetAllCars()
{
return _carDbContext.Cars.OrderBy(c => c.Id).AsNoTracking().AsEnumerable();
}
```
The id property here maps to the _id field in our document which is a special MongoDB ObjectId type and is auto-generated when a new document is created. But what is useful about the _id property is that it can actually be used to order documents because of how it is generated under the hood.
If you haven’t seen it before, the `AsNoTracking()` method is part of EF Core and prevents EF tracking changes you make to an object. This is useful for reads when you know no changes are going to occur.
8. Next, we will implement the method to get a specific car using its Id property:
```csharp
public Car? GetCarById(ObjectId id)
{
return _carDbContext.Cars.FirstOrDefault(c => c.Id == id);
}
```
Then, we will add the AddCar implementation:
```csharp
public void AddCar(Car car)
{
_carDbContext.Cars.Add(car);
_carDbContext.ChangeTracker.DetectChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
_carDbContext.SaveChanges();
}
```
In a production environment, you might want to use something like ILogger to track these changes rather than printing to the console. But this will allow us to clearly see that a new entity has been added, showing change tracking in action.
9. EditCar is next:
```csharp
public void EditCar(Car car)
{
var carToUpdate = _carDbContext.Cars.FirstOrDefault(c => c.Id == car.Id);
if(carToUpdate != null)
{
carToUpdate.Model = car.Model;
carToUpdate.NumberPlate = car.NumberPlate;
carToUpdate.Location = car.Location;
carToUpdate.IsBooked = car.IsBooked;
_carDbContext.Cars.Update(carToUpdate);
_carDbContext.ChangeTracker.DetectChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
_carDbContext.SaveChanges();
}
else
{
throw new ArgumentException("The car to update cannot be found. ");
}
}
```
Again, we add a call to print out information from change tracking as it will show that the new EF Core Provider, even when using MongoDB as the database, is able to track modifications.
10. Finally, we need to implement DeleteCar:
```csharp
public void DeleteCar(Car car)
{
var carToDelete = _carDbContext.Cars.Where(c => c.Id == car.Id).FirstOrDefault();
if(carToDelete != null) {
_carDbContext.Cars.Remove(carToDelete);
_carDbContext.ChangeTracker.DetectChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
_carDbContext.SaveChanges();
}
else {
throw new ArgumentException("The car to delete cannot be found.");
}
}
```
### IBookingService and BookingService
Next up is our IBookingService and BookingService.
1. Create the IBookingService interface and add the following methods:
```csharp
using MongoDB.Bson;
using SuperCarBookingSystem.Models;
namespace SuperCarBookingSystem.Services
{
public interface IBookingService
{
IEnumerable GetAllBookings();
Booking? GetBookingById(ObjectId id);
void AddBooking(Booking newBooking);
void EditBooking(Booking updatedBooking);
void DeleteBooking(Booking bookingToDelete);
}
}
```
2. Create the BookingService class, and replace your class with the following code that implements all the methods:
```csharp
using Microsoft.EntityFrameworkCore;
using MongoDB.Bson;
using SuperCarBookingSystem.Models;
namespace SuperCarBookingSystem.Services
{
public class BookingService : IBookingService
{
private readonly CarBookingDbContext _carDbContext;
public BookingService(CarBookingDbContext carDBContext)
{
_carDbContext = carDBContext;
}
public void AddBooking(Booking newBooking)
{
var bookedCar = _carDbContext.Cars.FirstOrDefault(c => c.Id == newBooking.CarId);
if (bookedCar == null)
{
throw new ArgumentException("The car to be booked cannot be found.");
}
newBooking.CarModel = bookedCar.Model;
bookedCar.IsBooked = true;
_carDbContext.Cars.Update(bookedCar);
_carDbContext.Bookings.Add(newBooking);
_carDbContext.ChangeTracker.DetectChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
_carDbContext.SaveChanges();
}
public void DeleteBooking(Booking booking)
{
var bookedCar = _carDbContext.Cars.FirstOrDefault(c => c.Id == booking.CarId);
bookedCar.IsBooked = false;
var bookingToDelete = _carDbContext.Bookings.FirstOrDefault(b => b.Id == booking.Id);
if(bookingToDelete != null)
{
_carDbContext.Bookings.Remove(bookingToDelete);
_carDbContext.Cars.Update(bookedCar);
_carDbContext.ChangeTracker.DetectChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
_carDbContext.SaveChanges();
}
else
{
throw new ArgumentException("The booking to delete cannot be found.");
}
}
public void EditBooking(Booking updatedBooking)
{
var bookingToUpdate = _carDbContext.Bookings.FirstOrDefault(b => b.Id == updatedBooking.Id);
if (bookingToUpdate != null)
{
bookingToUpdate.StartDate = updatedBooking.StartDate;
bookingToUpdate.EndDate = updatedBooking.EndDate;
_carDbContext.Bookings.Update(bookingToUpdate);
_carDbContext.ChangeTracker.DetectChanges();
_carDbContext.SaveChanges();
Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);
}
else
{
throw new ArgumentException("Booking to be updated cannot be found");
}
}
public IEnumerable GetAllBookings()
{
return _carDbContext.Bookings.OrderBy(b => b.StartDate).AsNoTracking().AsEnumerable();
}
public Booking? GetBookingById(ObjectId id)
{
return _carDbContext.Bookings.AsNoTracking().FirstOrDefault(b => b.Id == id);
}
}
}
```
This code is very similar to the code for the CarService class but for bookings instead.
### Adding them to Dependency Injection
The final step for the services is to add them to the dependency injection container.
Inside Program.cs, add the following code after the code we added there earlier:
```csharp
builder.Services.AddScoped();
builder.Services.AddScoped();
```
## Creating the view models
Before we implement the front end, we need to add the view models that will act as a messenger between our front and back ends where required. Even though our application is quite simple, implementing the view model is still good practice as it helps decouple the pieces of the app.
### CarListViewModel
The first one we will add is the CarListViewModel. This will be used as the model in our Razor page later on for listing cars in our database.
1. Create a new folder in the root of the project called ViewModels.
2. Add a new class called CarListViewModel.
3. Add `public IEnumerable Cars { get; set; }` inside your class.
### CarAddViewModel
We also want a view model that can be used by the Add view we will add later.
1. Inside the ViewModels folder, create a new class called
CarAddViewModel.
2. Add `public Car? Car { get; set; }`.
### BookingListViewModel
Now, we want to do something very similar for bookings, starting with BookingListViewModel.
1. Create a new class in the ViewModels folder called
BookingListViewModel.
2. Add `public IEnumerable Bookings { get; set; }`.
### BookingAddViewModel
Finally, we have our BookingAddViewModel.
Create the class and add the property `public Booking? Booking { get; set; }` inside the class.
### Adding to _ViewImports
Later on, we will be adding references to our models and viewmodels in the views. In order for the application to know what they are, we need to add references to them in the _ViewImports.cshtml file inside the Views folder.
There will already be some references in there, including TagHelpers, so we want to add references to our .Models and .ViewModels folders. When added, it will look something like below, just with your application name instead.
```csharp
@using
@using .Models
@using .ViewModels
```
## Creating the controllers
Now we have the backend implementation and the view models we will refer to, we can start working toward the front end.
We will be creating two controllers: one for Car and one for Booking.
### CarController
The first controller we will add is for the car.
1. Inside the existing Controllers folder, add a new controller. If
using Visual Studio, use the MVC Controller - Empty controller
template.
2. Add a local ICarService object and a constructor that fetches it
from dependency injection:
```csharp
private readonly ICarService _carService;
public CarController(ICarService carService)
{
_carService = carService;
}
```
3. Depending on what your scaffolded controller came with, either
create or update the Index function with the following:
```csharp
public IActionResult Index()
{
CarListViewModel viewModel = new()
{
Cars = _carService.GetAllCars(),
};
return View(viewModel);
}
```
For the other CRUD operations — so create, update, and delete — we will have two methods for each: one is for Get and the other is for Post.
4. The HttpGet for Add will be very simple as it doesn’t need to pass
any data around:
```csharp
public IActionResult Add()
{
return View();
}
```
5. Next, add the Add method that will be called when a new car is requested to be added:
```csharp
HttpPost]
public IActionResult Add(CarAddViewModel carAddViewModel)
{
if(ModelState.IsValid)
{
Car newCar = new()
{
Model = carAddViewModel.Car.Model,
Location = carAddViewModel.Car.Location,
NumberPlate = carAddViewModel.Car.NumberPlate
};
_carService.AddCar(newCar);
return RedirectToAction("Index");
}
return View(carAddViewModel);
}
```
6. Now, we will add the code for editing a car:
```csharp
public IActionResult Edit(string id)
{
if(id == null)
{
return NotFound();
}
var selectedCar = _carService.GetCarById(new ObjectId(id));
return View(selectedCar);
}
[HttpPost]
public IActionResult Edit(Car car)
{
try
{
if(ModelState.IsValid)
{
_carService.EditCar(car);
return RedirectToAction("Index");
}
else
{
return BadRequest();
}
}
catch (Exception ex)
{
ModelState.AddModelError("", $"Updating the car failed, please try again! Error: {ex.Message}");
}
return View(car);
}
```
7. Finally, we have Delete:
```csharp
public IActionResult Delete(string id) {
if (id == null)
{
return NotFound();
}
var selectedCar = _carService.GetCarById(new ObjectId(id));
return View(selectedCar);
}
[HttpPost]
public IActionResult Delete(Car car)
{
if (car.Id == null)
{
ViewData["ErrorMessage"] = "Deleting the car failed, invalid ID!";
return View();
}
try
{
_carService.DeleteCar(car);
TempData["CarDeleted"] = "Car deleted successfully!";
return RedirectToAction("Index");
}
catch (Exception ex)
{
ViewData["ErrorMessage"] = $"Deleting the car failed, please try again! Error: {ex.Message}";
}
var selectedCar = _carService.GetCarById(car.Id);
return View(selectedCar);
}
```
### BookingController
Now for the booking controller. This is very similar to the CarController but it has a reference to both the car and booking service as we need to associate a car with a booking. This is because at the moment, the EF Core Provider doesn’t support relationships between entities so we can relate entities in a different way. You can view the roadmap on the [GitHub repo, however.
1. Create another empty MVC Controller called BookingController.
2. Paste the following code replacing the current class:
```csharp
public class BookingController : Controller
{
private readonly IBookingService _bookingService;
private readonly ICarService _carService;
public BookingController(IBookingService bookingService, ICarService carService)
{
_bookingService = bookingService;
_carService = carService;
}
public IActionResult Index()
{
BookingListViewModel viewModel = new BookingListViewModel()
{
Bookings = _bookingService.GetAllBookings()
};
return View(viewModel);
}
public IActionResult Add(string carId)
{
var selectedCar = _carService.GetCarById(new ObjectId(carId));
BookingAddViewModel bookingAddViewModel = new BookingAddViewModel();
bookingAddViewModel.Booking = new Booking();
bookingAddViewModel.Booking.CarId = selectedCar.Id;
bookingAddViewModel.Booking.CarModel = selectedCar.Model;
bookingAddViewModel.Booking.StartDate = DateTime.UtcNow;
bookingAddViewModel.Booking.EndDate = DateTime.UtcNow.AddDays(1);
return View(bookingAddViewModel);
}
HttpPost]
public IActionResult Add(BookingAddViewModel bookingAddViewModel)
{
Booking newBooking = new()
{
CarId = bookingAddViewModel.Booking.CarId,
StartDate = bookingAddViewModel.Booking.StartDate,
EndDate = bookingAddViewModel.Booking.EndDate,
};
_bookingService.AddBooking(newBooking);
return RedirectToAction("Index");
}
public IActionResult Edit(string Id)
{
if(Id == null)
{
return NotFound();
}
var selectedBooking = _bookingService.GetBookingById(new ObjectId(Id));
return View(selectedBooking);
}
[HttpPost]
public IActionResult Edit(Booking booking)
{
try
{
var existingBooking = _bookingService.GetBookingById(booking.Id);
if (existingBooking != null)
{
_bookingService.EditBooking(existingBooking);
return RedirectToAction("Index");
}
else
{
ModelState.AddModelError("", $"Booking with ID {booking.Id} does not exist!");
}
}
catch (Exception ex)
{
ModelState.AddModelError("", $"Updating the booking failed, please try again! Error: {ex.Message}");
}
return View(booking);
}
public IActionResult Delete(string Id)
{
if (Id == null)
{
return NotFound();
}
var selectedBooking = _bookingService.GetBookingById(Id);
return View(selectedBooking);
}
[HttpPost]
public IActionResult Delete(Booking booking)
{
if(booking.Id == null)
{
ViewData["ErrorMessage"] = "Deleting the booking failed, invalid ID!";
return View();
}
try
{
_bookingService.DeleteBooking(booking);
TempData["BookingDeleted"] = "Booking deleted successfully";
return RedirectToAction("Index");
}
catch (Exception ex)
{
ViewData["ErrorMessage"] = $"Deleting the booking failed, please try again! Error: {ex.Message}";
}
var selectedCar = _bookingService.GetBookingById(booking.Id.ToString());
return View(selectedCar);
}
}
```
## Creating the views
Now we have the back end and the controllers prepped with the endpoints for our car booking system, it is time to implement the views. This will be using Razor pages. You will also see reference to classes from Bootstrap as this is the CSS framework that comes with MVC applications out of the box.
We will be providing views for the CRUD operations for both listings and bookings.
### Listing Cars
First, we will provide a view that will map to the root of /Car, which will by convention look at the Index method we implemented.
ASP.NET Core MVC uses a convention pattern whereby you name the .cshtml file the name of the endpoint/method it uses and it lives inside a folder named after its controller.
1. Inside the Views folder, create a new subfolder called Car.
2. Inside that Car folder, add a new view. If using the available
templates, you want Razor View - Empty. Name the view Index.
3. Delete the contents of the file and add a reference to the
CarListViewModel at the top `@model CarListViewModel`.
4. Next, we want to add a placeholder for the error handling. If there
was an issue deleting a car, we added a string to TempData so we
want to add that into the view, if there is data to display.
```csharp
@if (TempData["CarDeleted"] != null)
{
@TempData["CarDeleted"]
}
```
5. Next, we will handle if there are no cars in the database, by
displaying a message to the user:
```csharp
@if (!Model.Cars.Any())
{
No results
}
```
6. The easiest way to display the list of cars and the relevant
information is to use a table:
```csharp
else
{
Model
Number Plate
Location
Actions
@foreach (var car in Model.Cars)
{
@car.Model
@car.NumberPlate
@car.Location
Edit
Delete
@if(!car.IsBooked)
{
Book
}
}
}
Add new car
```
It makes sense to have the list of cars as our home page so before we move on, we will update the default route from Home to /Car.
7. In Program.cs, inside `app.MapControllerRoute`, replace the pattern
line with the following:
```csharp
pattern: "{controller=Car}/{action=Index}/{id?}");
```
If we ran this now, the buttons would lead to 404s because we haven’t implemented them yet. So let’s do that now.
### Adding cars
We will start with the form for adding new cars.
1. Add a new, empty Razor View inside the Car subfolder called
Add.cshtml.
2. Before adding the form, we will add the model reference at the top,
a header, and some conditional content for the error message.
```csharp
@model CarAddViewModel
CREATE A NEW CAR
@if (ViewData["ErrorMessage"] != null)
{
@ViewData["ErrorMessage"]
}
```
3. Now, we can implement the form.
```csharp
```
Now, we want to add a button at the bottom to easily navigate back to the list of cars in case the user decides not to add a new car after all.
Add the following after the `` tag:
```csharp
Back to list
```
### Editing cars
The code for the Edit page is almost identical to Add, but it uses the Car as a model as it will use the car it is passed to pre-populate the form for editing.
1. Add another view inside the Car subfolder called Edit.cshtml.
2. Add the following code:
```csharp
@model Car
UPDATE @MODEL.MODEL
Back to list
```
### Deleting cars
The final page we need to implement is the page that is called when the delete button is clicked for a car.
1. Create a new empty View called Delete.cshtml.
2. Add the following code to add the model, heading, and conditional
error message:
```csharp
@model Car
DELETING @MODEL.MODEL
@if(ViewData["ErrorMessage"] != null)
{
@ViewData["ErrorMessage"]
}
```
Instead of a form like in the other views, we are going to add a description list to display information about the car that we are confirming deletion of.
```csharp
@Model?.Model
@Model?.NumberPlate
@Model?.Location
```
3. Below that, we will add a form for submitting the deletion and the
button to return to the list:
```csharp
Back to list
```
### Listing bookings
We have added the views for the cars so now we will add the views for bookings, starting with listing any existing books.
1. Create a new folder inside the Views folder called Booking.
2. Create a new empty view called Index.
3. Add the following code to display the bookings, if any exist:
```csharp
@model BookingListViewModel
@if (TempData["BookingDeleted"] != null)
{
@TempData["BookingDeleted"]
}
@if (!Model.Bookings.Any())
{
No results
}
else
{
Booked Car
Start Date
End Date
Actions
@foreach(var booking in Model.Bookings)
{
@booking.CarModel
@booking.StartDate
@booking.EndDate
Edit
Delete
}
}
```
### Adding bookings
Adding bookings is next. This view will be available when the book button is clicked next to a listed car.
1. Create an empty view called Add.cshtml.
2. Add the following code:
```csharp
@model BookingAddViewModel
@if (ViewData["ErrorMessage"] != null)
{
@ViewData["ErrorMessage"]
}
```
### Editing bookings
Just like with cars, we also want to be able to edit existing books.
1. Create an empty view called Edit.cshtml.
2. Add the following code:
```csharp
@model Booking
EDITING BOOKING FOR @MODEL.CARMODEL BETWEEN @MODEL.STARTDATE AND @MODEL.ENDDATE
Back to bookings
```
### Deleting bookings
The final view we need to add is to delete a booking. As with cars, we will display the booking information and deletion confirmation.
```csharp
@model Booking
DELETE BOOKING
@if (ViewData["ErrorMessage"] != null)
{
@ViewData["ErrorMessage"]
}
@Model?.CarModel
@Model?.StartDate
@Model?.EndDate
Back to list
```
If you want to view the full solution code, you can find it in the [GitHub Repo.
## Testing our application
We now have a functioning application that uses the new MongoDB Provider for EF Core — hooray! Now is the time to test it all and visit our endpoints to make sure it all works.
It is not part of this tutorial as it is not required, but I chose to make some changes to the site.css file to add some color. I also updated the _Layout.cshtml file to add the Car and Bookings pages to the navbar. You will see this reflected in the screenshots in the rest of the article. You are of course welcome to make your own changes if you have ideas of how you would like the application to look.
### Cars
Below are some screenshots I took from the app, showing the features of the Cars endpoint.
### Bookings
The bookings pages will look very similar to cars but are adapted for the bookings model that includes dates.
## Conclusion
There we have it: a full stack application using ASP.NET MVC that takes advantage of the new MongoDB Provider for EF Core. We are able to do the CRUD operations and track changes.
EF Core is widely used amongst developers so having an official MongoDB Provider is super exciting. This library is in Preview, which means we are continuing to build out new features. Stay tuned for updates and we are always open to feedback. We can’t wait to see what you build!
You can view the Roadmap of the provider in the GitHub repository, where you can also find links to the documentation!
As always, if you have any questions about this or other topics, get involved at our MongoDB Community Forums.
| md | {
"tags": [
"C#",
".NET"
],
"pageDescription": "",
"contentType": "Tutorial"
} | MongoDB Provider for EF Core Tutorial: Building an App with CRUD and Change Tracking | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/entangled-data-re-modeling-10x-storage-reduction | created | # Entangled: A Story of Data Re-modeling and 10x Storage Reduction
One of the most distinctive projects I've worked on is an application named Entangled. Developed in partnership with the Princeton Engineering Anomalies Research lab (PEAR), The Global Consciousness Project, and the Institute of Noetic Sciences, Entangled aims to test human consciousness.
The application utilizes a quantum random number generator to measure the influence of human consciousness. This quantum generator is essential because conventional computers, due to their deterministic nature, cannot generate truly random numbers. The quantum generator produces random sequences of 0s and 1s. In large datasets, there should be an equal number of 0s and 1s.
For the quantum random number generation, we used an in-house Quantis QRNG USB device. This device is plugged into our server, and through specialized drivers, we programmatically obtain the random sequences directly from the USB device.
Experiments were conducted to determine if a person could influence these quantum devices with their thoughts, specifically by thinking about more 0s or 1s. The results were astonishing, demonstrating the real potential of this influence.
To expand this test globally, we developed a new application. This platform allows users to sign up and track their contributions. The system generates a new random number for each user every second. Every hour, these contributions are grouped for analysis at personal, city, and global levels. We calculate the standard deviation of these contributions, and if this deviation exceeds a certain threshold, users receive notifications.
This data supports various experiments. For instance, in the "Earthquake Prediction" experiment, we use the contributions from all users in a specific area. If the standard deviation is higher than the set threshold, it may indicate that users have predicted an earthquake.
If you want to learn more about Entangled, you can check the official website.
## Hourly-metrics schema modeling
As the lead backend developer, and with MongoDB being my preferred database for all projects, it was a natural choice for Entangled.
For the backend development, I chose Node.js (Express), along with the Mongoose library for schema definition and data modeling. Mongoose, an Object Data Modeling (ODM) library for MongoDB, is widely used in the Node.js ecosystem for its ability to provide a straightforward way to model our application data.
Careful schema modeling was crucial due to the anticipated scaling of the database. Remember, we were generating one random number per second for each user.
My initial instinct was to create hourly-based schemas, aligning with our hourly analytics snapshots. The initial schema was structured as follows:
- User: a reference to the "Users" collection
- Total Sum: the sum of each user's random numbers; either 1s or 0s, so their sum was sufficient for later analysis
- Generated At: the timestamp of the snapshot
- Data File: a reference to the "Data Files" collection, which contains all random numbers generated by all users in a given hour
```javascript
const { Schema, model } = require("mongoose");
const hourlyMetricSchema = new Schema({
user: { type: Schema.Types.ObjectId, ref: "Users" },
total_sum: { type: Number },
generated_at: { type: Date },
data_file: { type: Schema.Types.ObjectId, ref: "DataFiles" }
});
// Compound index forr "user" (ascending) and "generated_at" (descending) fields
hourlyMetricSchema.index({ user: 1, generated_at: -1 });
const HourlyMetrics = model("HourlyMetrics", hourlyMetricSchema);
module.exports = HourlyMetrics;
```
Although intuitive, this schema faced a significant scaling challenge. We estimated over 100,000 users soon after launch. This meant about 2.4 million records daily or 72 million records monthly. Consequently, we were looking at approximately 5GB of data (including storage and indexes) each month.
This encouraged me to explore alternative approaches.
## Daily-metrics schema modeling
I explored whether alternative modeling approaches could further optimize storage requirements while also enhancing scalability and cost-efficiency.
A significant observation was that out of 5GB of total storage, 3.5GB was occupied by indexes, a consequence of the large volume of documents.
This led me to experiment with a schema redesign, shifting from hourly to daily metrics. The new schema was structured as follows:
```javascript
const { Schema, model } = require("mongoose");
const dailyMetricSchema = new Schema({
user: { type: Schema.Types.ObjectId, ref: "Users" },
date: { type: Date },
samples:
{
total_sum: { type: Number },
generated_at: { type: Date },
data_file: { type: Schema.Types.ObjectId, ref: "DataFiles" }
}
]
});
// Compound index forr "user" (ascending) and "date" (descending) fields
hourlyMetricSchema.index({ user: 1, date: -1 });
const DailyMetrics = model("DailyMetrics", dailyMetricSchema);
module.exports = DailyMetrics;
```
Rather than storing metrics for just one hour in each document, I now aggregated an entire day's metrics in a single document. Each document included a "samples" array with 24 entries, one for each hour of the day.
It's important to note that this method is a good solution because the array has a fixed size — a day only has 24 hours. This is very different from the [anti-pattern of using big, massive arrays in MongoDB.
This minor modification had a significant impact. The storage requirement for a month's worth of data drastically dropped from 5GB to just 0.49GB. This was mainly due to the decrease in index size, from 3.5GB to 0.15GB. The number of documents required each month dropped from 72 million to 3 million.
Encouraged by these results, I didn't stop there. My next step was to consider the potential benefits of shifting to a monthly-metrics schema. Could this further optimize our storage? This was the question that drove my next phase of exploration.
## Monthly-metrics schema modeling
The monthly-metrics schema was essentially identical to the daily-metrics schema. The key difference lay in how the data was stored in the "samples" array, which now contained approximately 720 records representing a full month's metrics.
```javascript
const { Schema, model } = require("mongoose");
const monthlyMetricSchema = new Schema({
user: { type: Schema.Types.ObjectId, ref: "Users" },
date: { type: Date },
samples:
{
total_sum: { type: Number },
generated_at: { type: Date },
data_file: { type: Schema.Types.ObjectId, ref: "DataFiles" }
}
]
});
// Compound index forr "user" (ascending) and "date" (descending) fields
monthlyMetricSchema.index({ user: 1, date: -1 });
const MonthlyMetrics = model("MonthlyMetrics", monthlyMetricSchema);
module.exports = MonthlyMetrics;
```
This adjustment was expected to further reduce the document count to around 100,000 documents for a month, leading me to anticipate even greater storage optimization. However, the actual results were surprising.
Upon storing a month's worth of data under this new schema, the storage size unexpectedly increased from 0.49GB to 0.58GB. This increase is likely due to the methods MongoDB's WiredTiger storage engine uses to compress arrays internally.
## Summary
Below is a detailed summary of the different approaches and their respective results for one month’s worth of data:
| | **Hourly Document** | **Daily Document** | **Monthly Document** |
| -------------------------------- | ----------------------------------------------- | ----------------------------------- | ----------------------- |
| **Document Size** | 0.098 KB | 1.67 KB | 49.18 KB |
| **Total Documents (per month)** | 72,000,000 (100,000 users * 24 hours * 30 days) | 3,000,000 (100,000 users * 30 days) | 100,000 (100,000 users) |
| **Storage Size** | 1.45 GB | 0.34 GB | 0.58 GB |
| **Index Size** | 3.49 GB | 0.15 GB | 0.006 GB |
| **Total Storage (Data + Index)** | 4.94 GB | 0.49 GB | 0.58 GB |
## Conclusion
In this exploration of schema modeling for the Entangled project, we investigated the challenges and solutions for managing large-scale data in MongoDB.
Our journey began with hourly metrics, which, while intuitive, posed significant scaling challenges due to the large volume of data and index size.
This prompted a shift to daily metrics, drastically reducing storage requirements by over 10 times, primarily due to a significant decrease in index size.
The experiment with monthly metrics offered an unexpected twist. Although it further reduced the number of documents, it increased the overall storage size, likely due to the internal compression mechanics of MongoDB's WiredTiger storage engine.
This case study highlights the critical importance of schema design in database management, especially when dealing with large volumes of data. It also emphasizes the need for continuous experimentation and optimization to balance storage efficiency, scalability, and cost.
If you want to learn more about designing efficient schemas with MongoDB, I recommend checking out the [MongoDB Data Modeling Patterns series. | md | {
"tags": [
"MongoDB",
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to reduce your storage in MongoDB by optimizing your data model through various techniques.",
"contentType": "Article"
} | Entangled: A Story of Data Re-modeling and 10x Storage Reduction | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/go/golang-alexa-skills | created | # Developing Alexa Skills with MongoDB and Golang
The popularity of Amazon Alexa and virtual assistants in general is no question, huge. Having a web application and mobile application isn't enough for most organizations anymore, and now you need to start supporting voice operated applications.
So what does it take to create something for Alexa? How different is it from creating a web application?
In this tutorial, we're going to see how to create an Amazon Alexa Skill, also referred to as an Alexa application, that interacts with a MongoDB cluster using the Go programming language (Golang) and AWS Lambda.
## The Requirements
A few requirements must be met prior to starting this tutorial:
- Golang must be installed and configured
- A MongoDB Atlas cluster
If you don't have a MongoDB Atlas cluster, you can configure one for free. For this example an M0 cluster is more than sufficient.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
Make sure the Atlas cluster has the proper IP addresses on the Network Access List for AWS services. If AWS Lambda cannot reach your cluster then requests made by Alexa will fail.
Having an Amazon Echo or other Amazon Alexa enabled device is not necessary to be successful with this tutorial. Amazon offers a really great simulator that can be used directly in the web browser.
## Designing an Alexa Skill with an Invocation Term and Sample Utterances
When it comes to building an Alexa Skill, it doesn't matter if you start with the code or the design. For this tutorial we're going to start with the design, directly in the Amazon Developer Portal for Alexa.
Sign into the portal and choose to create a new custom Skill. After creating the Skill, you'll be brought to a dashboard with several checklist items:
In the checklist, you should take note of the following:
- Invocation Name
- Intents, Samples, and Slots
- Endpoint
There are other items, one being optional and the other being checked naturally as the others complete.
The first step is to define the invocation name. This is the name that users will use when they speak to their virtual assistant. It should not be confused with the Skill name because the two do not need to match. The Skill name is what would appear in the online marketplace.
For our invocation name, let's use **recipe manager**, something that is easy to remember and easy to pronounce. With the invocation name in place, we can anticipate using our Skill like the following:
``` none
Alexa, ask Recipe Manager to INTENT
```
The user would not literally speak **INTENT** in the command. The intent
is the command that will be defined through sample utterances, also
known as sample phrases or data. You can, and probably should, have
multiple intents for your Skill.
Let's start by creating an intent titled **GetIngredientsForRecipeIntent** with the following sample utterances:
``` none
what ingredients do i need for {recipe}
what do i need to cook {recipe}
to cook {recipe} what ingredients do i need
```
There are a few things to note about the above phrases:
- The `{recipe}` tag is a slot variable which is going to be user defined when spoken.
- Every possible spoken phrase to execute the command should be listed.
Alexa operates from machine learning, so the more sample data the better. When defining the `{recipe}` variable, it should be assigned a type of `AMAZON.Food`.
When all said and done, you could execute the intent by doing something like:
``` none
Alexa, ask Recipe Manager what do I need to cook Chocolate Chip Cookies
```
Having one intent in your Alexa Skill is no fun, so let's create another intent with its own set of sample phrases. Choose to create a new intent titled `GetRecipeFromIngredientsIntent` with the following sample utterances:
``` none
what can i cook with {ingredientone} and {ingredienttwo}
what are some recipes with {ingredientone} and {ingredienttwo}
if i have {ingredientone} and {ingredienttwo} what can i cook
```
This time around we're using two slot variables instead of one. Like previously mentioned, it is probably a good idea to add significantly more sample utterances to get the best results. Alexa needs to be able to process the data to send to your Lambda function.
At this point in time, the configuration in the Alexa Developer Portal is about complete. The exception being the endpoint which doesn't exist yet.
## Building a Lambda Function with Golang and MongoDB
Alexa, for the most part should be able to direct requests, so now we need to create our backend to receive and process them. This is where Lambda, Go, and MongoDB come into play.
Assuming Golang has been properly installed and configured, create a new project within your **$GOPATH** and within that project, create a **main.go** file. As boilerplate to get the ball rolling, this file should contain the following:
``` go
package main
func main() { }
```
With the boilerplate code added, now we can install the MongoDB Go driver. To do this, you could in theory do a `go get`, but the preferred approach as of now is to use the dep package management tool for Golang. To do this, after having installed the tool, execute the following:
``` bash
dep init
dep ensure -add "go.mongodb.org/mongo-driver/mongo"
```
We're using `dep` so that way the version of the driver that we're using in our project is version locked.
In addition to the MongoDB Go driver, we're also going to need to get the AWS Lambda SDK for Go as well as an unofficial SDK for Alexa, since no official SDK exists. To do this, we can execute:
``` bash
dep ensure -add "github.com/arienmalec/alexa-go"
dep ensure -add "github.com/aws/aws-lambda-go/lambda"
```
With the dependencies available to us, we can modify the project's **main.go** file. Open the file and add the following code:
``` go
package main
import (
"context"
"os"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
// Stores a handle to the collection being used by the Lambda function
type Connection struct {
collection *mongo.Collection
}
func main() {
ctx := context.Background()
client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
panic(err)
}
defer client.Disconnect(ctx)
connection := Connection{
collection: client.Database("alexa").Collection("recipes"),
}
}
```
In the `main` function we are creating a client using the connection string of our cluster. In this case, I'm using an environment variable on my computer that points to my MongoDB Atlas cluster. Feel free to configure that connection string however you feel the most confident.
Upon connecting, we are getting a handle of a `recipes` collection for an `alexa` database and storing it in a `Connection` data structure. Because we won't be writing any data in this example, both the `alexa` database and the `recipes` collection should exist prior to running this application.
You can check out more information about connecting to MongoDB with the Go programming language in a previous tutorial I wrote.
So why are we storing the collection handle in a `Connection` data structure?
AWS Lambda behaves a little differently when it comes to web applications. Instead of running the `main` function and then remaining alive for as long as your server remains alive, Lambda functions tend to suspend or shutdown when they are not used. For this reason, we cannot rely on our connection being available and we also don't want to establish too many connections to our database in the scenario where our function hasn't shut down. To handle this, we can pass the connection from our `main` function to our logic function.
Let's make a change to see this in action:
``` go
package main
import (
"context"
"os"
"github.com/arienmalec/alexa-go"
"github.com/aws/aws-lambda-go/lambda"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
// Stores a handle to the collection being used by the Lambda function
type Connection struct {
collection *mongo.Collection
}
func (connection Connection) IntentDispatcher(ctx context.Context, request alexa.Request) (alexa.Response, error) {
// Alexa logic here...
}
func main() {
ctx := context.Background()
client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
panic(err)
}
defer client.Disconnect(ctx)
connection := Connection{
collection: client.Database("alexa").Collection("recipes"),
}
lambda.Start(connection.IntentDispatcher)
}
```
Notice in the above code that we've added a `lambda.Start` call in our `main` function that points to an `IntentDispatcher` function. We're designing this function to use the connection information established in the `main` function, which based on our Lambda knowledge, may not run every time the function is executed.
So we've got the foundation to our Alexa Skill in place. Now we need to design the logic for each of our intents that were previously defined in the Alexa Developer Portal.
Since this is going to be a recipe related Skill, let's model our MongoDB documents like the following:
``` json
{
"_id": ObjectID("234232358943"),
"name": "chocolate chip cookies",
"ingredients":
"flour",
"egg",
"sugar",
"chocolate"
]
}
```
There is no doubt that our documents could be more extravagant, but for this example it will work out fine. Within the MongoDB Atlas cluster, create the **alexa** database if it doesn't already exist and add a document modeled like the above in a **recipes** collection.
In the `main.go` file of the project, add the following data structure:
``` go
// A data structure representation of the collection schema
type Recipe struct {
ID primitive.ObjectID `bson:"_id"`
Name string `bson:"name"`
Ingredients []string `bson:"ingredients"`
}
```
With the MongoDB Go driver, we can annotate Go data structures with BSON
so that way we can easily map between the two. It essentially makes our
lives a lot easier when working with MongoDB and Go.
Let's circle back to the `IntentDispatcher` function:
``` go
func (connection Connection) IntentDispatcher(ctx context.Context, request alexa.Request) (alexa.Response, error) {
var response alexa.Response
switch request.Body.Intent.Name {
case "GetIngredientsForRecipeIntent":
case "GetRecipeFromIngredientsIntent":
default:
response = alexa.NewSimpleResponse("Unknown Request", "The intent was unrecognized")
}
return response, nil
}
```
Remember the two intents from the Alexa Developer Portal? We need to assign logic to them.
Essentially, we're going to do some database logic and then use the `NewSimpleResponse` function to create a response the the results.
Let's start with the `GetIngredientsForRecipeIntent` logic:
``` go
case "GetIngredientsForRecipeIntent":
var recipe Recipe
recipeName := request.Body.Intent.Slots["recipe"].Value
if recipeName == "" {
return alexa.Response{}, errors.New("Recipe name is not present in the request")
}
if err := connection.collection.FindOne(ctx, bson.M{"name": recipeName}).Decode(&recipe); err != nil {
return alexa.Response{}, err
}
response = alexa.NewSimpleResponse("Ingredients", strings.Join(recipe.Ingredients, ", "))
```
In the above snippet, we are getting the slot variable that was passed and are issuing a `FindOne` query against the collection. The filter for the query says that the `name` field of the document must match the recipe that was passed in as a slot variable.
If there was a match, we are serializing the array of ingredients into a string and are returning it back to Alexa. In theory, Alexa should then read back the comma separated list of ingredients.
Now let's take a look at the `GetRecipeFromIngredientsIntent` intent logic:
``` go
case "GetRecipeFromIngredientsIntent":
var recipes []Recipe
ingredient1 := request.Body.Intent.Slots["ingredientone"].Value
ingredient2 := request.Body.Intent.Slots["ingredienttwo"].Value
cursor, err := connection.collection.Find(ctx, bson.M{
"ingredients": bson.D{
{"$all", bson.A{ingredient1, ingredient2}},
},
})
if err != nil {
return alexa.Response{}, err
}
if err = cursor.All(ctx, &recipes); err != nil {
return alexa.Response{}, err
}
var recipeList []string
for _, recipe := range recipes {
recipeList = append(recipeList, recipe.Name)
}
response = alexa.NewSimpleResponse("Recipes", strings.Join(recipeList, ", "))
```
In the above snippet, we are taking both slot variables that represent
ingredients and are using them in a `Find` query on the collection. This
time around we are using the `$all` operator because we want to filter
for all recipes that contain both ingredients anywhere in the array.
With the results of the `Find`, we can create create an array of the
recipe names and serialize it to a string to be returned as part of the
Alexa response.
If you'd like more information on the `Find` and `FindOne` commands for
Go and MongoDB, check out my [how to read documents
tutorial
on the subject.
While it might seem simple, the code for the Alexa Skill is actually
complete. We've coded scenarios for each of the two intents that we've
set up in the Alexa Developer Portal. We could improve upon what we've
done or create more intents, but it is out of the scope of what we want
to accomplish.
Now that we have our application, we need to build it for Lambda.
Execute the following commands:
``` bash
GOOS=linux go build
zip handler.zip ./project-name
```
So what's happening in the above commands? First we are building a Linux compatible binary. We're doing this because if you're developing on Mac or Windows, you're going to end up with a binary that is incompatible. By defining the operating system, we're telling Go what to build for.
For more information on cross-compiling with Go, check out my Cross Compiling Golang Applications For Use On A Raspberry Pi post.
Next, we are creating an archive of our binary. It is important to replace the `project-name` with that of your actual binary name. It is important to remember the name of the file as it is used in the Lambda dashboard.
When you choose to create a new Lambda function within AWS, make sure Go is the development technology. Choose to upload the ZIP file and add the name of the binary as the handler.
Now it comes down to linking Alexa with Lambda.
Take note of the **ARN** value of your Lambda function. This will be added in the Alexa Portal. Also, make sure you add the Alexa Skills Kit as a trigger to the function. It is as simple as selecting it from the list.
Navigate back to the Alexa Developer Portal and choose the **Endpoint** checklist item. Add the ARN value to the default region and choose to build the Skill using the **Build Model** button.
When the Skill is done building, you can test it using the simulator that Amazon offers as part of the Alexa Developer Portal. This simulator can be accessed using the **Test** tab within the portal.
If you've used the same sample utterances that I have, you can try entering something like this:
``` none
ask recipe manager what can i cook with flour and sugar
ask recipe manager what chocolate chip cookies requires
```
Of course the assumption is that you also have collection entries for chocolate chip cookies and the various ingredients that I used above. Feel free to modify the variable terms with those of your own data.
## Conclusion
You just saw how to build an Alexa Skill with MongoDB, Golang, and AWS Lambda. Knowing how to develop applications for voice assistants like Alexa is great because they are becoming increasingly popular, and the good news is that they aren't any more difficult than writing standard applications.
As previously mentioned, MongoDB Atlas makes pairing MongoDB with Lambda and Alexa very convenient. You can use the free tier or upgrade to something better.
If you'd like to expand your Alexa with Go knowledge and get more practice, check out a previous tutorial I wrote titled Build an Alexa Skill with Golang and AWS Lambda. | md | {
"tags": [
"Go",
"AWS"
],
"pageDescription": "Learn how to develop Amazon Alexa Skills that interact with MongoDB using the Go programming language and AWS Lambda.",
"contentType": "Tutorial"
} | Developing Alexa Skills with MongoDB and Golang | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/building-a-mobile-chat-app-using-realm | created | # Building a Mobile Chat App Using Realm – Integrating Realm into Your App
This article is a follow-up to Building a Mobile Chat App Using Realm – Data Architecture. Read that post first if you want to understand the Realm data/partitioning architecture and the decisions behind it.
This article targets developers looking to build the Realm mobile database into their mobile apps and use MongoDB Realm Sync. It focuses on how to integrate the Realm-Cocoa SDK into your iOS (SwiftUI) app. Read Building a Mobile Chat App Using Realm – Data Architecture This post will equip you with the knowledge needed to persist and sync your iOS application data using Realm.
RChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. The initial version is an iOS (Swift and SwiftUI) app, but we will use the same data model and back end Realm application to build an Android version in the future.
If you're looking to add a chat feature to your mobile app, you can repurpose the article's code and the associated repo. If not, treat it as a case study that explains the reasoning behind the data model and partitioning/syncing decisions taken. You'll likely need to make similar design choices in your apps.
>
>
>Update: March 2021
>
>Building a Mobile Chat App Using Realm – The New and Easier Way is a follow-on post from this one. It details building the app using the latest SwiftUI features released with Realm-Cocoa 10.6. If you know that you'll only be building apps with SwiftUI (rather than UIKit) then jump straight to that article.
>
>In writing that post, the app was updated to take advantage of those new SwiftUI features, use this snapshot of the app's GitHub repo to view the code described in this article.
>
>
## Prerequisites
If you want to build and run the app for yourself, this is what you'll need:
- iOS14.2+
- XCode 12.3+
- MongoDB Atlas account and a (free) Atlas cluster
## Walkthrough
The iOS app uses MongoDB Realm Sync to share data between instances of the app (e.g., the messages sent between users). This walkthrough covers both the iOS code and the back end Realm app needed to make it work. Remember that all of the code for the final app is available in the GitHub repo.
### Create a Realm App
From the Atlas UI, select the "Realm" tab. Select the options to indicate that you're creating a new iOS mobile app and then click "Start a New Realm App":
Name the app "RChat" and click "Create Realm Application":
Copy the "App ID." You'll need to use this in your iOS app code:
### Connect iOS App to Your Realm App
The SwiftUI entry point for the app is RChatApp.swift. This is where you define your link to your Realm application (named `app`) using the App ID from your new back end Realm app:
``` swift
import SwiftUI
import RealmSwift
let app = RealmSwift.App(id: "rchat-xxxxx") // TODO: Set the Realm application ID
@main
struct RChatApp: SwiftUI.App {
@StateObject var state = AppState()
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(state)
}
}
}
```
Note that we created an instance of AppState and pass it into our top-level view (ContentView) as an `environmentObject`. This is a common SwiftUI pattern for making state information available to every view without the need to explicitly pass it down every level of the view hierarchy:
``` swift
import SwiftUI
import RealmSwift
let app = RealmSwift.App(id: "rchat-xxxxx") // TODO: Set the Realm application ID
@main
struct RChatApp: SwiftUI.App {
@StateObject var state = AppState()
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(state)
}
}
}
```
### Application-Wide State: AppState
Views can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the AppState class.
There's a lot going on in `AppState.swift`, and you can view the full file in the repo.
Let's start by looking at some of the `AppState` attributes:
``` swift
class AppState: ObservableObject {
...
var userRealm: Realm?
var chatsterRealm: Realm?
var user: User?
...
}
```
`user` represents the user that's currently logged into the app (and Realm). We'll look at the User class later, but it includes the user's username, preferences, presence state, and a list of the conversations/chat rooms they're members of. If `user` is set to `nil`, then no user is logged in.
When logged in, the app opens two realms:
- `userRealm` lets the user **read and write just their own data** from the Atlas `User` collection.
- `chatsterRealm` enables the user to **read data for every user** from the Atlas `Chatster` collection.
The app uses the Realm SDK to interact with the back end Realm application to perform actions such as logging into Realm. Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. Instead, we use Combine publishers and subscribers to handle these events. `loginPublisher`, `chatsterLoginPublisher`, `logoutPublisher`, `chatsterRealmPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening realms for a user:
``` swift
class AppState: ObservableObject {
...
let loginPublisher = PassthroughSubject()
let chatsterLoginPublisher = PassthroughSubject()
let logoutPublisher = PassthroughSubject()
let chatsterRealmPublisher = PassthroughSubject()
let userRealmPublisher = PassthroughSubject()
...
}
```
When an `AppState` class is instantiated, the realms are initialized to `nil` and actions are assigned to each of the Combine publishers:
``` swift
init() {
_ = app.currentUser?.logOut()
userRealm = nil
chatsterRealm = nil
initChatsterLoginPublisher()
initChatsterRealmPublisher()
initLoginPublisher()
initUserRealmPublisher()
initLogoutPublisher()
}
```
We'll later see that an event is sent to `loginPublisher` and `chatsterLoginPublisher` when a user has successfully logged into Realm. In `AppState`, we define what should be done when those events are received. For example, events received on `loginPublisher` trigger the opening of a realm with the partition set to `user=`, which in turn sends an event to `userRealmPublisher`:
``` swift
func initLoginPublisher() {
loginPublisher
.receive(on: DispatchQueue.main)
.flatMap { user -> RealmPublishers.AsyncOpenPublisher in
self.shouldIndicateActivity = true
let realmConfig = user.configuration(partitionValue: "user=\(user.id)")
return Realm.asyncOpen(configuration: realmConfig)
}
.receive(on: DispatchQueue.main)
.map {
return $0
}
.subscribe(userRealmPublisher)
.store(in: &self.cancellables)
}
```
When the realm has been opened and the realm sent to `userRealmPublisher`, the Realm struct is stored in the `userRealm` attribute and the local `user` is initialized with the `User` object retrieved from the realm:
``` swift
func initUserRealmPublisher() {
userRealmPublisher
.sink(receiveCompletion: { result in
if case let .failure(error) = result {
self.error = "Failed to log in and open user realm: \(error.localizedDescription)"
}
}, receiveValue: { realm in
print("User Realm User file location: \(realm.configuration.fileURL!.path)")
self.userRealm = realm
self.user = realm.objects(User.self).first
do {
try realm.write {
self.user?.presenceState = .onLine
}
} catch {
self.error = "Unable to open Realm write transaction"
}
self.shouldIndicateActivity = false
})
.store(in: &cancellables)
}
```
`chatsterLoginPublisher` behaves in the same way, but for a realm that stores `Chatster` objects:
``` swift
func initChatsterLoginPublisher() {
chatsterLoginPublisher
.receive(on: DispatchQueue.main)
.flatMap { user -> RealmPublishers.AsyncOpenPublisher in
self.shouldIndicateActivity = true
let realmConfig = user.configuration(partitionValue: "all-users=all-the-users")
return Realm.asyncOpen(configuration: realmConfig)
}
.receive(on: DispatchQueue.main)
.map {
return $0
}
.subscribe(chatsterRealmPublisher)
.store(in: &self.cancellables)
}
func initChatsterRealmPublisher() {
chatsterRealmPublisher
.sink(receiveCompletion: { result in
if case let .failure(error) = result {
self.error = "Failed to log in and open chatster realm: \(error.localizedDescription)"
}
}, receiveValue: { realm in
print("Chatster Realm User file location: \(realm.configuration.fileURL!.path)")
self.chatsterRealm = realm
self.shouldIndicateActivity = false
})
.store(in: &cancellables)
}
```
After logging out of Realm, we simply set the attributes to nil:
``` swift
func initLogoutPublisher() {
logoutPublisher
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: { _ in
}, receiveValue: { _ in
self.user = nil
self.userRealm = nil
self.chatsterRealm = nil
})
.store(in: &cancellables)
}
```
### Enabling Email/Password Authentication in the Realm App
After seeing what happens **after** a user has logged into Realm, we need to circle back and enable email/password authentication in the back end Realm app. Fortunately, it's straightforward to do.
From the Realm UI, select "Authentication" from the lefthand menu, followed by "Authentication Providers." Click the "Edit" button for "Email/Password":
Enable the provider and select "Automatically confirm users" and "Run a password reset function." Select "New function" and save without making any edits:
Don't forget to click on "REVIEW & DEPLOY" whenever you've made a change to the back end Realm app.
### Create `User` Document on User Registration
When a new user registers, we need to create a `User` document in Atlas that will eventually synchronize with a `User` object in the iOS app. Realm provides authentication triggers that can automate this.
Select "Triggers" and then click on "Add a Trigger":
Set the "Trigger Type" to "Authentication," provide a name, set the "Action Type" to "Create" (user registration), set the "Event Type" to "Function," and then select "New Function":
Name the function `createNewUserDocument` and add the code for the function:
``` javascript
exports = function({user}) {
const db = context.services.get("mongodb-atlas").db("RChat");
const userCollection = db.collection("User");
const partition = `user=${user.id}`;
const defaultLocation = context.values.get("defaultLocation");
const userPreferences = {
displayName: ""
};
const userDoc = {
_id: user.id,
partition: partition,
userName: user.data.email,
userPreferences: userPreferences,
location: context.values.get("defaultLocation"),
lastSeenAt: null,
presence:"Off-Line",
conversations: ]
};
return userCollection.insertOne(userDoc)
.then(result => {
console.log(`Added User document with _id: ${result.insertedId}`);
}, error => {
console.log(`Failed to insert User document: ${error}`);
});
};
```
Note that we set the `partition` to `user=`, which matches the partition used when the iOS app opens the User realm.
"Save" then "REVIEW & DEPLOY."
### Define Realm Schema
Refer to [Building a Mobile Chat App Using Realm – Data Architecture to understand more about the app's schema and partitioning rules. This article skips the analysis phase and just configures the Realm schema.
Browse to the "Rules" section in the Realm UI and click on "Add Collection." Set "Database Name" to `RChat` and "Collection Name" to `User`. We won't be accessing the `User` collection directly through Realm, so don't select a "Permissions Template." Click "Add Collection":
At this point, I'll stop reminding you to click "REVIEW & DEPLOY!"
Select "Schema," paste in this schema, and then click "SAVE":
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"conversations": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"displayName": {
"bsonType": "string"
},
"id": {
"bsonType": "string"
},
"members": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"membershipStatus": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required":
"membershipStatus",
"userName"
],
"title": "Member"
}
},
"unreadCount": {
"bsonType": "long"
}
},
"required": [
"unreadCount",
"id",
"displayName"
],
"title": "Conversation"
}
},
"lastSeenAt": {
"bsonType": "date"
},
"partition": {
"bsonType": "string"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
},
"userPreferences": {
"bsonType": "object",
"properties": {
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
}
},
"required": [],
"title": "UserPreferences"
}
},
"required": [
"_id",
"partition",
"userName",
"presence"
],
"title": "User"
}
```
Repeat for the `Chatster` schema:
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
},
"lastSeenAt": {
"bsonType": "date"
},
"partition": {
"bsonType": "string"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required": [
"_id",
"partition",
"presence",
"userName"
],
"title": "Chatster"
}
```
And for the `ChatMessage` collection:
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"author": {
"bsonType": "string"
},
"image": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
},
"partition": {
"bsonType": "string"
},
"text": {
"bsonType": "string"
},
"timestamp": {
"bsonType": "date"
}
},
"required": [
"_id",
"partition",
"text",
"timestamp"
],
"title": "ChatMessage"
}
```
### Enable Realm Sync
Realm Sync is used to synchronize objects between instances of the iOS app (and we'll extend this app to also include Android). It also syncs those objects with Atlas collections. Note that there are three options to create a Realm schema:
1. Manually code the schema as a JSON schema document.
2. Derive the schema from existing data stored in Atlas. (We don't yet have any data and so this isn't an option here.)
3. Derive the schema from the Realm objects used in the mobile app.
We've already specified the schema and so will stick to the first option.
Select "Sync" and then select your Atlas cluster. Set the "Partition Key" to the `partition` attribute (it appears in the list as it's already in the schema for all three collections), and the rules for whether a user can sync with a given partition:
The "Read" rule controls whether a user can establish one-way read-only sync relationship to the mobile app for a given user and partition. In this case, the rule delegates this to a Realm function named `canReadPartition`:
``` json
{
"%%true": {
"%function": {
"arguments": [
"%%partition"
],
"name": "canReadPartition"
}
}
}
```
The "Write" rule delegates to the `canWritePartition`:
``` json
{
"%%true": {
"%function": {
"arguments": [
"%%partition"
],
"name": "canWritePartition"
}
}
}
```
Once more, we've already seen those functions in [Building a Mobile Chat App Using Realm – Data Architecture but I'll include the code here for completeness.
canReadPartition:
``` javascript
exports = function(partition) {
console.log(`Checking if can sync a read for partition = ${partition}`);
const db = context.services.get("mongodb-atlas").db("RChat");
const chatsterCollection = db.collection("Chatster");
const userCollection = db.collection("User");
const chatCollection = db.collection("ChatMessage");
const user = context.user;
let partitionKey = "";
let partitionVale = "";
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
partitionKey = splitPartition0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return false;
}
switch (partitionKey) {
case "user":
console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionKey === user.id}`);
return partitionValue === user.id;
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
case "all-users":
console.log(`Any user can read all-users partitions`);
return true;
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
};
```
[canWritePartition:
``` javascript
exports = function(partition) {
console.log(`Checking if can sync a write for partition = ${partition}`);
const db = context.services.get("mongodb-atlas").db("RChat");
const chatsterCollection = db.collection("Chatster");
const userCollection = db.collection("User");
const chatCollection = db.collection("ChatMessage");
const user = context.user;
let partitionKey = "";
let partitionVale = "";
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
partitionKey = splitPartition0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return false;
}
switch (partitionKey) {
case "user":
console.log(`Checking if partitionKey(${partitionValue}) matches user.id(${user.id}) – ${partitionKey === user.id}`);
return partitionValue === user.id;
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
case "all-users":
console.log(`No user can write to an all-users partitions`);
return false;
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
};
```
To create these functions, select "Functions" and click "Create New Function." Make sure you type the function name precisely, set "Authentication" to "System," and turn on the "Private" switch (which means it can't be called directly from external services such as our mobile app):
### Linking User and Chatster Documents
As described in [Building a Mobile Chat App Using Realm – Data Architecture, there are relationships between different `User` and `Chatster` documents. Now that we've defined the schemas and enabled Realm Sync, it's a convenient time to add the Realm function and database trigger to maintain those relationships.
Create a Realm function named `userDocWrittenTo`, set "Authentication" to "System," and make it private. This article is aiming to focus on the iOS app more than the back end Realm app, and so we won't delve into this code:
``` javascript
exports = function(changeEvent) {
const db = context.services.get("mongodb-atlas").db("RChat");
const chatster = db.collection("Chatster");
const userCollection = db.collection("User");
const docId = changeEvent.documentKey._id;
const user = changeEvent.fullDocument;
let conversationsChanged = false;
console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);
switch (changeEvent.operationType) {
case "insert":
case "replace":
case "update":
console.log(`Writing data for ${user.userName}`);
let chatsterDoc = {
_id: user._id,
partition: "all-users=all-the-users",
userName: user.userName,
lastSeenAt: user.lastSeenAt,
presence: user.presence
};
if (user.userPreferences) {
const prefs = user.userPreferences;
chatsterDoc.displayName = prefs.displayName;
if (prefs.avatarImage && prefs.avatarImage._id) {
console.log(`Copying avatarImage`);
chatsterDoc.avatarImage = prefs.avatarImage;
console.log(`id of avatarImage = ${prefs.avatarImage._id}`);
}
}
chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })
.then (() => {
console.log(`Wrote Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);
});
if (user.conversations && user.conversations.length > 0) {
for (i = 0; i < user.conversations.length; i++) {
let membersToAdd = ];
if (user.conversations[i].members.length > 0) {
for (j = 0; j < user.conversations[i].members.length; j++) {
if (user.conversations[i].members[j].membershipStatus == "User added, but invite pending") {
membersToAdd.push(user.conversations[i].members[j].userName);
user.conversations[i].members[j].membershipStatus = "Membership active";
conversationsChanged = true;
}
}
}
if (membersToAdd.length > 0) {
userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})
.then (result => {
console.log(`Updated ${result.modifiedCount} other User documents`);
}, error => {
console.log(`Failed to copy new conversation to other users: ${error}`);
});
}
}
}
if (conversationsChanged) {
userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});
}
break;
case "delete":
chatster.deleteOne({_id: docId})
.then (() => {
console.log(`Deleted Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to delete Chatster document for _id=${docId}: ${error}`);
});
break;
}
};
```
Set up a database trigger to execute the new function whenever anything in the `User` collection changes:
### Registering and Logging in From the iOS App
We've now created enough of the back end Realm app that mobile apps can now register new Realm users and use them to log into the app.
The app's top-level SwiftUI view is [ContentView, which decides which sub-view to show based on whether our `AppState` environment object indicates that a user is logged in or not:
``` swift
@EnvironmentObject var state: AppState
...
if state.loggedIn {
if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {
SetProfileView(isPresented: $showingProfileView)
} else {
ConversationListView()
.navigationBarTitle("Chats", displayMode: .inline)
.navigationBarItems(
trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(
photo: state.user?.userPreferences?.avatarImage,
online: true) { showingProfileView.toggle() } : nil
)
}
} else {
LoginView()
}
...
```
When first run, no user is logged in and so `LoginView` is displayed.
Note that `AppState.loggedIn` checks whether a user is currently logged into the Realm `app`:
``` swift
var loggedIn: Bool {
app.currentUser != nil && app.currentUser?.state == .loggedIn
&& userRealm != nil && chatsterRealm != nil
}
```
The UI for LoginView contains cells to provide the user's email address and password, a radio button to indicate whether this is a new user, and a button to register or log in a user:
Clicking the button executes one of two functions:
``` swift
...
CallToActionButton(
title: newUser ? "Register User" : "Log In",
action: { self.userAction(username: self.username, password: self.password) })
...
private func userAction(username: String, password: String) {
state.shouldIndicateActivity = true
if newUser {
signup(username: username, password: password)
} else {
login(username: username, password: password)
}
}
```
`signup` makes an asynchronous call to the Realm SDK to register the new user. Through a Combine pipeline, `signup` receives an event when the registration completes, which triggers it to invoke the `login` function:
``` swift
private func signup(username: String, password: String) {
if username.isEmpty || password.isEmpty {
state.shouldIndicateActivity = false
return
}
self.state.error = nil
app.emailPasswordAuth.registerUser(email: username, password: password)
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
login(username: username, password: password)
})
.store(in: &state.cancellables)
}
```
The `login` function uses the Realm SDK to log in the user asynchronously. If/when the Realm login succeeds, the Combine pipeline sends the Realm user to the `chatsterLoginPublisher` and `loginPublisher` publishers (recall that we've seen how those are handled within the `AppState` class):
``` swift
private func login(username: String, password: String) {
if username.isEmpty || password.isEmpty {
state.shouldIndicateActivity = false
return
}
self.state.error = nil
app.login(credentials: .emailPassword(email: username, password: password))
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
state.chatsterLoginPublisher.send($0)
state.loginPublisher.send($0)
})
.store(in: &state.cancellables)
}
```
### Saving the User Profile
On being logged in for the first time, the user is presented with SetProfileView. (They can also return here later by clicking on their avatar.) This is a SwiftUI sheet where the user can set their profile and preferences by interacting with the UI and then clicking "Save User Profile":
When the view loads, the UI is populated with any existing profile information found in the `User` object in the `AppState` environment object:
``` swift
...
@EnvironmentObject var state: AppState
...
.onAppear { initData() }
...
private func initData() {
displayName = state.user?.userPreferences?.displayName ?? ""
photo = state.user?.userPreferences?.avatarImage
}
```
As the user updates the UI elements, the Realm `User` object isn't changed. It's only when they click "Save User Profile" that we update the `User` object. Note that it uses the `userRealm` that was initialized when the user logged in to open a Realm write transaction before making the change:
``` swift
...
@EnvironmentObject var state: AppState
...
CallToActionButton(title: "Save User Profile", action: saveProfile)
...
private func saveProfile() {
if let realm = state.userRealm {
state.shouldIndicateActivity = true
do {
try realm.write {
state.user?.userPreferences?.displayName = displayName
if photoAdded {
guard let newPhoto = photo else {
print("Missing photo")
state.shouldIndicateActivity = false
return
}
state.user?.userPreferences?.avatarImage = newPhoto
}
state.user?.presenceState = .onLine
}
} catch {
state.error = "Unable to open Realm write transaction"
}
}
state.shouldIndicateActivity = false
}
```
Once saved to the local realm, Realm Sync copies changes made to the `User` object to the associated `User` document in Atlas.
### List of Conversations
Once the user has logged in and set up their profile information, they're presented with the `ConversationListView`:
``` swift
if state.loggedIn {
if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {
SetProfileView(isPresented: $showingProfileView)
} else {
ConversationListView()
.navigationBarTitle("Chats", displayMode: .inline)
.navigationBarItems(
trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(
photo: state.user?.userPreferences?.avatarImage,
online: true) { showingProfileView.toggle() } : nil
)
}
} else {
LoginView()
}
```
ConversationListView displays a list of all the conversations that the user is currently a member of (initially none) by looping over `conversations` within their `User` Realm object:
``` swift
if let conversations = state.user?.conversations.freeze().sorted(by: sortDescriptors) {
List {
ForEach(conversations) { conversation in
Button(action: {
self.conversation = conversation
showConversation.toggle()
}) {
ConversationCardView(
conversation: conversation,
lastSync: lastSync)
}
}
}
...
}
```
At any time, another user can include you in a new group conversation. This view needs to reflect those changes as they happen:
When the other user adds us to a conversation, our `User` document is updated automatically through the magic of Realm Sync and our Realm trigger; but we need to give SwiftUI a nudge to refresh the current view. We do that by registering for Realm notifications and updating the `lastSync` state variable on each change. We register for notifications when the view appears and deregister when it disappears:
``` swift
@State var lastSync: Date?
...
var body: some View {
VStack {
...
if let lastSync = lastSync {
LastSync(date: lastSync)
}
...
}
...
.onAppear { watchRealms() }
.onDisappear { stopWatching() }
}
private func watchRealms() {
if let userRealm = state.userRealm {
realmUserNotificationToken = userRealm.observe {_, _ in
lastSync = Date()
}
}
if let chatsterRealm = state.chatsterRealm {
realmChatsterNotificationToken = chatsterRealm.observe { _, _ in
lastSync = Date()
}
}
}
private func stopWatching() {
if let userToken = realmUserNotificationToken {
userToken.invalidate()
}
if let chatsterToken = realmChatsterNotificationToken {
chatsterToken.invalidate()
}
}
```
### Creating New Conversations
NewConversationView is another view that lets the user provide a number of details which are then saved to Realm when the "Save" button is tapped. What's new is that it uses Realm to search for all users that match a filter pattern:
``` swift
private func searchUsers() {
var candidateChatsters: Results
if let chatsterRealm = state.chatsterRealm {
let allChatsters = chatsterRealm.objects(Chatster.self)
if candidateMember == "" {
candidateChatsters = allChatsters
} else {
let predicate = NSPredicate(format: "userName CONTAINScd] %@", candidateMember)
candidateChatsters = allChatsters.filter(predicate)
}
candidateMembers = []
candidateChatsters.forEach { chatster in
if !members.contains(chatster.userName) && chatster.userName != state.user?.userName {
candidateMembers.append(chatster.userName)
}
}
}
}
```
### Conversation Status
When the status of a conversation changes (users go online/offline or new messages are received), the card displaying the conversation details should update.
We already have a Realm function to set the `presence` status in `Chatster` documents/objects when users log on or off. All `Chatster` objects are readable by all users, and so [ConversationCardContentsView can already take advantage of that information.
The `conversation.unreadCount` is part of the `User` object and so we need another Realm trigger to update that whenever a new chat message is posted to a conversation.
We add a new Realm function `chatMessageChange` that's configured as private and with "System" authentication (just like our other functions). This is the function code that will increment the `unreadCount` for all `User` documents for members of the conversation:
``` javascript
exports = function(changeEvent) {
if (changeEvent.operationType != "insert") {
console.log(`ChatMessage ${changeEvent.operationType} event – currently ignored.`);
return;
}
console.log(`ChatMessage Insert event being processed`);
let userCollection = context.services.get("mongodb-atlas").db("RChat").collection("User");
let chatMessage = changeEvent.fullDocument;
let conversation = "";
if (chatMessage.partition) {
const splitPartition = chatMessage.partition.split("=");
if (splitPartition.length == 2) {
conversation = splitPartition1];
console.log(`Partition/conversation = ${conversation}`);
} else {
console.log("Couldn't extract the conversation from partition ${chatMessage.partition}");
return;
}
} else {
console.log("partition not set");
return;
}
const matchingUserQuery = {
conversations: {
$elemMatch: {
id: conversation
}
}
};
const updateOperator = {
$inc: {
"conversations.$[element].unreadCount": 1
}
};
const arrayFilter = {
arrayFilters:[
{
"element.id": conversation
}
]
};
userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)
.then ( result => {
console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);
}, error => {
console.log(`Failed to match and update User docs: ${error}`);
});
};
```
That function should be invoked by a new Realm database trigger (`ChatMessageChange`) to fire whenever a document is inserted into the `RChat.ChatMessage` collection.
### Within the Chat Room
[ChatRoomView has a lot of similarities with `ConversationListView`, but with one fundamental difference. Each conversation/chat room has its own partition, and so when opening a conversation, you need to open a new realm and observe for changes in it:
``` swift
@EnvironmentObject var state: AppState
...
var body: some View {
VStack {
...
}
.onAppear { loadChatRoom() }
.onDisappear { closeChatRoom() }
}
private func loadChatRoom() {
clearUnreadCount()
if let user = app.currentUser, let conversation = conversation {
scrollToBottom()
self.state.shouldIndicateActivity = true
let realmConfig = user.configuration(partitionValue: "conversation=\(conversation.id)")
Realm.asyncOpen(configuration: realmConfig)
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: { result in
if case let .failure(error) = result {
self.state.error = "Failed to open ChatMessage realm: \(error.localizedDescription)"
state.shouldIndicateActivity = false
}
}, receiveValue: { realm in
chatRealm = realm
chats = realm.objects(ChatMessage.self).sorted(byKeyPath: "timestamp")
realmChatsNotificationToken = realm.observe {_, _ in
scrollToBottom()
clearUnreadCount()
lastSync = Date()
}
scrollToBottom()
state.shouldIndicateActivity = false
})
.store(in: &self.state.cancellables)
}
}
```
Note that we only open a `Conversation` realm when the user opens the associated view because having too many realms open concurrently can exhaust resources. It's also important that we stop observing the realm by setting it to `nil` when leaving the view:
``` swift
@EnvironmentObject var state: AppState
...
var body: some View {
VStack {
...
}
.onAppear { loadChatRoom() }
.onDisappear { closeChatRoom() }
}
private func closeChatRoom() {
clearUnreadCount()
if let token = realmChatsterNotificationToken {
token.invalidate()
}
if let token = realmChatsNotificationToken {
token.invalidate()
}
chatRealm = nil
}
```
To send a message, all the app needs to do is to add the new chat message to Realm. Realm Sync will then copy it to Atlas, where it is then synced to the other users:
``` swift
private func sendMessage(text: String, photo: Photo?, location: Double]) {
if let conversation = conversation {
let chatMessage = ChatMessage(conversationId: conversation.id,
author: state.user?.userName ?? "Unknown",
text: text,
image: photo,
location: location)
if let chatRealm = chatRealm {
do {
try chatRealm.write {
chatRealm.add(chatMessage)
}
} catch {
state.error = "Unable to open Realm write transaction"
}
} else {
state.error = "Cannot save chat message as realm is not set"
}
}
}
```
## Summary
In this article, we've gone through the key steps you need to take when building a mobile app using Realm, including:
- Managing the user lifecycle: registering, authenticating, logging in, and logging out.
- Managing and storing user profile information.
- Adding objects to Realm.
- Performing searches on Realm data.
- Syncing data between your mobile apps and with MongoDB Atlas.
- Reacting to data changes synced from other devices.
- Adding some back end magic using Realm triggers and functions.
There's a lot of code and functionality that hasn't been covered in this article, and so it's worth looking through the rest of the app to see how to use features such as these from a SwiftUI iOS app:
- Location data
- Maps
- Camera and photo library
- Actions when minimizing your app
- Notifications
We wrote the iOS version of the app first, but we plan on adding an Android (Kotlin) version soon – keep checking the [developer hub and the repo for updates.
## References
- GitHub Repo for this app, as it stood when this article was written
- Read Building a Mobile Chat App Using Realm – Data Architecture to understand the data model and partitioning strategy behind the RChat app
- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine
- GitHub Repo for Realm-Cocoa SDK
- Realm Cocoa SDK documentation
- MongoDB's Realm documentation
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Realm",
"Swift",
"iOS",
"Mobile"
],
"pageDescription": "How to incorporate Realm into your iOS App. Building a chat app with SwiftUI and Realm-Cocoa",
"contentType": "Tutorial"
} | Building a Mobile Chat App Using Realm – Integrating Realm into Your App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/go/field-level-encryption-fle-mongodb-golang | created | # Client-Side Field Level Encryption (CSFLE) in MongoDB with Golang
One of the many great things about MongoDB is how secure you can make
your data in it. In addition to network and user-based rules, you have
encryption of your data at rest, encryption over the wire, and now
recently, client-side encryption known as client-side field level
encryption (CSFLE).
So, what exactly is client-side field level encryption (CSFLE) and how
do you use it?
With field level encryption, you can choose to encrypt certain fields
within a document, client-side, while leaving other fields as plain
text. This is particularly useful because when viewing a CSFLE document
with the CLI,
Compass, or directly within
Altas, the encrypted fields will
not be human readable. When they are not human readable, if the
documents should get into the wrong hands, those fields will be useless
to the malicious user. However, when using the MongoDB language drivers
while using the same encryption keys, those fields can be decrypted and
are queryable within the application.
In this quick start themed tutorial, we're going to see how to use
MongoDB field level
encryption
with the Go programming language (Golang). In particular, we're going to
be exploring automatic encryption rather than manual encryption.
## The Requirements
There are a few requirements that must be met prior to attempting to use
CSFLE with the Go driver.
- MongoDB Atlas 4.2+
- MongoDB Go driver 1.2+
- The libmongocrypt
library installed
- The
mongocryptd
binary installed
>
>
>This tutorial will focus on automatic encryption. While this tutorial
>will use MongoDB Atlas, you're
>going to need to be using version 4.2 or newer for MongoDB Atlas or
>MongoDB Enterprise Edition. You will not be able to use automatic field
>level encryption with MongoDB Community Edition.
>
>
The assumption is that you're familiar with developing Go applications
that use MongoDB. If you want a refresher, take a look at the quick
start
series
that I published on the topic.
To use field level encryption, you're going to need a little more than
just having an appropriate version of MongoDB and the MongoDB Go driver.
We'll need **libmongocrypt**, which is a companion library for
encryption in the MongoDB drivers, and **mongocryptd**, which is a
binary for parsing automatic encryption rules based on the extended JSON
format.
## Installing the Libmongocrypt and Mongocryptd Binaries and Libraries
Because of the **libmongocrypt** and **mongocryptd** requirements, it's
worth reviewing how to install and configure them. We'll be exploring
installation on macOS, but refer to the documentation for
libmongocrypt and
mongocryptd
for your particular operating system.
There are a few solutions torward installing the **libmongocrypt**
library on macOS, the easiest being with Homebrew.
If you've got Homebrew installed, you can install **libmongocrypt** with
the following command:
``` bash
brew install mongodb/brew/libmongocrypt
```
Just like that, the MongoDB Go driver will be able to handle encryption.
Further explanation of the instructions can be found in the
documentation.
Because we want to do automatic encryption with the driver using an
extended JSON schema, we need **mongocryptd**, a binary that ships with
MongoDB Enterprise Edition. The **mongocryptd** binary needs to exist on
the computer or server where the Go application intends to run. It is
not a development dependency like **libmongocrypt**, but a runtime
dependency.
You'll want to consult the
documentation
on how to obtain the **mongocryptd** binary as each operating system has
different steps.
For macOS, you'll want to download MongoDB Enterprise Edition from the
MongoDB Download
Center.
You can refer to the Enterprise Edition installation
instructions
for macOS to install, but the gist of the installation involves
extracting the TAR file and moving the files to the appropriate
directory.
By this point, all the appropriate components for field level encryption
should be installed or available.
## Create a Data Key in MongoDB for Encrypting and Decrypting Document Fields
Before we can start encrypting and decrypting fields within our
documents, we need to establish keys to do the bulk of the work. This
means defining our key vault location within MongoDB and the Key
Management System (KMS) we wish to use for decrypting the data
encryption keys.
The key vault is a collection that we'll create within MongoDB for
storing encrypted keys for our document fields. The primary key within
the KMS will decrypt the keys within the key vault.
For this particular tutorial, we're going to use a Local Key Provider
for our KMS. It is worth looking into something like AWS
KMS or similar, something we'll explore in
a future tutorial, as an alternative to a Local Key Provider.
On your computer, create a new Go project with the following **main.go**
file:
``` go
package main
import (
"context"
"crypto/rand"
"fmt"
"io/ioutil"
"log"
"os"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var (
ctx = context.Background()
kmsProviders mapstring]map[string]interface{}
schemaMap bson.M
)
func createDataKey() {}
func createEncryptedClient() *mongo.Client {}
func readSchemaFromFile(file string) bson.M {}
func main() {}
```
You'll need to install the MongoDB Go driver to proceed. To learn how to
do this, take a moment to check out my previous tutorial titled [Quick
Start: Golang & MongoDB - Starting and
Setup.
In the above code, we have a few variables defined as well as a few
functions. We're going to focus on the `kmsProviders` variable and the
`createDataKey` function for this particular part of the tutorial.
Take a look at the following `createDataKey` function:
``` go
func createDataKey() {
kvClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
log.Fatal(err)
}
clientEncryptionOpts := options.ClientEncryption().SetKeyVaultNamespace("keyvault.datakeys").SetKmsProviders(kmsProviders)
clientEncryption, err := mongo.NewClientEncryption(kvClient, clientEncryptionOpts)
if err != nil {
log.Fatal(err)
}
defer clientEncryption.Close(ctx)
_, err = clientEncryption.CreateDataKey(ctx, "local", options.DataKey().SetKeyAltNames(]string{"example"}))
if err != nil {
log.Fatal(err)
}
}
```
In the above `createDataKey` function, we are first connecting to
MongoDB. The MongoDB connection string is defined by the environment
variable `ATLAS_URI` in the above code. While you could hard-code this
connection string or store it in a configuration file, for security
reasons, it makes a lot of sense to use environment variables instead.
If the connection was successful, we need to define the key vault
namespace and the KMS provider as part of the encryption configuration
options. The namespace is composed of the database name followed by the
collection name. This is where the key information will be stored. The
`kmsProviders` map, which will be defined later, will have local key
information.
Executing the `CreateDataKey` function will create the key information
within MongoDB as a document.
We are choosing to specify an alternate key name of `example` so that we
don't have to refer to the data key by its `_id` when using it with our
documents. Instead, we'll be able to use the unique alternate name which
could follow a special naming convention. It is important to note that
the alternate key name is only useful when using the
`AEAD_AES_256_CBC_HMAC_SHA_512-Random`, something we'll explore later in
this tutorial.
To use the `createDataKey` function, we can make some modifications to
the `main` function:
``` go
func main() {
localKey := make([]byte, 96)
if _, err := rand.Read(localKey); err != nil {
log.Fatal(err)
}
kmsProviders = map[string]map[string]interface{}{
"local": {
"key": localKey,
},
}
createDataKey()
}
```
In the above code, we are generating a random key. This random key is
added to the `kmsProviders` map that we were using within the
`createDataKey` function.
>
>
>It is insecure to have your local key stored within the application or
>on the same server. In production, consider using AWS KMS or accessing
>your local key through a separate request before adding it to the Local
>Key Provider.
>
>
If you ran the code so far, you'd end up with a `keyvault` database and
a `datakeys` collection which has a document of a key with an alternate
name. That document would look something like this:
``` none
{
"_id": UUID("27a51d69-809f-4cb9-ae15-d63f7eab1585"),
"keyAltNames": [
"example"
],
"keyMaterial": Binary("oJ6lEzjIEskHFxz7zXqddCgl64EcP1A7E/r9zT+OL19/ZXVwDnEjGYMvx+BgcnzJZqkXTFTgJeaRYO/fWk5bEcYkuvXhKqpMq2ZO", 0),
"creationDate": 2020-11-05T23:32:26.466+00:00,
"updateDate": 2020-11-05T23:32:26.466+00:00,
"status": 0,
"masterKey": {
"provider": "local"
}
}
```
There are a few important things to note with our code so far:
- The `localKey` is random and is not persisting beyond the runtime
which will result in key mismatches upon consecutive runs of the
application. Either specify a non-random key or store it somewhere
after generation.
- We're using a Local Key Provider with a key that exists locally.
This is not recommended in a production scenario due to security
concerns. Instead, use a provider like AWS KMS or store the key
externally.
- The `createDataKey` should only be executed when a particular key is
needed to be created, not every time the application runs.
- There is no strict naming convention for the key vault and the keys
that reside in it. Name your database and collection however makes
sense to you.
After we run our application the first time, we'll probably want to
comment out the `createDataKey` line in the `main` function.
## Defining an Extended JSON Schema Map for Fields to be Encrypted
With the data key created, we're at a point in time where we need to
figure out what fields should be encrypted in a document and what fields
should be left as plain text. The easiest way to do this is with a
schema map.
A schema map for encryption is extended JSON and can be added directly
to the Go source code or loaded from an external file. From a
maintenance perspective, loading from an external file is easier to
maintain.
Take a look at the following schema map for encryption:
``` json
{
"fle-example.people": {
"encryptMetadata": {
"keyId": "/keyAltName"
},
"properties": {
"ssn": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
}
},
"bsonType": "object"
}
}
```
Let's assume the above JSON exists in a **schema.json** file which sits
relative to our Go files or binary. In the above JSON, we're saying that
the map applies to the `people` collection within the `fle-example`
database.
The `keyId` field within the `encryptMetadata` object says that
documents within the `people` collection must have a string field called
`keyAltName`. The value of this field will reflect the alternate key
name that we defined when creating the data key. Notice the `/` that
prefixes the value. That is not an error. It is a requirement for this
particular value since it is a pointer.
The `properties` field lists fields within our document and in this
example lists the fields that should be encrypted along with the
encryption algorithm to use. In our example, only the `ssn` field will
be encrypted while all other fields will remain as plain text.
There are two algorithms currently supported:
- AEAD_AES_256_CBC_HMAC_SHA_512-Random
- AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic
In short, the `AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm is best
used on fields that have low cardinality or don't need to be used within
a filter for a query. The `AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic`
algorithm should be used for fields with high cardinality or for fields
that need to be used within a filter.
To learn more about these algorithms, visit the
[documentation.
We'll be exploring both algorithms in this particular tutorial.
If we wanted to, we could change the schema map to the following:
``` json
{
"fle-example.people": {
"properties": {
"ssn": {
"encrypt": {
"keyId": "/keyAltName",
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
}
},
"bsonType": "object"
}
}
```
The change made in the above example has to do with the `keyId` field.
Rather than declaring it as part of the `encryptMetadata`, we've
declared it as part of a particular field. This could be useful if you
want to use different keys for different fields.
Remember, the pointer used for the `keyId` will only work with the
`AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm. You can, however, use
the actual key id for both algorithms.
With a schema map for encryption available, let's get it loaded in the
Go application. Change the `readSchemaFromFile` function to look like
the following:
``` go
func readSchemaFromFile(file string) bson.M {
content, err := ioutil.ReadFile(file)
if err != nil {
log.Fatal(err)
}
var doc bson.M
if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {
log.Fatal(err)
}
return doc
}
```
In the above code, we are reading the file, which will be the
**schema.json** file soon enough. If it is read successfully, we use the
`UnmarshalExtJSON` function to load it into a `bson.M` object that is
more pleasant to work with in Go.
## Enabling MongoDB Automatic Client Encryption in a Golang Application
By this point, you should have the code in place for creating a data key
and a schema map defined to be used with the automatic client encryption
functionality that MongoDB supports. It's time to bring it together to
actually encrypt and decrypt fields.
We're going to start with the `createEncryptedClient` function within
our project:
``` go
func createEncryptedClient() *mongo.Client {
schemaMap = readSchemaFromFile("schema.json")
mongocryptdOpts := mapstring]interface{}{
"mongodcryptdBypassSpawn": true,
}
autoEncryptionOpts := options.AutoEncryption().
SetKeyVaultNamespace("keyvault.datakeys").
SetKmsProviders(kmsProviders).
SetSchemaMap(schemaMap).
SetExtraOptions(mongocryptdOpts)
mongoClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")).SetAutoEncryptionOptions(autoEncryptionOpts))
if err != nil {
log.Fatal(err)
}
return mongoClient
}
```
In the above code we are making use of the `readSchemaFromFile` function
that we had just created to load our schema map for encryption. Next, we
are defining our auto encryption options and establishing a connection
to MongoDB. This will look somewhat familiar to what we did in the
`createDataKey` function. When defining the auto encryption options, not
only are we specifying the KMS for our key and vault, but we're also
supplying the schema map for encryption.
You'll notice that we are using `mongocryptdBypassSpawn` as an extra
option. We're doing this so that the client doesn't try to automatically
start the **mongocryptd** daemon if it is already running. You may or
may not want to use this in your own application.
If the connection was successful, the client is returned.
It's time to revisit the `main` function within the project:
``` go
func main() {
localKey := make([]byte, 96)
if _, err := rand.Read(localKey); err != nil {
log.Fatal(err)
}
kmsProviders = map[string]map[string]interface{}{
"local": {
"key": localKey,
},
}
// createDataKey()
client := createEncryptedClient()
defer client.Disconnect(ctx)
collection := client.Database("fle-example").Collection("people")
if _, err := collection.InsertOne(context.TODO(), bson.M{"name": "Nic Raboy", "ssn": "123456", "keyAltName": "example"}); err != nil {
log.Fatal(err)
}
result, err := collection.FindOne(context.TODO(), bson.D{}).DecodeBytes()
if err != nil {
log.Fatal(err)
}
fmt.Println(result)
}
```
In the above code, we are creating our Local Key Provider using a local
key that was randomly generated. Remember, this key should match what
was used when creating the data key, so random may not be the best
long-term. Likewise, a local key shouldn't be used in production because
of security reasons.
Once the KMS providers are established, the `createEncryptedClient`
function is executed. Remember, this particular function will set the
automatic encryption options and establish a connection to MongoDB.
To match the database and collection used in the schema map definition,
we are using `fle-example` as the database and `people` as the
collection. The operations that follow, such as `InsertOne` and
`FindOne`, can be used as if field level encryption wasn't even a thing.
Because we have an `ssn` field and the `keyAltName` field, the `ssn`
field will be encrypted client-side and saved to MongoDB. When doing
lookup operation, the encrypted field will be decrypted.
![FLE Data in MongoDB Atlas
When looking at the data in Atlas, for example, the encrypted fields
will not be human readable as seen in the above screenshot.
## Running and Building a Golang Application with MongoDB Field Level Encryption
When field level encryption is included in the Go application, a special
tag must be included in the build or run process, depending on the route
you choose. You should already have **mongocryptd** and
**libmongocrypt**, so to build your Go application, you'd do the
following:
``` bash
go build -tags cse
```
If you use the above command to build your binary, you can use it as
normal. However, if you're running your application without building,
you can do something like the following:
``` bash
go run -tags cse main.go
```
The above command will run the application with client-side encryption
enabled.
## Filter Documents in MongoDB on an Encrypted Field
If you've run the example so far, you'll probably notice that while you
can automatically encrypt fields and decrypt fields, you'll get an error
if you try to use a filter that contains an encrypted field.
In our example thus far, we use the
`AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm on our encrypted
fields. To be able to filter on encrypted fields, the
`AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic` must be used. More
information between the two options can be found in the
documentation.
To use the deterministic approach, we need to make a few revisions to
our project. These changes are a result of the fact that we won't be
able to use alternate key names within our schema map.
First, let's change the **schema.json** file to the following:
``` json
{
"fle-example.people": {
"encryptMetadata": {
"keyId":
{
"$binary": {
"base64": "%s",
"subType": "04"
}
}
]
},
"properties": {
"ssn": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
}
}
},
"bsonType": "object"
}
}
```
The two changes in the above JSON reflect the new algorithm and the
`keyId` using the actual `_id` value rather than an alias. For the
`base64` field, notice the use of the `%s` placeholder. If you know the
base64 string version of your key, then swap it out and save yourself a
bunch of work. Since this tutorial is an example and the data changes
pretty much every time we run it, we probably want to swap out that
field after the file is loaded.
Starting with the `createDataKey` function, find the following line with
the `CreateDataKey` function call:
``` go
dataKeyId, err := clientEncryption.CreateDataKey(ctx, "local", options.DataKey())
```
What we didn't see in the previous parts of this tutorial is that this
function returns the `_id` of the data key. We should probably update
our `createDataKey` function to return `primitive.Binary` and then
return that `dataKeyId` variable.
We need to move that `dataKeyId` value around until it reaches where we
load our JSON file. We're doing a lot of work for the following reasons:
- We're in the scenario where we don't know the `_id` of our data key
prior to runtime. If we know it, we can add it to the schema and be
done.
- We designed our code to jump around with functions.
The schema map requires a base64 value to be used, so when we pass
around `dataKeyId`, we need to have first encoded it.
In the `main` function, we might have something that looks like this:
``` go
dataKeyId := createDataKey()
client := createEncryptedClient(base64.StdEncoding.EncodeToString(dataKeyId.Data))
```
This means that the `createEncryptedClient` needs to receive a string
argument. Update the `createEncryptedClient` to accept a string and then
change how we're reading our JSON file:
``` go
schemaMap = readSchemaFromFile("schema.json", dataKeyIdBase64)
```
Remember, we're just passing the base64 encoded value through the
pipeline. By the end of this, in the `readSchemaFromFile` function, we
can update our code to look like the following:
``` go
func readSchemaFromFile(file string, dataKeyIdBase64 string) bson.M {
content, err := ioutil.ReadFile(file)
if err != nil {
log.Fatal(err)
}
content = []byte(fmt.Sprintf(string(content), dataKeyIdBase64))
var doc bson.M
if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {
log.Fatal(err)
}
return doc
}
```
Not only are we receiving the base64 string, but we are using an
`Sprintf` function to swap our `%s` placeholder with the actual value.
Again, these changes were based around how we designed our code. At the
end of the day, we were really only changing the `keyId` in the schema
map and the algorithm used for encryption. By doing this, we are not
only able to decrypt fields that had been encrypted, but we're also able
to filter for documents using encrypted fields.
## The Field Level Encryption (FLE) Code in Go
While it might seem like we wrote a lot of code, the reality is that the
code was far simpler than the concepts involved. To get a better look at
the code, you can find it below:
``` go
package main
import (
"context"
"crypto/rand"
"encoding/base64"
"fmt"
"io/ioutil"
"log"
"os"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var (
ctx = context.Background()
kmsProviders map[string]map[string]interface{}
schemaMap bson.M
)
func createDataKey() primitive.Binary {
kvClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
log.Fatal(err)
}
kvClient.Database("keyvault").Collection("datakeys").Drop(ctx)
clientEncryptionOpts := options.ClientEncryption().SetKeyVaultNamespace("keyvault.datakeys").SetKmsProviders(kmsProviders)
clientEncryption, err := mongo.NewClientEncryption(kvClient, clientEncryptionOpts)
if err != nil {
log.Fatal(err)
}
defer clientEncryption.Close(ctx)
dataKeyId, err := clientEncryption.CreateDataKey(ctx, "local", options.DataKey())
if err != nil {
log.Fatal(err)
}
return dataKeyId
}
func createEncryptedClient(dataKeyIdBase64 string) *mongo.Client {
schemaMap = readSchemaFromFile("schema.json", dataKeyIdBase64)
mongocryptdOpts := map[string]interface{}{
"mongodcryptdBypassSpawn": true,
}
autoEncryptionOpts := options.AutoEncryption().
SetKeyVaultNamespace("keyvault.datakeys").
SetKmsProviders(kmsProviders).
SetSchemaMap(schemaMap).
SetExtraOptions(mongocryptdOpts)
mongoClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("ATLAS_URI")).SetAutoEncryptionOptions(autoEncryptionOpts))
if err != nil {
log.Fatal(err)
}
return mongoClient
}
func readSchemaFromFile(file string, dataKeyIdBase64 string) bson.M {
content, err := ioutil.ReadFile(file)
if err != nil {
log.Fatal(err)
}
content = []byte(fmt.Sprintf(string(content), dataKeyIdBase64))
var doc bson.M
if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {
log.Fatal(err)
}
return doc
}
func main() {
fmt.Println("Starting the application...")
localKey := make([]byte, 96)
if _, err := rand.Read(localKey); err != nil {
log.Fatal(err)
}
kmsProviders = map[string]map[string]interface{}{
"local": {
"key": localKey,
},
}
dataKeyId := createDataKey()
client := createEncryptedClient(base64.StdEncoding.EncodeToString(dataKeyId.Data))
defer client.Disconnect(ctx)
collection := client.Database("fle-example").Collection("people")
collection.Drop(context.TODO())
if _, err := collection.InsertOne(context.TODO(), bson.M{"name": "Nic Raboy", "ssn": "123456"}); err != nil {
log.Fatal(err)
}
result, err := collection.FindOne(context.TODO(), bson.M{"ssn": "123456"}).DecodeBytes()
if err != nil {
log.Fatal(err)
}
fmt.Println(result)
}
```
Try to set the `ATLAS_URI` in your environment variables and give the
code a spin.
## Troubleshooting Common MongoDB CSFLE Problems
If you ran the above code and found some encrypted data in your
database, fantastic! However, if you didn't get so lucky, I want to
address a few of the common problems that come up.
Let's start with the following runtime error:
``` none
panic: client-side encryption not enabled. add the cse build tag to support
```
If you see the above error, it is likely because you forgot to use the
`-tags cse` flag when building or running your application. To get
beyond this, just build your application with the following:
``` none
go build -tags cse
```
Assuming there aren't other problems, you won't receive that error
anymore.
When you build or run with the `-tags cse` flag, you might stumble upon
the following error:
``` none
/usr/local/Cellar/go/1.13.1/libexec/pkg/tool/darwin_amd64/link: running clang failed: exit status 1
ld: warning: directory not found for option '-L/usr/local/Cellar/libmongocrypt/1.0.4/lib'
ld: library not found for -lmongocrypt
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
The error might not look exactly the same as mine depending on the
operating system you're using, but the gist of it is that it's saying
you are missing the **libmongocrypt** library. Make sure that you've
installed it correctly for your operating system per the
[documentation.
Now, what if you encounter the following?
``` none
exec: "mongocryptd": executable file not found in $PATH
exit status 1
```
Like with the **libmongocrypt** error, it just means that we don't have
access to **mongocryptd**, a requirement for automatic field level
encryption. There are numerous methods toward installing this binary, as
seen in the
documentation,
but on macOS it means having MongoDB Enterprise Edition nearby.
## Conclusion
You just saw how to use MongoDB client-side field level encryption
(CSFLE) in your Go application. This is useful if you'd like to encrypt
fields within MongoDB documents client-side before it reaches the
database.
To give credit where credit is due, a lot of the code from this tutorial
was taken from Kenn White's sandbox
repository
on GitHub.
There are a few things that I want to reiterate:
- Using a local key is a security risk in production. Either use
something like AWS KMS or load your Local Key Provider with a key
that was obtained through an external request.
- The **mongocryptd** binary must be available on the computer or
server running the Go application. This is easily installed through
the MongoDB Enterprise Edition installation.
- The **libmongocrypt** library must be available to add compatibility
to the Go driver for client-side encryption and decryption.
- Don't lose your client-side key. Otherwise, you lose the ability to
decrypt your fields.
In a future tutorial, we'll explore how to use AWS KMS and similar for
key management.
Questions? Comments? We'd love to connect with you. Join the
conversation on the MongoDB Community
Forums.
| md | {
"tags": [
"Go",
"MongoDB"
],
"pageDescription": "Learn how to encrypt document fields client-side in Go with MongoDB client-side field level encryption (CSFLE).",
"contentType": "Tutorial"
} | Client-Side Field Level Encryption (CSFLE) in MongoDB with Golang | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-relevancy-explained | created | # Atlas Search Relevancy Explained
Full-text search powers all of our digital lives — googling for this and that; asking Siri where to find a tasty, nearby dinner; shopping at Amazon; and so on. We receive relevant results, often even in spite of our typos, voice transcription mistakes, or vaguely formed queries. We have grown accustomed to expecting the best results for our searching intentions, right there, at the top.
But now it’s your turn, dear developer, to build the same satisfying user experience into your Atlas-powered application.
If you’ve not yet created an Atlas Search index, it would be helpful to do so before delving into the rest of this article. We’ve got a handy tutorial to get started with Atlas Search. We will happily and patiently wait for you to get started and return here when you’ve got some search results.
Welcome back! We see that you’ve got data, and it lives in MongoDB Atlas. You’ve turned on Atlas Search and run some queries, and now you want to understand why the results are in the order they appear and get some tips on tuning the relevancy ranking order.
## Relevancy riddle
In the article Using Atlas Search from Java, we left the reader with a bit of a search relevancy mystery, using a query of the cast field for the phrase “keanu reeves” (lowercase; a `$match` fails at even this inexact of a query) narrowing the results to movies that are both dramatic (`genres:Drama`) _AND_ romantic (`genres:Romance`). We’ll use that same query here. The results of this query match several documents, but with differing scores. The only scoring factor is a `must` clause of the `phrase` “keanu reeves”. Why don’t “Sweet November” and “A Walk in the Clouds” score identically?
Can you spot the difference? Read on as we provide you the tools and tips to suss out and solve these kinds of challenges presented by full-text, inexact/fuzzy/close-but-not-exact search results.
## Score details
Atlas Search makes building full-text search applications possible, and with a few clicks, accepting default settings, you’ve got incredibly powerful capabilities within reach. You’ve got a pretty good auto-pilot system, but you’re in the cockpit of a 747 with knobs and dials all around. The plane will take off and land safely by itself — most of the time. Depending on conditions and goals, manually going up to 11.0 on the volume knob, and perhaps a bit more on the thrust lever, is needed to fly there in style. Relevancy tuning can be described like this as well, and before you take control of the parameters, you need to understand what the settings do and what’s possible with adjustments.
The scoring details of each document for a given query can be requested and returned. There are two steps needed to get the score details: first requesting them in the `$search` request, and then projecting the score details metadata into each returned document. Requesting score details is a performance hit on the underlying search engine, so only do this for diagnostic or learning purposes. To request score details from the search request, set `scoreDetails` to `true`. Those score details are available in the results `$meta`data for each document.
Here’s what’s needed to get score details:
```
{
"$search": {
...
"scoreDetails": true
}
},
{
"$project": {
...
"scoreDetails": {"$meta": "searchScoreDetails"}
}
}]
```
Let’s search the movies collection built from the [tutorial for dramatic, romance movies starring “keanu reeves” (tl; dr: add sample collections, create a search index `default` on movies collection with `dynamic=”true”`), bringing in the score and score details:
```
{
"$search": {
"compound": {
"filter": [
{
"compound": {
"must": [
{
"text": {
"query": "Drama",
"path": "genres"
}
},
{
"text": {
"query": "Romance",
"path": "genres"
}
}
]
}
}
],
"must": [
{
"phrase": {
"query": "keanu reeves",
"path": "cast"
}
}
]
},
"scoreDetails": true
}
},
{
"$project": {
"_id": 0,
"title": 1,
"cast": 1,
"genres": 1,
"score": {
"$meta": "searchScore"
},
"scoreDetails": {
"$meta": "searchScoreDetails"
}
}
},
{
"$limit": 10
}
]
```
Content warning! The following output is not for the faint of heart. It’s the daunting reason we are here though, so please push through as these details are explained below. The value of the projected `scoreDetails` will look something like the following for the first result:
```
"scoreDetails": {
"value": 6.011996746063232,
"description": "sum of:",
"details": [
{
"value": 0,
"description": "match on required clause, product of:",
"details": [
{
"value": 0,
"description": "# clause",
"details": []
},
{
"value": 1,
"description": "+ScoreDetailsWrapped ($type:string/genres:drama) +ScoreDetailsWrapped ($type:string/genres:romance)",
"details": []
}
]
},
{
"value": 6.011996746063232,
"description": "$type:string/cast:\"keanu reeves\" [BM25Similarity], result of:",
"details": [
{
"value": 6.011996746063232,
"description": "score(freq=1.0), computed as boost * idf * tf from:",
"details": [
{
"value": 13.083234786987305,
"description": "idf, sum of:",
"details": [
{
"value": 6.735175132751465,
"description": "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
"details": [
{
"value": 27,
"description": "n, number of documents containing term",
"details": []
},
{
"value": 23140,
"description": "N, total number of documents with field",
"details": []
}
]
},
{
"value": 6.348059177398682,
"description": "idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
"details": [
{
"value": 40,
"description": "n, number of documents containing term",
"details": []
},
{
"value": 23140,
"description": "N, total number of documents with field",
"details": []
}
]
}
]
},
{
"value": 0.4595191478729248,
"description": "tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
"details": [
{
"value": 1,
"description": "phraseFreq=1.0",
"details": []
},
{
"value": 1.2000000476837158,
"description": "k1, term saturation parameter",
"details": []
},
{
"value": 0.75,
"description": "b, length normalization parameter",
"details": []
},
{
"value": 8,
"description": "dl, length of field",
"details": []
},
{
"value": 8.217415809631348,
"description": "avgdl, average length of field",
"details": []
}
]
}
]
}
]
}
]
}
```
We’ll write a little code, below, that presents this nested structure in a more concise, readable format, and delve into the details there. Before we get to breaking down the score, we need to understand where these various factors come from. They come from Lucene.
## Lucene inside
[Apache Lucene powers a large percentage of the world’s search experiences, from the majority of e-commerce sites to healthcare and insurance systems, to intranets, to top secret intelligence, and so much more. And it’s no secret that Apache Lucene powers Atlas Search. Lucene has proven itself to be robust and scalable, and it’s pervasively deployed. Many of us would consider Lucene to be the most important open source project ever, where a diverse community of search experts from around the world and across multiple industries collaborate constructively to continually improve and innovate this potent project.
So, what is this amazing thing called Lucene? Lucene is an open source search engine library written in Java that indexes content and handles sophisticated queries, rapidly returning relevant results. In addition, Lucene provides faceting, highlighting, vector search, and more.
## Lucene indexing
We cannot discuss search relevancy without addressing the indexing side of the equation as they are interrelated. When documents are added to an Atlas collection with an Atlas Search index enabled, the fields of the documents are indexed into Lucene according to the configured index mappings.
When textual fields are indexed, a data structure known as an inverted index is built through a process called analysis. The inverted index, much like a physical dictionary, is a lexicographically/alphabetically ordered list of terms/words, cross-referenced to the documents that contain them. The analysis process is initially fed the entire text value of the field during indexing and, according to the analyzer defined in the mapping, breaks it down into individual terms/words.
For example, the silly sentence “The quick brown fox jumps over the lazy dog” is analyzed by the Atlas Search default analyzer (`lucene.standard`) into the following terms: the,quick,brown,fox,jumps,over,the,lazy,dog. Now, if we alphabetize (and de-duplicate, noting the frequency) those terms, it looks like this:
| term | frequency |
| :-------- | -------: |
| brown | 1 |
| dog | 1 |
| fox | 1 |
| jumps | 1 |
| lazy | 1 |
| over | 1 |
| quick | 1 |
| the | 2 |
In addition to which documents contain a term, the positions of each instance of that term are recorded in the inverted index structure. Recording term positions allows for phrase queries (like our “keanu reeves” example), where terms of the query must be adjacent to one another in the indexed field.
Suppose we have a Silly Sentences collection where that was our first document (document id 1), and we add another document (id 2) with the text “My dogs play with the red fox”. Our inverted index, showing document ids and term positions. becomes:
| term | document ids | term frequency | term positions
| :----| --------------: | ---------------: | ---------------: |
| brown | 1 | 1 | Document 1: 3 |
| dog | 1 | 1 | Document 1: 9 |
| dogs | 2 | 1 | Document 2: 2 |
| fox | 1,2 | 2 | Document 1: 4; Document 2: 7 |
| jumps | 1 | 1 | Document 1: 5 |
| lazy | 1 | 1 | Document 1: 8 |
| my | 2 | 1 | Document 2: 1 |
| over | 1 | 1 | Document 1: 6 |
| play | 2 | 1 | Document 2: 3 |
| quick | 1 | 1 | Document 1: 2 |
| red | 2 | 1 | Document 2: 6 |
| the | 1,2 | 3 | Document 1: 1, 7; Document 2: 5 |
| with | 2 | 1 | Document 2: 4 |
With this data structure, Lucene can quickly navigate to a queried term and return the documents containing it.
There are a couple of notable features of this inverted index example. The words “dog” and “dogs” are separate terms. The terms emitted from the analysis process, which are indexed exactly as they are emitted, are the atomic searchable units, where “dog” is not the same as “dogs”. Does your application need to find both documents for a search of either of these terms? Or should it be more exact? Also of note, out of two documents, “the” has appeared three times — more times than there are documents. Maybe words such as “the” are so common in your data that a search for that term isn’t useful. Your analyzer choices determine what lands in the inverted index, and thus what is searchable or not. Atlas Search provides a variety of analyzer options, with the right choice being the one that works best for your domain and data.
There are a number of statistics about a document collection that emerge through the analysis and indexing processes, including:
* Term frequency: How many times did a term appear in the field of the document?
* Document frequency: In how many documents does this term appear?
* Field length: How many terms are in this field?
* Term positions: In which position, in the emitted terms, does each instance appear?
These stats lurk in the depths of the Lucene index structure and surface visibly in the score detail output that we’ve seen above and will delve into below.
## Lucene scoring
The statistics captured during indexing factor into how documents are scored at query time. Lucene scoring, at its core, is built upon TF/IDF — term frequency/inverse document frequency. Generally speaking, TF/IDF scores documents with higher term frequencies greater than ones with lower term frequencies, and scores documents with more common terms lower than ones with rarer terms — the idea being that a rare term in the collection conveys more information than a frequently occurring one and that a term’s weight is proportional to its frequency.
There’s a bit more math behind the scenes of Lucene’s implementation of TF/IDF, to dampen the effect (e.g., take the square root) of TF and to scale IDF (using a logarithm function).
The classic TF/IDF formula has worked well in general, when document fields are of generally the same length, and there aren’t nefarious or odd things going on with the data where the same word is repeated many times — which happens in product descriptions, blog post comments, restaurant reviews, and where boosting a document to the top of the results has some incentive. Given that not all documents are created equal — some titles are long, some are short, and some have descriptions that repeat words a lot or are very succinct — some fine-tuning is warranted to account for these situations.
## Best matches
As search engines have evolved, refinements have been made to the classic TF/IDF relevancy computation to account for term saturation (an excessively large number of the same term within a field) and reduce the contribution of long field values which contain many more terms than shorter fields, by factoring in the ratio of the field length of the document to the average field length of the collection. The now popular BM25 method has become the default scoring formula in Lucene and is the scoring formula used by Atlas Search. BM25 stands for “Best Match 25” (the 25th iteration of this scoring algorithm). A really great writeup comparing classic TF/IDF to BM25, including illustrative graphs, can be found on OpenSource Connections.
There are built-in values for the additional BM25 factors, `k1` and `b`. The `k1` factor affects how much the score increases with each reoccurrence of the term, and `b` controls the effect of field length. Both of these factors are currently internally set to the Lucene defaults and are not settings a developer can adjust at this point, but that’s okay as the built-in values have been tuned to provide great relevancy as is.
## Breaking down the score details
Let’s look at those same score details in a slimmer, easier-to-read fashion:
It’s easier to see in this format that the score of roughly 6.011 comes from the sum of two numbers: 0.0 (the non-scoring `# clause`-labeled filters) and roughly 6.011. And that ~6.011 factor comes from the BM25 scoring formula that multiples the “idf” (inverse document frequency) factor of ~13.083 with the “tf” (term frequency) factor of ~0.459. The “idf” factor is the “sum of” two components, one for each of the terms in our `phrase` operator clause. Each of the `idf` factors for our two query terms, “keanu” and “reeves”, is computed using the formula in the output, which is:
log(1 + (N - n + 0.5) / (n + 0.5))
The “tf” factor for the full phrase is “computed as” this formula:
freq / (freq + k1 * (1 - b + b * dl / avgdl))
This uses the factors indented below it, such as the average length (in number of terms) of the “cast” field across all documents in the collection.
In front of each field name in this output (“genres” and “cast”) there is a prefix used internally to note the field type (the “$type:string/” prefix).
## Pretty printing the score details
The more human-friendly output of the score details above was generated using MongoDB VS Code Playgrounds. This JavaScript code will print a more concise, indented version of the scoreDetails, by calling: `print_score_details(doc.scoreDetails);`:
```
function print_score_details(details, indent_level) {
if (!indent_level) { indent_level = 0; }
spaces = " ".padStart(indent_level);
console.log(spaces + details.value + ", " + details.description);
details.details.forEach (d => {
print_score_details(d, indent_level + 2);
});
}
```
Similarly, pretty printing in Java can be done like the code developed in the article Using Atlas Search from Java, which is available on GitHub.
## Mystery solved!
Going back to our Relevancy Riddle, let’s see the score details:
Using the detailed information provided about the statistics captured in the Lucene inverted index, it turns out that the `cast` fields of these two documents have an interesting difference. They both have four cast members, but remember the analysis process that extracts searchable terms from text. In the lower scoring of the two documents, one of the cast members has a hyphenated last name: Aitana Sènchez-Gijèn. The dash/hyphen character is a term separator character for the `lucene.standard` analyzer, making one additional term for that document which in turn increases the length (in number of terms) of the `cast` field. A greater field length causes term matches to weigh less than if they were in a shorter length field.
## Compound is king
Even in this simple phrase query example, the scoring is made up of many factors that are the “sum of”, “product of”, “result of”, or “from” other factors and formulas. Relevancy tuning involves crafting clauses nested within a `compound` operator using `should` and `must`. Note again that `filter` clauses do not contribute to the score but are valuable to narrow the documents considered for scoring by the `should` and `must` clauses. And of course, `mustNot` clauses don’t contribute to the score, as documents matching those clauses are omitted from the results altogether.
Use multiple `compound.should` and `compound.must` to weight matches in different fields in different ways. It’s a common practice, for example, to weight matches in a `title` field higher than matches in a `description` field (or `plot` field in the movies collection), using boosts on different query operator clauses.
## Boosting clauses
With a query composed of multiple clauses, you have control over modifying the score in various ways using the optional `score` setting available on all search operators. Scoring factors for a clause can be controlled in these four ways:
* `constant`: The scoring factor for the clause is set to an explicit value.
* `boost`: Multiply the normal computed scoring factor for the clause by either a specified value or by the value of a field on the document being scored.
* `function`: Compute the scoring factor using the specified formula expression.
* `embedded`: Work with the `embeddedDocument` search operator to control how matching embedded documents contribute to the score of the top-level parent document.
That’s a lot of nuanced control! These are important controls to have when you’re deep into tuning search results rankings.
## Relevancy tuning: a delicate balance
With the tools and mechanisms illustrated here, you’ve got the basics of Atlas Search scoring insight. When presented with the inevitable results ranking challenges, you’ll be able to assess the situation and understand why and how the scores are computed as they are. Tuning those results is tricky. Nudging one query’s results to the desired order is fairly straightforward, but that’s just one query.
Adjusting boost factors, leveraging more nuanced compound clauses, and tinkering with analysis will affect other query results. To make sure your users get relevant results:
* Test, test, and test again, across many queries — especially real-world queries mined from your logs, not just your pet queries.
* Test with a complete collection of data (as representative or as real-world as you can get), not just a subset of data for development purposes.
* Remember, index stats matter for scores, such as the average length in number of terms of each field. If you test with non-production quality and scale data, relevance measures won’t match a production environment's stats.
Relevancy concerns vary dramatically by domain, scale, sensitivity, and monetary value of search result ordering. Ensuring the “best” (by whatever metrics are important to you) documents appear in the top positions presented is both an art and a science. The e-commerce biggies are constantly testing query results, running regression tests and A/B experiments behind the scenes , fiddling with all the parameters available. For website search, however, setting a boost for `title` can be all you need.
You’ve got the tools, and it’s just math, but be judicious about adjusting things, and do so with full real data, real queries, and some time and patience to set up tests and experiments.
Relevancy understanding and tuning is an on-going process and discussion. Questions? Comments? Let's continue the conversation over at our Atlas Search community forum. | md | {
"tags": [
"Atlas"
],
"pageDescription": "We've grown accustomed to expecting the best results for our search intentions. Now it’s your turn to build the same experience into your Atlas-powered app. ",
"contentType": "Article"
} | Atlas Search Relevancy Explained | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/python/pymongoarrow-and-data-analysis | created | # PyMongoArrow: Bridging the Gap Between MongoDB and Your Data Analysis App
## Overview
MongoDB has always been a great database for data science and data analysis, and that's because you can:
* Import data without a fixed schema.
* Clean it up within the database.
* Listen in real-time for updates (a very handy feature that's used by our MongoDB Kafka Connector).
* Query your data with the super-powerful and intuitive Aggregation Framework.
But MongoDB is a general-purpose database, and not a data analysis tool, so a common pattern when analysing data that's stored within MongoDB is to extract the results of a query into a Numpy array, or Pandas dataframe, and to run complex and potentially long running analyses using the toolkit those frameworks provide. Until recently, the performance hit of converting large amounts of BSON data, as provided by MongoDB into these data structures, has been slower than we'd like.
Fortunately, MongoDB recently released PyMongoArrow, a Python library for efficiently converting the result of a MongoDB query into the Apache Arrow data model. If you're not aware of Arrow, you may now be thinking, "Mark, how does converting to Apache Arrow help me with my Numpy or Pandas analysis?" The answer is: Conversion between Arrow, Numpy, and Pandas is super efficient, so it provides a useful intermediate format for your tabular data. This way, we get to focus on building a powerful tool for mapping between MongoDB and Arrow, and leverage the existing PyArrow library for integration with Numpy and MongoDB
## Prerequisites
You'll need a recent version of Python (I'm using 3.8) with pip available. You can use conda if you like, but PyMongoArrow is released on PyPI, so you'll still need to use pip to install it into your conda Python environment.
This tutorial was written for PyMongoArrow v0.1.1.
## Getting Started
In this tutorial, I'm going to be using a sample database you can install when creating a cluster hosted on MongoDB Atlas. The database I'll be using is the "sample\_weatherdata" database. You'll access this with a `mongodb+srv` URI, so you'll need to install PyMongo with the "srv" extra, like this:
``` shell
$ python -m pip install jupyter pymongoarrow 'pymongosrv]' pandas
```
> **Useful Tip**: If you just run `pip`, you may end up using a copy of `pip` that was installed for a different version of `python` than the one you're using. For some reason, the `PATH` getting messed up this way happens more often than you'd think. A solution to this is to run pip via Python, with the command `python -m pip`. That way, it'll always run the version of `pip` that's associated with the version of `python` in your `PATH`. This is now the [officially recommended way to run `pip`!
You'll also need a MongoDB cluster set up with the sample datasets imported. Follow these instructions to import them into your MongoDB cluster and then set an environment variable, `MDB_URI`, pointing to your database. It should look like the line below, but with the URI you copy out of the Atlas web interface. (Click the "Connect" button for your cluster.)
``` shell
export MDB_URI=mongodb+srv://USERNAME:[email protected]/sample_weatherdata?retryWrites=true&w=majority
```
A sample document from the "data" collection looks like this:
``` json
{'_id': ObjectId('5553a998e4b02cf7151190bf'),
'st': 'x+49700-055900',
'ts': datetime.datetime(1984, 3, 5, 15, 0),
'position': {'type': 'Point', 'coordinates': -55.9, 49.7]},
'elevation': 9999,
'callLetters': 'SCGB',
'qualityControlProcess': 'V020',
'dataSource': '4',
'type': 'FM-13',
'airTemperature': {'value': -5.1, 'quality': '1'},
'dewPoint': {'value': 999.9, 'quality': '9'},
'pressure': {'value': 1020.8, 'quality': '1'},
'wind': {'direction': {'angle': 100, 'quality': '1'},
'type': 'N',
'speed': {'rate': 3.1, 'quality': '1'}},
'visibility': {'distance': {'value': 20000, 'quality': '1'},
'variability': {'value': 'N', 'quality': '9'}},
'skyCondition': {'ceilingHeight': {'value': 22000,
'quality': '1',
'determination': 'C'},
'cavok': 'N'},
'sections': ['AG1', 'AY1', 'GF1', 'MD1', 'MW1'],
'precipitationEstimatedObservation': {'discrepancy': '2',
'estimatedWaterDepth': 0},
'pastWeatherObservationManual': [{'atmosphericCondition': {'value': '0',
'quality': '1'},
'period': {'value': 3, 'quality': '1'}}],
'skyConditionObservation': {'totalCoverage': {'value': '01',
'opaque': '99',
'quality': '1'},
'lowestCloudCoverage': {'value': '01', 'quality': '1'},
'lowCloudGenus': {'value': '01', 'quality': '1'},
'lowestCloudBaseHeight': {'value': 800, 'quality': '1'},
'midCloudGenus': {'value': '00', 'quality': '1'},
'highCloudGenus': {'value': '00', 'quality': '1'}},
'atmosphericPressureChange': {'tendency': {'code': '8', 'quality': '1'},
'quantity3Hours': {'value': 0.5, 'quality': '1'},
'quantity24Hours': {'value': 99.9, 'quality': '9'}},
'presentWeatherObservationManual': [{'condition': '02', 'quality': '1'}]}
```
To keep things simpler in this tutorial, I'll ignore all the fields except for "ts," "wind," and the "\_id" field.
I set the `MDB_URI` environment variable, installed the dependencies above, and then fired up a new Python 3 Jupyter Notebook. I've put the notebook [on GitHub, if you want to follow along, or run it yourself.
I added the following code to a cell at the top of the file to import the necessary modules, and to connect to my database:
``` python
import os
import pyarrow
import pymongo
import bson
import pymongoarrow.monkey
from pymongoarrow.api import Schema
MDB_URI = os.environ'MDB_URI']
# Add extra find_* methods to pymongo collection objects:
pymongoarrow.monkey.patch_all()
client = pymongo.MongoClient(MDB_URI)
database = client.get_default_database()
collection = database.get_collection("data")
```
## Working With Flat Data
If the data you wish to convert to Arrow, Pandas, or Numpy data tables is already flat—i.e., the fields are all at the top level of your documents—you can use the methods `find\_arrow\_all`, `find\_pandas\_all`, and `find\_numpy\_all` to query your collection and return the appropriate data structure.
``` python
collection.find_pandas_all(
{},
schema=Schema({
'ts': pyarrow.timestamp('ms'),
})
)
```
| | ts |
| --- | ---: |
| 0 | 1984-03-05 15:00:00 |
| 1 | 1984-03-05 18:00:00 |
| 2 | 1984-03-05 18:00:00 |
| 3 | 1984-03-05 18:00:00 |
| 4 | 1984-03-05 18:00:00 |
| ... | ... |
| 9995 | 1984-03-13 06:00:00 |
| 9996 | 1984-03-13 06:00:00 |
| 9997 | 1984-03-13 06:00:00 |
| 9998 | 1984-03-12 09:00:00 |
| 9999 | 1984-03-12 12:00:00 |
10000 rows × 1 columns
The first argument to find\_pandas\_all is the `filter` argument. I'm interested in all the documents in the collection, so I've left it empty. The documents in the data collection are quite nested, so the only real value I can access with a find query is the timestamp of when the data was recorded, the "ts" field. Don't worry—I'll show you how to access the rest of the data in a moment!
Because Arrow tables (and the other data types) are strongly typed, you'll also need to provide a Schema to map from MongoDB's permissive dynamic schema into the types you want to handle in your in-memory data structure.
The `Schema` is a mapping of the field name, to the appropriate type to be used by Arrow, Pandas, or Numpy. At the current time, these types are 64-bit ints, 64-bit floating point numbers, and datetimes. The easiest way to specify these is with the native python types `int` and `float`, and with `pyarrow.datetime`. Any fields in the document that aren't listed in the schema will be ignored.
PyMongoArrow currently hijacks the `projection` parameter to the `find_*_all` methods, so unfortunately, you can't write a projection to flatten the structure at the moment.
## Convert Your Documents to Tabular Data
MongoDB documents are very flexible, and can support nested arrays and documents. Although Apache Arrow also supports nested lists, structs, and dictionaries, Numpy arrays and Pandas dataframes, in contrast, are tabular or columnar data structures. There are plans to support mapping to the nested Arrow data types in future, but at the moment, only scalar values are supported with all three libraries. So in all these cases, it will be necessary to flatten the data you are exporting from your documents.
To project your documents into a flat structure, you'll need to use the more powerful `aggregate_*_all` methods that PyMongoArrow adds to your PyMongo Collection objects.
In an aggregation pipeline, you can add a `$project` stage to your query to project the nested fields you want in your table to top level fields in the aggregation result.
In order to test my `$project` stage, I first ran it with the standard PyMongo aggregate function. I converted it to a `list` so that Jupyter would display the results.
``` python
list(collection.aggregate([
{'$match': {'_id': bson.ObjectId("5553a998e4b02cf7151190bf")}},
{'$project': {
'windDirection': '$wind.direction.angle',
'windSpeed': '$wind.speed.rate',
}}
]))
[{'_id': ObjectId('5553a998e4b02cf7151190bf'),
'windDirection': 100,
'windSpeed': 3.1}]
```
Because I've matched a single document by "\_id," only one document is returned, but you can see that the `$project` stage has mapped `$wind.direction.angle` to the top-level "windDirection" field in the result, and the same with `$wind.speed.rate` and "windSpeed" in the result.
I can take this `$project` stage and use it to flatten all the results from an aggregation query, and then provide a schema to identify "windDirection" as an integer value, and "windSpeed" as a floating point number, like this:
``` python
collection.aggregate_pandas_all([
{'$project': {
'windDirection': '$wind.direction.angle',
'windSpeed': '$wind.speed.rate',
}}
],
schema=Schema({'windDirection': int, 'windSpeed': float})
)
```
| A | B | C |
| --- | --- | --- |
| | windDirection | windSpeed |
| 0 | 100 | 3.1 |
| 1 | 50 | 9.0 |
| 2 | 30 | 7.7 |
| 3 | 270 | 19.0 |
| 4 | 50 | 8.2 |
| ... | ... | ... |
| 9995 | 10 | 7.0 |
| 9996 | 60 | 5.7 |
| 9997 | 330 | 3.0 |
| 9998 | 140 | 7.7 |
| 9999 | 80 | 8.2 |
10000 rows × 2 columns
There are only 10000 documents in this collection, but some basic benchmarks I wrote show this to be around 20% faster than working directly with `DataFrame.from_records` and `PyMongo`. With larger datasets, I'd expect the difference in performance to be more significant. It's early days for the PyMongoArrow library, and so there are some limitations at the moment, such as the ones I've mentioned above, but the future looks bright for this library in providing fast mappings between your rich, flexible MongoDB collections and any in-memory analysis requirements you might have with Arrow, Pandas, or Numpy.
## Next Steps
If you're planning to do lots of analysis of data that's stored in MongoDB, then make sure you're up on the latest features of MongoDB's powerful [aggregation framework. You can do many things within the database so you may not need to export your data at all. You can connect to secondary servers in your cluster to reduce load on the primary for analytics queries, or even have dedicated analytics nodes for running these kinds of queries.
Check out MongoDB 5.0's new window functions and if you're working with time series data, you'll definitely want to know about MongoDB 5.0's new time-series collections. | md | {
"tags": [
"Python",
"MongoDB",
"Pandas",
"AI"
],
"pageDescription": "MongoDB has always been a great database for data science and data analysis, and now with PyMongoArrow, it integrates optimally with Apache Arrow, Python's Numpy, and Pandas libraries.",
"contentType": "Quickstart"
} | PyMongoArrow: Bridging the Gap Between MongoDB and Your Data Analysis App | 2024-05-20T17:32:23.501Z |