sourceName | url | action | body | format | metadata | title | updated |
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/languages/javascript/hapijs-nodejs-driver | created | # Build a RESTful API with HapiJS and MongoDB
While JAMStack, static site generators, and serverless functions continue to be all the rage in 2020, traditional frameworks like Express.js and Hapi.js remain the go-to solution for many developers. These frameworks are battle-tested, reliable, and scalable, so while they may not be the hottest tech around, you can count on them to get the job done.
In this post, we're going to build a web application with Hapi.js and MongoDB. If you would like to follow along with this tutorial, you can get the code from this GitHub repo. Also, be sure to sign up for a free MongoDB Atlas account to make sure you can implement all of the code in this tutorial.
## Prerequisites
For this tutorial you'll need:
- Node.js
- npm
- MongoDB
You can download Node.js here, and it will come with the latest version of npm. For MongoDB, use MongoDB Atlas for free. While you can use a local MongoDB install, you will not be able to implement some of the functionality that relies on MongoDB Atlas Search, so I encourage you to give Atlas a try. All other required items will be covered in the article.
## What is Hapi.js
Hapi.js or simply Hapi is a Node.js framework for "building powerful, scalable applications, with minimal overhead and full out-of-the-box functionality". Originally developed for Walmart's e-commerce platform, the framework has been adopted by many enterprises. In my personal experience, I've worked with numerous companies who heavily relied on Hapi.js for their most critical infrastructure ranging from RESTful APIs to traditional web applications.
For this tutorial, I'll assume that you are already familiar with JavaScript and Node.js. If not, I would suggest checking out the Nodejs.dev website which offers an excellent introduction to Node.js and will get you up and running in no time.
## What We're Building: RESTful Movie Database
The app that we're going to build today is going to expose a series of RESTful endpoints for working with a movies collection. The dataset we'll be relying on can be accessed by loading sample datasets into your MongoDB Atlas cluster. In your MongoDB dashboard, navigate to the **Clusters** tab. Click on the ellipses (...) button on the cluster you wish to use and select the **Load Sample Dataset** option. Within a few minutes, you'll have a series of new databases created and the one we'll work with is called `sample_mflix`.
We will not build a UI as part of this tutorial, instead, we'll focus on getting the most out of our Hapi.js backend.
## Setting up a Hapi.js Application
Like with any Node.js application, we'll start off our project by installing some packages from the node package manager or npm. Navigate to a directory where you would like to store your application and execute the following commands:
``` bash
npm init
npm install @hapi/hapi --save
```
Executing `npm init` will create a `package.json` file where we can store our dependencies. When you run this command you'll be asked a series of questions that will determine how the file gets populated. It's ok to leave all the defaults as is. The `npm install @hapi/hapi --save` command will pull down the latest
version of the Hapi.js framework and save a reference to this version in the newly created `package.json` file. When you've completed this step, create an `index.js` file in the root directory and open it up.
Much like Express, Hapi.js is not a very prescriptive framework. What I mean by this is that we as developers have total flexibility to decide how we want our directory structure to look. We could have our entire application in a single file, or break it up into hundreds of components; Hapi.js does not care. To make sure our install was successful, let's write a simple app to display a message in our browser. The code will look like this:
``` javascript
const Hapi = require('@hapi/hapi');
const server = Hapi.server({
port: 3000,
host: 'localhost'
});
server.route({
method: 'GET',
path: '/',
handler: (req, h) => {
return 'Hello from HapiJS!';
}
});
server.start();
console.log('Server running on %s', server.info.uri);
```
Let's go through the code above to understand what is going on here. At the start of our program, we are requiring the hapi package which imports all of the Hapi.js API's and makes them available in our app. We then use the `Hapi.server` method to create an instance of a Hapi server and pass in our parameters. Now that we have a server, we can add routes to it, and that's what we do in the subsequent section. We are defining a single route for our homepage, saying that this route can only be accessed via a **GET** request, and the handler function is just going to return the message **"Hello from HapiJS!"**. Finally, we start the Hapi.js server and display a message to the console that tells us the server is running. To start the server, execute the following command in your terminal window:
``` bash
node index.js
```
If we navigate to `localhost:3000` in our web browser of choice, our result will look as follows:
If you see the message above in your browser, then you are ready to proceed to the next section. If you run into any issues, I would first ensure that you have the latest version of Node.js installed and that you have a `@hapi/hapi` folder inside of your `node_modules` directory.
## Building a RESTful API with Hapi.js
Now that we have the basics down, let's go ahead and create the actual routes for our API. The API routes that we'll need to create are as follows:
- Get all movies
- Get a single movie
- Insert a movie
- Update a movie
- Delete a movie
- Search for a movie
For the most part, we just have traditional CRUD operations that you are likely familiar with. But, our final route is a bit more advanced. This route is going to implement search functionality and allow us to highlight some of the more advanced features of both Hapi.js and MongoDB. Let's update our `index.js` file with the routes we need.
``` javascript
const Hapi = require('@hapi/hapi');
const server = Hapi.server({
port: 3000,
host: 'localhost'
});
// Get all movies
server.route({
method: 'GET',
path: '/movies',
handler: (req, h) => {
return 'List all the movies';
}
});
// Add a new movie to the database
server.route({
method: 'POST',
path: '/movies',
handler: (req, h) => {
return 'Add new movie';
}
});
// Get a single movie
server.route({
method: 'GET',
path: '/movies/{id}',
handler: (req, h) => {
return 'Return a single movie';
}
});
// Update the details of a movie
server.route({
method: 'PUT',
path: '/movies/{id}',
handler: (req, h) => {
return 'Update a single movie';
}
});
// Delete a movie from the database
server.route({
method: 'DELETE',
path: '/movies/{id}',
handler: (req, h) => {
return 'Delete a single movie';
}
});
// Search for a movie
server.route({
method: 'GET',
path: '/search',
handler: (req, h) => {
return 'Return search results for the specified term';
}
});
server.start();
console.log('Server running on %s', server.info.uri);
```
We have created our routes, but currently, all they do is return a string saying what the route is meant to do. That's no good. Next, we'll connect our Hapi.js app to our MongoDB database so that we can return actual data. We'll use the MongoDB Node.js Driver to accomplish this.
>If you are interested in learning more about the MongoDB Node.js Driver through in-depth training, check out the MongoDB for JavaScript Developers course on MongoDB University. It's free and will teach you all about reading and writing data with the driver, using the aggregation framework, and much more.
## Connecting Our Hapi.js App to MongoDB
Connecting a Hapi.js backend to a MongoDB database can be done in multiple ways. We could use the traditional method of just bringing in the MongoDB Node.js Driver via npm, we could use an ODM library like Mongoose, but I believe there is a better way to do it. The way we're going to connect to our MongoDB database in our Atlas cluster is using a Hapi.js plugin.
Hapi.js has many excellent plugins for all your development needs. Whether that need is authentication, logging, localization, or in our case data access, the Hapi.js plugins page provides many options. The plugin we're going to use is called `hapi-mongodb`. Let's install this package by running:
``` bash
npm install hapi-mongodb --save
```
With the package installed, let's go back to our `index.js` file and
configure the plugin. The process for this relies on the `register()`
method provided in the Hapi API. We'll register our plugin like so:
``` javascript
server.register({
plugin: require('hapi-mongodb'),
options: {
url: 'mongodb+srv://{YOUR-USERNAME}:{YOUR-PASSWORD}@main.zxsxp.mongodb.net/sample_mflix?retryWrites=true&w=majority',
settings : {
useUnifiedTopology: true
},
decorate: true
}
});
```
We would want to register this plugin before our routes. For the options object, we are passing our MongoDB Atlas service URI as well as the name of our database, which in this case will be `sample_mflix`. If you're working with a different database, make sure to update it accordingly. We'll also want to make one more adjustment to our entire code base before moving on. If we try to run our Hapi.js application now, we'll get an error saying that we cannot start our server before plugins are finished registering. The register method will take some time to run and we'll have to wait on it. Rather than deal with this in a synchronous fashion, we'll wrap an async function around our server instantiation. This will make our code much cleaner and easier to reason about. The final result will look like this:
``` javascript
const Hapi = require('@hapi/hapi');
const init = async () => {
const server = Hapi.server({
port: 3000,
host: 'localhost'
});
await server.register({
plugin: require('hapi-mongodb'),
options: {
url: 'mongodb+srv://{YOUR-USERNAME}:{YOUR-PASSWORD}@main.zxsxp.mongodb.net/sample_mflix?retryWrites=true&w=majority',
settings: {
useUnifiedTopology: true
},
decorate: true
}
});
// Get all movies
server.route({
method: 'GET',
path: '/movies',
handler: (req, h) => {
return 'List all the movies';
}
});
// Add a new movie to the database
server.route({
method: 'POST',
path: '/movies',
handler: (req, h) => {
return 'Add new movie';
}
});
// Get a single movie
server.route({
method: 'GET',
path: '/movies/{id}',
handler: (req, h) => {
return 'Return a single movie';
}
});
// Update the details of a movie
server.route({
method: 'PUT',
path: '/movies/{id}',
handler: (req, h) => {
return 'Update a single movie';
}
});
// Delete a movie from the database
server.route({
method: 'DELETE',
path: '/movies/{id}',
handler: (req, h) => {
return 'Delete a single movie';
}
});
// Search for a movie
server.route({
method: 'GET',
path: '/search',
handler: (req, h) => {
return 'Return search results for the specified term';
}
});
await server.start();
console.log('Server running on %s', server.info.uri);
}
init();
```
Now we should be able to restart our server and it will register the plugin properly and work as intended. To ensure that our connection to the database does work, let's run a sample query to return just a single movie when we hit the `/movies` route. We'll do this with a `findOne()` operation. The `hapi-mongodb` plugin is just a wrapper for the official MongoDB Node.js driver so all the methods work exactly the same. Check out the official docs for details on all available methods. Let's use the `findOne()` method to return a single movie from the database.
``` javascript
// Get all movies
server.route({
method: 'GET',
path: '/movies',
handler: async (req, h) => {
const movie = await req.mongo.db.collection('movies').findOne({})
return movie;
}
});
```
We'll rely on the async/await pattern in our handler functions as well to keep our code clean and concise. Notice how our MongoDB database is now accessible through the `req` or request object. We didn't have to pass in an instance of our database; the plugin handled all of that for us, and all we had to do was decide what our call to the database would be. If we restart our server and navigate to `localhost:3000/movies` in our browser, we should see the following response:
If you do get the JSON response, it means your connection to the database is good and your plugin has been correctly registered with the Hapi.js application. If you see any sort of error, look at the above instructions carefully. Next, we'll implement our actual database calls to our routes.
## Implementing the RESTful Routes
We have six API routes to implement. We'll tackle each one and introduce new concepts for both Hapi.js and MongoDB. We'll start with the route that gets us all the movies.
### Get All Movies
This route will retrieve a list of movies. Since our dataset contains thousands of movies, we would not want to return all of them at once as this would likely cause the user's browser to crash, so we'll limit the result set to 20 items at a time. We'll allow the user to pass an optional query parameter that will give them the next 20 results in the set. My implementation is below.
``` javascript
// Get all movies
server.route({
method: 'GET',
path: '/movies',
handler: async (req, h) => {
const offset = Number(req.query.offset) || 0;
const movies = await req.mongo.db.collection('movies').find({}).sort({metacritic:-1}).skip(offset).limit(20).toArray();
return movies;
}
});
```
In our implementation, the first thing we do is sort our collection to ensure we get a consistent order of documents. In our case, we're sorting by the `metacritic` score in descending order, meaning we'll get the highest rated movies first. Next, we check to see if there is an `offset` query parameter. If there is one, we'll take its value and convert it into an integer, otherwise, we'll set the offset value to 0. Next, when we make a call to our MongoDB database, we are going to use that `offset` value in the `skip()` method which will tell MongoDB how many documents to skip. Finally, we'll use the `limit()` method to limit our results to 20 records and the `toArray()` method to turn the cursor we get back into an object.
Try it out. Restart your Hapi.js server and navigate to `localhost:3000/movies`. Try passing an offset query parameter to see how the results change. For example try `localhost:3000/movies?offset=500`. Note that if you pass a non-integer value, you'll likely get an error. We aren't doing any sort of error handling in this tutorial but in a real-world application, you should handle all errors accordingly. Next, let's implement the method to return a single movie.
### Get Single Movie
This route will return the data on just a single movie. For this method, we'll also play around with projection, which will allow us to pick and choose which fields we get back from MongoDB. Here is my implementation:
``` javascript
// Get a single movie
server.route({
method: 'GET',
path: '/movies/{id}',
handler: async (req, h) => {
const id = req.params.id
const ObjectID = req.mongo.ObjectID;
const movie = await req.mongo.db.collection('movies').findOne({_id: new ObjectID(id)},{projection:{title:1,plot:1,cast:1,year:1, released:1}});
return movie;
}
});
```
In this implementation, we're using the `req.params` object to get the dynamic value from our route. We're also making use of the `req.mongo.ObjectID` method which will allow us to transform the string id into an ObjectID that we use as our unique identifier in the MongoDB database. We'll have to convert our string to an ObjectID otherwise our `findOne()` method would not work as our `_id` field is not stored as a string. We're also using a projection to return only the `title`, `plot`, `cast`, `year`, and `released` fields. The result is below.
A quick tip on projection. In the above example, we used the `{ fieldName: 1 }` format, which told MongoDB to return only this specific field. If instead we only wanted to omit a few fields, we could have used the inverse `{ fieldName: 0}` format instead. This would send us all fields, except the ones named and given a value of zero in the projection option. Note that you can't mix and match the 1 and 0 formats, you have to pick one. The only exception is the `_id` field, where if you don't want it you can pass `{_id:0}`.
### Add A Movie
The next route we'll implement will be our insert operation and will allow us to add a document to our collection. The implementation looks like this:
``` javascript
// Add a new movie to the database
server.route({
method: 'POST',
path: '/movies',
handler: async (req, h) => {
const payload = req.payload
const status = await req.mongo.db.collection('movies').insertOne(payload);
return status;
}
});
```
The payload that we are going to submit to this endpoint will look like this:
``` json
{
"title": "Avengers: Endgame",
"plot": "The avengers save the day",
"cast" : ["Robert Downey Jr.", "Chris Evans", "Scarlett Johansson", "Samuel L. Jackson"],
"year": 2019
}
```
In our implementation we're again using the `req` object but this time we're using the `payload` sub-object to get the data that is sent to the endpoint. To test that our endpoint works, we'll use Postman to send the request. Our response will give us a lot of info on what happened with the operation so for educational purposes we'll just return the entire document. In a real-world application, you would just send back a `{message: "ok"}` or similar statement. If we look at the response we'll find a field titled `insertedCount: 1` and this will tell us that our document was successfully inserted.
In this route, we added the functionality to insert a brand new document, in the next route, we'll update an existing one.
### Update A Movie
Updating a movie works much the same way adding a new movie does. I do want to introduce a new concept in Hapi.js here though and that is the concept of validation. Hapi.js can help us easily validate data before our handler function is called. To do this, we'll import a package that is maintained by the Hapi.js team called Joi. To work with Joi, we'll first need to install the package and include it in our `index.js` file.
``` bash
npm install @hapi/joi --save
npm install joi-objectid --save
```
Next, let's take a look at our implementation of the update route and then I'll explain how it all ties together.
``` javascript
// Add this below the @hapi/hapi require statement
const Joi = require('@hapi/joi');
Joi.objectId = require('joi-objectid')(Joi)
// Update the details of a movie
server.route({
method: 'PUT',
path: '/movies/{id}',
options: {
validate: {
params: Joi.object({
id: Joi.objectId()
})
}
},
handler: async (req, h) => {
const id = req.params.id
const ObjectID = req.mongo.ObjectID;
const payload = req.payload
const status = await req.mongo.db.collection('movies').updateOne({_id: ObjectID(id)}, {$set: payload});
return status;
}
});
```
With this route we are really starting to show the strength of Hapi.js. In this implementation, we added an `options` object and passed in a `validate` object. From here, we validated that the `id` parameter matches what we'd expect an ObjectID string to look like. If it did not, our handler function would never be called, instead, the request would short-circuit and we'd get an appropriate error message. Joi can be used to validate not only the defined parameters but also query parameters, payload, and even headers. We barely scratched the surface.
The rest of the implementation had us executing an `updateOne()` method which updated an existing object with the new data. Again, we're returning the entire status object here for educational purposes, but in a real-world application, you wouldn't want to send that raw data.
### Delete A Movie
Deleting a movie will simply remove the record from our collection. There isn't a whole lot of new functionality to showcase here, so let's get right into the implementation.
``` javascript
// Delete a movie from the database
server.route({
method: 'DELETE',
path: '/movies/{id}',
options: {
validate: {
params: Joi.object({
id: Joi.objectId()
})
}
},
handler: async (req, h) => {
const id = req.params.id
const ObjectID = req.mongo.ObjectID;
const status = await req.mongo.db.collection('movies').deleteOne({_id: ObjectID(id)});
return status;
}
});
```
In our delete route implementation, we are going to continue to use the Joi library to validate that the parameter to delete is an actual ObjectId. To remove a document from our collection, we'll use the `deleteOne()` method and pass in the ObjectId to delete.
Implementing this route concludes our discussion on the basic CRUD operations. To close out this tutorial, we'll implement one final route that will allow us to search our movie database.
### Search For A Movie
To conclude our routes, we'll add the ability for a user to search for a movie. To do this we'll rely on a MongoDB Atlas feature called Atlas Search. Before we can implement this functionality on our backend, we'll first need to enable Atlas Search and create an index within our MongoDB Atlas dashboard. Navigate to your dashboard, and locate the `sample_mflix` database. Select the `movies` collection and click on the **Search (Beta)** tab.
Click the **Create Search Index** button, and for this tutorial, we can leave the field mappings to their default dynamic state, so just hit the **Create Index** button. While our index is built, we can go ahead and implement our backend functionality. The implementation will look like this:
``` javascript
// Search for a movie
server.route({
method: 'GET',
path: '/search',
handler: async(req, h) => {
const query = req.query.term;
const results = await req.mongo.db.collection("movies").aggregate([
{
$searchBeta: {
"search": {
"query": query,
"path":"title"
}
}
},
{
$project : {title:1, plot: 1}
},
{
$limit: 10
}
]).toArray()
return results;
}
});
```
Our `search` route has us using the extremely powerful MongoDB aggregation pipeline. In the first stage of the pipeline, we are using the `$searchBeta` attribute and passing along our search term. In the next stage of the pipeline, we run a `$project` to only return specific fields, in our case the `title` and `plot` of the movie. Finally, we limit our search results to ten items, convert the cursor to an array, and send it to the browser. Let's try to run a search query against our movies collection. Try navigating to `localhost:3000/search?term=Star+Wars`. Your results will look like this:
MongoDB Atlas Search is very powerful and provides all the tools to add superb search functionality for your data without relying on external APIs. Check out the documentation to learn more about how to best leverage it in your applications.
## Putting It All Together
In this tutorial, I showed you how to create a RESTful API with Hapi.js and MongoDB. We scratched the surface of the capabilities of both, but I hope it was a good introduction and gives you an idea of what's possible. Hapi.js has an extensive plug-in system that will allow you to bring almost any functionality to your backend with just a few lines of code. Integrating MongoDB into Hapi.js using the `hapi-mongodb` plugin allows you to focus on building features and functionality rather than figuring out best practices and how to glue everything together. Speaking of glue, Hapi.js has a package called glue that makes it easy to break your server up into multiple components; we didn't need to do that in our tutorial, but it's a great next step for you to explore.
>If you'd like to get the code for this tutorial, you can find it here. If you want to give Atlas Search a try, sign up for MongoDB Atlas for free.
Happy, er.. Hapi coding! | md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Learn how to build an API with HapiJS and MongoDB.",
"contentType": "Tutorial"
} | Build a RESTful API with HapiJS and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-change-streams | created | # MongoDB Change Streams with Python
## Introduction
Change streams allow you to listen to changes that occur in your MongoDB database. On MongoDB 3.6 or above, this functionality allows you to build applications that can immediately respond to real time data changes. In this tutorial, we'll show you how to use change streams with Python. In particular you will:
- Learn about change streams
- Create a program that listens to inserts
- Change the program to listen to other event types
- Change the program to listen to specific notifications
To follow along, you can create a test environment using the steps below. This is optional but highly encouraged as it will allow you to test usage of the change stream functionality with the examples provided. You will be given all commands, but some familiarity with MongoDB is needed.
## Learn about Change Streams
The ability to listen to specific changes in the data allows an application to be much faster in responding to change. If a user of your system updates their information, the system can listen to and propagate these changes right away. For example, this could mean users no longer have to click refresh to see when changes have been applied. Or if a user's changes in one system need approval by someone, another system could listen to changes and send notifications requesting approvals instantaneously.
Before change streams, applications that needed to know about the addition of new data in real-time had to continuously poll data or rely on other update mechanisms. One common, if complex, technique for monitoring changes was tailing MongoDB's Operation Log (Oplog). The Oplog is part of the replication system of MongoDB and as such already tracks modifications to the database but is not easy to use for business logic. Change streams are built on top of the Oplog but they provide a native API that improves efficiency and usability. Note that you cannot open a change stream against a collection in a standalone MongoDB server because the feature relies on the Oplog which is only used on replica sets.
When registering a change stream you need to specify the collection and what types of changes you want to listen to. You can do this by using the `$match` and a few other aggregation pipeline stages which limit the amount of data you will receive. If your database enforces authentication and authorization, change streams provide the same access control as for normal queries.
## Test the Change Stream Features
The best way to understand how change streams operate is to work with them. In the next section, we'll show you how to set up a server and scripts. After completing the setup, you will get two scripts: One Python script will listen to notifications from the change stream and print them. The other script will mimic an application by performing insert, update, replace, and delete operations so that you can see the notifications in the output of the first script. You will also learn how to limit the notifications to the ones you are interested in.
## Set up PyMongo
To get started, set up a virtual environment using Virtualenv. Virtualenv allows you to isolate dependencies of your project from other projects. Create a directory for this project and copy the following into a file called requirements.txt in your new directory:
``` none
pymongo==3.8.0
dnspython
```
To create and activate your virtual environment, run the following commands in your terminal:
``` bash
virtualenv venv # sets up the environment
source venv/bin/activate # activates the environment
pip3 install -r requirements.txt # installs our dependencies
```
>For ease of reading, we assume you are running Python 3 with the python3 and pip3 commands. If you are running Python 2.7, substitute python and pip for those commands.
## Set up your Cluster
We will go through two options for setting up a test MongoDB Replica Set for us to connect to. If you have MongoDB 3.6 or later installed and are comfortable making changes to your local setup choose this option and follow the guide in the appendix and skip to the next section.
If you do not have MongoDB installed, would prefer not to mess with your local setup, or are fairly new to MongoDB, then we recommend that you set up a MongoDB Atlas cluster; there's a free tier which gives you a three-node replica set, which is ideal for experimenting and learning with. Simply follow these steps until you get the URI connection string in step 8. Take that URI connection string, insert the password where it says `<password>`, and add it to your environment by running
``` bash
export CHANGE_STREAM_DB="mongodb+srv://user:<password>@example-xkfzv.mongodb.net/test?retryWrites=true"
```
in your terminal. The string you use as a value will be different.
## Listen to Inserts from an Application
Before continuing, quickly test your setup. Create a file `test.py` with the following contents:
``` python
import os
import pymongo
client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])
print(client.changestream.collection.insert_one({"hello": "world"}).inserted_id)
```
When you run `python3 test.py` you should see an `ObjectId` being printed.
Now that you've confirmed your setup, let's create the small program that will listen to changes in the database using a change stream. Create a different file `change_streams.py` with the following content:
``` python
import os
import pymongo
from bson.json_util import dumps
client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])
change_stream = client.changestream.collection.watch()
for change in change_stream:
print(dumps(change))
print('') # for readability only
```
Go ahead and run `python3 change_streams.py`, you will notice that the program doesn't print anything and just waits for operations to happen on the specified collection. While keeping the `change_streams` program running, open up another terminal window and run `python3 test.py`. You will have to run the same export command you ran in the *Set up your Cluster* section to add the environment variable to the new terminal window.
Checking the terminal window that is running the `change_streams` program, you will see that the insert operation was logged. It should look like the output below but with a different `ObjectId` and with a different value for `$binary`.
``` json
➜ python3 change_streams.py
{"_id": {"_data": {"$binary": "glsIjGUAAAABRmRfaWQAZFsIjGXiJuWPOIv2PgBaEAQIaEd7r8VFkazelcuRgfgeBA==", "$type": "00"}}, "operationType": "insert", "fullDocument": {"_id": {"$oid": "5b088c65e226e58f388bf63e"}, "hello": "world"}, "ns": {"db": "changestream", "coll": "collection"}, "documentKey": {"_id": {"$oid": "5b088c65e226e58f388bf63e"}}}
```
## Listen to Different Event Types
You can listen to four types of document-based events:
- Insert
- Update
- Replace
- Delete
Depending on the type of event, the document structure you receive will differ slightly, but you will always receive the following:
``` json
{
_id: <resumeToken>,
operationType: "<operationType>",
ns: {db: "<database>", coll: "<collection>"},
documentKey: { <documentId> }
}
```
In the case of insert and replace operations, the `fullDocument` field is provided by default as well. In the case of update operations, the extra field provided is `updateDescription`, which gives you the document delta (i.e., the difference between the document before and after the operation); by default, update notifications include only this delta. To get the full document with each update you can pass in "updateLookup" to the full document option. If an update operation ends up changing multiple documents, there will be one notification for each updated document. This transformation occurs to ensure that statements in the oplog are idempotent.
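If you'd like to try that option with this tutorial's setup, here is a minimal sketch of a variation of `change_streams.py` that requests the looked-up document with every update notification (it reuses the `CHANGE_STREAM_DB` environment variable from earlier):
``` python
import os
import pymongo
from bson.json_util import dumps

client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])

# Ask the server to attach the current version of the document to every
# update notification; inserts and replaces include fullDocument anyway.
change_stream = client.changestream.collection.watch(full_document='updateLookup')

for change in change_stream:
    print(dumps(change))
    print('')  # for readability only
```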
There is one further type of event that can be received which is the invalidate event. This tells the driver that the change stream is no longer valid. The driver will then close the stream. Potential reasons for this include the collection being dropped or renamed.
To see this in action, update your `test.py` and run it while also running the `change_streams` program:
``` python
import os
import pymongo
client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])
client.changestream.collection.insert_one({"_id": 1, "hello": "world"})
client.changestream.collection.update_one({"_id": 1}, {"$set": {"hello": "mars"}})
client.changestream.collection.replace_one({"_id": 1} , {"bye": "world"})
client.changestream.collection.delete_one({"_id": 1})
client.changestream.collection.drop()
```
The output should be similar to:
``` json
➜ python3 change_streams.py
{"fullDocument": {"_id": 1, "hello": "world"}, "documentKey": {"_id": 1}, "_id": {"_data": {"$binary": "glsIjuEAAAABRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=", "$type": "00"}}, "ns": {"coll": "collection", "db": "changestream"}, "operationType": "insert"}
{"documentKey": {"_id": 1}, "_id": {"_data": {"$binary": "glsIjuEAAAACRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=", "$type": "00"}}, "updateDescription": {"removedFields": [], "updatedFields": {"hello": "mars"}}, "ns": {"coll": "collection", "db": "changestream"}, "operationType": "update"}
{"fullDocument": {"bye": "world", "_id": 1}, "documentKey": {"_id": 1}, "_id": {"_data": {"$binary": "glsIjuEAAAADRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=", "$type": "00"}}, "ns": {"coll": "collection", "db": "changestream"}, "operationType": "replace"}
{"documentKey": {"_id": 1}, "_id": {"_data": {"$binary": "glsIjuEAAAAERh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=", "$type": "00"}}, "ns": {"coll": "collection", "db": "changestream"}, "operationType": "delete"}
{"_id": {"_data": {"$binary": "glsIjuEAAAAFFFoQBAhoR3uvxUWRrN6Vy5GB+B4E", "$type": "00"}}, "operationType": "invalidate"}
```
## Listen to Specific Notifications
So far, your program has been listening to *all* operations. In a real application this would be overwhelming and often unnecessary, as each part of your application will generally want to listen only to specific operations. To limit the number of operations, you can use certain aggregation stages when setting up the stream. These stages are: `$match`, `$project`, `$addFields`, `$replaceRoot`, and `$redact`. No other aggregation stages are available.
You can test this functionality by updating your `change_streams.py` file with the code below and running the `test.py` script. The output should now only contain insert notifications.
``` python
import os
import pymongo
from bson.json_util import dumps
client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])
change_stream = client.changestream.collection.watch([{
'$match': {
'operationType': { '$in': ['insert'] }
}
}])
for change in change_stream:
print(dumps(change))
print('')
```
You can also *match* on document fields and thus limit the stream to certain `DocumentIds` or to documents that have a certain document field, etc.
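For example, here is a sketch that narrows the stream to inserts whose new document carries a hypothetical `status` field set to `"active"`; the field name is purely illustrative and not part of any sample dataset:
``` python
import os
import pymongo
from bson.json_util import dumps

client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])

# Only deliver insert notifications whose new document has status == "active".
# The "status" field is an illustrative example, not a field from a sample dataset.
change_stream = client.changestream.collection.watch([{
    '$match': {
        'operationType': 'insert',
        'fullDocument.status': 'active'
    }
}])

for change in change_stream:
    print(dumps(change))
    print('')
```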
## Resume your Change Streams
No matter how good your network, there will be situations when connections fail. To make sure that no changes are missed in such cases, you need to add some code for storing and handling `resumeToken`s. Each event contains a `resumeToken`, for example:
``` json
"_id": {"_data": {"$binary": "glsIj84AAAACRh5faWQAKwIAWhAEvyfcy4djS8CUKRZ8tvWuOgQ=", "$type": "00"}}
```
When a failure occurs, the driver should automatically make one attempt to reconnect. The application has to handle further retries as needed. This means that the application should take care of always persisting the `resumeToken`.
To retry connecting, the `resumeToken` has to be passed into the optional field resumeAfter when creating the new change stream. This does not guarantee that we can always resume the change stream. MongoDB's oplog is a capped collection that keeps a rolling record of the most recent operations. Resuming a change stream is only possible if the oplog has not rolled yet (that is if the changes we are interested in are still in the oplog).
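A minimal sketch of that pattern with PyMongo might look like the following; where you persist the token (a file, another collection, and so on) is left to your application:
``` python
import os
import pymongo
from bson.json_util import dumps

client = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])
collection = client.changestream.collection

# Load the last persisted token here if you have one; None starts a new stream.
resume_token = None

change_stream = collection.watch(resume_after=resume_token)
for change in change_stream:
    print(dumps(change))
    # Persist this token somewhere durable after processing each event so a
    # restarted listener can resume from the last event it handled.
    resume_token = change['_id']
```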
## Caveats
- **Change Streams in Production**: If you plan to use change streams in production, please read MongoDB's recommendations.
- **Ordering and Rollbacks**: MongoDB guarantees that the received events will be in the order they occurred (thus providing a total ordering of changes across shards if you use shards). On top of that only durable, i.e. majority committed changes will be sent to listeners. This means that listeners do not have to consider rollbacks in their applications.
- **Reading from Secondaries**: Change streams can be opened against any data-bearing node in a cluster regardless whether it's primary or secondary. However, it is generally not recommended to read from secondaries as failovers can lead to increased load and failures in this setup.
- **Updates with the fullDocument Option**: The fullDocument option for Update Operations does not guarantee the returned document does not include further changes. In contrast to the document deltas that are guaranteed to be sent in order with update notifications, there is no guarantee that the *fullDocument* returned represents the document as it was exactly after the operation. `updateLookup` will poll the current version of the document. If changes happen quickly it is possible that the document was changed before the `updateLookup` finished. This means that the fullDocument might not represent the document at the time of the event thus potentially giving the impression events took place in a different order.
- **Impact on Performance**: Up to 1,000 concurrent change streams to each node are supported with negligible impact on the overall performance. However, on sharded clusters, the guarantee of total ordering could cause response times of the change stream to be slower.
- **WiredTiger**: Change streams are a MongoDB 3.6 and later feature. They are not available for older versions, the MMAPv1 storage engine, or the pre-pv1 replication protocol.
## Learn More
To read more about this check out the Change Streams documentation.
If you're interested in more MongoDB tips, follow us on Twitter @mongodb.
## Appendix
### How to set up a Cluster in the Cloud
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
### How to set up a Local Cluster
Before setting up the instances please confirm that you are running version 3.6 or later of the MongoDB Server (mongod) and the MongoDB shell (mongo). You can do this by running `mongod --version` and `mongo --version`. If either of these do not satisfy our requirements, please upgrade to a more recent version before continuing.
In the following you will set up a single-node replica-set named `test-change-streams`. For a production replica-set, at least three nodes are recommended.
1. Run the following commands in your terminal to create a directory for the database files and start the mongod process on port `27017`:
``` bash
mkdir -p /data/test-change-streams
mongod --replSet test-change-streams --logpath "mongodb.log" --dbpath /data/test-change-streams --port 27017 --fork
```
2. Now open up a mongo shell on port `27017`:
``` bash
mongo --port 27017
```
3. Within the mongo shell you just opened, configure your replica set:
``` javascript
config = {
_id: "test-change-streams",
members: [{ _id : 0, host : "localhost:27017"}]
};
rs.initiate(config);
```
4. Still within the mongo shell, you can now check that your replica set is working by running: `rs.status();`. The output should indicate that your node has become primary. It may take a few seconds to show this so if you are not seeing this immediately, run the command again after a few seconds.
5. Run
``` bash
export CHANGE_STREAM_DB=mongodb://localhost:27017
```
in your shell and continue. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Change streams allow you to listen to changes that occur in your MongoDB database.",
"contentType": "Quickstart"
} | MongoDB Change Streams with Python | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/introduction-aggregation-framework | created | # Introduction to the MongoDB Aggregation Framework
One of the difficulties when storing any data is knowing how it will be accessed in the future. What reports need to be run on it? What information is "hidden" in there that will allow for meaningful insights for your business? After spending the time to design your data schema in an appropriate fashion for your application, one needs to be able to retrieve it. In MongoDB, there are two basic ways that data retrieval can be done: through queries with the find() command, and through analytics using the aggregation framework and the aggregate() command.
`find()` allows for the querying of data based on a condition. One can filter results, do basic document transformations, sort the documents, limit the document result set, etc. The `aggregate()` command opens the door to a whole new world with the aggregation framework. In this series of posts, I'll take a look at some of the reasons why using the aggregation framework is so powerful, and how to harness that power.
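To make the distinction concrete, here is a small PyMongo sketch of the kind of retrieval `find()` handles on its own, run against the `sample_training.routes` sample collection that the aggregation example later in this post also uses (the connection string is a placeholder):
``` python
from pymongo import MongoClient

client = MongoClient('YOUR-ATLAS-CONNECTION-STRING')

# A plain query: non-stop routes departing PDX, projected down to two fields,
# sorted by destination airport, and limited to five results.
routes = client['sample_training']['routes'].find(
    {'src_airport': 'PDX', 'stops': 0},
    {'airline.name': 1, 'dst_airport': 1, '_id': 0}
).sort('dst_airport', 1).limit(5)

for route in routes:
    print(route)
```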
## Why Aggregate with MongoDB?
A frequently asked question is why do aggregation inside MongoDB at all? From the MongoDB documentation:
>
>
>Aggregation operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result.
>
>
By using the built-in aggregation operators available in MongoDB, we are able to do analytics on a cluster of servers we're already using without having to move the data to another platform, like Apache Spark or Hadoop. While those, and similar, platforms are fast, the data transfer from MongoDB to them can be slow and potentially expensive. By using the aggregation framework the work is done inside MongoDB and then the final results can be sent to the application typically resulting in a smaller amount of data being moved around. It also allows for the querying of the **LIVE** version of the data and not an older copy of data from a batch.
Aggregation in MongoDB allows for the transforming of data and results in a more powerful fashion than from using the `find()` command. Through the use of multiple stages and expressions, you are able to build a "pipeline" of operations on your data to perform analytic operations. What do I mean by a "pipeline"? The aggregation framework is conceptually similar to the `*nix` command line pipe, `|`. In the `*nix` command line pipeline, a pipe transfers the standard output to some other destination. The output of one command is sent to another command for further processing.
In the aggregation framework, we think of stages instead of commands. And the stage "output" is documents. Documents go into a stage, some work is done, and documents come out. From there they can move onto another stage or provide output.
## Aggregation Stages
At the time of this writing, there are twenty-eight different aggregation stages available. These different stages provide the ability to do a wide variety of tasks. For example, we can build an aggregation pipeline that *matches* a set of documents based on a set of criteria, *groups* those documents together, *sorts* them, then returns that result set to us.
Or perhaps our pipeline is more complicated and the document flows through the `$match`, `$unwind`, `$group`, `$sort`, `$limit`, `$project`, and finally a `$skip` stage.
This can be confusing and some of these concepts are worth repeating. Therefore, let's break this down a bit further:
- A pipeline starts with documents
- These documents come from a collection, a view, or a specially designed stage
- In each stage, documents enter, work is done, and documents exit
- The stages themselves are defined using the document syntax
Let's take a look at an example pipeline. Our documents are from the Sample Data that's available in MongoDB Atlas and the `routes` collection in the `sample_training` database. Here's a sample document:
``` json
{
"_id":{
"$oid":"56e9b39b732b6122f877fa31"
},
"airline":{
"id":{
"$numberInt":"410"
},
"name":"Aerocondor"
,"alias":"2B"
,"iata":"ARD"
},
"src_airport":"CEK",
"dst_airport":"KZN",
"Codeshare":"",
"stops":{
"$numberInt":"0"
},
"airplane":"CR2"
}
```
>
>
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
>
>
For this example query, let's find the top three airlines that offer the most direct flights out of the airport in Portland, Oregon, USA (PDX). To start with, we'll do a `$match` stage so that we can concentrate on doing work only on those documents that meet a base of conditions. In this case, we'll look for documents with a `src_airport`, or source airport, of PDX and that are direct flights, i.e. that have zero stops.
``` javascript
{
$match: {
"src_airport": "PDX",
"stops": 0
}
}
```
That reduces the number of documents in our pipeline down from 66,985 to 113. Next, we'll group by the airline name and count the number of flights:
``` javascript
{
$group: {
_id: {
"airline name": "$airline.name"
},
count: {
$sum: 1
}
}
}
```
With the addition of the `$group` stage, we're down to 16 documents. Let's sort those with a `$sort` stage and sort in descending order:
``` javascript
{
$sort: {
count: -1
}
}
```
Then we can add a `$limit` stage to just have the top three airlines that are servicing Portland, Oregon:
``` javascript
{
$limit: 3
}
```
After putting the documents in the `sample_training.routes` collection through this aggregation pipeline, our results show us that the top three airlines offering non-stop flights departing from PDX are Alaska, American, and United Airlines with 39, 17, and 13 flights, respectively.
How does this look in code? It's fairly straightforward using the `aggregate()` method on a collection. For example, in Python you would do something like:
``` python
from pymongo import MongoClient
# Requires the PyMongo package.
# The dnspython package is also required to use a mongodb+srv URI string
# https://api.mongodb.com/python/current
client = MongoClient('YOUR-ATLAS-CONNECTION-STRING')
result = client['sample_training']['routes'].aggregate([
{
'$match': {
'src_airport': 'PDX',
'stops': 0
}
}, {
'$group': {
'_id': {
'airline name': '$airline.name'
},
'count': {
'$sum': 1
}
}
}, {
'$sort': {
'count': -1
}
}, {
'$limit': 3
}
])
```
The aggregation code is pretty similar in other languages as well.
## Wrap Up
The MongoDB aggregation framework is an extremely powerful set of tools. The processing is done on the server itself which results in less data being sent over the network. In the example used here, instead of pulling **all** of the documents into an application and processing them in the application, the aggregation framework allows for only the three documents we wanted from our query to be sent back to the application.
This was just a brief introduction to some of the operators available. Over the course of this series, I'll take a closer look at some of the most popular aggregation framework operators as well as some interesting, but less used ones. I'll also take a look at performance considerations of using the aggregation framework.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn about MongoDB's aggregation framework and aggregation operators.",
"contentType": "Quickstart"
} | Introduction to the MongoDB Aggregation Framework | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/bson-data-types-decimal128 | created | # Quick Start: BSON Data Types - Decimal128
Think back to when you were first introduced to the concept of decimals in numerical calculations. Doing math problems along the lines of 3.231 / 1.28 caused problems when starting out because 1.28 doesn't go into 3.231 evenly. This causes a long string of numbers to be created to provide a more precise answer. In programming languages, we must choose which number format is correct depending on the amount of precision we need. When one needs high precision when working with BSON data types, the `decimal128` is the one to use.
As the name suggests, decimal128 provides 128 bits of decimal representation for storing really big (or really small) numbers when rounding decimals exactly is important. Decimal128 supports 34 decimal digits of precision, or significand along with an exponent range of -6143 to +6144. The significand is not normalized in the decimal128 standard allowing for multiple possible representations: 10 x 10^-1 = 1 x 10^0 = .1 x 10^1 = .01 x 10^2 and so on. Having the ability to store maximum and minimum values in the order of 10^6144 and 10^-6143, respectively, allows for a lot of precision.
## Why & Where to Use
Sometimes when doing mathematical calculations in a programmatic way, results are unexpected. For example in Node.js:
``` bash
> 0.1
0.1
> 0.2
0.2
> 0.1 * 0.2
0.020000000000000004
> 0.1 + 0.2
0.30000000000000004
```
This issue is not unique to Node.js. In Java:
``` java
class Main {
public static void main(String[] args) {
System.out.println("0.1 * 0.2:");
System.out.println(0.1 * 0.2);
}
}
```
Produces an output of:
``` bash
0.1 * 0.2:
0.020000000000000004
```
The same computations in Python, Ruby, Rust, and others produce the same results. What's going on here? Are these languages just bad at math? Not really, binary floating-point numbers just aren't great at representing base 10 values. For example, the `0.1` used in the above examples is represented in binary as `0.0001100110011001101`.
For many situations, this isn't a huge issue. However, in monetary applications precision is very important. Who remembers the half-cent issue from Superman III? When precision and accuracy are important for computations, decimal128 should be the data type of choice.
## How to Use
In MongoDB, storing data in decimal128 format is relatively straightforward with the NumberDecimal() constructor:
``` bash
NumberDecimal("9823.1297")
```
Passing in the decimal value as a string, the value gets stored in the database as:
``` bash
NumberDecimal("9823.1297")
```
If values are passed in as `double` values:
``` bash
NumberDecimal(1234.99999999999)
```
Loss of precision can occur in the database:
``` bash
NumberDecimal("1234.50000000000")
```
Another consideration, beyond simply the usage in MongoDB, is the usage and support your programming language has for decimal128. Many languages don't natively support this feature and will require a plugin or additional package to get the functionality. Some examples...
Python: The `decimal.Decimal` class from the standard library's `decimal` module can be used for correctly rounded decimal arithmetic.
Java: The Java BigDecimal class provides support for decimal128 numbers.
Node.js: There are several packages that provide support, such as js-big-decimal or node.js bigdecimal available on npm.
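As a brief illustration of the Python option mentioned above, the following sketch contrasts binary floats with `decimal.Decimal` and shows one way to store a value as decimal128 through the `Decimal128` type in the `bson` package that ships with PyMongo; the connection string, database, and collection names are placeholders:
``` python
from decimal import Decimal
from bson.decimal128 import Decimal128
from pymongo import MongoClient

# Binary floating point vs. decimal arithmetic
print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal('0.1') + Decimal('0.2'))  # 0.3

# Storing and reading back a decimal128 value (connection string, database,
# and collection below are placeholders)
client = MongoClient('YOUR-ATLAS-CONNECTION-STRING')
collection = client['test']['prices']

collection.insert_one({'price': Decimal128('9823.1297')})
doc = collection.find_one({'price': Decimal128('9823.1297')})
print(doc['price'].to_decimal())        # 9823.1297
```
Because the value is stored as decimal128, the `9823.1297` you write is exactly the `9823.1297` you read back.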
## Wrap Up
>Get started exploring BSON types, like decimal128, with MongoDB Atlas today!
The `decimal128` format came about as part of the IEEE 754-2008 revision of floating points, published in August 2008. MongoDB 3.4 is when support for decimal128 first appeared and to use the `decimal` data type with MongoDB, you'll want to make sure you use a driver version that supports this great feature. Decimal128 is great for huge (or very tiny) numbers and for when precision in those numbers is important.
"tags": [
"MongoDB"
],
"pageDescription": "Working with decimal numbers can be a challenge. The Decimal128 BSON data type allows for high precision options when working with numbers.",
"contentType": "Quickstart"
} | Quick Start: BSON Data Types - Decimal128 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/getting-started-kotlin-driver | created | # Getting Started with the MongoDB Kotlin Driver
> This is an introductory article on how to build an application in Kotlin using MongoDB Atlas and
> the MongoDB Kotlin driver, the latest addition to our list of official drivers.
> Together, we'll build a CRUD application that covers the basics of how to use MongoDB as a database, while leveraging the benefits of Kotlin as a
> programming language, like data classes, coroutines, and flow.
## Prerequisites
This is a getting-started article. Therefore, not much is needed as a prerequisite, but familiarity with Kotlin as a programming language will be
helpful.
Also, we need an Atlas account, which is free forever. Create an account if you haven't got one. This
provides MongoDB as a cloud database and much more. Later in this tutorial, we'll use this account to create a new cluster, load a dataset, and
eventually query against it.
In general, MongoDB is an open-source, cross-platform, and distributed document database that allows building apps with flexible schema. In case you
are not familiar with it or would like a quick recap, I recommend exploring
the MongoDB Jumpstart series to get familiar with MongoDB and
its various services in under 10 minutes. Or if you prefer to read, then you can follow
our guide.
And last, to aid our development activities, we will be using Jetbrains IntelliJ IDEA (Community Edition),
which has default support for the Kotlin language.
## MongoDB Kotlin driver vs MongoDB Realm Kotlin SDK
Before we start, I would like to touch base on Realm Kotlin SDK, one of the SDKs used to create
client-side mobile applications using the MongoDB ecosystem. It shouldn't be confused with
the MongoDB Kotlin driver for server-side programming.
The MongoDB Kotlin driver, a language driver, enables you to seamlessly interact
with Atlas, a cloud database, with the benefits of the Kotlin language paradigm. It's appropriate to create
backend apps, scripts, etc.
To make learning more meaningful and practical, we'll be building a CRUD application. Feel free to check out our
Github repo if you would like to follow along together. So, without further ado,
let's get started.
## Create a project
To create the project, we can use the project wizard, which can be found under the `File` menu options. Then, select `New`, followed by `Project`.
This will open the `New Project` screen, as shown below, then update the project and language to Kotlin.
After the initial Gradle sync, our project is ready to run. So, let's give it a try using the run icon in the menu bar, or simply press CTRL + R on
Mac. Currently, our project won't do much apart from printing `Hello World!` and arguments supplied, but the `BUILD SUCCESSFUL` message in the run
console is what we're looking for, which tells us that our project setup is complete.
Now, the next step is to add the Kotlin driver to our project, which allows us to interact
with MongoDB Atlas.
## Adding the MongoDB Kotlin driver
Adding the driver to the project is simple and straightforward. Just update the `dependencies` block with the Kotlin driver dependency in the build
file — i.e., `build.gradle`.
```groovy
dependencies {
// Kotlin coroutine dependency
implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4")
// MongoDB Kotlin driver dependency
implementation("org.mongodb:mongodb-driver-kotlin-coroutine:4.10.1")
}
```
And now, we are ready to connect with MongoDB Atlas using the Kotlin driver.
## Connecting to the database
To connect with the database, we first need the `Connection URI` that can be found by pressing `connect to cluster` in
our Atlas account, as shown below.
For more details, you can also refer to our documentation.
With the connection URI available, the next step is to create a Kotlin file. `Setup.kt` is where we write the code for connecting
to MongoDB Atlas.
Connection with our database can be split into two steps. First, we create a MongoClient instance using `Connection URI`.
```kotlin
val connectionString = "mongodb+srv://<username>:<password>@cluster0.sq3aiau.mongodb.net/?retryWrites=true&w=majority"
val client = MongoClient.create(connectionString = connectionString)
```
And second, use client to connect with the database, `sample_restaurants`, which is a sample dataset for
restaurants. A sample dataset is a great way to explore the platform and build a more realistic POC
to validate your ideas. To learn how to seed your first Atlas database with sample
data, visit the docs.
```kotlin
val databaseName = "sample_restaurants"
val db: MongoDatabase = client.getDatabase(databaseName = databaseName)
```
Hardcoding `connectionString` isn't a good approach and can lead to security risks or an inability to provide role-based access. To avoid such issues
and follow the best practices, we will be using environment variables. Other common approaches are the use of Vault, build configuration variables,
and CI/CD environment variables.
To add environment variables, use `Modify run configuration`, which can be found by right-clicking on the file.
Together with code to access the environment variable, our final code looks like this.
```kotlin
suspend fun setupConnection(
databaseName: String = "sample_restaurants",
connectionEnvVariable: String = "MONGODB_URI"
): MongoDatabase? {
val connectString = if (System.getenv(connectionEnvVariable) != null) {
System.getenv(connectionEnvVariable)
} else {
"mongodb+srv://:@cluster0.sq3aiau.mongodb.net/?retryWrites=true&w=majority"
}
val client = MongoClient.create(connectionString = connectString)
val database = client.getDatabase(databaseName = databaseName)
return try {
// Send a ping to confirm a successful connection
val command = Document("ping", BsonInt64(1))
database.runCommand(command)
println("Pinged your deployment. You successfully connected to MongoDB!")
database
} catch (me: MongoException) {
System.err.println(me)
null
}
}
```
> In the code snippet above, we still have the ability to use a hardcoded string. This is only done for demo purposes, allowing you to use a
> connection URI directly for ease and to run this via any online editor. But it is strongly recommended to avoid hardcoding a connection URI.
With the `setupConnection` function ready, let's test it and query the database for the collection count and name.
```kotlin
suspend fun listAllCollection(database: MongoDatabase) {
val count = database.listCollectionNames().count()
println("Collection count $count")
print("Collection in this database are -----------> ")
database.listCollectionNames().collect { print(" $it") }
}
```
Upon running that code, our output shows the collection count followed by the names of the collections in the database.
By now, you may have noticed that we are using the `suspend` keyword with `listAllCollection()`. `listCollectionNames()` is an asynchronous function
as it interacts with the database and therefore would ideally run on a different thread. And since the MongoDB Kotlin driver
supports Coroutines, the
native Kotlin asynchronous language paradigm, we can benefit from it by using `suspend`
functions.
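Since these are `suspend` functions, they need to be called from inside a coroutine. One way to try them out — a minimal sketch that assumes the `setupConnection` and `listAllCollection` functions above — is a `main` function wrapped in `runBlocking`:
```kotlin
import kotlinx.coroutines.runBlocking

fun main() {
    runBlocking {
        // setupConnection returns null if the ping fails
        val database = setupConnection()
        if (database != null) {
            listAllCollection(database)
        }
    }
}
```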
Similarly, to drop collections, we use the `suspend` function.
```kotlin
suspend fun dropCollection(database: MongoDatabase) {
database.getCollection<Restaurant>(collectionName = "restaurants").drop()
}
```
With this complete, we are all set to start working on our CRUD application. So to start with, we need to create a `data` class that represents
restaurant information that our app saves into the database.
```kotlin
data class Restaurant(
@BsonId
val id: ObjectId,
val address: Address,
val borough: String,
val cuisine: String,
val grades: List<Grade>,
val name: String,
@BsonProperty("restaurant_id")
val restaurantId: String
)
data class Address(
val building: String,
val street: String,
val zipcode: String,
val coord: List<Double>
)
data class Grade(
val date: LocalDateTime,
val grade: String,
val score: Int
)
```
In the above code snippet, we used two annotations:
1. `@BsonId`, which represents the unique identity or `_id` of a document.
2. `@BsonProperty`, which creates an alias for keys in the document — for example, `restaurantId` represents `restaurant_id`.
> Note: Our `Restaurant` data class here is an exact replica of a restaurant document in the sample dataset, but a few fields can be skipped or marked
> as optional — e.g., `grades` and `address` — while maintaining the ability to perform CRUD operations. We are able to do so, as MongoDB’s document
> model allows flexible schema for our data.
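For instance, a slimmed-down variant along the lines of the sketch below (the `RestaurantSummary` name is just an example, not part of the sample app) would still read from and write to the same collection:
```kotlin
data class RestaurantSummary(
    @BsonId
    val id: ObjectId,
    val name: String,
    val borough: String,
    val cuisine: String
)
```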
## Create
With all the heavy lifting done (10 lines of code for connecting), adding a new document to the database is really simple and can be done with one
line of code using `insertOne`. So, let's create a new file called `Create.kt`, which will contain all the create operations.
```kotlin
suspend fun addItem(database: MongoDatabase) {
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
val item = Restaurant(
id = ObjectId(),
address = Address(
building = "Building", street = "street", zipcode = "zipcode", coord =
listOf(Random.nextDouble(), Random.nextDouble())
),
borough = "borough",
cuisine = "cuisine",
grades = listOf(
Grade(
date = LocalDateTime.now(),
grade = "A",
score = Random.nextInt()
)
),
name = "name",
restaurantId = "restaurantId"
)
collection.insertOne(item).also {
println("Item added with id - ${it.insertedId}")
}
}
```
When we run it, the console prints the `_id` of the newly inserted document.
> Again, don't forget to add the environment variable for this file if you had trouble while running it.
If we want to add multiple documents to the collection, we can use `insertMany`, which is recommended over running `insertOne` in a loop.
```kotlin
suspend fun addItems(database: MongoDatabase) {
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
val newRestaurants = collection.find().first().run {
listOf(
this.copy(
id = ObjectId(), name = "Insert Many Restaurant first", restaurantId = Random
.nextInt().toString()
),
this.copy(
id = ObjectId(), name = "Insert Many Restaurant second", restaurantId = Random
.nextInt().toString()
)
)
}
collection.insertMany(newRestaurants).also {
println("Total items added ${it.insertedIds.size}")
}
}
```
With these outputs on the console, we can say that the data has been added successfully.
But what if we want to see the objects in the database? One way is with a read operation, which we'll get to shortly; another is to
use MongoDB Compass to view the information.
MongoDB Compass is a free, interactive GUI tool for querying, optimizing, and analyzing your MongoDB data.
To get started, download the tool and use the `connectionString` to connect with the
database.
## Read
To read the information from the database, we can use the `find` operator. Let's begin by reading any document.
```kotlin
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
collection.find().limit(1).collect {
println(it)
}
```
The `find` operator returns a list of results, but since we are only interested in a single document, we can use the `limit` operator in conjunction
to limit our result set. In this case, it would be a single document.
If we extend this further and want to read a specific document, we can add filter parameters over the top of it:
```kotlin
val queryParams = Filters
.and(
listOf(
eq("cuisine", "American"),
eq("borough", "Queens")
)
)
```
Or, we can use any of the operators from our list. The final code looks like this.
```kotlin
suspend fun readSpecificDocument(database: MongoDatabase) {
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
val queryParams = Filters
.and(
listOf(
eq("cuisine", "American"),
eq("borough", "Queens")
)
)
collection
.find(queryParams)
.limit(2)
.collect {
println(it)
}
}
```
For the output, we see the two matching restaurant documents printed to the console.
> Don't forget to add the environment variable again for this file, if you had trouble while running it.
Another practical use case that comes with a read operation is how to add pagination to the results. This can be done with the `limit` and `offset`
operators.
```kotlin
suspend fun readWithPaging(database: MongoDatabase, offset: Int, pageSize: Int) {
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
val queryParams = Filters
.and(
listOf(
eq(Restaurant::cuisine.name, "American"),
eq(Restaurant::borough.name, "Queens")
)
)
collection
.find(queryParams)
.limit(pageSize)
.skip(offset)
.collect {
println(it)
}
}
```
But with this approach, the query response time often increases with the value of the `offset`. To overcome this, we can create an `Index`,
as shown below.
```kotlin
val collection = database.getCollection<Restaurant>(collectionName = "restaurants")
val options = IndexOptions().apply {
this.name("restaurant_id_index")
this.background(true)
}
collection.createIndex(
keys = Indexes.ascending("restaurant_id"),
options = options
)
```
## Update
Now, let's discuss how to edit/update an existing document. Again, let's quickly create a new Kotlin file, `Update.kt`.
In general, there are two ways of updating any document:
* Perform an **update** operation, which allows us to update specific fields of the matching documents without impacting the other fields.
* Perform a **replace** operation to replace the matching document with the new document.
For this exercise, we'll use the document we created earlier with the create operation `{restaurant_id: "restaurantId"}` and update
the `restaurant_id` with a more realistic value. Let's split this into two sub-tasks for clarity.
First, using `Filters`, we query to filter the document, similar to the read operation earlier.
```kotlin
val collection = db.getCollection("restaurants")
val queryParam = Filters.eq("restaurant_id", "restaurantId")
```
Then, we can set the `restaurant_id` with a random integer value using `Updates`.
```kotlin
val updateParams = Updates.set("restaurant_id", Random.nextInt().toString())
```
And finally, we use `updateOne` to update the document in an atomic operation.
```kotlin
collection.updateOne(filter = queryParam, update = updateParams).also {
println("Total docs modified ${it.matchedCount} and fields modified ${it.modifiedCount}")
}
```
In the above example, we were already aware of which document we wanted to update — the restaurant with the id `restaurantId` — but there could be a
few use cases where that might not be the situation. In such cases, we would first look up the document and then update it. `findOneAndUpdate` can be
handy here: it combines both of those steps into a single atomic operation, unlocking additional performance.
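As a sketch of that approach — reusing one of the documents we inserted earlier with `insertMany` — it could look like this:
```kotlin
suspend fun updateWithFindOneAndUpdate(db: MongoDatabase) {
    val collection = db.getCollection<Restaurant>(collectionName = "restaurants")
    val queryParam = Filters.eq(Restaurant::name.name, "Insert Many Restaurant first")
    val updateParams = Updates.set("restaurant_id", Random.nextInt().toString())
    // Returns the matched document (the pre-update version by default), or null if nothing matched
    collection.findOneAndUpdate(filter = queryParam, update = updateParams)?.also {
        println(it)
    }
}
```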
Another variation of the same could be updating multiple documents with one call. `updateMany` is useful for such use cases — for example, if we want
to update the `cuisine` of all Chinese restaurants to our favourite type of cuisine and their `borough` to Brooklyn.
```kotlin
suspend fun updateMultipleDocuments(db: MongoDatabase) {
val collection = db.getCollection("restaurants")
val queryParam = Filters.eq(Restaurant::cuisine.name, "Chinese")
val updateParams = Updates.combine(
Updates.set(Restaurant::cuisine.name, "Indian"),
Updates.set(Restaurant::borough.name, "Brooklyn")
)
collection.updateMany(filter = queryParam, update = updateParams).also {
println("Total docs matched ${it.matchedCount} and modified ${it.modifiedCount}")
}
}
```
In these examples, we used `set` and `combine` with `Updates`, but there are many more update operators to explore that allow other
intuitive operations, like setting the current date or a timestamp, incrementing or decrementing the value of a field, and so on. To learn more about the different
types of update operators you can use with Kotlin and MongoDB, refer to
our docs.
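For example, a combination like the following sketch — where the `lastUpdated` and `visits` field names are purely illustrative — stamps the current date and increments a counter on the matching documents in one call:
```kotlin
val updateParams = Updates.combine(
    Updates.currentDate("lastUpdated"),
    Updates.inc("visits", 1)
)
collection.updateMany(filter = Filters.eq(Restaurant::borough.name, "Brooklyn"), update = updateParams)
```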
## Delete
Now, let's explore one final CRUD operation: delete. We'll start by exploring how to delete a single document. To do this, we'll
use `findOneAndDelete` instead of `deleteOne`. As an added benefit, this also returns the deleted document as output. In our example, we delete the
restaurant:
```kotlin
val collection = db.getCollection<Restaurant>(collectionName = "restaurants")
val queryParams = Filters.eq("restaurant_id", "restaurantId")
collection.findOneAndDelete(filter = queryParams).also {
it?.let {
println(it)
}
}
```
To delete multiple documents, we can use `deleteMany`. We can, for example, use this to delete all the data we created earlier with our create
operation.
```kotlin
suspend fun deleteRestaurants(db: MongoDatabase) {
val collection = db.getCollection<Restaurant>(collectionName = "restaurants")
val queryParams = Filters.or(
listOf(
Filters.regex(Restaurant::name.name, Pattern.compile("^Insert")),
Filters.regex("restaurant_id", Pattern.compile("^restaurant"))
)
)
collection.deleteMany(filter = queryParams).also {
println("Document deleted : ${it.deletedCount}")
}
}
```
## Summary
Congratulations! You now know how to set up your first Kotlin application with MongoDB and perform CRUD operations. The complete source code of the
app can be found on GitHub.
If you have any feedback on your experience working with the MongoDB Kotlin driver, please submit a comment in our
user feedback portal or reach out to me on Twitter: @codeWithMohit. | md | {
"tags": [
"MongoDB",
"Kotlin"
],
"pageDescription": "This is an introductory article on how to build an application in Kotlin using MongoDB Atlas and the MongoDB Kotlin driver, the latest addition to our list of official drivers.",
"contentType": "Tutorial"
} | Getting Started with the MongoDB Kotlin Driver | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-starlette | created | # Getting Started with MongoDB and Starlette
Starlette is a lightweight ASGI framework/toolkit, which is ideal for building high-performance asyncio services. It provides everything you need to create JSON APIs, with very little boilerplate. However, if you would prefer an async web framework that is a bit more "batteries included," be sure to read my tutorial on Getting Started with MongoDB and FastAPI.
In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Starlette projects.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.
## Running the Example
To begin, you should clone the example code from GitHub.
``` shell
git clone git@github.com:mongodb-developer/mongodb-with-starlette.git
```
You will need to install a few dependencies: Starlette, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.
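If you don't already have a virtualenv for this project, you can create and activate one first (macOS/Linux shown; on Windows, run `venv\Scripts\activate` instead):
``` shell
python3 -m venv venv
source venv/bin/activate
```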
``` shell
cd mongodb-with-starlette
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
Once you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.
``` shell
export MONGODB_URL="mongodb+srv://:@/?retryWrites=true&w=majority"
```
Remember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.
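With direnv, for example, you can drop the export into an `.envrc` file at the project root, and after a one-time `direnv allow`, it will be loaded automatically whenever you enter the directory:
``` shell
# .envrc
export MONGODB_URL="mongodb+srv://:@/?retryWrites=true&w=majority"
```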
The final step is to start your Starlette server.
``` shell
uvicorn app:app --reload
```
Once the application has started, you can view it in your browser at http://localhost:8000. There won't be much to see at the moment as you do not have any data! We'll look at each of the end-points a little later in the tutorial; but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.
``` shell
curl -X "POST" "http://localhost:8000/" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json; charset=utf-8' \
-d '{
"name": "Jane Doe",
"email": "[email protected]",
"gpa": "3.9"
}'
```
Try creating a few students via these `POST` requests, and then refresh your browser.
## Creating the Application
All the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.
### Connecting to MongoDB
One of the very first things we do is connect to our MongoDB database.
``` python
client = motor.motor_asyncio.AsyncIOMotorClient(os.environ["MONGODB_URL"])
db = client.college
```
We're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.
### Application Routes
Our application has five routes:
- POST / - creates a new student.
- GET / - view a list of all students.
- GET /{id} - view a single student.
- PUT /{id} - update a student.
- DELETE /{id} - delete a student.
#### Create Student Route
``` python
async def create_student(request):
student = await request.json()
student["_id"] = str(ObjectId())
new_student = await db["students"].insert_one(student)
created_student = await db["students"].find_one({"_id": new_student.inserted_id})
return JSONResponse(status_code=201, content=created_student)
```
Note how I am converting the `ObjectId` to a string before assigning it as the `_id`. MongoDB stores data as BSON; Starlette encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`, but JSON does not. Fortunately, MongoDB `_id` values don't need to be ObjectIDs. Because of this, for simplicity, we convert ObjectIds to strings before storing them.
The `create_student` route receives the new student data as a JSON string in a `POST` request. The `request.json` function converts this JSON string back into a Python dictionary which we can then pass to our MongoDB client.
The `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and return this in our `JSONResponse`.
Starlette returns an HTTP `200` status code by default, but in this instance, a `201` created is more appropriate.
##### Read Routes
The application has two read routes: one for viewing all students and the other for viewing an individual student.
``` python
async def list_students(request):
students = await db"students"].find().to_list(1000)
return JSONResponse(students)
```
Motor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`, but in a real application, you would use the skip and limit parameters in `find` to paginate your results.
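A paginated version of this route might read those values from the query string — a sketch, with the query parameter names being my own choice:
``` python
async def list_students(request):
    skip = int(request.query_params.get("skip", 0))
    limit = int(request.query_params.get("limit", 100))
    students = await db["students"].find(skip=skip, limit=limit).to_list(limit)
    return JSONResponse(students)
```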
``` python
async def show_student(request):
id = request.path_params"id"]
if (student := await db["students"].find_one({"_id": id})) is not None:
return JSONResponse(student)
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The student detail route has a path parameter of `id`, which Starlette passes as an argument to the `show_student` function. We use the id to attempt to find the corresponding student in the database. The conditional in this section is using an assignment expression, a recent addition to Python (introduced in version 3.8) and often referred to by the incredibly cute sobriquet "walrus operator."
If a document with the specified `id` does not exist, we raise an `HTTPException` with a status of `404`.
##### Update Route
``` python
async def update_student(request):
id = request.path_params"id"]
student = await request.json()
update_result = await db["students"].update_one({"_id": id}, {"$set": student})
if update_result.modified_count == 1:
if (updated_student := await db["students"].find_one({"_id": id})) is not None:
return JSONResponse(updated_student)
if (existing_student := await db["students"].find_one({"_id": id})) is not None:
return JSONResponse(existing_student)
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
The `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the id of the document to update as well as the new data in the JSON body.
We attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.
If the `modified_count` is not equal to one, we still check to see if there is a document matching the id. A `modified_count` of zero could mean that there is no document with that id, but it could also mean that the document does exist, but it did not require updating because the current values are the same as those supplied in the `PUT` request.
It is only after that final find fails that we raise a `404` Not Found exception.
##### Delete Route
``` python
async def delete_student(request):
id = request.path_params["id"]
delete_result = await db["students"].delete_one({"_id": id})
if delete_result.deleted_count == 1:
return JSONResponse(status_code=204)
raise HTTPException(status_code=404, detail=f"Student {id} not found")
```
Our last route is `delete_student`. Again, because this is acting upon a single document, we have to supply an id in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or "No Content." In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified id, then instead we return a `404`.
### Creating the Starlette App
``` python
app = Starlette(
debug=True,
routes=[
Route("/", create_student, methods=["POST"]),
Route("/", list_students, methods=["GET"]),
Route("/{id}", show_student, methods=["GET"]),
Route("/{id}", update_student, methods=["PUT"]),
Route("/{id}", delete_student, methods=["DELETE"]),
],
)
```
The final piece of code creates an instance of Starlette and includes each of the routes we defined. You can see that many of the routes share the same URL but use different HTTP methods. For example, a `GET` request to `/{id}` will return the corresponding student document for you to view, whereas a `DELETE` request to the same URL will delete it. So, be very thoughtful about which HTTP method you use for each request!
## Wrapping Up
I hope you have found this introduction to Starlette with MongoDB useful. Now is a fascinating time for Python developers as more and more frameworks—both new and old—begin taking advantage of async.
If you would like to learn more and take your MongoDB and Starlette knowledge to the next level, check out Ado's very in-depth tutorial on how to [Build a Property Booking Website with Starlette, MongoDB, and Twilio. Also, if you're interested in FastAPI (a web framework built upon Starlette), you should view my tutorial on getting started with the FARM stack: FastAPI, React, & MongoDB.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Getting Started with MongoDB and Starlette",
"contentType": "Quickstart"
} | Getting Started with MongoDB and Starlette | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/creating-multiplayer-drawing-game-phaser | created | # Creating a Multiplayer Drawing Game with Phaser and MongoDB
When it comes to MongoDB, an often overlooked industry that it works amazingly well in is gaming. It works great in gaming because of its performance, but more importantly its ability to store whatever complex data the game throws at it.
Let's say you wanted to create a drawing game like Pictionary. I know what you're thinking: why would I ever want to create a Pictionary game with MongoDB integration? Well, what if you wanted to be able to play with friends remotely? In this scenario, you could store your brushstrokes in MongoDB and load those brushstrokes on your friend's device. These brushstrokes can be pretty much anything. They could be images, vector data, or something else entirely.
A drawing game is just one of many possible games that would pair well with MongoDB.
In this tutorial, we're going to create a drawing game using Phaser. The data will be stored and synced with MongoDB and be visible on everyone else's device whether that is desktop or mobile.
Take the following animated image for example:
In the above example, I have my MacBook as well as my iOS device in the display. I'm drawing on my iOS device, on the right, and after the brushstrokes are considered complete, they are sent to MongoDB and the other clients, such as the MacBook. This is why the strokes are not instantly available as the strokes are in progress.
## The Tutorial Requirements
There are a few requirements that must be met prior to starting this
tutorial:
- A MongoDB Atlas free tier cluster or better must be available.
- A MongoDB Realm application configured to use the Atlas cluster.
The heavy lifting of this example will be with Phaser, MongoDB
Atlas, and MongoDB Realm.
>MongoDB Atlas has a forever FREE tier that can be configured in the MongoDB Cloud.
There's no account requirement or downloads necessary when it comes to building Phaser games. These games are both web and mobile compatible.
## Drawing with Phaser, HTML, and Simple JavaScript
When it comes to Phaser, you can do everything within a single HTML
file. This file must be served rather than opened from the local
filesystem, but nothing extravagant needs to be done with the project.
Let's start by creating a project somewhere on your computer with an
**index.html** file and a **game.js** file. We're going to add some
boilerplate code to our **index.html** prior to adding our game logic to
the **game.js** file.
Within the **index.html** file, add something along the following lines — the exact CDN URLs and versions for Phaser and the Realm Web SDK are up to you:
``` xml
<!DOCTYPE html>
<html>
    <head>
        <!-- The CDN URLs below are placeholders; use whichever Phaser 3 and Realm Web SDK builds you prefer -->
        <script src="https://cdn.jsdelivr.net/npm/phaser@3/dist/phaser.min.js"></script>
        <script src="https://unpkg.com/realm-web/dist/bundle.iife.js"></script>
    </head>
    <body>
        <div id="game"></div>
        <script src="game.js"></script>
        <script>
            // Page-level logic for creating and joining games will go here later.
        </script>
    </body>
</html>
```
In the above HTML, we've added scripts for both Phaser and MongoDB
Realm. We've also defined an HTML container `<div>` element, as seen by
the `game` id, to hold our game when the time comes.
We could add all of our Phaser and MongoDB logic into the unused
`<script>` tag, but to keep things cleaner we'll use the **game.js** file. Within the **game.js** file, start with a `Game` class that holds the Phaser configuration — a sketch, assuming the scene functions we define next:
``` javascript
class Game {

    constructor() {
        this.phaserConfig = {
            type: Phaser.AUTO,
            parent: "game",
            width: window.innerWidth,
            height: window.innerHeight,
            scene: {
                key: "default",
                init: this.initScene,
                create: this.createScene,
                update: this.updateScene
            }
        };
    }

}
```
In the above code, we're defining that our game should be rendered in the HTML element with the `game` id. We're also saying that it should take the full width and height that's available to us in the browser. This full width and height works for both computers and mobile devices.
Now we can take a look at each of our scenes in the **game.js** file, starting with the `initScene` function:
``` javascript
async initScene(data) {
this.strokes = [];
this.isDrawing = false;
}
```
For now, the `initScene` function will remain short. This is because we are not going to worry about initializing any database information yet. When it comes to `strokes`, this will represent independent collections of points. A brushstroke is just a series of connected points, so we want to maintain them. We need to be able to determine when a stroke starts and finishes, so we can use `isDrawing` to determine if we've lifted our cursor or pencil.
Now let's have a look at the `createScene` function:
``` javascript
async createScene() {
this.graphics = this.add.graphics();
this.graphics.lineStyle(4, 0x00aa00);
}
```
Like with the `initScene`, this function will change as we add the database functionality. For now, we're initializing the graphics layer in our scene and defining the line size and color that should be rendered. This is a simple game so all lines will be 4 pixels in size and the color green.
This brings us into the most extravagant of the scenes. Let's take a look at the `updateScene` function:
``` javascript
async updateScene() {
if(!this.input.activePointer.isDown && this.isDrawing) {
this.isDrawing = false;
} else if(this.input.activePointer.isDown) {
if(!this.isDrawing) {
this.path = new Phaser.Curves.Path(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);
this.isDrawing = true;
} else {
this.path.lineTo(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);
}
this.path.draw(this.graphics);
}
}
```
The `updateScene` function is responsible for continuously rendering things to the screen. It is constantly run, unlike the `createScene` which is only ran once. When updating, we want to check to see if we are either drawing or not drawing.
If the `activePointer` is not down, it means we are not drawing. If we are not drawing, we probably want to indicate so with the `isDrawing` variable. This condition will get more advanced when we start adding database logic.
If the `activePointer` is down, it means we are drawing. In Phaser, to draw a line, we need a starting point and then a series of points we can render as a path. If we're starting the brushstroke, we should probably create a new path. Because we set our line to be 4 pixels, if we want the line to draw at the center of our cursor, we need to use half the size for the x and y position.
We're not ever clearing the canvas, so we don't actually need to draw the path unless the pointer is active. When the pointer is active, whatever was previously drawn will stay on the screen. This saves us some processing resources.
We're almost at a point where we can test our offline game!
The scenes are good, even though we haven't added MongoDB logic to them. We need to actually create the game so the scenes can be used. Within the **game.js** file, update the following function:
``` javascript
async createGame(id, authId) {
this.game = new Phaser.Game(this.phaserConfig);
this.game.scene.start("default", {});
}
```
The above code will take the Phaser configuration that we had set in the `constructor` method and start the `default` scene. As of right now we aren't passing any data to our scenes, but we will in the future.
With the `createGame` function available, we need to make use of it. Within the **index.html** file, the `<script>` tag now needs to create a `Game` instance, connect to your MongoDB Realm application, and expose a `joinOrCreateGame` function that either creates a new game document or joins an existing one before handing things over to the Phaser scene. There's a little more going on in that script now, but don't forget to use your own application ids, database names, and collections. The page also gets a small UI for entering a game id — something along these lines, where the element ids and wrapper function names are just examples:
``` xml
<input type="text" id="gameIdInput" placeholder="Game ID" onkeypress="keyPressedWrapper(event)" />
<button onclick="buttonClickedWrapper()">Create / Join</button>
<div id="gameIdText">Game ID:</div>
<div id="gameStatusText">Not in a game...</div>
```
Not all of it was absolutely necessary, but it does give our game a better look and feel. Essentially now we have an input field. When the input field is submitted, whether that be with keypress or click, the `joinOrCreateGame` function is called. The `keyCode == 13` check represents that the enter key was pressed. The function isn't called directly; instead, the wrapper functions call it. The game id is extracted from the input, and the HTML components are transformed based on the information about the game.
To summarize what happens, the user submits a game id. The game id floats on top of the game scene as well as information regarding if you're the owner of the game or not.
The markup looks worse than it is.
Now that we can create or join games both from a UX perspective and a logic perspective, we need to change what happens when it comes to interacting with the game itself. We need to be able to store our brush strokes in MongoDB. To do this, we're going to revisit the `updateScene` function:
``` javascript
updateScene() {
if(this.authId == this.ownerId) {
if(!this.input.activePointer.isDown && this.isDrawing) {
this.collection.updateOne(
{
"owner_id": this.authId,
"_id": this.gameId
},
{
"$push": {
"strokes": this.path.toJSON()
}
}
).then(result => console.log(result));
this.isDrawing = false;
} else if(this.input.activePointer.isDown) {
if(!this.isDrawing) {
this.path = new Phaser.Curves.Path(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);
this.isDrawing = true;
} else {
this.path.lineTo(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);
}
this.path.draw(this.graphics);
}
}
}
```
Remember, this time around we have access to the game id and the owner id information. It was passed into the scene when we created or joined a game.
When it comes to actually drawing, nothing is going to change. However, when we aren't drawing, we want to update the game document to push our new strokes. Phaser makes it easy to convert our line information to JSON which inserts very easily into MongoDB. Remember earlier when I said accepting flexible data was a huge benefit for gaming?
So we are pushing these brushstrokes to MongoDB. We need to be able to load them from MongoDB.
Let's update our `createScene` function:
``` javascript
async createScene() {
this.graphics = this.add.graphics();
this.graphics.lineStyle(4, 0x00aa00);
this.strokes.forEach(stroke => {
this.path = new Phaser.Curves.Path();
this.path.fromJSON(stroke);
this.path.draw(this.graphics);
});
}
```
When the `createScene` function executes, we are taking the `strokes` array that was provided by the `createGame` and `joinGame` functions and looping over it. Remember, in the `updateScene` function we are storing the exact path. This means we can load the exact path and draw it.
This is great, but the users on the other end will only see the brush strokes when they first launch the game. We need to make it so they get new brushstrokes as they are pushed into our document. We can do this with change streams in Realm.
Let's update our `createScene` function once more:
``` javascript
async createScene() {
this.graphics = this.add.graphics();
this.graphics.lineStyle(4, 0x00aa00);
this.strokes.forEach(stroke => {
this.path = new Phaser.Curves.Path();
this.path.fromJSON(stroke);
this.path.draw(this.graphics);
});
const stream = await this.collection.watch({ "fullDocument._id": this.gameId });
stream.onNext(event => {
let updatedFields = event.updateDescription.updatedFields;
if(updatedFields.hasOwnProperty("strokes")) {
updatedFields = [updatedFields.strokes["0"]];
}
for(let strokeNumber in updatedFields) {
let changeStreamPath = new Phaser.Curves.Path();
changeStreamPath.fromJSON(updatedFields[strokeNumber]);
changeStreamPath.draw(this.graphics);
}
});
}
```
We're now watching our collection for documents that have an `_id` field that matches our game id. Remember, we're in a game, we don't need to watch documents that are not our game. When a new document comes in, we can look at the updated fields and render the new strokes to the scene.
So why are we not using `path` like all the other areas of the code?
You don't know when new strokes are going to come in. If you're using the same global variable between the active drawing canvas and the change stream, there's a potential for the strokes to merge together given certain race conditions. It's just easier to let the change stream make its own path.
At this point in time, assuming your cluster is available and the configurations were made correctly, any drawing you do will be added to MongoDB and essentially synchronized to other computers and devices watching the document.
## Conclusion
You just saw how to make a simple drawing game with Phaser and MongoDB. Given the nature of Phaser, this game is compatible on desktops as well as mobile devices, and, given the nature of MongoDB and Realm, anything you add to the game will sync across devices and platforms as well.
This is just one of many possible gaming examples that could use MongoDB, and these interactive applications don't even need to be a game. You could be creating the next Photoshop application and you want every brushstroke, every layer, etc., to be synchronized to MongoDB. What you can do is limitless. | md | {
"tags": [
"JavaScript",
"Realm"
],
"pageDescription": "Learn how to build a drawing game with Phaser that synchronizes with MongoDB Realm for multiplayer.",
"contentType": "Article"
} | Creating a Multiplayer Drawing Game with Phaser and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/maintaining-geolocation-specific-game-leaderboard-phaser-mongodb | created | md | {
"tags": [
"JavaScript",
"Atlas"
],
"pageDescription": "Learn how to create a game with a functioning leaderboard using Phaser, JavaScript, and MongoDB.",
"contentType": "Tutorial"
} | Maintaining a Geolocation Specific Game Leaderboard with Phaser and MongoDB | 2024-05-20T17:32:23.501Z |
|
devcenter | https://www.mongodb.com/developer/products/realm/build-infinite-runner-game-unity-realm-unity-sdk | created | # Build an Infinite Runner Game with Unity and the Realm Unity SDK
> The Realm .NET SDK for Unity is now in GA. Learn more here.
>
Did you know that MongoDB has a Realm SDK for the Unity game development framework that makes working with game data effortless when making mobile games, PC games, and similar? It's currently an alpha release, but you can already start using it to build persistence into your cross platform gaming projects.
A popular game template for the past few years has been the infinite runner style game. Great games such as Temple Run and Subway Surfers have had many competitors, each with their own spin on the subject. If you're unfamiliar with the infinite runner concept, the idea is that you have a player that can move horizontally to fixed positions. As the game progresses, obstacles and rewards enter the scene. The player must dodge the obstacles or grab the rewards, depending on the object, and this continues until the player collides with an obstacle. As time progresses, the game generally speeds up to make things more difficult.
While the game might sound complicated, there's actually a lot of repetition.
In this tutorial, we'll look at how to make a game in Unity and C#, particularly our own infinite runner 2d game. We'll look at important concepts such as object pooling and collision, as well as data persistence using the Realm SDK for Unity.
To get an idea of what we want to build, check out the following animated image:
As you can see in the above image, we have simple shapes as well as cake. The score increases as time increases or when cake is obtained. The level restarts when you collide with an obstacle and depending on what your score was, it could now be the new high score.
## The Requirements
There are a few requirements, some of which will change once the Realm SDK for Unity becomes a stable release.
- Unity 2020.2.4f1 or newer
- The Realm SDK for Unity, 10.1.1 or newer
This tutorial might work with earlier versions of the Unity editor. However, 2020.2.4f1 is the version that I'm using. As of right now, the Realm SDK for Unity is only available as a tarball through GitHub rather than through the Unity Asset Store. For now, you'll have to dig through the releases on GitHub.
## Creating the Game Objects for the Player, Obstacles, and Rewards
Even though there are a lot of visual components moving around on the screen, there's not a lot happening behind the scenes in terms of the Unity project. There are three core visual objects that make up this game example.
We have the player, the obstacles, and the rewards, which we're going to interchangeably call cake. Each of the objects will have the same components, but different scripts. We'll add the components here, but create the scripts later.
Within your project, create the three different game objects in the Unity editor. To start, each will be an empty game object.
Rather than working with all kinds of fancy graphics, create a 1x1 pixel image that is white. We're going to use it for all of our game objects, just giving them a different color or size. If you'd prefer the fancy graphics, consider checking out the Unity Asset Store for more options.
Each game object should have a **Sprite Renderer**, **Rigidbody 2D**, and a **Box Collider 2D** component attached. The **Sprite Renderer** component can use the 1x1 pixel graphic or one of your choosing. For the **Rigidbody 2D**, make sure the **Body Type is Kinematic** on all game objects because we won't be using things like gravity. Likewise, make sure the **Is Trigger** is enabled for each of the **Box Collider 2D** components.
We'll be adding more as we go along, but for now, we have a starting point.
## Creating an Object to Represent the Game Controller
There are a million different ways to create a great game with Unity. However, for this game, we're going to not rely on any particular visually rendered object for managing the game itself. Instead, we're going to create a game object responsible for game management.
Add an empty game object to your scene titled **GameController**. While we won't be doing anything with it now, we'll be attaching scripts to it for managing the object pool and the score.
## Adding Logic to the Game Objects Within the Scene with C# Scripts
With the three core game objects (player, obstacle, reward) in the scene, we need to give each of them some game logic. Let's start with the logic for the obstacle and reward since they are similar.
The idea behind the obstacle and reward is that they are constantly moving down from the top of the screen. As they become visible, the position along the x-axis is randomized. As they fall off the screen, the object is disabled and eventually reset.
Create an **Obstacle.cs** file with the following C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Obstacle : MonoBehaviour {
public float movementSpeed;
private float[] _fixedPositionX = new float[] { -8.0f, 0.0f, 8.0f };
void OnEnable() {
int randomPositionX = Random.Range(0, 3);
transform.position = new Vector3(_fixedPositionX[randomPositionX], 6.0f, 0);
}
void Update() {
transform.position += Vector3.down * movementSpeed * Time.deltaTime;
if(transform.position.y < -5.25) {
gameObject.SetActive(false);
}
}
}
```
In the above code, we have fixed position possibilities. When the game object is enabled, we randomly choose from one of the possible fixed positions and update the overall position of the game object.
For every frame of the game, the position of the game object falls down on the y-axis. If the object reaches a certain position, it is then disabled.
Similarly, create a **Cake.cs** file with the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Cake : MonoBehaviour {
public float movementSpeed;
private float[] _fixedPositionX = new float[] { -8.0f, 0.0f, 8.0f };
void OnEnable() {
int randomPositionX = Random.Range(0, 3);
transform.position = new Vector3(_fixedPositionX[randomPositionX], 6.0f, 0);
}
void Update() {
transform.position += Vector3.down * movementSpeed * Time.deltaTime;
if (transform.position.y < -5.25) {
gameObject.SetActive(false);
}
}
void OnTriggerEnter2D(Collider2D collider) {
if (collider.gameObject.tag == "Player") {
gameObject.SetActive(false);
}
}
}
```
The above code should look the same with the exception of the `OnTriggerEnter2D` function. In the `OnTriggerEnter2D` function, we have the following code:
``` csharp
void OnTriggerEnter2D(Collider2D collider) {
if (collider.gameObject.tag == "Player") {
gameObject.SetActive(false);
}
}
```
If the current reward game object collides with another game object and that other game object is tagged as being a "Player", then the reward object is disabled. We'll handle the score keeping of the consumed reward elsewhere.
Make sure to attach the `Obstacle` and `Cake` scripts to the appropriate game objects within your scene.
With the obstacles and rewards out of the way, let's look at the logic for the player. Create a **Player.cs** file with the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
public class Player : MonoBehaviour {
public float movementSpeed;
void Update() {
if(Input.GetKey(KeyCode.LeftArrow)) {
transform.position += Vector3.left * movementSpeed * Time.deltaTime;
} else if(Input.GetKey(KeyCode.RightArrow)) {
transform.position += Vector3.right * movementSpeed * Time.deltaTime;
}
}
void OnTriggerEnter2D(Collider2D collider) {
if(collider.gameObject.tag == "Obstacle") {
// Handle Score Here
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
} else if(collider.gameObject.tag == "Cake") {
// Handle Score Here
}
}
}
```
The **Player.cs** file will change in the future, but for now, we can move the player around based on the arrow keys on the keyboard. We are also looking at collisions with other objects. If the player object collides with an object tagged as being an obstacle, then the goal is to change the score and restart the scene. Otherwise, if the player object collides with an object tagged as being "Cake", which is a reward, then the goal is to just change the score.
Make sure to attach the `Player` script to the appropriate game object within your scene.
## Pooling Obstacles and Rewards with Performance-Maximizing Object Pools
As it stands, when an obstacle falls off the screen, it becomes disabled. As a reward is collided with or as it falls off the screen, it becomes disabled. In an infinite runner, we need those obstacles and rewards to be constantly resetting to look infinite. While we could just destroy and instantiate as needed, that is a performance-heavy task. Instead, we should make use of an object pool.
The idea behind an object pool is that you instantiate objects when the game starts. The number you instantiate is up to you. Then, while the game is being played, objects are pulled from the pool if they are available and when they are done, they are added back to the pool. Remember the enabling and disabling of our objects in the obstacle and reward scripts? That has to do with pooling.
Ages ago, I had written a tutorial around object pooling, but we'll explore it here as a refresher. Create an **ObjectPool.cs** file with the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ObjectPool : MonoBehaviour
{
public static ObjectPool SharedInstance;
private List<GameObject> pooledObstacles;
private List<GameObject> pooledCake;
public GameObject obstacleToPool;
public GameObject cakeToPool;
public int amountToPool;
void Awake() {
SharedInstance = this;
}
void Start() {
pooledObstacles = new List<GameObject>();
pooledCake = new List<GameObject>();
GameObject tmpObstacle;
GameObject tmpCake;
for(int i = 0; i < amountToPool; i++) {
tmpObstacle = Instantiate(obstacleToPool);
tmpObstacle.SetActive(false);
pooledObstacles.Add(tmpObstacle);
tmpCake = Instantiate(cakeToPool);
tmpCake.SetActive(false);
pooledCake.Add(tmpCake);
}
}
public GameObject GetPooledObstacle() {
for(int i = 0; i < amountToPool; i++) {
if(pooledObstacles[i].activeInHierarchy == false) {
return pooledObstacles[i];
}
}
return null;
}
public GameObject GetPooledCake() {
for(int i = 0; i < amountToPool; i++) {
if(pooledCake[i].activeInHierarchy == false) {
return pooledCake[i];
}
}
return null;
}
}
```
If the code looks a little familiar, a lot of it was taken from the Unity educational resources, particularly Introduction to Object Pooling.
The `ObjectPool` class is meant to be a singleton instance, meaning that we want to use the same pool regardless of where we are and not accidentally create numerous pools. We start by initializing each pool, which in our example is a pool of obstacles and a pool of rewards. For each object in the pool, we initialize them as disabled. The instantiation of our objects will be done with prefabs, but we'll get to that soon.
With the pool initialized, we can make use of the GetPooledObstacle or GetPooledCake methods to pull from the pool. Remember, items in the pool should be disabled. Otherwise, they are considered to be in use. We loop through our pools to find the first object that is disabled and if none exist, then we return null.
Alright, so we have object pooling logic and need to fill the pool. This is where the object prefabs come in.
As of right now, you should have an **Obstacle** game object and a **Cake** game object in your scene. These game objects should have various physics and collision-related components attached, as well as the logic scripts. Create a **Prefabs** directory within your **Assets** directory and then drag each of the two game objects into that directory. Doing this will convert them from a game object in the scene to a reusable prefab.
With the prefabs in your **Prefabs** directory, delete the obstacle and reward game objects from your scene. We're going to add them to the scene via our object pooling script, not through the Unity UI.
You should have the `ObjectPool` script completed. Make sure you attach this script to the **GameController** game object. Then, drag each of your prefabs into the public variables of that script in the inspector for the **GameController** game object.
Just like that, your prefabs will be pooled at the start of your game. However, just because we are pooling them doesn't mean we are using them. We need to create another script to take objects from the pool.
Create a **GameController.cs** file and include the following C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class GameController : MonoBehaviour {
public float obstacleTimer = 2;
public float timeUntilObstacle = 1;
public float cakeTimer = 1;
public float timeUntilCake = 1;
void Update() {
timeUntilObstacle -= Time.deltaTime;
timeUntilCake -= Time.deltaTime;
if(timeUntilObstacle <= 0) {
GameObject obstacle = ObjectPool.SharedInstance.GetPooledObstacle();
if(obstacle != null) {
obstacle.SetActive(true);
}
timeUntilObstacle = obstacleTimer;
}
if(timeUntilCake <= 0) {
GameObject cake = ObjectPool.SharedInstance.GetPooledCake();
if(cake != null) {
cake.SetActive(true);
}
timeUntilCake = cakeTimer;
}
}
}
```
In the above code, we are making use of a few timers. We're creating timers to determine how frequently an object should be taken from the object pool.
When the timer indicates we are ready to take from the pool, we use the `GetPooledObstacle` or `GetPooledCake` methods, set the object taken as enabled, and then reset the timer. Each instantiated prefab has the logic script attached, so once the object is enabled, it will start falling from the top of the screen.
To activate this script, make sure to attach it to the **GameController** game object within the scene.
## Persisting Game Scores with the Realm SDK for Unity
If you ran the game as of right now, you'd be able to move your player around and collide with obstacles or rewards that continuously fall from the top of the screen. There's no concept of score-keeping or data persistence in the game up until this point.
Including Realm in the game can be broken into two parts. For now, it is three parts due to needing to manually add the dependency to your project, but two parts will be evergreen.
From the Realm .NET releases, find the latest release that includes Unity. For this tutorial, I'm using the **realm.unity.bundle-10.1.1.tgz** file.
In Unity, click **Window -> Package Manager** and choose to **Add package from tarball...**, then find the Realm SDK that you had just downloaded.
It may take a few minutes to import the SDK, but once it's done, we can start using it.
Before we start adding code, we need to be able to display our score information to the user. In your Unity scene, add three **Text** game objects: one for the high score, one for the current score, and one for the amount of cake or rewards obtained. We'll be using these game objects soon.
Let's create a **PlayerStats.cs** file and add the following C# code:
``` csharp
using Realms;
public class PlayerStats : RealmObject {
[PrimaryKey]
public string Username { get; set; }
public RealmInteger<int> Score { get; set; }
public PlayerStats() {}
public PlayerStats(string Username, int Score) {
this.Username = Username;
this.Score = Score;
}
}
```
The above code represents an object within our Realm data store. For our example, we want the high score for any given player to be in our Realm. While we won't have multiple users in our example, the foundation is there.
To use the above `RealmObject`, we'll want to create another script. Create a **Score.cs** file and add the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using Realms;
public class Score : MonoBehaviour {
private Realm _realm;
private PlayerStats _playerStats;
private int _cake;
public Text highScoreText;
public Text currentScoreText;
public Text cakeText;
void Start() {
_realm = Realm.GetInstance();
_playerStats = _realm.Find("nraboy");
if(_playerStats is null) {
_realm.Write(() => {
_playerStats = _realm.Add(new PlayerStats("nraboy", 0));
});
}
highScoreText.text = "HIGH SCORE: " + _playerStats.Score.ToString();
_cake = 0;
}
void OnDisable() {
_realm.Dispose();
}
void Update() {
currentScoreText.text = "SCORE: " + (Mathf.Floor(Time.timeSinceLevelLoad) + _cake).ToString();
cakeText.text = "CAKE: " + _cake;
}
public void CalculateHighScore() {
int snapshotScore = (int)Mathf.Floor(Time.timeSinceLevelLoad) + _cake;
if(_playerStats.Score < snapshotScore) {
_realm.Write(() => {
_playerStats.Score = snapshotScore;
});
}
}
public void AddCakeToScore() {
_cake++;
}
}
```
In the above code, when the `Start` method is called, we get the Realm instance and do a find for a particular user. If the user doesn't exist, we create a new one, at which point we can use our Realm like any other object in our application.
When we decide to call the `CalculateHighScore` method, we do a check to see if the new score should be saved. In this example, we are using the rewards as a multiplier to the score.
If you've never used Realm before, the Realm SDK for Unity uses the same API as the .NET SDK. You can learn more about how to use it in the getting started guide. You can also swing by the community to get additional help.
So, we have the `Score` class. This script should be attached to the **GameController** game object and each of the **Text** game objects should be dragged into the appropriate areas using the inspector.
We're not done yet. Remember, our **Player.cs** file needed to update the score. Before we open our class, make sure to drag the **GameController** into the appropriate area of the **Player** game object using the Unity inspector.
Open the **Player.cs** file, add a public `Score score;` field to the class (this is the slot the **GameController** gets dragged into), and add the following to the `OnTriggerEnter2D` method:
``` csharp
void OnTriggerEnter2D(Collider2D collider) {
if(collider.gameObject.tag == "Obstacle") {
score.CalculateHighScore();
SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);
} else if(collider.gameObject.tag == "Cake") {
score.AddCakeToScore();
}
}
```
When running the game, not only will we have something playable, but the score should update and persist depending on if we've failed at the level or not.
The above image is a reminder of what we've built, minus the graphic for the cake.
## Conclusion
You just saw how to create an infinite runner type game with Unity and C# that uses the MongoDB Realm SDK for Unity when it comes to data persistence. Like previously mentioned, the Realm SDK is currently an alpha release, so it isn't a flawless experience and there are features missing. However, it works great for a simple game example like we saw here.
If you're interested in checking out this project, it can be found on GitHub. There's also a video version of this tutorial, an on-demand live-stream, which can be found below.
As a fun fact, this infinite runner example wasn't my first attempt at one. I built something similar a long time ago and it was quite fun. Check it out and continue your journey as a game developer.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Realm",
"C#",
"Unity"
],
"pageDescription": "Learn how to use Unity and the Realm SDK for Unity to build an infinite runner style game.",
"contentType": "Tutorial"
} | Build an Infinite Runner Game with Unity and the Realm Unity SDK | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-how-to-add-realm-to-your-unity-project | created | # Persistence in Unity Using Realm
When creating a game with Unity, we often reach the point where we need to save data that we need at a later point in time. This could be something simple, like a table of high scores, or a lot more complex, like the state of the game that got paused and now needs to be resumed exactly the way the user left it when they quit it earlier. Maybe you have tried this before using `PlayerPrefs` but your data was too complex to save it in there. Or you have tried SQL only to find it to be very complicated and cumbersome to use.
Realm can help you achieve this easily and quickly with just some minor adjustments to your code.
The goal of this article is to show you how to add Realm to your Unity game and make sure your data is persisted. The Realm Unity SDK is part of our Realm .NET SDK. The documentation for the Realm .NET SDK will help you get started easily.
The first part of this tutorial will describe the example itself. If you are already familiar with Unity or really just want to see Realm in action, you can also skip it and jump straight to the second part.
## Example game
We will be using a simple 3D chess game for demonstration purposes. Creating this game itself will not be part of this tutorial. However, this section will provide you with an overview so that you can follow along and add Realm to the game. This example can be found in our Unity examples repository.
The final implementation of the game including the usage of Realm is also part of the example repository.
To make it easy to find your way around this example, here are some notes to get you started:
The interesting part in the `MainScene` to look at is the `Board` which is made up of `Squares` and `Pieces`. The `Squares` are just slightly scaled and colored default `Cube` objects which we utilize to visualize the `Board` but also detect clicks for moving `Pieces` by using its already attached `Box Collider` component.
The `Pieces` have to be activated first, which happens by making them clickable as well. `Pieces` are not initially added to the `Board` but instead will be spawned by the `PieceSpawner`. You can find them in the `Prefabs` folder in the `Project` hierarchy.
The important part to look for here is the `Piece` script which detects clicks on this `Piece` (3) and offers a color change via `Select()` (1) and `Deselect()` (2) to visualize if a `Piece` is active or not.
```cs
using UnityEngine;
public class Piece : MonoBehaviour
{
private Events events = default;
private readonly Color selectedColor = new Color(1, 0, 0, 1);
private readonly Color deselectedColor = new Color(1, 1, 1, 1);
// 1
public void Select()
{
gameObject.GetComponent<Renderer>().material.color = selectedColor;
}
// 2
public void Deselect()
{
gameObject.GetComponent<Renderer>().material.color = deselectedColor;
}
// 3
private void OnMouseDown()
{
events.PieceClickedEvent.Invoke(this);
}
private void Awake()
{
events = FindObjectOfType<Events>();
}
}
```
We use two events to actually track the click on a `Piece` (1) or a `Square` (2):
```cs
using UnityEngine;
using UnityEngine.Events;
public class PieceClickedEvent : UnityEvent<Piece> { }
public class SquareClickedEvent : UnityEvent<Vector3> { }
public class Events : MonoBehaviour
{
// 1
public readonly PieceClickedEvent PieceClickedEvent = new PieceClickedEvent();
// 2
public readonly SquareClickedEvent SquareClickedEvent = new SquareClickedEvent();
}
```
The `InputListener` waits for those events to be invoked and will then notify other parts of our game about those updates. Pieces need to be selected when clicked (1) and deselected if another one was clicked (2).
Clicking a `Square` while a `Piece` is selected will send a message (3) to the `GameState` to update the position of this `Piece`.
```cs
using UnityEngine;
public class InputListener : MonoBehaviour
{
[SerializeField] private Events events = default;
[SerializeField] private GameState gameState = default;
private Piece activePiece = default;
private void OnEnable()
{
events.PieceClickedEvent.AddListener(OnPieceClicked);
events.SquareClickedEvent.AddListener(OnSquareClicked);
}
private void OnDisable()
{
events.PieceClickedEvent.RemoveListener(OnPieceClicked);
events.SquareClickedEvent.RemoveListener(OnSquareClicked);
}
private void OnPieceClicked(Piece piece)
{
if (activePiece != null)
{
// 2
activePiece.Deselect();
}
// 1
activePiece = piece;
activePiece.Select();
}
private void OnSquareClicked(Vector3 position)
{
if (activePiece != null)
{
// 3
gameState.MovePiece(activePiece, position);
activePiece.Deselect();
activePiece = null;
}
}
}
```
The actual movement as well as controlling the spawning and destroying of pieces is done by the `GameState`, in which all the above information eventually comes together to update `Piece` positions and possibly destroy other `Piece` objects. Whenever we move a `Piece` (1), we not only update its position (2) but also need to check if there is a `Piece` in that position already (3) and if so, destroy it (4).
In addition to updating the game while it is running, the `GameState` offers two more functionalities:
- set up the initial board (5)
- reset the board to its initial state (6)
```cs
using System.Linq;
using UnityEngine;
public class GameState : MonoBehaviour
{
[SerializeField] private PieceSpawner pieceSpawner = default;
[SerializeField] private GameObject pieces = default;
// 1
public void MovePiece(Piece movedPiece, Vector3 newPosition)
{
// 3
// Check if there is already a piece at the new position and if so, destroy it.
var attackedPiece = FindPiece(newPosition);
if (attackedPiece != null)
{
// 4
Destroy(attackedPiece.gameObject);
}
// 2
// Update the movedPiece's GameObject.
movedPiece.transform.position = newPosition;
}
// 6
public void ResetGame()
{
// Destroy all GameObjects.
foreach (var piece in pieces.GetComponentsInChildren())
{
Destroy(piece.gameObject);
}
// Recreate the GameObjects.
pieceSpawner.CreateGameObjects(pieces);
}
private void Awake()
{
// 5
pieceSpawner.CreateGameObjects(pieces);
}
private Piece FindPiece(Vector3 position)
{
return pieces.GetComponentsInChildren()
.FirstOrDefault(piece => piece.transform.position == position);
}
}
```
Go ahead and try it out yourself if you like. You can play around with the board and pieces and reset if you want to start all over again.
To keep the example simple and easy to follow, there are no rules implemented. You can move the pieces however you want. Also, the game is purely local for now and will be expanded using our Sync component in a later article to be playable online with others.
In the following section, I will explain how to make sure that the current game state gets saved and the players can resume the game at any state.
## Adding Realm to your project
The first thing we need to do is to import the Realm framework into Unity.
The easiest way to do this is by using NPM.
You'll find it via `Windows` → `Package Manager` → cogwheel in the top right corner → `Advanced Project Settings`.
Within the `Scoped Registries`, you can add the `Name`, `URL`, and `Scope` as follows:
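Based on the `manifest.json` shown a little further down, the values to enter are:

- Name: `NPM`
- URL: `https://registry.npmjs.org/`
- Scope(s): `io.realm.unity`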
This adds `NPM` as a source for libraries. The final step is to tell the project which dependencies to actually integrate into the project. This is done in the `manifest.json` file which is located in the `Packages` folder of your project.
Here you need to add the following line to the `dependencies`:
```json
"io.realm.unity": ""
```
Replace `<version-number>` with the most recent Realm version found in https://github.com/realm/realm-dotnet/releases and you're all set.
The final `manifest.json` should look something like this:
```json
{
"dependencies": {
...
"io.realm.unity": "10.3.0"
},
"scopedRegistries":
{
"name": "NPM",
"url": "https://registry.npmjs.org/",
"scopes": [
"io.realm.unity"
]
}
]
}
```
When you switch back to Unity, it will reload the dependencies. If you then open the `Package Manager` again, you should see `Realm` as a new entry in the list on the left.
We can now start using Realm in our Unity project.
## Top-down or bottom-up?
Before we actually start adding Realm to our code, we need to think about how we want to achieve this and how the UI and database will interact with each other.
There are basically two options we can choose from: top-down or bottom-up.
The top-down approach would be to have the UI drive the changes. The `Piece` would know about its database object and whenever a `Piece` is moved, it would also update the database with its new position.
The preferred approach would be bottom-up, though. Changes will be applied to the Realm and it will then take care of whatever implications this has on the UI by sending notifications.
Let's first look into the initial setup of the board.
## Setting up the board
The first thing we want to do is to define a Realm representation of our piece since we cannot save the `MonoBehaviour` directly in Realm. Classes that are supposed to be saved in Realm need to subclass `RealmObject`. The class `PieceEntity` will represent such an object. Note that we cannot just duplicate the types from `Piece` since not all of them can be saved in Realm, like `Vector3` and `enum`.
Add the following scripts to the project:
```cs
using Realms;
using UnityEngine;
public class PieceEntity : RealmObject
{
// 1
public PieceType PieceType
{
get => (PieceType)Type;
private set => Type = (int)value;
}
// 2
public Vector3 Position
{
get => PositionEntity.ToVector3();
set => PositionEntity = new Vector3Entity(value);
}
// 3
private int Type { get; set; }
private Vector3Entity PositionEntity { get; set; }
// 4
public PieceEntity(PieceType type, Vector3 position)
{
PieceType = type;
Position = position;
}
// 5
protected override void OnPropertyChanged(string propertyName)
{
if (propertyName == nameof(PositionEntity))
{
RaisePropertyChanged(nameof(Position));
}
}
// 6
private PieceEntity()
{
}
}
```
```cs
using Realms;
using UnityEngine;
public class Vector3Entity : EmbeddedObject // 7
{
public float X { get; private set; }
public float Y { get; private set; }
public float Z { get; private set; }
public Vector3Entity(Vector3 vector) // 8
{
X = vector.x;
Y = vector.y;
Z = vector.z;
}
public Vector3 ToVector3() => new Vector3(X, Y, Z); // 9
private Vector3Entity() // 10
{
}
}
```
Even though we cannot save the `PieceType` (1) and the position (2) directly in the Realm, we can still expose them using backing variables (3) to make working with this class easier while still fulfilling the requirements for saving data in Realm.
Additionally, we provide a convenience constructor (4) for setting those two properties. A default constructor (6) also has to be provided for every `RealmObject`. Since we are not going to use it here, though, we can set it to `private`.
Note that one of these backing variables is a `RealmObject` itself, or rather a subclass of it: `EmbeddedObject` (7). By extracting the position to a separate class `Vector3Entity` the `PieceEntity` is more readable. Another plus is that we can use the `EmbeddedObject` to represent a 1:1 relationship. Every `PieceEntity` can only have one `Vector3Entity` and even more importantly, every `Vector3Entity` can only belong to one `PieceEntity` because there can only ever be one `Piece` on any given `Square`.
The `Vector3Entity`, like the `PieceEntity`, has some convenience functionality like a constructor that takes a `Vector3` (8), the `ToVector3()` function (9) and the private, mandatory default constructor (10) like `PieceEntity`.
Looking back at the `PieceEntity`, you will notice one more function: `OnPropertyChanged` (5). Realm sends notifications for changes to fields saved in the database. Since we expose those fields using `PieceType` and `Position`, we need to make sure those notifications are passed on. This is achieved by calling `RaisePropertyChanged(nameof(Position));` whenever `PositionEntity` changes.
The next step is to add some way to actually add `Pieces` to the `Realm`. The current database state will always represent the current state of the board. When we create a new `PieceEntity`—for example, when setting up the board—the `GameObject` for it (`Piece`) will be created. If a `Piece` gets moved, the `PieceEntity` will be updated by the `GameState` which then leads to the `Piece`'s `GameObject` being updated using above mentioned notifications.
First, we will need to set up the board. To achieve this using the bottom-up approach, we adjust the `PieceSpawner` as follows:
```cs
using Realms;
using UnityEngine;
public class PieceSpawner : MonoBehaviour
{
[SerializeField] private Piece prefabBlackBishop = default;
[SerializeField] private Piece prefabBlackKing = default;
[SerializeField] private Piece prefabBlackKnight = default;
[SerializeField] private Piece prefabBlackPawn = default;
[SerializeField] private Piece prefabBlackQueen = default;
[SerializeField] private Piece prefabBlackRook = default;
[SerializeField] private Piece prefabWhiteBishop = default;
[SerializeField] private Piece prefabWhiteKing = default;
[SerializeField] private Piece prefabWhiteKnight = default;
[SerializeField] private Piece prefabWhitePawn = default;
[SerializeField] private Piece prefabWhiteQueen = default;
[SerializeField] private Piece prefabWhiteRook = default;
public void CreateNewBoard(Realm realm)
{
realm.Write(() =>
{
// 1
realm.RemoveAll();
// 2
realm.Add(new PieceEntity(PieceType.WhiteRook, new Vector3(1, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteKnight, new Vector3(2, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteBishop, new Vector3(3, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteQueen, new Vector3(4, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteKing, new Vector3(5, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteBishop, new Vector3(6, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteKnight, new Vector3(7, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhiteRook, new Vector3(8, 0, 1)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(1, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(2, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(3, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(4, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(5, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(6, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(7, 0, 2)));
realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(8, 0, 2)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(1, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(2, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(3, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(4, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(5, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(6, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(7, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(8, 0, 7)));
realm.Add(new PieceEntity(PieceType.BlackRook, new Vector3(1, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackKnight, new Vector3(2, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackBishop, new Vector3(3, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackQueen, new Vector3(4, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackKing, new Vector3(5, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackBishop, new Vector3(6, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackKnight, new Vector3(7, 0, 8)));
realm.Add(new PieceEntity(PieceType.BlackRook, new Vector3(8, 0, 8)));
});
}
public void SpawnPiece(PieceEntity pieceEntity, GameObject parent)
{
var piecePrefab = pieceEntity.PieceType switch
{
PieceType.BlackBishop => prefabBlackBishop,
PieceType.BlackKing => prefabBlackKing,
PieceType.BlackKnight => prefabBlackKnight,
PieceType.BlackPawn => prefabBlackPawn,
PieceType.BlackQueen => prefabBlackQueen,
PieceType.BlackRook => prefabBlackRook,
PieceType.WhiteBishop => prefabWhiteBishop,
PieceType.WhiteKing => prefabWhiteKing,
PieceType.WhiteKnight => prefabWhiteKnight,
PieceType.WhitePawn => prefabWhitePawn,
PieceType.WhiteQueen => prefabWhiteQueen,
PieceType.WhiteRook => prefabWhiteRook,
_ => throw new System.Exception("Invalid piece type.")
};
var piece = Instantiate(piecePrefab, pieceEntity.Position, Quaternion.identity, parent.transform);
piece.Entity = pieceEntity;
}
}
```
The important change here is `CreateNewBoard`. Instead of spawning the `Piece`s, we now add `PieceEntity` objects to the Realm. When we look at the changes in `GameState`, we will see how this actually creates a `Piece` per `PieceEntity`.
Here we just wipe the database (1) and then add new `PieceEntity` objects (2). Note that this is wrapped in a `realm.Write` block. Whenever we want to change the database, we need to enclose the change in a write transaction. This makes sure that no other piece of code can change the database at the same time since transactions block each other.
The last step to create a new board is to update the `GameState` to make use of the new `PieceSpawner` and the `PieceEntity` that we just created.
We'll go through these changes step by step. First we also need to import Realm here as well:
```cs
using Realms;
```
Then we add a private field to save our `Realm` instance to avoid creating it over and over again. We also create another private field to save the collection of pieces that are on the board, as well as a notification token, which we need for the above-mentioned notifications:
```cs
private Realm realm;
private IQueryable<PieceEntity> pieceEntities;
private IDisposable notificationToken;
```
In `Awake`, we do need to get access to the `Realm`. This is achieved by opening an instance of it (1) and then asking it for all `PieceEntity` objects currently saved using `realm.All` (2) and assigning them to our `pieceEntities` field:
```cs
private void Awake()
{
realm = Realm.GetInstance(); // 1
pieceEntities = realm.All<PieceEntity>(); // 2
// 3
notificationToken = pieceEntities.SubscribeForNotifications((sender, changes, error) =>
{
// 4
if (error != null)
{
Debug.Log(error.ToString());
return;
}
// 5
// Initial notification
if (changes == null)
{
// Check if we actually have `PieceEntity` objects in our Realm (which means we resume a game).
if (sender.Count > 0)
{
// 6
// Each `RealmObject` needs a corresponding `GameObject` to represent it.
foreach (PieceEntity pieceEntity in sender)
{
pieceSpawner.SpawnPiece(pieceEntity, pieces);
}
}
else
{
// 7
// No game was saved, create a new board.
pieceSpawner.CreateNewBoard(realm);
}
return;
}
// 8
foreach (var index in changes.InsertedIndices)
{
var pieceEntity = sender[index];
pieceSpawner.SpawnPiece(pieceEntity, pieces);
}
});
}
```
Note that collections are live objects. This has two positive implications: Every access to the object reference always returns an updated representation of said object. Because of this, every subsequent change to the object will be visible any time the object is accessed again. We also get notifications for those changes if we subscribed to them. This can be done by calling `SubscribeForNotifications` on a collection (3).
Apart from an error object that we need to check (4), we also receive the `changes` and the `sender` (the updated collection itself) with every notification. For every new collection of objects, an initial notification is sent that does not include any `changes` but gives us the opportunity to do some initial setup work (5).
In case we resume a game, we'll already see `PieceEntity` objects in the database even for the initial notification. We need to spawn one `Piece` per `PieceEntity` to represent it (6). We make use of the `SpawnPiece` function in `PieceSpawner` to achieve this. In case the database does not have any objects yet, we need to create the board from scratch (7). Here we use the `CreateNewBoard` function we added earlier to the `PieceSpawner`.
On top of the initial notification, we also expect to receive a notification every time a `PieceEntity` is inserted into the Realm. This is where we continue the `CreateNewBoard` functionality we started in the `PieceSpawner` by adding new objects to the database. After those changes happen, we end up with `changes` (8) inside the notifications. Now we need to iterate over all new `PieceEntity` objects in the `sender` (which represents the `pieceEntities` collection) and add a `Piece` for each new `PieceEntity` to the board.
Apart from inserting new pieces when the board gets set up, we also need to take care of movement and pieces attacking each other. This will be explained in the next section.
## Updating the position of a PieceEntity
Whenever we receive a click on a `Square` and therefore call `MovePiece` in `GameState`, we need to update the `PieceEntity` instead of directly moving the corresponding `GameObject`. The movement of the `Piece` will then happen via the `PropertyChanged` notifications as we saw earlier.
```cs
public void MovePiece(Vector3 oldPosition, Vector3 newPosition)
{
realm.Write(() =>
{
// 1
var attackedPiece = FindPieceEntity(newPosition);
if (attackedPiece != null)
{
realm.Remove(attackedPiece);
}
// 2
var movedPieceEntity = FindPieceEntity(oldPosition);
movedPieceEntity.Position = newPosition;
});
}
// 3
private PieceEntity FindPieceEntity(Vector3 position)
{
return pieceEntities
.Filter("PositionEntity.X == $0 && PositionEntity.Y == $1 && PositionEntity.Z == $2",
position.x, position.y, position.z)
.FirstOrDefault();
}
```
Before actually moving the `PieceEntity`, we do need to check if there is already a `PieceEntity` at the desired position and if so, destroy it. To find a `PieceEntity` at the `newPosition` and also to find the `PieceEntity` that needs to be moved from `oldPosition` to `newPosition`, we can use queries on the `pieceEntities` collection (3).
By querying the collection (calling `Filter`), we can look for one or multiple `RealmObject`s with specific characteristics. In this case, we're interested in the `RealmObject` that represents the `Piece` we are looking for. Note that when using `Filter`, we can only filter on the Realm properties actually saved in the database, not on the convenience properties (`Position` and `PieceType`) exposed by the `PieceEntity`.
If there is an `attackedPiece` at the target position, we need to delete the corresponding `PieceEntity` for this `GameObject` (1). Once that is done, we can update the position of the moved `PieceEntity` (2).
Like the initial setup of the board, this has to be called within a write transaction to make sure no other code is changing the database at the same time.
This is all we had to do to update and persist the position. Go ahead and start the game. Stop and start it again and you should now see the state being persisted.
## Resetting the board
The final step will be to also update our `ResetGame` button to update (or rather, wipe) the `Realm`. At the moment, it does not update the state in the database and just recreates the `GameObject`s.
Resetting works similar to what we do in `Awake` in case there were no entries in the database—for example, when starting the game for the first time.
We can reuse the `CreateNewBoard` functionality here since it includes wiping the database before actually re-creating it:
```cs
public void ResetGame()
{
pieceSpawner.CreateNewBoard(realm);
}
```
With this change, our game is finished and fully functional using a local `Realm` to save the game's state.
## Recap and conclusion
In this tutorial, we have seen that saving your game and resuming it later can be easily achieved by using `Realm`.
The steps we needed to take:
- Add `Realm` via NPM as a dependency.
- Import `Realm` in any class that wants to use it by calling `using Realms;`.
- Create a new `Realm` instance via `Realm.GetInstance()` to get access to the database.
- Define entities by subclassing `RealmObject` (or any of its subclasses):
- Fields need to be public and primitive values or lists.
- A default constructor is mandatory.
- A convenience constructor and additional functions can be defined.
- Write to a `Realm` using `realm.Write()` to avoid data corruption.
- CRUD operations (need to use a `write` transaction; see the sketch after this list):
- Use `realm.Add()` to `Create` a new object.
- Use `realm.Remove()` to `Delete` an object.
- `Read` and `Update` can be achieved by simply `getting` and `setting` the `public fields`.
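To see those CRUD operations side by side, here is a minimal sketch using the `PieceEntity` from this tutorial (the concrete positions are only for illustration):

```cs
var realm = Realm.GetInstance();

realm.Write(() =>
{
    // Create: add a new object to the Realm.
    var pawn = realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(1, 0, 2)));

    // Update: set an exposed property inside the write transaction.
    pawn.Position = new Vector3(1, 0, 3);

    // Delete: remove the object again.
    realm.Remove(pawn);
});

// Read: queries don't need a write transaction.
var allPieces = realm.All<PieceEntity>();
```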
With this, you should be ready to use Realm in your games.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB and Realm. | md | {
"tags": [
"Realm",
"C#",
"Unity"
],
"pageDescription": "This article shows how to integrate the Realm Unity SDK into your Unity game. We will cover everything you need to know to get started: installing the SDK, defining your models, and connecting the database to your GameObjects.",
"contentType": "Tutorial"
} | Persistence in Unity Using Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-aggregation | created | # Getting Started with Aggregation Pipelines in Python
MongoDB's aggregation pipelines are one of its most powerful features. They allow you to write expressions, broken down into a series of stages, which perform operations including aggregation, transformations, and joins on the data in your MongoDB databases. This allows you to do calculations and analytics across documents and collections within your MongoDB database.
## Prerequisites
This quick start is the second in a series of Python posts. I *highly* recommend you start with my first post, Basic MongoDB Operations in Python, which will show you how to get set up correctly with a free MongoDB Atlas database cluster containing the sample data you'll be working with here. Go read it and come back. I'll wait. Without it, you won't have the database set up correctly to run the code in this quick start guide.
In summary, you'll need:
- An up-to-date version of Python 3. I wrote the code in this tutorial in Python 3.8, but it should run fine in version 3.6+.
- A code editor of your choice. I recommend either PyCharm or the free VS Code with the official Python extension.
- A MongoDB cluster containing the `sample_mflix` dataset. You can find instructions to set that up in the first blog post in this series.
## Getting Started
MongoDB's aggregation pipelines are very powerful and so they can seem a little overwhelming at first. For this reason, I'll start off slowly. First, I'll show you how to build up a pipeline that duplicates behaviour that you can already achieve with MQL queries, using PyMongo's `find()` method, but instead using an aggregation pipeline with `$match`, `$sort`, and `$limit` stages. Then, I'll show how to make queries that go beyond MQL, demonstrating using `$lookup` to include related documents from another collection. Finally, I'll put the "aggregation" into "aggregation pipeline" by showing you how to use `$group` to group together documents to form new document summaries.
>All of the sample code for this quick start series can be found on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!
All of the pipelines in this post will be executed against the sample_mflix database's `movies` collection. It contains documents that look like this:
``` python
{
'_id': ObjectId('573a1392f29313caabcdb497'),
'awards': {'nominations': 7,
'text': 'Won 1 Oscar. Another 2 wins & 7 nominations.',
'wins': 3},
'cast': ['Janet Gaynor', 'Fredric March', 'Adolphe Menjou', 'May Robson'],
'countries': ['USA'],
'directors': ['William A. Wellman', 'Jack Conway'],
'fullplot': 'Esther Blodgett is just another starry-eyed farm kid trying to '
'break into the movies. Waitressing at a Hollywood party, she '
'catches the eye of alcoholic star Norman Maine, is given a test, '
'and is caught up in the Hollywood glamor machine (ruthlessly '
'satirized). She and her idol Norman marry; but his career '
'abruptly dwindles to nothing',
'genres': ['Drama'],
'imdb': {'id': 29606, 'rating': 7.7, 'votes': 5005},
'languages': ['English'],
'lastupdated': '2015-09-01 00:55:54.333000000',
'plot': 'A young woman comes to Hollywood with dreams of stardom, but '
'achieves them only with the help of an alcoholic leading man whose '
'best days are behind him.',
'poster': 'https://m.media-amazon.com/images/M/MV5BMmE5ODI0NzMtYjc5Yy00MzMzLTk5OTQtN2Q3MzgwOTllMTY3XkEyXkFqcGdeQXVyNjc0MzMzNjA@._V1_SY1000_SX677_AL_.jpg',
'rated': 'NOT RATED',
'released': datetime.datetime(1937, 4, 27, 0, 0),
'runtime': 111,
'title': 'A Star Is Born',
'tomatoes': {'critic': {'meter': 100, 'numReviews': 11, 'rating': 7.4},
'dvd': datetime.datetime(2004, 11, 16, 0, 0),
'fresh': 11,
'lastUpdated': datetime.datetime(2015, 8, 26, 18, 58, 34),
'production': 'Image Entertainment Inc.',
'rotten': 0,
'viewer': {'meter': 79, 'numReviews': 2526, 'rating': 3.6},
'website': 'http://www.vcientertainment.com/Film-Categories?product_id=73'},
'type': 'movie',
'writers': ['Dorothy Parker (screen play)',
'Alan Campbell (screen play)',
'Robert Carson (screen play)',
'William A. Wellman (from a story by)',
'Robert Carson (from a story by)'],
'year': 1937}
```
There's a lot of data there, but I'll be focusing mainly on the `_id`, `title`, `year`, and `cast` fields.
## Your First Aggregation Pipeline
Aggregation pipelines are executed by PyMongo using the Collection's aggregate() method.
The first argument to `aggregate()` is a sequence of pipeline stages to be executed. Much like a query, each stage of an aggregation pipeline is a BSON document, and PyMongo will automatically convert a `dict` into a BSON document for you.
An aggregation pipeline operates on *all* of the data in a collection. Each stage in the pipeline is applied to the documents passing through, and whatever documents are emitted from one stage are passed as input to the next stage, until there are no more stages left. At this point, the documents emitted from the last stage in the pipeline are returned to the client program, in a similar way to a call to `find()`.
Individual stages, such as `$match`, can act as a filter, to only pass through documents matching certain criteria. Other stage types, such as `$project`, `$addFields`, and `$lookup` will modify the content of individual documents as they pass through the pipeline. Finally, certain stage types, such as `$group`, will create an entirely new set of documents based on the documents passed into it taken as a whole. None of these stages change the data that is stored in MongoDB itself. They just change the data before returning it to your program! There *is* a stage, $out, which can save the results of a pipeline back into MongoDB, but I won't be covering it in this quick start.
I'm going to assume that you're working in the same environment that you used for the last post, so you should already have PyMongo and python-dotenv installed, and you should have a `.env` file containing your `MONGODB_URI` environment variable.
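If you still need to install those two packages, something along these lines should do it (the `srv` extra pulls in the dependency PyMongo needs for `mongodb+srv://` connection strings), and the `.env` file just needs a single `MONGODB_URI=...` line containing your Atlas connection string:

``` bash
python -m pip install "pymongo[srv]" python-dotenv
```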
### Finding and Sorting
First, paste the following into your Python code:
``` python
import os
from pprint import pprint
import bson
from dotenv import load_dotenv
import pymongo
# Load config from a .env file:
load_dotenv(verbose=True)
MONGODB_URI = os.environ["MONGODB_URI"]
# Connect to your MongoDB cluster:
client = pymongo.MongoClient(MONGODB_URI)
# Get a reference to the "sample_mflix" database:
db = client["sample_mflix"]
# Get a reference to the "movies" collection:
movie_collection = db["movies"]
```
The above code will provide a global variable, a Collection object called `movie_collection`, which points to the `movies` collection in your database.
Here is some code which creates a pipeline, executes it with `aggregate`, and then loops through and prints the detail of each movie in the results. Paste it into your program.
``` python
pipeline = [
{
"$match": {
"title": "A Star Is Born"
}
},
{
"$sort": {
"year": pymongo.ASCENDING
}
},
]
results = movie_collection.aggregate(pipeline)
for movie in results:
print(" * {title}, {first_castmember}, {year}".format(
title=movie["title"],
first_castmember=movie["cast"][0],
year=movie["year"],
))
```
This pipeline has two stages. The first is a [$match stage, which is similar to querying a collection with `find()`. It filters the documents passing through the stage based on an MQL query. Because it's the first stage in the pipeline, its input is all of the documents in the `movie` collection. The MQL query for the `$match` stage filters on the `title` field of the input documents, so the only documents that will be output from this stage will have a title of "A Star Is Born."
The second stage is a $sort stage. Only the documents for the movie "A Star Is Born" are passed to this stage, so the result will be all of the movies called "A Star Is Born," now sorted by their year field, with the oldest movie first.
Calls to aggregate() return a cursor pointing to the resulting documents. The cursor can be looped through like any other sequence. The code above loops through all of the returned documents and prints a short summary, consisting of the title, the first actor in the `cast` array, and the year the movie was produced.
Executing the code above results in:
``` none
* A Star Is Born, Janet Gaynor, 1937
* A Star Is Born, Judy Garland, 1954
* A Star Is Born, Barbra Streisand, 1976
```
### Refactoring the Code
It is possible to build up whole aggregation pipelines as a single data structure, as in the example above, but it's not necessarily a good idea. Pipelines can get long and complex. For this reason, I recommend you build up each stage of your pipeline as a separate variable, and then combine the stages into a pipeline at the end, like this:
``` python
# Match title = "A Star Is Born":
stage_match_title = {
"$match": {
"title": "A Star Is Born"
}
}
# Sort by year, ascending:
stage_sort_year_ascending = {
"$sort": { "year": pymongo.ASCENDING }
}
# Now the pipeline is easier to read:
pipeline = [
stage_match_title,
stage_sort_year_ascending,
]
```
### Limit the Number of Results
Imagine I wanted to obtain the most recent production of "A Star Is Born" from the movies collection.
This can be thought of as three stages, executed in order:
1. Obtain the movie documents for "A Star Is Born."
2. Sort by year, descending.
3. Discard all but the first document.
The first stage is already the same as `stage_match_title` above. The second stage is the same as `stage_sort_year_ascending`, but with `pymongo.ASCENDING` changed to `pymongo.DESCENDING`. The third stage is a [$limit stage.
The **modified and new** code looks like this:
``` python
# Sort by year, descending:
stage_sort_year_descending = {
"$sort": { "year": pymongo.DESCENDING }
}
# Limit to 1 document:
stage_limit_1 = { "$limit": 1 }
pipeline = [
stage_match_title,
stage_sort_year_descending,
stage_limit_1,
]
```
If you make the changes above and execute your code, then you should see just the following line:
``` none
* A Star Is Born, Barbra Streisand, 1976
```
>Wait a minute! Why isn't there a document for the amazing production with Lady Gaga and Bradley Cooper?
>
>Hold on there! You'll find the answer to this mystery, and more, later on in this blog post.
Okay, so now you know how to filter, sort, and limit the contents of a collection using an aggregation pipeline. But these are just operations you can already do with `find()`! Why would you want to use these complex, new-fangled aggregation pipelines?
Read on, my friend, and I will show you the *true power* of MongoDB aggregation pipelines.
## Look Up Related Data in Other Collections
There's a dirty secret, hiding in the `sample_mflix` database. As well as the `movies` collection, there's also a collection called `comments`. Documents in the `comments` collection look like this:
``` python
{
'_id': ObjectId('5a9427648b0beebeb69579d3'),
'movie_id': ObjectId('573a1390f29313caabcd4217'),
'date': datetime.datetime(1983, 4, 27, 20, 39, 15),
'email': '[email protected]',
'name': 'Cameron Duran',
'text': 'Quasi dicta culpa asperiores quaerat perferendis neque. Est animi '
'pariatur impedit itaque exercitationem.'}
```
It's a comment for a movie. I'm not sure why people are writing Latin comments for these movies, but let's go with it. The second field, `movie_id,` corresponds to the `_id` value of a document in the `movies` collection.
So, it's a comment *related* to a movie!
Does MongoDB enable you to query movies and embed the related comments, like a JOIN in a relational database? *Yes it does!* With the [$lookup stage.
I'll show you how to obtain related documents from another collection, and embed them in the documents from your primary collection. First, create a new pipeline from scratch, and start with the following:
``` python
# Look up related documents in the 'comments' collection:
stage_lookup_comments = {
"$lookup": {
"from": "comments",
"localField": "_id",
"foreignField": "movie_id",
"as": "related_comments",
}
}
# Limit to the first 5 documents:
stage_limit_5 = { "$limit": 5 }
pipeline = [
stage_lookup_comments,
stage_limit_5,
]
results = movie_collection.aggregate(pipeline)
for movie in results:
pprint(movie)
```
The stage I've called `stage_lookup_comments` is a `$lookup` stage. This `$lookup` stage will look up documents from the `comments` collection that have the same movie id. The matching comments will be stored in an array field named `related_comments`, containing all of the comments that have this movie's `_id` value as their `movie_id`.
I've added a `$limit` stage just to ensure that there's a reasonable amount of output without being overwhelming.
Now, execute the code.
>You may notice that the pipeline above runs pretty slowly! There are two reasons for this:
>
>- There are 23.5k movie documents and 50k comments.
>- There's a missing index on the `comments` collection. It's missing on purpose, to teach you about indexes!
>
>I'm not going to show you how to fix the index problem right now. I'll write about that in a later post in this series, focusing on indexes. Instead, I'll show you a trick for working with slow aggregation pipelines while you're developing.
>
>Working with slow pipelines is a pain while you're writing and testing the pipeline. *But*, if you put a temporary `$limit` stage at the *start* of your pipeline, it will make the query faster (although the results may be different because you're not running on the whole dataset).
>
>When I was writing this pipeline, I had a first stage of `{ "$limit": 1000 }`.
>
>When you have finished crafting the pipeline, you can comment out the first stage so that the pipeline will now run on the whole collection. **Don't forget to remove the first stage, or you're going to get the wrong results!**
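As a concrete illustration of that tip, a development version of the pipeline might look something like this (the name of the temporary stage is just for illustration). With or without the temporary stage, the final `$limit` stage means only five movie documents are printed:

``` python
# Temporary stage to speed up development runs.
# Remove (or comment out) this stage before running against the full
# collection, or you'll get the wrong results!
stage_limit_dev = { "$limit": 1000 }

pipeline = [
    stage_limit_dev,
    stage_lookup_comments,
    stage_limit_5,
]
```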
The aggregation pipeline above will print out all of the contents of five movie documents. It's quite a lot of data, but if you look carefully, you should see that there's a new field in each document that looks like this:
``` python
'related_comments': []
```
### Matching on Array Length
If you're *lucky*, you may have some documents in the array, but it's unlikely, as most of the movies have no comments. Now, I'll show you how to add some stages to match only movies which have more than two comments.
Ideally, you'd be able to add a single `$match` stage which obtained the length of the `related_comments` field and matched it against the expression `{ "$gt": 2 }`. In this case, it's actually two steps:
- Add a field (I'll call it `comment_count`) containing the length of the `related_comments` field.
- Match where the value of `comment_count` is greater than two.
Here is the code for the two stages:
``` python
# Calculate the number of comments for each movie:
stage_add_comment_count = {
"$addFields": {
"comment_count": {
"$size": "$related_comments"
}
}
}
# Match movie documents with more than 2 comments:
stage_match_with_comments = {
"$match": {
"comment_count": {
"$gt": 2
}
}
}
```
The two stages go after the `$lookup` stage, and before the `$limit` 5 stage:
``` python
pipeline = [
stage_lookup_comments,
stage_add_comment_count,
stage_match_with_comments,
stage_limit_5,
]
```
While I'm here, I'm going to clean up the output of this code, instead of using `pprint`:
``` python
results = movie_collection.aggregate(pipeline)
for movie in results:
print(movie["title"])
print("Comment count:", movie["comment_count"])
# Loop through the first 5 comments and print the name and text:
for comment in movie["related_comments"][:5]:
print(" * {name}: {text}".format(
name=comment["name"],
text=comment["text"]))
```
*Now* when you run this code, you should see something more like this:
``` none
Footsteps in the Fog
--------------------
Comment count: 3
* Sansa Stark: Error ex culpa dignissimos assumenda voluptates vel. Qui inventore quae quod facere veniam quaerat quibusdam. Accusamus ab deleniti placeat non.
* Theon Greyjoy: Animi dolor minima culpa sequi voluptate. Possimus necessitatibus voluptatem hic cum numquam voluptates.
* Donna Smith: Et esse nulla ducimus tempore aliquid. Suscipit iste dignissimos voluptate velit. Laboriosam sequi quae fugiat similique alias. Corporis cumque labore veniam dignissimos.
```
It's good to see Sansa Stark from Game of Thrones really knows her Latin, isn't it?
Now I've shown you how to work with lookups in your pipelines, I'll show you how to use the `$group` stage to do actual *aggregation*.
## Grouping Documents with `$group`
I'll start with a new pipeline again.
The `$group` stage is one of the more difficult stages to understand, so I'll break this down slowly.
Start with the following code:
``` python
# Group movies by year, producing 'year-summary' documents that look like:
# {
# '_id': 1917,
# }
stage_group_year = {
"$group": {
"_id": "$year",
}
}
pipeline = [
stage_group_year,
]
results = movie_collection.aggregate(pipeline)
# Loop through the 'year-summary' documents:
for year_summary in results:
pprint(year_summary)
```
Execute this code, and you should see something like this:
``` none
{'_id': 1978}
{'_id': 1996}
{'_id': 1931}
{'_id': '2000è'}
{'_id': 1960}
{'_id': 1972}
{'_id': 1943}
{'_id': '1997è'}
{'_id': 2010}
{'_id': 2004}
{'_id': 1947}
{'_id': '1987è'}
{'_id': 1954}
...
```
Each line is a document emitted from the aggregation pipeline. But you're not looking at *movie* documents any more. The `$group` stage groups input documents by the specified `_id` expression and outputs one document for each unique `_id` value. In this case, the expression is `$year`, which means one document will be emitted for each unique value of the `year` field. Each document emitted can (and usually will) also contain values generated from aggregating data from the grouped documents.
Change the stage definition to the following:
``` python
stage_group_year = {
"$group": {
"_id": "$year",
# Count the number of movies in the group:
"movie_count": { "$sum": 1 },
}
}
```
This will add a `movie_count` field, containing the result of adding `1` for every document in the group. In other words, it counts the number of movie documents in the group. If you execute the code now, you should see something like the following:
``` none
{'_id': '1997è', 'movie_count': 2}
{'_id': 2010, 'movie_count': 970}
{'_id': 1947, 'movie_count': 38}
{'_id': '1987è', 'movie_count': 1}
{'_id': 2012, 'movie_count': 1109}
{'_id': 1954, 'movie_count': 64}
...
```
There are a number of accumulator operators, like `$sum`, that allow you to summarize data from the group. If you wanted to build an array of all the movie titles in the emitted document, you could add `"movie_titles": { "$push": "$title" },` to the `$group` stage. In that case, you would get documents that look like this:
``` python
{
'_id': 1917,
'movie_count': 3,
'movie_titles': [
'The Poor Little Rich Girl',
'Wild and Woolly',
'The Immigrant'
]
}
```
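In other words, the modified `$group` stage would look something like this:

``` python
stage_group_year = {
    "$group": {
        "_id": "$year",
        # Count the number of movies in the group:
        "movie_count": { "$sum": 1 },
        # Collect the titles of all the movies in the group:
        "movie_titles": { "$push": "$title" },
    }
}
```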
Something you've probably noticed from the output above is that some of the years contain the "è" character. This database has some messy values in it. In this case, there's only a small handful of documents, and I think we should just filter them out. Add the following two stages to only match documents with a numeric `year` value, and to sort the results:
``` python
stage_match_years = {
"$match": {
"year": {
"$type": "number",
}
}
}
stage_sort_year_ascending = {
"$sort": {"_id": pymongo.ASCENDING}
}
pipeline = [
stage_match_years, # Match numeric years
stage_group_year,
stage_sort_year_ascending, # Sort by year
]
```
Note that the `$match` stage is added to the start of the pipeline, and the `$sort` is added to the end. A general rule is that you should filter documents out early in your pipeline, so that later stages have fewer documents to deal with. It also ensures that the pipeline is more likely to be able to take advantage of any appropriate indexes assigned to the collection.
>
>
>Remember, all of the sample code for this quick start series can be found on GitHub.
>
>
Aggregations using `$group` are a great way to discover interesting things about your data. In this example, I'm illustrating the number of movies made each year, but it would also be interesting to see information about movies for each country, or even look at the movies made by different actors.
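For example, a quick sketch of grouping by country could look like the following. Because `countries` is an array field, a `$unwind` stage first produces one document per movie-country pair (the stage names here are just for illustration):

``` python
# One document per (movie, country) pair:
stage_unwind_countries = { "$unwind": "$countries" }

# Group by country and count the movies:
stage_group_country = {
    "$group": {
        "_id": "$countries",
        "movie_count": { "$sum": 1 },
    }
}

pipeline = [
    stage_unwind_countries,
    stage_group_country,
    { "$sort": { "movie_count": pymongo.DESCENDING } },
]
```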
## What Have You Learned?
You've learned how to construct aggregation pipelines to filter, group, and join documents with other collections. You've hopefully learned that putting a `$limit` stage at the start of your pipeline can be useful to speed up development (but should be removed before going to production). You've also learned some basic optimization tips, like putting filtering expressions towards the start of your pipeline instead of towards the end.
As you've gone through, you'll probably have noticed that there's a *ton* of different stage types, operators, and accumulator operators. Learning how to use the different components of aggregation pipelines is a big part of learning to use MongoDB effectively as a developer.
I love working with aggregation pipelines, and I'm always surprised at what you can do with them!
## Next Steps
Aggregation pipelines are super powerful, and because of this, they're a big topic to cover. Check out the full documentation to get a better idea of their full scope.
MongoDB University also offers a *free* online course on The MongoDB Aggregation Framework.
Note that aggregation pipelines can also be used to generate new data and write it back into a collection, with the $out stage.
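For example, appending a `$out` stage to the pipeline from the previous section would write the year summaries into a separate collection (the collection name below is just an illustration):

``` python
stage_write_to_collection = { "$out": "movies_per_year" }

pipeline = [
    stage_match_years,
    stage_group_year,
    stage_sort_year_ascending,
    stage_write_to_collection,  # $out must be the last stage in the pipeline.
]

# Executing the aggregation writes the results into the
# 'movies_per_year' collection instead of returning them to the client.
movie_collection.aggregate(pipeline)
```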
MongoDB provides a *free* GUI tool called Compass. It allows you to connect to your MongoDB cluster, so you can browse through databases and analyze the structure and contents of your collections. It includes an aggregation pipeline builder which makes it easier to build aggregation pipelines. I highly recommend you install it, or if you're using MongoDB Atlas, use its similar aggregation pipeline builder in your browser. I often use them to build aggregation pipelines, and they include export buttons which will export your pipeline as Python code.
I don't know about you, but when I was looking at some of the results above, I thought to myself, "It would be fun to visualise this with a chart." MongoDB provides a hosted service called Charts which just *happens* to take aggregation pipelines as input. So, now's a good time to give it a try!
I consider aggregation pipelines to be one of MongoDB's two "power tools," along with Change Streams. If you want to learn more about change streams, check out this blog post by my awesome colleague, Naomi Pentrel. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Query, group, and join data in MongoDB using aggregation pipelines with Python.",
"contentType": "Quickstart"
} | Getting Started with Aggregation Pipelines in Python | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/integration-test-atlas-serverless-apps | created | # How to Write Integration Tests for MongoDB Atlas Functions
> As of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas – Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs. Some of the naming or references in this article may be outdated.
Integration tests are vital for apps built with a serverless architecture. Unfortunately, figuring out how to build integration tests for serverless apps can be challenging.
Today, I'll walk you through how to write integration tests for apps built with MongoDB Atlas Functions.
This is the second post in the *DevOps + MongoDB Atlas Functions = 😍* blog series. Throughout this series, I'm explaining how I built automated tests and a CI/CD pipeline for the Social Stats app. In the first post, I explained what the Social Stats app does and how I architected it. Then I walked through how I wrote unit tests for the app's serverless functions. If you haven't read the first post, I recommend starting there to understand what is being tested and then returning to this post.
>Prefer to learn by video? Many of the concepts I cover in this series are available in this video.
## Integration Testing MongoDB Atlas Functions
Today we'll focus on the middle layer of the testing pyramid: integration tests.
Integration tests are designed to test the integration of two or more components that work together as part of the application. A component could be a piece of the code base. A component could also exist outside of the code base. For example, an integration test could check that a function correctly saves information in a database. An integration test could also test that a function is correctly interacting with an external API.
When following the traditional test pyramid, a developer will write significantly more unit tests than integration tests. When testing a serverless app, developers tend to write nearly as many (or sometimes more!) integration tests as unit tests. Why?
Serverless apps rely on integrations. Serverless functions tend to be small pieces of code that interact with other services. Testing these interactions is vital to ensure the application is functioning as expected.
### Example Integration Test
Let's take a look at how I tested the integration between the `storeCsvInDb` Atlas Function, the `removeBreakingCharacters` Atlas Function, and the MongoDB database hosted on Atlas. (I discuss what these functions do and how they interact with each other and the database in my previous post.)
I decided to build my integration tests using Jest since I was already using Jest for my unit tests. You can use whatever testing framework you prefer; the principles described below will still apply.
Let's focus on one test case: storing the statistics about a single Tweet.
As we discussed in the previous post, the storeCsvInDb function completes the following:
- Calls the `removeBreakingCharacters` function to remove breaking characters like emoji.
- Converts the Tweets in the CSV to JSON documents.
- Loops through the JSON documents to clean and store each one in the database.
- Returns an object that contains a list of Tweets that were inserted, updated, or unable to be inserted or updated.
When I wrote unit tests for this function, I created mocks to simulate the `removeBreakingCharacters` function and the database.
We won't use any mocks in the integration tests. Instead, we'll let the `storeCsvInDb` function call the `removeBreakingCharacters` function and the database.
The first thing I did was import `MongoClient` from the `mongodb` module. We will use MongoClient later to connect to the MongoDB database hosted on Atlas.
``` javascript
const { MongoClient } = require('mongodb');
```
Next, I imported several constants from `constants.js`. I created the `constants.js` file to store constants I found myself using in several test files.
``` javascript
const { TwitterStatsDb, statsCollection, header, validTweetCsv, validTweetJson, validTweetId, validTweetUpdatedCsv, validTweetUpdatedJson, emojiTweetId, emojiTweetCsv, emojiTweetJson, validTweetKenId, validTweetKenCsv, validTweetKenJson } = require('../constants.js');
```
Next, I imported the `realm-web` SDK. I'll be able to use this module to call the Atlas Functions.
``` javascript
const RealmWeb = require('realm-web');
```
Then I created some variables that I'll set later.
``` javascript
let collection;
let mongoClient;
let app;
```
Now that I had all of my prep work completed, I was ready to start setting up my test structure. I began by implementing the beforeAll() function. Jest runs `beforeAll()` once before any of the tests in the file are run. Inside of `beforeAll()` I connected to a copy of the App Services app I'm using for testing. I also connected to the test database hosted on Atlas that is associated with that App Services app. Note that this database is NOT my production database. (We'll explore how I created Atlas App Services apps for development, staging, and production later in this series.)
``` javascript
beforeAll(async () => {
// Connect to the App Services app
app = new RealmWeb.App({ id: `${process.env.REALM_APP_ID}` });
// Login to the app with anonymous credentials
await app.logIn(RealmWeb.Credentials.anonymous());
// Connect directly to the database
const uri = `mongodb+srv://${process.env.DB_USERNAME}:${process.env.DB_PASSWORD}@${process.env.CLUSTER_URI}/test?retryWrites=true&w=majority`;
mongoClient = new MongoClient(uri);
await mongoClient.connect();
collection = mongoClient.db(TwitterStatsDb).collection(statsCollection);
});
```
I chose to use the same app with the same database for all of my tests. As a result, these tests cannot be run in parallel as they could interfere with each other.
My app is architected in a way that it cannot be spun up completely using APIs and command line interfaces. Manual intervention is required to get the app configured correctly. If your app is architected in a way that you can completely generate your app using APIs and/or command line interfaces, you could choose to spin up a copy of your app with a new database for every test case or test file. This would allow you to run your test cases or test files in parallel.
I wanted to ensure I always closed the connection to my database, so I added a call to do so in the afterAll() function.
``` javascript
afterAll(async () => {
await mongoClient.close();
})
```
I also wanted to ensure each test started with clean data since all of my tests are using the same database. In the beforeEach() function, I added a call to delete all documents from the collection the tests will be using.
``` javascript
beforeEach(async () => {
await collection.deleteMany({});
});
```
Now that my test infrastructure was complete, I was ready to start writing a test case that focuses on storing a single valid Tweet.
``` javascript
test('Single tweet', async () => {
expect(await app.functions.storeCsvInDb(header + "\n" + validTweetCsv)).toStrictEqual({
newTweets: [validTweetId],
tweetsNotInsertedOrUpdated: [],
updatedTweets: []
});
const tweet = await collection.findOne({ _id: validTweetId });
expect(tweet).toStrictEqual(validTweetJson);
});
```
The test begins by calling the `storeCsvInDb` Atlas Function just as application code would. The test simulates the contents of a Twitter statistics CSV file by concatenating a valid header, a new line character, and the statistics for a Tweet with standard characters.
The test then asserts that the function returns an object that indicates the Tweet statistics were successfully saved.
Finally, the test checks the database directly to ensure the Tweet statistics were stored correctly.
After I finished this integration test, I wrote similar tests for Tweets that contain emoji as well as for updating statistics for Tweets already stored in the database.
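As an illustration of that pattern, a sketch of the emoji case might look like the following. It mirrors the test above and reuses the emoji constants imported at the top of the file (see the repository for the exact assertions used):

``` javascript
test('Tweet with emoji', async () => {
    expect(await app.functions.storeCsvInDb(header + "\n" + emojiTweetCsv)).toStrictEqual({
        newTweets: [emojiTweetId],
        tweetsNotInsertedOrUpdated: [],
        updatedTweets: []
    });

    const tweet = await collection.findOne({ _id: emojiTweetId });
    expect(tweet).toStrictEqual(emojiTweetJson);
});
```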
You can find the complete set of integration tests in storeCsvInDB.test.js.
## Wrapping Up
Integration tests are especially important for apps built with a serverless architecture. The tests ensure that the various components that make up the app are working together as expected.
The Social Stats application source code and associated test files are available in a GitHub repo: . The repo's readme has detailed instructions on how to execute the test files.
Be on the lookout for the next post in this series where I'll walk you through how to write end-to-end tests (sometimes referred to as UI tests) for serverless apps.
## Related Links
Check out the following resources for more information:
- GitHub Repository: Social Stats
- Video: DevOps + MongoDB Atlas Functions = 😍
- Documentation: MongoDB Atlas Functions
- MongoDB Atlas
- MongoDB Charts
| md | {
"tags": [
"Realm",
"Serverless"
],
"pageDescription": "Learn how to write integration tests for MongoDB Atlas Functions.",
"contentType": "Tutorial"
} | How to Write Integration Tests for MongoDB Atlas Functions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-client-side-field-level-encryption | created | # Java - Client Side Field Level Encryption
## Updates
The MongoDB Java quickstart repository is available on GitHub.
### February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
### November 14th, 2023
- Update to Java 17
- Update Java Driver to 4.11.1
- Update mongodb-crypt to 1.8.0
### March 25th, 2021
- Update Java Driver to 4.2.2.
- Added Client Side Field Level Encryption example.
### October 21st, 2020
- Update Java Driver to 4.1.1.
- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in
the `pom.xml` and a configuration file `logback.xml`.
## What's the Client Side Field Level Encryption?
The Client Side Field Level Encryption (CSFLE
for short) is a new feature added in MongoDB 4.2 that allows you to encrypt some fields of your MongoDB documents prior
to transmitting them over the wire to the cluster for storage.
It's the ultimate piece of security against any kind of intrusion or snooping around your MongoDB cluster. Only the
application with the correct encryption keys can decrypt and read the protected data.
Let's check out the Java CSFLE API with a simple example.
## Video
This content is also available in video format.
:youtube[]{vid=tZSH--qwdcE}
## Getting Set Up
I will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just
update it if you already have it:
``` sh
git clone [email protected]:mongodb-developer/java-quick-start.git
```
> If you didn't set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in
> this post.
For this CSFLE quickstart post, I will only use the Community Edition of MongoDB. As a matter of fact, the only part of
CSFLE that is an enterprise-only feature is the
automatic encryption of fields
which is supported
by mongocryptd
or
the Automatic Encryption Shared Library for Queryable Encryption.
> `Automatic Encryption Shared Library for Queryable Encryption` is a replacement for `mongocryptd` and should be the
> preferred solution. They are both optional and part of MongoDB Enterprise.
In this tutorial, I will be using the explicit (or manual) encryption of fields which doesn't require `mongocryptd`
or the `Automatic Encryption Shared Library` and the enterprise edition of MongoDB or Atlas. If you would like to
explore
the enterprise version of CSFLE with Java, you can find out more
in this documentation or in
my more recent
post: How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB.
> Do not confuse `mongocryptd` or the `Automatic Encryption Shared Library` with the `libmongocrypt` library which is
> the companion C library used by the drivers to
> encrypt and decrypt your data. We *need* this library to run CSFLE. I added it in the `pom.xml` file of this project.
``` xml
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-crypt</artifactId>
    <version>1.8.0</version>
</dependency>
```
To keep the code samples short and sweet in the examples below, I will only share the most relevant parts. If you want
to see the code working with all its context, please check the source code in the github repository in
the csfle package
directly.
## Run the Quickstart Code
In this quickstart tutorial, I will show you the CSFLE API using the MongoDB Java Driver. I will show you how to:
- create and configure the MongoDB connections we need.
- create a master key.
- create Data Encryption Keys (DEK).
- create and read encrypted documents.
To run my code from the above repository, check out
the README.
In short, the following command should get you up and running in no time:
``` shell
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption" -Dmongodb.uri="mongodb+srv://USERNAME:[email protected]/test?w=majority" -Dexec.cleanupDaemonThreads=false
```
This is the output you should get:
``` none
**************
* MASTER KEY *
**************
A new Master Key has been generated and saved to file "master_key.txt".
Master Key: [100, 82, 127, -61, -92, -93, 0, -11, 41, -96, 89, -39, -26, -25, -33, 37, 85, -50, 64, 70, -91, 99, -44, -57, 18, 105, -101, -111, -67, -81, -19, 56, -112, 62, 11, 106, -6, 85, -125, 49, -7, -49, 38, 81, 24, -48, -6, -15, 21, -120, -37, -5, 65, 82, 74, -84, -74, -65, -43, -15, 40, 80, -23, -52, -114, -18, -78, -64, -37, -3, -23, -33, 102, -44, 32, 65, 70, -123, -97, -49, -13, 126, 33, -63, -75, -52, 78, -5, -107, 91, 126, 103, 118, 104, 86, -79]
******************
* INITIALIZATION *
******************
=> Creating local Key Management System using the master key.
=> Creating encryption client.
=> Creating MongoDB client with automatic decryption.
=> Cleaning entire cluster.
*************************************
* CREATE KEY ALT NAMES UNIQUE INDEX *
*************************************
*******************************
* CREATE DATA ENCRYPTION KEYS *
*******************************
Created Bobby's data key ID: 668a35af-df8f-4c41-9493-8d09d3d46d3b
Created Alice's data key ID: 003024b3-a3b6-490a-9f31-7abb7bcc334d
************************************************
* INSERT ENCRYPTED DOCUMENTS FOR BOBBY & ALICE *
************************************************
2 docs have been inserted.
**********************************
* FIND BOBBY'S DOCUMENT BY PHONE *
**********************************
Bobby document found by phone number:
{
"_id": {
"$oid": "60551bc8dd8b737958e3733f"
},
"name": "Bobby",
"age": 33,
"phone": "01 23 45 67 89",
"blood_type": "A+",
"medical_record": [
{
"test": "heart",
"result": "bad"
}
]
}
****************************
* READING ALICE'S DOCUMENT *
****************************
Before we remove Alice's key, we can read her document.
{
"_id": {
"$oid": "60551bc8dd8b737958e37340"
},
"name": "Alice",
"age": 28,
"phone": "09 87 65 43 21",
"blood_type": "O+"
}
***************************************************************
* REMOVE ALICE's KEY + RESET THE CONNECTION (reset DEK cache) *
***************************************************************
Alice key is now removed: 1 key removed.
=> Creating MongoDB client with automatic decryption.
****************************************
* TRY TO READ ALICE DOC AGAIN BUT FAIL *
****************************************
We get a MongoException because 'libmongocrypt' can't decrypt these fields anymore.
```
Let's have a look in depth to understand what is happening.
## How it Works
CSFLE diagram with master key and DEK vault
CSFLE looks complicated, like any security and encryption feature, I guess. Let's try to make it simple in a few words.
1. We need
a master key
which unlocks all
the Data Encryption Keys (
DEK for short) that we can use to encrypt one or more fields in our documents.
2. You can use one DEK for your entire cluster or a different DEK for each field of each document in your cluster. It's
up to you.
3. The DEKs are stored in a collection in a MongoDB cluster which does **not** have to be the same that contains the
encrypted data. The DEKs are stored **encrypted**. They are useless without the master key which needs to be
protected.
4. You can use the manual (community edition) or the automated (enterprise advanced or Atlas) encryption of fields.
5. The decryption can be manual or automated. Both are part of the community edition of MongoDB. In this post, I will
use manual encryption and automated decryption to stick with the community edition of MongoDB.
## GDPR Compliance
European laws enforce data protection and privacy. Any oversight can result in massive fines.
CSFLE is a great way to save millions of dollars/euros.
For example, CSFLE could be a great way to enforce
the "right-to-be-forgotten" policy of GDPR. If a user asks to be removed from your
systems, the data must be erased from your production cluster, of course, but also the logs, the dev environment, and
the backups... And let's face it: Nobody will ever remove this user's data from the backups. And if you ever restore or
use these backups, this can cost you millions of dollars/euros.
But now... encrypt each user's data with a unique Data Encryption Key (DEK) and to "forget" a user forever, all you have
to do is lose the key. So, saving the DEKs on a separated cluster and enforcing a low retention policy on this cluster
will ensure that a user is truly forgotten forever once the key is deleted.
Kenneth White, Security Principal at MongoDB who worked on CSFLE, explains this
perfectly
in this answer
in the MongoDB Community Forum.
> If the primary motivation is just to provably ensure that deleted plaintext user records remain deleted no matter
> what, then it becomes a simple timing and separation of concerns strategy, and the most straight-forward solution is
> to
> move the keyvault collection to a different database or cluster completely, configured with a much shorter backup
> retention; FLE does not assume your encrypted keyvault collection is co-resident with your active cluster or has the
> same access controls and backup history, just that the client can, when needed, make an authenticated connection to
> that
> keyvault database. Important to note though that with a shorter backup cycle, in the event of some catastrophic data
> corruption (malicious, intentional, or accidental), all keys for that db (and therefore all encrypted data) are only
> as
> recoverable to the point in time as the shorter keyvault backup would restore.
More trivial, but in the event of an intrusion, any stolen data will be completely worthless without the master key and
would not result in a ruinous fine.
## The Master Key
The master key is an array of 96 bytes. It can be stored in a Key Management Service in a cloud provider or can be
locally
managed (documentation).
One way or another, you must secure it from any threat.
It's as simple as that to generate a new one:
``` java
final byte[] masterKey = new byte[96];
new SecureRandom().nextBytes(masterKey);
```
But you most probably just want to do this once and then reuse the same one each time you restart your application.
Here is my implementation to store it in a local file the first time and then reuse it for each restart.
``` java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.SecureRandom;
import java.util.Arrays;
public class MasterKey {
private static final int SIZE_MASTER_KEY = 96;
private static final String MASTER_KEY_FILENAME = "master_key.txt";
public static void main(String[] args) {
new MasterKey().tutorial();
}
private void tutorial() {
final byte[] masterKey = generateNewOrRetrieveMasterKeyFromFile(MASTER_KEY_FILENAME);
System.out.println("Master Key: " + Arrays.toString(masterKey));
}
private byte[] generateNewOrRetrieveMasterKeyFromFile(String filename) {
byte[] masterKey = new byte[SIZE_MASTER_KEY];
try {
retrieveMasterKeyFromFile(filename, masterKey);
System.out.println("An existing Master Key was found in file \"" + filename + "\".");
} catch (IOException e) {
masterKey = generateMasterKey();
saveMasterKeyToFile(filename, masterKey);
System.out.println("A new Master Key has been generated and saved to file \"" + filename + "\".");
}
return masterKey;
}
private void retrieveMasterKeyFromFile(String filename, byte[] masterKey) throws IOException {
try (FileInputStream fis = new FileInputStream(filename)) {
fis.read(masterKey, 0, SIZE_MASTER_KEY);
}
}
private byte[] generateMasterKey() {
byte[] masterKey = new byte[SIZE_MASTER_KEY];
new SecureRandom().nextBytes(masterKey);
return masterKey;
}
private void saveMasterKeyToFile(String filename, byte[] masterKey) {
try (FileOutputStream fos = new FileOutputStream(filename)) {
fos.write(masterKey);
} catch (IOException e) {
e.printStackTrace();
}
}
}
```
> This is nowhere near safe for a production environment because leaving the `master_key.txt` directly in the
> application folder on your production server is like leaving the vault combination on a sticky note. Secure that file,
> or better yet, consider using a KMS in production.
In this simple quickstart, I will only use a single master key, but it's totally possible to use multiple master keys.
## The Key Management Service (KMS) Provider
Whichever solution you choose for the master key, you need
a KMS provider to set
up the `ClientEncryptionSettings` and the `AutoEncryptionSettings`.
Here is the configuration for a local KMS:
``` java
Map<String, Map<String, Object>> kmsProviders = new HashMap<String, Map<String, Object>>() {{
    put("local", new HashMap<String, Object>() {{
        put("key", localMasterKey);
    }});
}};
```
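The same map can hold several providers if you decide to work with more than one master key. For example, here is a minimal sketch combining the local master key with an AWS KMS entry (the AWS credentials are placeholders, and this is not part of the quickstart repository):
``` java
Map<String, Map<String, Object>> kmsProviders = new HashMap<>();

// Local master key, as above
Map<String, Object> localProvider = new HashMap<>();
localProvider.put("key", localMasterKey);
kmsProviders.put("local", localProvider);

// AWS KMS credentials (placeholders - replace with your own)
Map<String, Object> awsProvider = new HashMap<>();
awsProvider.put("accessKeyId", "<your AWS access key ID>");
awsProvider.put("secretAccessKey", "<your AWS secret access key>");
kmsProviders.put("aws", awsProvider);
```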
## The Clients
We will need to set up two different clients:
- The first one ─ `ClientEncryption` ─ will be used to create our Data Encryption Keys (DEK) and encrypt our fields
manually.
- The second one ─ `MongoClient` ─ will be the more conventional MongoDB connection that we will use to read and write
our documents, with the difference that it will be configured to automatically decrypt the encrypted fields.
### ClientEncryption
``` java
ConnectionString connection_string = new ConnectionString("mongodb://localhost");
MongoClientSettings kvmcs = MongoClientSettings.builder().applyConnectionString(connection_string).build();
ClientEncryptionSettings ces = ClientEncryptionSettings.builder()
.keyVaultMongoClientSettings(kvmcs)
.keyVaultNamespace("csfle.vault")
.kmsProviders(kmsProviders)
.build();
ClientEncryption encryption = ClientEncryptions.create(ces);
```
### MongoClient
``` java
AutoEncryptionSettings aes = AutoEncryptionSettings.builder()
.keyVaultNamespace("csfle.vault")
.kmsProviders(kmsProviders)
.bypassAutoEncryption(true)
.build();
MongoClientSettings mcs = MongoClientSettings.builder()
.applyConnectionString(connection_string)
.autoEncryptionSettings(aes)
.build();
MongoClient client = MongoClients.create(mcs);
```
> `bypassAutoEncryption(true)` is the ticket for the Community Edition. Without it, `mongocryptd` or
> the `Automatic Encryption Shared Library` would rely on the JSON schema that you would have to provide to encrypt
> automatically the documents. See
> this example in the documentation.
> You don't have to reuse the same connection string for both connections. It would actually be a lot more
> "GDPR-friendly" to use separated clusters, so you can enforce a low retention policy on the Data Encryption Keys.
## Unique Index on Key Alternate Names
The first thing you should do before you create your first Data Encryption Key is to create a unique index on the key
alternate names to make sure that you can't reuse the same alternate name on two different DEKs.
These names will help you "label" your keys to know what each one is used for ─ which is still totally up to you.
``` java
MongoCollection<Document> vaultColl = client.getDatabase("csfle").getCollection("vault");
vaultColl.createIndex(ascending("keyAltNames"),
new IndexOptions().unique(true).partialFilterExpression(exists("keyAltNames")));
```
In my example, I choose to use one DEK per user. I will encrypt all the fields I want to secure in each user document
with the same key. If I want to "forget" a user, I just need to drop that key. In my example, the names are unique so
I'm using this for my `keyAltNames`. It's a great way to enforce GDPR compliance.
## Create Data Encryption Keys
Let's create two Data Encryption Keys: one for Bobby and one for Alice. Each will be used to encrypt all the fields I
want to keep safe in my respective user documents.
``` java
BsonBinary bobbyKeyId = encryption.createDataKey("local", keyAltName("Bobby"));
BsonBinary aliceKeyId = encryption.createDataKey("local", keyAltName("Alice"));
```
We get a little help from this private method to make my code easier to read:
``` java
private DataKeyOptions keyAltName(String altName) {
return new DataKeyOptions().keyAltNames(List.of(altName));
}
```
Here is what Bobby's DEK looks like in my `csfle.vault` collection:
``` json
{
"_id" : UUID("aaa2e53d-875e-49d8-9ce0-dec9a9658571"),
"keyAltNames" : "Bobby" ],
"keyMaterial" : BinData(0,"/ozPZBMNUJU9udZyTYe1hX/KHqJJPrjdPads8UNjHX+cZVkIXnweZe5pGPpzcVcGmYctTAdxB3b+lmY5ONTzEZkqMg8JIWenIWQVY5fogIpfHDJQylQoEjXV3+e3ZY1WmWJR8mOp7pMoTyoGlZU2TwyqT9fcN7E5pNRh0uL3kCPk0sOOxLT/ejQISoY/wxq2uvyIK/C6/LrD1ymIC9w6YA=="),
"creationDate" : ISODate("2021-03-19T16:16:09.800Z"),
"updateDate" : ISODate("2021-03-19T16:16:09.800Z"),
"status" : 0,
"masterKey" : {
"provider" : "local"
}
}
```
As you can see above, the `keyMaterial` (the DEK itself) is encrypted by the master key. Without the master key to
decrypt it, it's useless. Also, you can identify that it's Bobby's key in the `keyAltNames` field.
## Create Encrypted Documents
Now that we have an encryption key for Bobby and Alice, I can create their respective documents and insert them into
MongoDB like so:
``` java
private static final String DETERMINISTIC = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic";
private static final String RANDOM = "AEAD_AES_256_CBC_HMAC_SHA_512-Random";
private Document createBobbyDoc(ClientEncryption encryption) {
BsonBinary phone = encryption.encrypt(new BsonString("01 23 45 67 89"), deterministic(BOBBY));
BsonBinary bloodType = encryption.encrypt(new BsonString("A+"), random(BOBBY));
BsonDocument medicalEntry = new BsonDocument("test", new BsonString("heart")).append("result", new BsonString("bad"));
BsonBinary medicalRecord = encryption.encrypt(new BsonArray(List.of(medicalEntry)), random(BOBBY));
return new Document("name", BOBBY).append("age", 33)
.append("phone", phone)
.append("blood_type", bloodType)
.append("medical_record", medicalRecord);
}
private Document createAliceDoc(ClientEncryption encryption) {
BsonBinary phone = encryption.encrypt(new BsonString("09 87 65 43 21"), deterministic(ALICE));
BsonBinary bloodType = encryption.encrypt(new BsonString("O+"), random(ALICE));
return new Document("name", ALICE).append("age", 28).append("phone", phone).append("blood_type", bloodType);
}
private EncryptOptions deterministic(String keyAltName) {
return new EncryptOptions(DETERMINISTIC).keyAltName(keyAltName);
}
private EncryptOptions random(String keyAltName) {
return new EncryptOptions(RANDOM).keyAltName(keyAltName);
}
private void createAndInsertBobbyAndAlice(ClientEncryption encryption, MongoCollection<Document> usersColl) {
Document bobby = createBobbyDoc(encryption);
Document alice = createAliceDoc(encryption);
int nbInsertedDocs = usersColl.insertMany(List.of(bobby, alice)).getInsertedIds().size();
System.out.println(nbInsertedDocs + " docs have been inserted.");
}
```
Here is what Bobby's and Alice's documents look like in my `encrypted.users` collection:
**Bobby**
``` json
{
"_id" : ObjectId("6054d91c26a275034fe53300"),
"name" : "Bobby",
"age" : 33,
"phone" : BinData(6,"ATKkRdZWR0+HpqNyYA7zgIUCgeBE4SvLRwaXz/rFl8NPZsirWdHRE51pPa/2W9xgZ13lnHd56J1PLu9uv/hSkBgajE+MJLwQvJUkXatOJGbZd56BizxyKKTH+iy+8vV7CmY="),
"blood_type" : BinData(6,"AjKkRdZWR0+HpqNyYA7zgIUCUdc30A8lTi2i1pWn7CRpz60yrDps7A8gUJhJdj+BEqIIx9xSUQ7xpnc/6ri2/+ostFtxIq/b6IQArGi+8ZBISw=="),
"medical_record" : BinData(6,"AjKkRdZWR0+HpqNyYA7zgIUESl5s4tPPvzqwe788XF8o91+JNqOUgo5kiZDKZ8qudloPutr6S5cE8iHAJ0AsbZDYq7XCqbqiXvjQobObvslR90xJvVMQidHzWtqWMlzig6ejdZQswz2/WT78RrON8awO")
}
```
**Alice**
``` json
{
"_id" : ObjectId("6054d91c26a275034fe53301"),
"name" : "Alice",
"age" : 28,
"phone" : BinData(6,"AX7Xd65LHUcWgYj+KbUT++sCC6xaCZ1zaMtzabawAgB79quwKvld8fpA+0m+CtGevGyIgVRjtj2jAHAOvREsoy3oq9p5mbJvnBqi8NttHUJpqooUn22Wx7o+nlo633QO8+c="),
"blood_type" : BinData(6,"An7Xd65LHUcWgYj+KbUT++sCTyp+PJXudAKM5HcdX21vB0VBHqEXYSplHdZR0sCOxzBMPanVsTRrOSdAK5yHThP3Vitsu9jlbNo+lz5f3L7KYQ==")
}
```
Client Side Field Level Encryption currently provides two different algorithms to encrypt the data you want to secure.
### AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic
With this algorithm, the result of the encryption ─ given the same inputs (value and DEK) ─
is deterministic. This means that we have greater support
for read operations, but encrypted data with low cardinality is susceptible
to frequency analysis attacks.
In my example, if I want to be able to retrieve users by phone numbers, I must use the deterministic algorithm. As a
phone number is likely to be unique in my collection of users, it's safe to use this algorithm here.
### AEAD_AES_256_CBC_HMAC_SHA_512-Random
With this algorithm, the result of the encryption is
*always* different. That means that it provides the strongest
guarantees of data confidentiality, even when the cardinality is low, but prevents read operations based on these
fields.
In my example, the blood type has a low cardinality and it doesn't make sense to search in my user collection by blood
type anyway, so it's safe to use this algorithm for this field.
Also, Bobby's medical record must be very safe. So, the entire subdocument containing all his medical records is
encrypted with the random algorithm as well and won't be used to search Bobby in my collection anyway.
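If you want to convince yourself of the difference between the two algorithms, here is a small sketch you could run with the `encryption` client and the `deterministic()`/`random()` helpers defined in this post (the comments show the expected outcome):
``` java
BsonBinary det1 = encryption.encrypt(new BsonString("01 23 45 67 89"), deterministic(BOBBY));
BsonBinary det2 = encryption.encrypt(new BsonString("01 23 45 67 89"), deterministic(BOBBY));
// Deterministic: same value + same DEK => same ciphertext, so equality queries are possible
System.out.println(det1.equals(det2)); // true

BsonBinary rnd1 = encryption.encrypt(new BsonString("A+"), random(BOBBY));
BsonBinary rnd2 = encryption.encrypt(new BsonString("A+"), random(BOBBY));
// Random: a different ciphertext every time, so you cannot query on this field
System.out.println(rnd1.equals(rnd2)); // false
```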
## Read Bobby's Document
As mentioned in the previous section, it's possible to search documents by fields encrypted with the deterministic
algorithm.
Here is how:
``` java
BsonBinary phone = encryption.encrypt(new BsonString("01 23 45 67 89"), deterministic(BOBBY));
String doc = usersColl.find(eq("phone", phone)).first().toJson();
```
I simply encrypt again, with the same key, the phone number I'm looking for, and I can use this `BsonBinary` in my query
to find Bobby.
If I output the `doc` string, I get:
``` none
{
"_id": {
"$oid": "6054d91c26a275034fe53300"
},
"name": "Bobby",
"age": 33,
"phone": "01 23 45 67 89",
"blood_type": "A+",
"medical_record":
{
"test": "heart",
"result": "bad"
}
]
}
```
As you can see, the automatic decryption worked as expected: I can see my document in clear text. To find this document,
I could use the `_id`, the `name`, the `age`, or the phone number, but not the `blood_type` or the `medical_record`.
## Read Alice's Document
Now let's put CSFLE to the test. I want to be sure that if Alice's DEK is destroyed, Alice's document is lost forever
and can never be read again, even if a backup is restored. That's why it's important to keep the DEKs and the
encrypted documents in two different clusters that don't have the same backup retention policy.
Let's retrieve Alice's document by name, but let's protect my code in case something "bad" has happened to her key...
``` java
private void readAliceIfPossible(MongoCollection<Document> usersColl) {
try {
String aliceDoc = usersColl.find(eq("name", ALICE)).first().toJson();
System.out.println("Before we remove Alice's key, we can read her document.");
System.out.println(aliceDoc);
} catch (MongoException e) {
System.err.println("We get a MongoException because 'libmongocrypt' can't decrypt these fields anymore.");
}
}
```
If her key still exists in the database, then I can decrypt her document:
``` none
{
"_id": {
"$oid": "6054d91c26a275034fe53301"
},
"name": "Alice",
"age": 28,
"phone": "09 87 65 43 21",
"blood_type": "O+"
}
```
Now, let's remove her key from the database:
``` java
vaultColl.deleteOne(eq("keyAltNames", ALICE));
```
In a real-life production environment, it wouldn't make sense to read her document again; and because we are all
professional and organised developers who like to keep things tidy, we would also delete Alice's document along with her
DEK, as this document is now completely worthless for us anyway.
In my example, I want to try to read this document anyway. But if I try to read it immediately after deleting her
key, there is a great chance that I will still be able to do so because of
the 60-second Data Encryption Key Cache
that is managed by `libmongocrypt`.
This cache is very important because, without it, multiple round trips would be necessary to decrypt my document.
It's critical to prevent CSFLE from killing the performance of your MongoDB cluster.
So, to make sure I'm not using this cache anymore, I'm creating a brand new `MongoClient` (still with auto decryption
settings) for the sake of this example. But of course, in production, it wouldn't make sense to do so.
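In code, "resetting the connection" is nothing more than building a second client from the same settings. A minimal sketch, reusing the `mcs` settings object created earlier (the actual quickstart code is organised a bit differently):
``` java
// Close the first client and build a fresh one: it starts with an empty DEK cache.
client.close();
MongoClient client2 = MongoClients.create(mcs);
MongoCollection<Document> usersColl = client2.getDatabase("encrypted").getCollection("users");
readAliceIfPossible(usersColl);
```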
Now if I try to access Alice's document again, I get the following `MongoException`, as expected:
``` none
com.mongodb.MongoException: not all keys requested were satisfied
at com.mongodb.MongoException.fromThrowableNonNull(MongoException.java:83)
at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:286)
at com.mongodb.client.internal.Crypt.executeStateMachine(Crypt.java:244)
at com.mongodb.client.internal.Crypt.decrypt(Crypt.java:128)
at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:121)
at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:131)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:345)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:336)
at com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:222)
at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:658)
at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:652)
at com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)
at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:652)
at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:80)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:170)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:200)
at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.readAliceIfPossible(ClientSideFieldLevelEncryption.java:91)
at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.demo(ClientSideFieldLevelEncryption.java:79)
at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.main(ClientSideFieldLevelEncryption.java:41)
Caused by: com.mongodb.crypt.capi.MongoCryptException: not all keys requested were satisfied
at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:145)
at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:151)
at com.mongodb.crypt.capi.MongoCryptContextImpl.completeMongoOperation(MongoCryptContextImpl.java:93)
at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:284)
... 17 more
```
## Wrapping Up
In this quickstart tutorial, we have discovered how to use Client Side Field Level Encryption with the MongoDB Java
Driver, using only the community edition of MongoDB. You can learn more about
the automated encryption in our
documentation.
CSFLE is the ultimate security feature to ensure the maximal level of security for your cluster. Not even your admins
will be able to access the data in production if they don't have access to the master keys.
But it's not the only security measure you should use to protect your cluster. Preventing access to your cluster is, of
course, the first security measure that you should enforce
by enabling authentication
and limiting network exposure.
If in doubt, check out the security checklist before
launching a cluster in production to make sure that you didn't overlook any of the security options MongoDB has to offer
to protect your data.
There is a lot of flexibility in the implementation of CSFLE: You can choose to use one or multiple master keys, same
for the Data Encryption Keys. You can also choose to encrypt all your phone numbers in your collection with the same DEK
or use a different one for each user. It's really up to you how you will organise your encryption strategy but, of
course, make sure it fulfills all your legal obligations. There are multiple right ways to implement CSFLE, so make sure
to find the most suitable one for your use case.
> If you have questions, please head to our developer community website where the
> MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
### Documentation
- GitHub repository with all the Java Quickstart examples of this series
- MongoDB CSFLE Doc
- MongoDB Java Driver CSFLE Doc
- MongoDB University CSFLE implementation example
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to use the client side field level encryption using the MongoDB Java Driver.",
"contentType": "Quickstart"
} | Java - Client Side Field Level Encryption | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/introduction-to-linked-lists-and-mongodb | created | # A Gentle Introduction to Linked Lists With MongoDB
Are you new to data structures and algorithms? In this post, you will learn about one of the most important data structures in Computer Science, the Linked List, implemented with a MongoDB twist. This post will cover the fundamentals of the linked list data structure. It will also answer questions like, "How do linked lists differ from arrays?" and "What are the pros and cons of using a linked list?"
## Intro to Linked Lists
Did you know that linked lists are one of the foundational data structures in Computer Science? If you are like many devs that are self-taught or you graduated from a developer boot camp, then you might need a little lesson in how this data structure works. Or, if you're like me, you might need a refresher if it's been a couple of years since your last Computer Science lecture on data structures and algorithms. In this post, I will be walking through how to implement a linked list from scratch using Node.js and MongoDB. This is also a great place to start for getting a handle on the basics of MongoDB CRUD operations and this legendary data structure. Let's get started with the basics.
Diagram of a singly linked list
A linked list is a data structure that contains a list of nodes that are connected using references or pointers. A node is an object in memory. It usually contains at most two pieces of information: a data value and a pointer to the next node in the linked list. Linked lists also have separate pointer references to the head and the tail of the linked list. The head is the first node in the list, while the tail is the last object in the list.
A node that does NOT link to another node
``` json
{
"data": "Cat",
"next": null
}
```
A node that DOES link to another node
``` json
{
"data": "Cat",
"next": {
"data": "Dog",
"next": {
"data": "Bird",
"next": null
}
} // these are really a reference to an object in memory
}
```
## Why Use a Linked List?
There are a lot of reasons why linked lists are used, as opposed to other data structures like arrays (more on that later). However, we use linked lists in situations where we don't know the exact size of the data structure but anticipate that the list could potentially grow to large sizes. Often, linked lists are used when we think that the data structure might grow larger than the available memory of the computer we are working with. Linked lists are also useful if we still need to preserve order AND anticipate that order will change over time.
Linked lists are just objects in memory. One object holds a reference to another object, or one node holds a pointer to the next node. In memory, a linked list looks like this:
Diagram that demonstrates how linked lists allocate use pointers to link data in memory
### Advantages of Linked Lists
- Linked lists are dynamic in nature, which allocates the memory when required.
- Insertion and deletion operations can be easily implemented.
- Stacks and queues can be easily executed using a linked list.
### Disadvantages of Linked Lists
- Memory is wasted as pointers require extra memory for storage.
- No element can be accessed randomly; you have to access each node sequentially, starting from the head.
- Reverse traversing is difficult in a singly linked list.
## Comparison Between Arrays and Linked Lists
Now, you might be thinking that linked lists feel an awful lot like arrays, and you would be correct! They both keep track of a sequence of data, and they both can be iterated and looped over. Also, both data structures preserve sequence order. However, there are some key differences.
### Advantages of Arrays
- Arrays are simple and easy to use.
- They offer faster access to elements (O(1) or constant time).
- They can access elements by any index without needing to iterate through the entire data set from the beginning.
### Disadvantages of Arrays
- Did you know that arrays can waste memory? This is because typically, compilers will preallocate a sequential block of memory when a new array is created in order to make super speedy queries. Therefore, many of these preallocated memory blocks may be empty.
- Arrays have a fixed size. If the preallocated memory block is filled to capacity, the code compiler will allocate an even larger memory block, and it will need to copy the old array over to the new array memory block before new array operations can be performed. This can be expensive with both time and space.
Diagram that demonstrates how arrays allocate continuous blocks of memory space
Diagram that demonstrates how linked lists allocate memory for new linked list nodes
- Inserting an element at a given position is a complex operation: existing elements may need to be shifted to create a vacancy for the new element at the desired position.
## Other Types of Linked Lists
### Doubly Linked List
A doubly linked list is the same as a singly linked list with the exception that each node also points to the previous node as well as the next node.
Diagram of a doubly linked list
### Circular Linked List
A circular linked list is the same as a singly linked list with the exception that there is no concept of a head or tail. All nodes point to the next node circularly. There is no true start to the circular linked list.
Diagram of a circular linked list
## Let's Code A Linked List!
### First, Let's Set Up Our Coding Environment
#### Creating A Cluster On Atlas
First thing we will need to set up is a MongoDB Atlas account. And don't worry, you can create an M0 MongoDB Atlas cluster for free. No credit card is required to get started! To get up and running with a free M0 cluster, follow the MongoDB Atlas Getting Started guide.
After signing up for Atlas, we will then need to deploy a free MongoDB cluster. Note, you will need to add a rule to allow the IP address of the computer you are connecting to the MongoDB Atlas cluster from, and you will need to create a database user before you are able to connect to your new cluster. These are security features that are put in place to make sure bad actors cannot access your database.
If you have any issues connecting or setting up your free MongoDB Atlas cluster, be sure to check out the MongoDB Community Forums to get help.
#### Connect to VS Code MongoDB Plugin
Next, we are going to connect to our new MongoDB Atlas database cluster using the Visual Studio Code MongoDB Plugin. The MongoDB extension allows us to:
- Connect to a MongoDB or Atlas cluster, navigate through your databases and collections, get a quick overview of your schema, and see the documents in your collections.
- Create MongoDB Playgrounds, the fastest way to prototype CRUD operations and MongoDB commands.
- Quickly access the MongoDB Shell, to launch the MongoDB Shell from the command palette and quickly connect to the active cluster.
To install MongoDB for VS Code, simply search for it in the Extensions list directly inside VS Code or head to the "MongoDB for VS Code" homepage in the VS Code Marketplace.
#### Navigate Your MongoDB Data
MongoDB for VS Code can connect to MongoDB standalone instances or clusters on MongoDB Atlas or self-hosted. Once connected, you can **browse databases**, **collections**, and **read-only views** directly from the tree view.
For each collection, you will see a list of sample documents and a **quick overview of the schema**. This is very useful as a reference while writing queries and aggregations.
Once installed, there will be a new MongoDB tab that we can use to add our connections by clicking "Add Connection." If you've used MongoDB Compass before, then the form should be familiar. You can enter your connection details in the form or use a connection string. I went with the latter, as my database is hosted on MongoDB Atlas.
To obtain your connection string, navigate to your "Clusters" page and select "Connect."
Choose the "Connect using MongoDB Compass" option and copy the connection string. Make sure to add your username and password in their respective places before entering the string in VS Code.
Once you've connected successfully, you should see an alert. At this point, you can explore the data in your cluster, as well as your schemas.
#### Creating Functions to Initialize the App
Alright, now that we have been able to connect to our MongoDB Atlas database, let's write some code to allow our linked list to connect to our database and to do some cleaning while we are developing our linked list.
The general strategy for building our linked lists with MongoDB will be as follows. We are going to use a MongoDB document to keep track of meta information, like the head and tail location. We will also use a unique MongoDB document for each node in our linked list. We will be using the unique IDs that are automatically generated by MongoDB to simulate a pointer. So the *next* value of each linked list node will store the ID of the next node in the linked list. That way, we will be able to iterate through our Linked List.
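Concretely, the collection will contain two kinds of documents: a single meta document and one document per node. They could look roughly like this (the IDs below are purely illustrative):
``` javascript
// The single meta document tracks where the list starts and ends
{ "meta": true, "head": ObjectId("60551b..."), "tail": ObjectId("60551c...") }

// Each node stores its value and the _id of the next node (null for the tail)
{ "_id": ObjectId("60551b..."), "value": "Cat", "next": ObjectId("60551c...") }
{ "_id": ObjectId("60551c..."), "value": "Dog", "next": null }
```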
So, in order to accomplish this, the first thing that we are going to do is set up our linked list class.
``` javascript
const MongoClient = require("mongodb").MongoClient;
// Define a new Linked List class
class LinkedList {
constructor() {}
// Since the constructor cannot be an asynchronous function,
// we are going to create an async `init` function that connects to our MongoDB
// database.
// Note: You will need to replace the URI here with the one
// you get from your MongoDB Cluster. This is the same URI
// that you used to connect the MongoDB VS Code plugin to our cluster.
async init() {
const uri = "PASTE YOUR ATLAS CLUSTER URL HERE";
this.client = new MongoClient(uri, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
try {
await this.client.connect();
console.log("Connected correctly to server");
this.col = this.client
.db("YOUR DATABASE NAME HERE")
.collection("YOUR COLLECTION NAME HERE");
} catch (err) {
console.log(err.stack);
}
}
}
// We are going to create an immediately invoked function expression (IFEE)
// in order for us to immediately test and run the linked list class defined above.
(async function () {
try {
const linkedList = new LinkedList();
await linkedList.init();
linkedList.resetMeta();
linkedList.resetData();
} catch (err) {
// Good programmers always handle their errors
console.log(err.stack);
}
})();
```
Next, let's create some helper functions to reset our DB every time we run the code so our data doesn't become cluttered with old data.
``` javascript
// This function will be responsible for cleaning up our metadata
// function everytime we reinitialize our app.
async resetMeta() {
await this.col.updateOne(
{ meta: true },
{ $set: { head: null, tail: null } },
{ upsert: true }
);
}
// Function to clean up all our Linked List data
async resetData() {
await this.col.deleteMany({ value: { $exists: true } });
}
```
Now, let's write some helper functions to help us query and update our meta document.
``` javascript
// This function will query our collection for our single
// meta data document. This document will be responsible
// for tracking the location of the head and tail documents
// in our Linked List.
async getMeta() {
const meta = await this.col.find({ meta: true }).next();
return meta;
}
// points to our head
async getHeadID() {
const meta = await this.getMeta();
return meta.head;
}
// Function allows us to update our head in the
// event that the head is changed
async setHead(id) {
const result = await this.col.updateOne(
{ meta: true },
{ $set: { head: id } }
);
return result;
}
// points to our tail
async getTail(data) {
const meta = await this.getMeta();
return meta.tail;
}
// Function allows us to update our tail in the
// event that the tail is changed
async setTail(id) {
const result = await this.col.updateOne(
{ meta: true },
{ $set: { tail: id } }
);
return result;
}
// Create a brand new linked list node
async newNode(value) {
const newNode = await this.col.insertOne({ value, next: null });
return newNode;
}
```
### Add A Node
The steps to add a new node to a linked list are:
- Add a new node to the current tail.
- Update the current tail's next to the new node.
- Update your linked list to point tail to the new node.
``` javascript
// Takes a new node and adds it to our linked list
async add(value) {
const result = await this.newNode(value);
const insertedId = result.insertedId;
// If the linked list is empty, the new node becomes the head
const head = await this.getHeadID();
if (head === null) {
this.setHead(insertedId);
} else {
// if it's not empty, update the current tail's next to the new node
const tailID = await this.getTail();
await this.col.updateOne({ _id: tailID }, { $set: { next: insertedId } });
}
// Update your linked list to point tail to the new node
this.setTail(insertedId);
return result;
}
```
### Find A Node
In order to traverse a linked list, we must start at the beginning of the linked list, also known as the head. Then, we follow each *next* pointer reference until we come to the end of the linked list, or the node we are looking for. It can be implemented by using the following steps:
- Start at the head node of your linked list.
- Check if the value matches what you're searching for. If found, return that node.
- If not found, move to the next node via the current node's next property.
- Repeat until next is null (tail/end of list).
``` javascript
// Reads through our list and returns the node we are looking for
async get(index) {
// If index is less than 0, return false
if (index <= -1) {
return false;
}
let headID = await this.getHeadID();
let position = 0;
let currNode = await this.col.find({ _id: headID }).next();
// Loop through the nodes starting from the head
while (position < index) {
// Check if we hit the end of the linked list
if (currNode.next === null) {
return false;
}
// If another node exists go to next node
currNode = await this.col.find({ _id: currNode.next }).next();
position++;
}
return currNode;
}
```
### Delete A Node
Now, let's say we want to remove a node in our linked list. In order to do this, we must again keep track of the previous node so that we can update the previous node's *next* pointer to reference whatever the deleted node's *next* value was pointing to. Or to put it another way:
- Find the node you are searching for and keep track of the previous node.
- When found, update the previous node's next to point to the next node referenced by the node to be deleted.
- Delete the found node from memory.
Diagram that demonstrates how linked lists remove a node from a linked list by moving pointer references
``` javascript
// reads through our list and removes desired node in the linked list
async remove(index) {
const currNode = await this.get(index);
const prevNode = await this.get(index - 1);
// If index not in linked list, return false
if (currNode === false) {
return false;
}
// If removing the head, reassign the head to the next node
if (index === 0) {
await this.setHead(currNode.next);
// If removing the tail, reassign the tail to the prevNode
} else if (currNode.next === null) {
await this.setTail(prevNode._id);
await this.col.updateOne(
{ _id: prevNode._id },
{ $set: { next: currNode.next } }
);
// update previous node's next to point to the next node referenced by node to be deleted
} else {
await this.col.updateOne(
{ _id: prevNode._id },
{ $set: { next: currNode.next } }
);
}
// Delete found node from memory
await this.col.deleteOne({
_id: currNode._id,
});
return true;
}
```
### Insert A Node
The following code inserts a node after an existing node in a singly linked list. Inserting a new node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a new node after it. We can do that by following these steps:
- Find the position/node in your linked list where you want to insert your new node after.
- Update the next property of the new node to point to the node that the target node currently points to.
- Update the next property of the node you want to insert after to point to the new node.
Diagram that demonstrates how a linked list inserts a new node by moving pointer references
``` javascript
// Inserts a new node at the desired index in the linked list
async insert(value, index) {
const currNode = await this.get(index);
const prevNode = await this.get(index - 1);
const result = await this.newNode(value);
const node = result.ops[0];
// If the index is not in the linked list, return false
if (currNode === false) {
return false;
}
// If inserting at the head, reassign the head to the new node
if (index === 0) {
await this.setHead(node._id);
await this.col.updateOne(
{ _id: node._id },
{ $set: { next: currNode.next } }
);
} else {
// If inserting at the tail, reassign the tail
if (currNode.next === null) {
await this.setTail(node._id);
}
// Update the next property of the new node
// to point to the node that the target node currently points to
await this.col.updateOne(
{ _id: prevNode._id },
{ $set: { next: node._id } }
);
// Update the next property of the node you
// want to insert after to point to the new node
await this.col.updateOne(
{ _id: node._id },
{ $set: { next: currNode.next } }
);
}
return node;
}
```
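To see everything working together, here is a minimal usage sketch you could drop into the IIFE from the setup section once all of the methods above are part of the `LinkedList` class:
``` javascript
const linkedList = new LinkedList();
await linkedList.init();
await linkedList.resetMeta();
await linkedList.resetData();

await linkedList.add("Cat");  // Cat
await linkedList.add("Dog");  // Cat -> Dog
await linkedList.add("Bird"); // Cat -> Dog -> Bird

const second = await linkedList.get(1);
console.log(second.value); // "Dog"

await linkedList.remove(1); // Cat -> Bird
```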
## Summary
Many developers want to learn the fundamental Computer Science data structures and algorithms or get a refresher on them. In this author's humble opinion, the best way to learn data structures is by implementing them on your own. This exercise is a great way to learn data structures as well as learn the fundamentals of MongoDB CRUD operations.
>When you're ready to implement your own linked list in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.
If you want to learn more about linked lists and MongoDB, be sure to check out these resources.
## Related Links
Check out the following resources for more information:
- Want to see me implement a Linked List using MongoDB? You can check out this recording of the MongoDB Twitch Stream
- Source Code
- Want to learn more about MongoDB? Be sure to take a class on the MongoDB University
- Have a question, feedback on this post, or are you stuck on something? Be sure to check out and/or open a new post on the MongoDB Community Forums
- Quick Start: Node.js
- Want to check out more cool articles about MongoDB? Be sure to check out more posts like this on the MongoDB Developer Hub
- For additional information on Linked Lists, be sure to check out the Wikipedia article | md | {
"tags": [
"JavaScript",
"MongoDB"
],
"pageDescription": "Want to learn about one of the most important data structures in Computer Science, the Linked List, implemented with a MongoDB twist? Click here for more!",
"contentType": "Tutorial"
} | A Gentle Introduction to Linked Lists With MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-data-architecture-ofish-app | created | # Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps
In 2020, MongoDB partnered with the WildAid Marine Protection Program to create a mobile app for officers to use while out at sea patrolling Marine Protected Areas (MPAs) worldwide. We implemented apps for iOS, Android, and web, where they all share the same Realm back end, schema, and sync strategy. This article explains the data architecture, schema, and partitioning strategy we used. If you're developing a mobile app with Realm, this post will help you design and implement your data architecture.
MPAs—like national parks on land—set aside dedicated coastal and marine environments for conservation. WildAid helps enable local agencies to protect their MPAs. We developed the O-FISH app for enforcement officers to search and create boarding reports while they're out at sea, patrolling the MPAs and boarding vessels for inspection.
O-FISH needed to be a true offline-first application as officers will typically be without network access when they're searching and creating boarding reports. It's a perfect use case for the Realm mobile database and MongoDB Realm Sync.
This video gives a great overview of the WildAid Marine Program, the requirements for the O-FISH app, and the technologies behind the app:
:youtube[]{vid=54E9wfFHjiw}
This article is broken down into these sections:
- The O-FISH Application
- System Architecture
- Data Partitioning
- Data Schema
- Handling images
- Summary
- Resources
## The O-FISH Application
There are three frontend applications.
The two mobile apps (iOS and Android) provide the same functionality. An officer logs in and can search existing boarding reports, for example, to check on past reports for a vessel before boarding it. After boarding the boat, the officer uses the app to create a new boarding report. The report contains information about the vessel, equipment, crew, catch, and any laws they're violating.
Crucially, the mobile apps need to allow users to view and create reports even when there is no network coverage (which is the norm while at sea). Data is synchronized with other app instances and the backend database when it regains network access.
iOS O-FISH App in Action
The web app also allows reports to be viewed and edited. It provides dashboards to visualize the data, including plotting boardings on a map. User accounts are created and managed through the web app.
All three frontend apps share a common backend Realm application. The Realm app is responsible for authenticating users, controlling what data gets synced to each mobile app instance, and persisting the data to MongoDB Atlas. Multiple "agencies" share the same frontend and backend apps. An officer should have access to the reports belonging to their agency. An agency is an authority responsible for enforcing the rules for one or more regional MPAs. Agencies are often named after the country they operate in. Examples of agencies would be Galapagos or Tanzania.
## System Architecture
The iOS and Android mobile apps both contain an embedded Realm mobile database. The app reads and writes data to that Realm database-whether the device is connected to the network or not. Whenever the device has network coverage, Realm synchronizes the data with other devices via the Realm backend service.
O-FISH System Architecture
The Realm database is embedded within the mobile apps, each instance storing a partition of the O-FISH data. We also need a consolidated view of all of the data that the O-FISH web app can access, and we use MongoDB Atlas for that. MongoDB Realm is also responsible for synchronizing the data with the MongoDB Atlas database.
The web app is stateless, accessing data from Atlas as needed via the Realm SDK.
MongoDB Charts dashboards are embedded in the web app to provide richer, aggregated views of the data.
## Data Partitioning
MongoDB Realm Sync uses partitions to control what data it syncs to instances of a mobile app. You typically partition data to limit the amount of space used on the device and prevent users from accessing information they're not entitled to see or change.
When a mobile app opens a synced Realm, it can provide a partition value to specify what data should be synced to the device.
As a developer, you must specify an attribute to use as your partition key. The rules for partition keys have some restrictions:
- All synced collections use the same attribute name and type for the partition key.
- The key can be a `string`, `objectId`, or a `long`.
- When the app provides a partition key, only documents that have an exact match will be synced. For example, the app can't specify a set or range of partition key values.
A common use case would be to use a string named "username" as the partition key. The mobile app would then open a Realm by setting the partition to the current user's name, ensuring that the user's data is available (but no data for other users).
If you want to see an example of creating a sophisticated partitioning strategy, then Building a Mobile Chat App Using Realm – Data Architecture describes RChat's approach (RChat is a reference mobile chat app built on Realm and MongoDB Realm). O-FISH's method is straightforward in comparison.
WildAid works with different agencies around the world. Each officer within an agency needs access to any boarding report created by other officers in the same agency. Photos added to the app by one officer should be visible to the other officers. Officers should be offered menu options tailored to their agency—an agency operating in the North Sea would want cod to be in the list of selectable species, but including clownfish would clutter the menu.
We use a string attribute named `agency` as the partitioning key to meet those requirements.
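In practice, this means that when a mobile app opens its synced Realm, it passes the officer's agency name as the partition value. Here is a minimal Swift sketch of what that could look like (the app ID and agency string are placeholders, and the real O-FISH code wraps this logic differently):
``` swift
import RealmSwift

let app = App(id: "<your Realm app ID>")

// Assumes the officer has already logged in
let user = app.currentUser!
let configuration = user.configuration(partitionValue: "Ecuadorian Galapagos")

Realm.asyncOpen(configuration: configuration) { result in
    switch result {
    case .success(let realm):
        print("Opened the synced Realm for this agency: \(realm)")
    case .failure(let error):
        print("Failed to open the synced Realm: \(error)")
    }
}
```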
As an extra level of security, we want to ensure that an app doesn't open a Realm for the wrong partition. This could result from a coding error or because someone hacks a version of the app. When enabling Realm Sync, we can provide expressions to define whether the requesting user should be able to access a partition or not.
Expression to Limit Sync Access to Partitions
For O-FISH, the rule is straightforward. We compare the logged-in user's agency name with the partition they're requesting to access. The Realm will be synced if and only if they're the same:
``` json
{
"%%user.custom_data.agency.name": "%%partition"
}
```
## Data Schema
At the highest level, the O-FISH schema is straightforward with four Realms (each with an associated MongoDB Atlas collection):
- `DutyChange` records an officer going on-duty or back off-duty.
- `Report` contains all of the details associated with the inspection of a vessel.
- `Photo` represents a picture (either of one of the users or a photo that was taken to attach to a boarding report).
- `MenuData` contains the agency-specific values that officers can select through the app's menus.
You might want to right-click that diagram so that you can open it in a new tab!
Let's take a look at each of those four objects.
### DutyChange
The app creates a `DutyChange` object when a user toggles a switch to flag that they are going on or off duty (at sea or not).
These are the Swift and Kotlin versions of the `DutyChange` class:
::::tabs
:::tab]{tabid="Swift"}
``` swift
import RealmSwift
class DutyChange: Object {
@objc dynamic var _id: ObjectId = ObjectId.generate()
@objc dynamic var user: User? = User()
@objc dynamic var date = Date()
@objc dynamic var status = ""
override static func primaryKey() -> String? {
return "_id"
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
import io.realm.RealmObject
import io.realm.annotations.PrimaryKey
import io.realm.annotations.RealmClass
import org.bson.types.ObjectId
import java.util.Date
@RealmClass
open class DutyChange : RealmObject() {
@PrimaryKey
var _id: ObjectId = ObjectId.get()
var user: User? = User()
var date: Date = Date()
var status: String = ""
}
```
:::
::::
On iOS, `DutyChange` inherits from the Realm `Object` class, and the attributes need to be made accessible to the Realm SDK by making them `dynamic` and adding the `@objc` annotation. The Kotlin app uses the `@RealmClass` annotation and inheritance from `RealmObject`.
Note that there is no need to include the partition key as an attribute.
In addition to primitive attributes, `DutyChange` contains `user` which is of type `User`:
::::tabs
:::tab[]{tabid="Swift"}
``` swift
import RealmSwift
class User: EmbeddedObject, ObservableObject {
@objc dynamic var name: Name? = Name()
@objc dynamic var email = ""
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
import io.realm.RealmObject
import io.realm.annotations.RealmClass
@RealmClass(embedded = true)
open class User : RealmObject() {
var name: Name? = Name()
var email: String = ""
}
```
:::
::::
`User` objects are always embedded in higher-level objects rather than being top-level Realm objects. So, the class inherits from `EmbeddedObject` rather than `Object` in Swift. The Kotlin app extends the `@RealmClass` annotation to include `(embedded = true)`.
Whether created in the iOS or Android app, the `DutyChange` object is synced to MongoDB Atlas as a single `DutyChange` document that contains a `user` sub-document:
``` json
{
"_id" : ObjectId("6059c9859a545bbceeb9e881"),
"agency" : "Ecuadorian Galapagos",
"date" : ISODate("2021-03-23T10:57:09.777Z"),
"status" : "At Sea",
"user" : {
"email" : "[email protected]",
"name" : {
"first" : "Global",
"last" : "Admin"
}
}
}
```
There's a Realm schema associated with each collection that's synced with Realm Sync. The schema can be viewed and managed through the Realm UI:
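For reference, the schema for the `DutyChange` collection looks roughly like this (a hand-written sketch rather than a copy of the app's actual schema):
``` json
{
  "title": "DutyChange",
  "bsonType": "object",
  "required": ["_id"],
  "properties": {
    "_id": { "bsonType": "objectId" },
    "agency": { "bsonType": "string" },
    "date": { "bsonType": "date" },
    "status": { "bsonType": "string" },
    "user": {
      "bsonType": "object",
      "properties": {
        "email": { "bsonType": "string" },
        "name": {
          "bsonType": "object",
          "properties": {
            "first": { "bsonType": "string" },
            "last": { "bsonType": "string" }
          }
        }
      }
    }
  }
}
```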
### Report
The `Report` object is at the heart of the O-FISH app. A report is what an officer reviews for relevant data before boarding a boat. A report is where the officer records all of the details when they've boarded a vessel for an inspection.
In spite of appearances, it's pretty straightforward. It looks complex because there's a lot of information that an officer may need to include in their report.
Starting with the top-level object - `Report`:
::::tabs
:::tab]{tabid="Swift"}
``` swift
import RealmSwift
class Report: Object, Identifiable {
@objc dynamic var _id: ObjectId = ObjectId.generate()
let draft = RealmOptional<Bool>()
@objc dynamic var reportingOfficer: User? = User()
@objc dynamic var timestamp = NSDate()
let location = List<Double>()
@objc dynamic var date: NSDate? = NSDate()
@objc dynamic var vessel: Boat? = Boat()
@objc dynamic var captain: CrewMember? = CrewMember()
let crew = List<CrewMember>()
let notes = List<AnnotatedNote>()
@objc dynamic var inspection: Inspection? = Inspection()
override static func primaryKey() -> String? {
return "_id"
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
@RealmClass
open class Report : RealmObject() {
@PrimaryKey
var _id: ObjectId = ObjectId.get()
var reportingOfficer: User? = User()
var timestamp: Date = Date()
@Required
var location: RealmList<Double> = RealmList() // In order longitude, latitude
var date: Date? = Date()
var vessel: Boat? = Boat()
var captain: CrewMember? = CrewMember()
var crew: RealmList<CrewMember> = RealmList()
var notes: RealmList<AnnotatedNote> = RealmList()
var draft: Boolean? = false
var inspection: Inspection? = Inspection()
}
```
:::
::::
The `Report` class contains Realm `List`s (`RealmList` in Kotlin) to store lists of instances of classes such as `CrewMember`.
Some of the classes embedded in `Report` contain further embedded classes. There are 19 classes in total that make up a `Report`. You can view all of the component classes in the iOS and Android repos.
Once synced to Atlas, the Report is represented as a single `BoardingReports` document (the name change is part of the schema definition):
Note that Realm lists are mapped to JSON/BSON arrays.
### Photo
A single boarding report could contain many large photographs, and so we don't want to embed those within the `Report` object (as an object could grow very large and even exceed MongoDB's 16 MB document limit). Instead, the `Report` object (and its embedded objects) store references to `Photo` objects. Each photo is represented by a top-level `Photo` Realm object. As an example, `Attachments` contains a Realm `List` of strings, each of which identifies a `Photo` object. Handling images will step through how we implemented this.
::::tabs
:::tab[]{tabid="Swift"}
``` swift
class Attachments: EmbeddedObject, ObservableObject {
let notes = List<String>()
let photoIDs = List<String>()
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
@RealmClass(embedded = true)
open class Attachments : RealmObject() {
@Required
var notes: RealmList<String> = RealmList()
@Required
var photoIDs: RealmList<String> = RealmList()
}
```
:::
::::
As a general rule, it isn't best practice to store images in a database, as they consume a lot of valuable storage space. A typical solution is to keep the image in a store with plentiful, cheap capacity (e.g., an object store such as Amazon S3 or Microsoft Azure Blob Storage). The challenge for the O-FISH app is that the officer's phone probably has no internet connectivity when they create the boarding report and attach photos, so uploading them to cloud object storage can't be done at that time. As a compromise, O-FISH stores the image in the `Photo` object, but when the device has internet access, that image is uploaded to cloud object storage, removed from the `Photo` object, and replaced with the S3 link. This is why `Photo` includes both an optional binary `picture` attribute and a `pictureURL` field for the S3 link:
::::tabs
:::tab[]{tabid="Swift"}
``` swift
class Photo: Object {
@objc dynamic var _id: ObjectId = ObjectId.generate()
@objc dynamic var thumbNail: NSData?
@objc dynamic var picture: NSData?
@objc dynamic var pictureURL = ""
@objc dynamic var referencingReportID = ""
@objc dynamic var date = NSDate()
override static func primaryKey() -> String? {
return "_id"
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
open class Photo : RealmObject() {
@PrimaryKey
var _id: ObjectId = ObjectId.get()
var thumbNail: ByteArray? = null
var picture: ByteArray? = null
var pictureURL: String = ""
var referencingReportID: String = ""
var date: Date = Date()
}
```
:::
::::
Note that we include the `referencingReportID` attribute to make it easy to delete all `Photo` objects associated with a `Report`.
The officer also needs to review past boarding reports (and attached photos), and so the `Photo` object also includes a thumbnail image for off-line use.
### MenuData
Each agency needs the ability to customize what options are added in the app's menus. For example, agencies operating in different countries will need to define the list of locally applicable laws. Each agency has a `MenuData` instance with a list of strings for each of the customizable menus:
::::tabs
:::tab[]{tabid="Swift"}
``` swift
class MenuData: Object {
@objc dynamic var _id = ObjectId.generate()
let countryPickerPriorityList = List<String>()
let ports = List<String>()
let fisheries = List<String>()
let species = List<String>()
let emsTypes = List<String>()
let activities = List<String>()
let gear = List<String>()
let violationCodes = List<String>()
let violationDescriptions = List<String>()
override static func primaryKey() -> String? {
return "_id"
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` kotlin
@RealmClass
open class MenuData : RealmModel {
@PrimaryKey
var _id: ObjectId = ObjectId.get()
@Required
var countryPickerPriorityList: RealmList<String> = RealmList()
@Required
var ports: RealmList<String> = RealmList()
@Required
var fisheries: RealmList<String> = RealmList()
@Required
var species: RealmList<String> = RealmList()
@Required
var emsTypes: RealmList<String> = RealmList()
@Required
var activities: RealmList<String> = RealmList()
@Required
var gear: RealmList<String> = RealmList()
@Required
var violationCodes: RealmList<String> = RealmList()
@Required
var violationDescriptions: RealmList<String> = RealmList()
}
```
:::
::::
## Handling images
When MongoDB Realm Sync writes a new `Photo` document to Atlas, it contains the full-sized image in the `picture` attribute. It consumes space that we want to free up by moving that image to Amazon S3 and storing the resulting S3 location in `pictureURL`. Those changes are then synced back to the mobile apps, which can then decide how to get an image to render using this algorithm:
1. If `picture` contains an image, use it.
2. Else, if `pictureURL` is set and the device is connected to the internet, then fetch the image from cloud object storage and use the returned image.
3. Else, use the `thumbNail`.
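As a rough illustration, here's a minimal Swift sketch of that selection logic. The function name, the `isOnline` flag, and the synchronous `Data(contentsOf:)` fetch are simplifications of our own rather than the actual O-FISH implementation:
``` swift
import UIKit
import RealmSwift

// Pick the best available image for a Photo, following the three steps above.
func image(for photo: Photo, isOnline: Bool) -> UIImage? {
    if let data = photo.picture {
        // 1. The full-sized image is still embedded in the Photo object.
        return UIImage(data: data as Data)
    }
    if isOnline, !photo.pictureURL.isEmpty,
       let url = URL(string: photo.pictureURL),
       let data = try? Data(contentsOf: url) {
        // 2. The image was moved to S3 - fetch it (synchronously here, only to keep the sketch short).
        return UIImage(data: data)
    }
    if let thumbnail = photo.thumbNail {
        // 3. Fall back to the locally stored thumbnail.
        return UIImage(data: thumbnail as Data)
    }
    return nil
}
```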
When the `Photo` document is written to Atlas, the `newPhoto` database trigger fires, which invokes a Realm function that is also named `newPhoto`.
The trigger passes the `newPhoto` Realm function the `changeEvent`, which contains the new `Photo` document. The function invokes the `uploadImageToS3` Realm function and then updates the `Photo` document by removing the image and setting the URL:
``` javascript
exports = function(changeEvent){
const fullDocument = changeEvent.fullDocument;
const image = fullDocument.picture;
const agency = fullDocument.agency;
const id = fullDocument._id;
const imageName = `${id}`;
if (typeof image !== 'undefined') {
console.log(`Requesting upload of image: ${imageName}`);
context.functions.execute("uploadImageToS3", imageName, image)
.then (() => {
console.log('Uploaded to S3');
const bucketName = context.values.get("photoBucket");
const imageLink = `https://${bucketName}.s3.amazonaws.com/${imageName}`;
const collection = context.services.get('mongodb-atlas').db("wildaid").collection("Photo");
collection.updateOne({"_id": fullDocument._id}, {$set: {"pictureURL": imageLink}, $unset: {picture: null}});
},
(error) => {
console.error(`Failed to upload image to S3: ${error}`);
});
} else {
console.log("No new photo to upload this time");
}
};
```
`uploadImageToS3` uses Realm's AWS integration to upload the image:
``` javascript
exports = function(name, image) {
const s3 = context.services.get('AWS').s3(context.values.get("awsRegion"));
const bucket = context.values.get("photoBucket");
console.log(`Bucket: ${bucket}`);
return s3.PutObject({
"Bucket": bucket,
"Key": name,
"ACL": "public-read",
"ContentType": "image/jpeg",
"Body": image
});
};
```
## Summary
We've covered the common data model used across the iOS, Android, and backend Realm apps. (The web app also uses it, but that's beyond the scope of this article.)
The data model is deceptively simple. There's a lot of nested information that can be captured in each boarding report, resulting in 20+ classes, but there are only four top-level classes in the app, with the rest accounted for by embedding. The only other type of relationship is the reference from other classes to instances of the `Photo` class (required to prevent the `Report` objects from growing too large).
The partitioning strategy is straightforward. Partitioning for every class is based on the name of the user's agency. That pattern is going to appear in many apps—just substitute "agency" with "department," "team," "user," "country," ...
Suppose you determine that your app needs a different partitioning strategy for different classes. In that case, you can implement a more sophisticated partitioning strategy by encoding a key-value pair in a string partition key.
For example, if we wanted to partition the reports by username (each officer can only access reports they created) and the menu items by agency, then you could partition on a string attribute named `partition`. For the `Report` objects, it would be set to pairs such as `partition = "user=<the officer's email>"`, whereas for a `MenuData` object it might be set to `partition = "agency=Galapagos"`. Building a Mobile Chat App Using Realm – Data Architecture steps through designing these more sophisticated strategies.
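A minimal Swift sketch of what opening realms with such key-value partition strings could look like; `officerEmail` and `agencyName` are placeholder values for illustration, not part of the O-FISH code:
``` swift
import RealmSwift

// `user` is the logged-in Realm user. The partition strings encode key-value pairs.
let officerEmail = "officer@example.org" // placeholder
let agencyName = "Galapagos"             // placeholder

let reportConfig = user.configuration(partitionValue: "user=" + officerEmail)
let menuConfig = user.configuration(partitionValue: "agency=" + agencyName)

let reportRealm = try Realm(configuration: reportConfig) // only this officer's reports
let menuRealm = try Realm(configuration: menuConfig)     // agency-wide menu data
```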
## Resources
- O-FISH GitHub repos
- iOS.
- Android.
- Web.
- Realm Backend.
- Read Building a Mobile Chat App Using Realm – Data Architecture to understand the data model and partitioning strategy behind the RChat app, an example of a more sophisticated partitioning strategy.
- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Realm"
],
"pageDescription": "Understand the data model and partitioning scheme used for WildAid's O-FISH app and how you can adapt them for your own mobile apps.",
"contentType": "Tutorial"
} | Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/php/laravel-mongodb-tutorial | created | # How To Build a Laravel + MongoDB Back End Service
Laravel is a leading PHP framework that vastly increases the productivity of PHP developers worldwide. I come from a WordPress background, but when asked to build a web service for a front end app, Laravel and MongoDB come to mind, especially when combined with the MongoDB Atlas developer data platform.
This Laravel MongoDB tutorial addresses prospective and existing Laravel developers considering using MongoDB as a database.
Let's create a simple REST back end for a front-end app and go over aspects of MongoDB that might be new. Using MongoDB doesn't affect the web front-end aspect of Laravel, so we'll use Laravel's built-in API routing in this article.
MongoDB support in Laravel is provided by the official mongodb/laravel-mongodb package, which extends Eloquent, Laravel's built-in ORM.
First, let's establish a baseline by creating a default Laravel app. We'll mirror some of the instructions provided on our MongoDB Laravel Integration page, which is the primary entry point for all things Laravel at MongoDB. Any Laravel environment should work, but we'll be using some Linux commands under Ubuntu in this article.
> Laravel MongoDB Documentation
### Prerequisites
- A MongoDB Atlas cluster
- Create a free cluster and load our sample data.
- A code editor
- **Optional**: We have a MongoDB VS Code extension that makes it very easy to browse the database(s).
## Getting the Laravel web server up and running
**Note**: We'll go over creating the Laravel project with Composer but the article's code repository is available.
The "Setting Up and Configuring Your Laravel Project" instructions in the MongoDB and Laravel Integration show how to configure a Laravel-MongoDB development environment. We'll cover the Laravel application creation and the MongoDB configuration below.
Here are handy links, just in case:
- Official Laravel installation instructions (10.23.0 here)
- Official PHP installation instructions (PHP 8.1.6+ here)
- Install Composer (Composer 2.3.5 here)
- The MongoDB PHP extension (1.13.0 here)
## Create a Laravel project
With our development environment working, let's create a Laravel project by creating a Laravel project directory. From inside that new directory, create a new Laravel project called `laraproject` by running the command, which specifies using Laravel:
`composer create-project laravel/laravel laraproject`
After that, your directory structure should look like this:
```.
└── ./laraproject
├── ./app
├── ./artisan
├── ./bootstrap
├── ...
```
Once our development environment is properly configured, we can browse to the Laravel site (likely 'localhost', for most people) and view the homepage:
## Add a Laravel to MongoDB connection
First, let's check that the MongoDB PHP driver is installed and running. To do so, we can add a webpage to our Laravel website. In the code project, open `/routes/web.php` and add a route as follows:
``` php
Route::get('/info', function () {
    phpinfo();
});
```
Subsequently, visit the web page at localhost/info/ and we should see the PHPinfo page. Searching for the MongoDB section in the page, we should see something like the below. It means the MongoDB PHP driver is loaded and ready. If you experience errors, our MongoDB PHP error handling goes over typical issues.
We can use Composer to add the Laravel MongoDB package to the application. In the command prompt, go to the project's directory and run the command below to add the package to the `/vendor/` directory.
`composer require mongodb/laravel-mongodb:4.0.0`
Next, update the database configuration to add a MongoDB connection string and credentials. Open the `/config/database.php` file and update the 'connection' array as follows:
``` php
'connections' => [
    'mongodb' => [
        'driver' => 'mongodb',
        'dsn' => env('MONGODB_URI'),
        'database' => 'YOUR_DATABASE_NAME',
    ],
```
`env('MONGODB_URI')` refers to the content of the default `.env` file of the project. Make sure this file does not end up in the source control. Open the `/.env` file and add the MONGODB_URI environment variable with the connection string and credentials in the form:
`MONGODB_URI=mongodb+srv://USERNAME:[email protected]/?retryWrites=true&w=majority`
Your connection string may look a bit different. Learn how to get the connection string in Atlas. Remember to allow the web server's IP address to access the MongoDB cluster. Most developers will add their current IP address to the cluster.
In `/config/database.php`, we can optionally set the default database connection. At the top of the file, change 'default' to this:
` 'default' => env('DB_CONNECTION', 'mongodb'),`
Our Laravel application can connect to our MongoDB database. Let's create an API endpoint that pings it. In `/routes/api.php`, add the route below, save, and visit `localhost/api/ping/`. The API should return the object {"msg": "MongoDB is accessible!"}. If there's an error message, it's probably a configuration issue. Here are some general PHP MongoDB error handling tips.
``` php
Route::get('/ping', function (Request $request) {
    $connection = DB::connection('mongodb');
    $msg = 'MongoDB is accessible!';
    try {
        $connection->command(['ping' => 1]);
    } catch (\Exception $e) {
        $msg = 'MongoDB is not accessible. Error: ' . $e->getMessage();
    }
    return ['msg' => $msg];
});
```
## Create data with Laravel's Eloquent
Laravel comes with Eloquent, an ORM that abstracts the database back end so users can use different databases utilizing a common interface. Thanks to the Laravel MongoDB package, developers can opt for a MongoDB database to benefit from a flexible schema, excellent performance, and scalability.
Eloquent has a "Model" class, the interface between our code and a specific database table (or "collection," in MongoDB terminology). Instances of the Model classes represent rows of tables in relational databases.
In MongoDB, they are documents in the collection. In relational databases, we can set values only for columns defined in the database, but MongoDB allows any field to be set.
The models can define fillable fields if we want to enforce a document schema in our application and prevent errors like name typos. This is not required if we want full flexibility of being schemaless to be faster.
For new Laravel developers, there are many Eloquent features and philosophies. The official Eloquent documentation is the best place to learn more about that. For now, **we will highlight the most important aspects** of using MongoDB with Eloquent. We can use both MongoDB and an SQL database in the same Laravel application. Each model is associated with one connection or the other.
### Classic Eloquent model
First, we create a classic model with its associated migration code by running the command:
`php artisan make:model CustomerSQL --migration`
After execution, the command created two files, `/app/Models/CustomerSQL.php` and `/database/migrations/YY_MM_DD_xxxxxx_create_customer_s_q_l_s_table.php`. The migration code is meant to be executed once in the prompt to initialize the table and schema. In the extended Migration class, check the code in the `up()` function.
We'll edit the migration's `up()` function to build a simple customer schema like this:
``` php
public function up()
{
    Schema::connection('mysql')->create('customer_sql', function (Blueprint $table) {
        $table->id();
        $table->uuid('guid')->unique();
        $table->string('first_name');
        $table->string('family_name');
        $table->string('email');
        $table->text('address');
        $table->timestamps();
    });
}
```
Our migration code is ready, so let's execute it to build the table and index associated with our Eloquent model.
`php artisan migrate --path=/database/migrations/YY_MM_DD_xxxxxx_create_customer_s_q_l_s_table.php`
In the MySQL database, the migration created a 'customer_sql' table with the required schema, along with the necessary indexes. Laravel keeps track of which migrations have been executed in the 'migrations' table.
Next, we can modify the model code in `/app/Models/CustomerSQL.php` to match our schema.
``` php
// This is the standard Eloquent Model
use Illuminate\Database\Eloquent\Model;

class CustomerSQL extends Model
{
    use HasFactory;

    // the selected database as defined in /config/database.php
    protected $connection = 'mysql';

    // the table as defined in the migration
    protected $table = 'customer_sql';

    // our selected primary key for this model
    protected $primaryKey = 'guid';

    // the attributes' names that match the migration's schema
    protected $fillable = ['guid', 'first_name', 'family_name', 'email', 'address'];
}
```
### MongoDB Eloquent model
Let's create an Eloquent model for our MongoDB database named "CustomerMongoDB" by running this Laravel prompt command from the project's directory:
`php artisan make:model CustomerMongoDB`
Laravel creates a `CustomerMongoDB` class in the file `\models\CustomerMongoDB.php` shown in the code block below. By default, models use the 'default' database connection, but we can specify which one to use by adding the `$connection` member to the class. Likewise, it is possible to specify the collection name via a `$collection` member.
Note how the base model class is replaced in the 'use' statement. This is necessary to set "_id" as the primary key and profit from MongoDB's advanced features like array push/pull.
``` php
//use Illuminate\Database\Eloquent\Model;
use MongoDB\Laravel\Eloquent\Model;

class CustomerMongoDB extends Model
{
    use HasFactory;

    // the selected database as defined in /config/database.php
    protected $connection = 'mongodb';

    // equivalent to $table for MySQL
    protected $collection = 'laracoll';

    // defines the schema for top-level properties (optional).
    protected $fillable = ['guid', 'first_name', 'family_name', 'email', 'address'];
}
```
The extended class definition is nearly identical to the default Laravel one. Note that `$table` is replaced by `$collection` to use MongoDB's naming. That's it.
We can still use Eloquent Migrations with MongoDB (more on that below), but defining the schema and creating a collection with a Laravel-MongoDB Migration is optional because of MongoDB's flexible schema. At a high level, each document in a MongoDB collection can have a different schema.
If we want to enforce a schema, we can! MongoDB has a great schema validation mechanism that works by providing a validation document when manually creating the collection using `db.createCollection()`. We'll cover this in an upcoming article.
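As a quick illustration of what that could look like from the MongoDB shell, a validated `laracoll` collection might be created as follows; the required fields and rules here are purely illustrative:
``` javascript
db.createCollection("laracoll", {
    validator: {
        $jsonSchema: {
            bsonType: "object",
            required: ["guid", "first_name", "family_name", "email"],
            properties: {
                guid: { bsonType: "string" },
                first_name: { bsonType: "string" },
                family_name: { bsonType: "string" }
            }
        }
    }
})
```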
## CRUD with Eloquent
With the models ready, creating data for a MongoDB back end isn't different, and that's what we expect from an ORM.\
Below, we can compare the `/api/create_eloquent_mongo/` and `/api/create_eloquent_sql/` API endpoints. The code is identical, except for the different `CustomerMongoDB` and `CustomerSQL` model names.
``` php
Route::get('/create_eloquent_sql/', function (Request $request) {
    $success = CustomerSQL::create([
        'guid'=> 'cust_0000',
        'first_name'=> 'John',
        'family_name' => 'Doe',
        'email' => '[email protected]',
        'address' => '123 my street, my city, zip, state, country'
    ]);
    ...
});

Route::get('/create_eloquent_mongo/', function (Request $request) {
    $success = CustomerMongoDB::create([
        'guid'=> 'cust_1111',
        'first_name'=> 'John',
        'family_name' => 'Doe',
        'email' => '[email protected]',
        'address' => '123 my street, my city, zip, state, country'
    ]);
    ...
});
```
After adding the document, we can retrieve it using Eloquent's "where" function as follows:
``` php
Route::get('/find_eloquent/', function (Request $request) {
    $customer = CustomerMongoDB::where('guid', 'cust_1111')->get();
    ...
});
```
Eloquent allows developers to find data using complex queries with multiple matching conditions, and there's more to learn by studying both Eloquent and the MongoDB Laravel extension. The Laravel MongoDB query tests are an excellent place to look for additional syntax examples and will be kept up-to-date.
Of course, we can also **Update** and **Delete** records using Eloquent as shown in the code below:
``` php
Route::get('/update_eloquent/', function (Request $request) {
    $result = CustomerMongoDB::where('guid', 'cust_1111')->update( ['first_name' => 'Jimmy'] );
    ...
});

Route::get('/delete_eloquent/', function (Request $request) {
    $result = CustomerMongoDB::where('guid', 'cust_1111')->delete();
    ...
});
```
Eloquent is an easy way to start with MongoDB, and things work very much like one would expect. Even with a simple schema, developers can benefit from great scalability, high data reliability, and cluster availability with MongoDB Atlas' fully-managed clusters and sharding.
At this point, our MongoDB-connected back-end service is up and running, and this could be the end of a typical "CRUD" article. However, MongoDB is capable of much more, so keep reading.
## Unlock the full power of MongoDB
To extract the full power of MongoDB, it's best to fully utilize its document model and native Query API.
The document model is conceptually like a JSON object, but it is based on BSON (a binary representation with more fine-grained typing) and backed by a high-performance storage engine. Document supports complex BSON types, including object, arrays, and regular expressions. Its native Query API can efficiently access and process such data.
### Why is the document model great?
Let's discuss a few benefits of the document model.
#### It reduces or eliminates joins
Embedded documents and arrays paired with data modeling allow developers to avoid expensive database "join" operations, especially on the most critical workloads, queries, and huge collections. If needed, MongoDB does support join-like operations with the $lookup operator, but the document model lets developers keep such operations to a minimum or get rid of them entirely. Reducing joins also makes it easier to shard collections across multiple servers to increase capacity.
#### It reduces workload costs
This NoSQL strategy is critical to increasing **database workload efficiency**, to **reduce billing**. That's why Amazon eliminated most of its internal relational database workloads years ago. Learn more by watching Rick Houlihan, who led this effort at Amazon, tell that story on YouTube, or read about it on our blog. He is now MongoDB's Field CTO for Strategic Accounts.
#### It helps avoid downtime during schema updates
MongoDB documents are contained within "collections" (tables, in SQL parlance). The big difference between SQL and MongoDB is that each document in a collection can have a different schema. We could store completely different schemas in the same collection. This enables strategies like schema versioning to **avoid downtime during schema updates** and more!
Data modeling goes beyond the scope of this article, but it is worth spending 15 minutes watching the Principles of Data Modeling for MongoDB video featuring Daniel Coupal, the author of MongoDB Data Modeling and Schema Design, a book that many of us at MongoDB have on our desks. At the very least, read this short 6 Rules of Thumb for MongoDB Schema article.
## CRUD with nested data
The Laravel MongoDB Eloquent extension does offer MongoDB-specific operations for nested data. However, adding nested data is also very intuitive without using the embedsMany() and embedsOne() methods provided by the extension.
As shown earlier, it is easy to define the top-level schema attributes with Eloquent. However, it is more tricky to do so when using arrays and embedded documents.
Fortunately, we can intuitively create the Model's data structures in PHP. In the example below, the 'address' field has gone from a string to an object type. The 'email' field went from a string to an array of strings. Arrays and objects are not supported types in MySQL.
``` php
Route::get('/create_nested/', function (Request $request) {
    $message = "executed";
    $success = null;

    $address = new stdClass;
    $address->street = '123 my street name';
    $address->city = 'my city';
    $address->zip= '12345';
    $emails = ['[email protected]', '[email protected]'];

    try {
        $customer = new CustomerMongoDB();
        $customer->guid = 'cust_2222';
        $customer->first_name = 'John';
        $customer->family_name= 'Doe';
        $customer->email= $emails;
        $customer->address= $address;

        $success = $customer->save(); // save() returns 1 or 0
    }
    catch (\Exception $e) {
        $message = $e->getMessage();
    }
    return ['msg' => $message, 'data' => $success];
});
```
If we run the `localhost/api/create_nested/` API endpoint, it will create a document as the JSON representation below shows. The `updated_at` and `created_at` datetime fields are automatically added by Eloquent, and it is possible to disable this Eloquent feature (check Timestamps in the official Laravel documentation).
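The stored document should look roughly like this; the `_id` and timestamp values shown here are illustrative:
``` json
{
    "_id": ObjectId("64d37ffb277a1d4f0a3c1a2b"),
    "guid": "cust_2222",
    "first_name": "John",
    "family_name": "Doe",
    "email": ["[email protected]", "[email protected]"],
    "address": {
        "street": "123 my street name",
        "city": "my city",
        "zip": "12345"
    },
    "updated_at": ISODate("2023-08-17T10:00:00.000Z"),
    "created_at": ISODate("2023-08-17T10:00:00.000Z")
}
```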
## Introducing the MongoDB Query API
MongoDB has a native query API optimized to manipulate and transform complex data. There's also a powerful aggregation framework with which we can pipe data from one stage to another, making it intuitive for developers to create very complex aggregations. The native query is accessible via the MongoDB "collection" object.
### Eloquent and "raw queries"
Eloquent has an intelligent way of exposing the full capabilities of the underlying database by using "raw queries," which are sent "as is" to the database without any processing from the Eloquent Query Builder, thus exposing the native query API. Read about raw expressions in the official Laravel documentation.
We can perform a raw native MongoDB query from the model as follows, and the model will return an Eloquent collection:
``` php
$mongodbquery = ['guid' => 'cust_1111'];

// returns a "Illuminate\Database\Eloquent\Collection" Object
$results = CustomerMongoDB::whereRaw( $mongodbquery )->get();
```
It's also possible to obtain the native MongoDB collection object and perform a query that will return objects such as native MongoDB documents or cursors:
``` php
$mongodbquery = ['guid' => 'cust_1111', ];

$mongodb_native_collection = DB::connection('mongodb')->getCollection('laracoll');

$document = $mongodb_native_collection->findOne( $mongodbquery );
$cursor = $mongodb_native_collection->find( $mongodbquery );
```
Using the MongoDB collection directly is the sure way to access all the MongoDB features. Typically, people start using the native collection.insert(), collection.find(), and collection.update() first.
Common MongoDB Query API functions work using a similar logic and require matching conditions to identify documents for selection or deletion. An optional projection defines which fields we want in the results.
With Laravel, there are several ways to query data, and the /find_native/ API endpoint below shows how to use whereRaw(). Additionally, we can use MongoDB's findOne() and find() collection methods that return a document and a cursor, respectively.
``` php
/*
 Find records using a native MongoDB Query
 1 - with Model->whereRaw()
 2 - with native Collection->findOne()
 3 - with native Collection->find()
*/
Route::get('/find_native/', function (Request $request) {
    // a simple MongoDB query that looks for a customer based on the guid
    $mongodbquery = ['guid' => 'cust_2222'];

    // Option #1
    //==========
    // use Eloquent's whereRaw() function. This is the easiest way to stay close to the Laravel paradigm
    // returns a "Illuminate\Database\Eloquent\Collection" Object
    $results = CustomerMongoDB::whereRaw( $mongodbquery )->get();

    // Option #2 & #3
    //===============
    // use the native MongoDB driver Collection object. With it, you can use the native MongoDB Query API
    $mdb_collection = DB::connection('mongodb')->getCollection('laracoll');

    // find the first document that matches the query
    $mdb_bsondoc = $mdb_collection->findOne( $mongodbquery ); // returns a "MongoDB\Model\BSONDocument" Object

    // if we want to convert the MongoDB Document to a Laravel Model, use the Model's newFromBuilder() method
    $cust = new CustomerMongoDB();
    $one_doc = $cust->newFromBuilder((array) $mdb_bsondoc);

    // find all documents that match the query
    // Note: we're using find without any arguments, so ALL documents will be returned
    $mdb_cursor = $mdb_collection->find( ); // returns a "MongoDB\Driver\Cursor" object

    $cust_array = array();
    foreach ($mdb_cursor->toArray() as $bson) {
        $cust_array[] = $cust->newFromBuilder( $bson );
    }

    return ['msg' => 'executed', 'whereraw' => $results, 'document' => $one_doc, 'cursor_array' => $cust_array];
});
```
Updating documents is done by providing a list of updates in addition to the matching criteria. Here's an example using updateOne(), but updateMany() works similarly. updateOne() returns a document that contains information about how many documents were matched and how many were actually modified.
``` php
/*
 Update a record using a native MongoDB Query
*/
Route::get('/update_native/', function (Request $request) {
    $mdb_collection = DB::connection('mongodb')->getCollection('laracoll');
    $match = ['guid' => 'cust_2222'];
    $update = ['$set' => ['first_name' => 'Henry', 'address.street' => '777 new street name'] ];
    $result = $mdb_collection->updateOne($match, $update );
    return ['msg' => 'executed', 'matched_docs' => $result->getMatchedCount(), 'modified_docs' => $result->getModifiedCount()];
});
```
Deleting documents is as easy as finding them. Again, there's a matching criterion, and the API returns a document indicating the number of deleted documents.
``` php
Route::get('/delete_native/', function (Request $request) {
    $mdb_collection = DB::connection('mongodb')->getCollection('laracoll');
    $match = ['guid' => 'cust_2222'];
    $result = $mdb_collection->deleteOne($match );
    return ['msg' => 'executed', 'deleted_docs' => $result->getDeletedCount() ];
});
```
### Aggregation pipeline
Since we now have access to the MongoDB native API, let's introduce the aggregation pipeline. An aggregation pipeline is a task in MongoDB's aggregation framework. Developers use the aggregation framework to perform various tasks, from real-time dashboards to "big data" analysis.
We will likely use it to query, filter, and sort data at first. The aggregations introduction of the free online book Practical MongoDB Aggregations by Paul Done gives a good overview of what can be done with it.
An aggregation pipeline consists of multiple stages where the output of each stage is the input of the next, like piping in Unix.
We will use the "sample_mflix" sample database that should have been loaded when creating our Atlas cluster. Laravel lets us access multiple MongoDB databases in the same app, so let's add the sample_mflix database (to `database.php`):
``` php
'mongodb_mflix' => [
    'driver' => 'mongodb',
    'dsn' => env('MONGODB_URI'),
    'database' => 'sample_mflix',
],
```
Next, we can build an /aggregate/ API endpoint and define a three-stage aggregation pipeline to fetch data from the "movies" collection, compute the average movie rating per genre, and return a list. More details about this movie ratings aggregation.
``` php
Route::get('/aggregate/', function (Request $request) {
    $mdb_collection = DB::connection('mongodb_mflix')->getCollection('movies');

    $stage0 = ['$unwind' => ['path' => '$genres']];
    $stage1 = ['$group' => ['_id' => '$genres', 'averageGenreRating' => ['$avg' => '$imdb.rating']]];
    $stage2 = ['$sort' => ['averageGenreRating' => -1]];
    $aggregation = [$stage0, $stage1, $stage2];

    $mdb_cursor = $mdb_collection->aggregate( $aggregation );
    return ['msg' => 'executed', 'data' => $mdb_cursor->toArray() ];
});
```
This shows how easy it is to compose several stages to group, compute, transform, and sort data. This is the preferred method to perform aggregation operations, and it's even possible to output a document, which is subsequently used by the updateOne() method. There's a whole aggregation course.
### Don't forget to index
We now know how to perform CRUD operations, native queries, and aggregations. However, don't forget about indexing to increase performance. MongoDB indexing strategies and best practices are beyond the scope of this article, but let's look at how we can create indexes.
#### Option #1: Create indexes with Eloquent's Migrations
First, we can use Eloquent's Migrations. Even though we could do without Migrations because we have a flexible schema, they could be a vessel to store how indexes are defined and created.\
Since we have not used the --migration option when creating the model, we can always create the migration later. In this case, we can run this command:
`php artisan make:migration create_customer_mongo_db_table`
It will create a Migration located at `/database/migrations/YYYY_MM_DD_xxxxxx_create_customer_mongo_db_table.php`.
We can update the code of our up() function to create an index for our collection. For example, we'll create an index for our 'guid' field and make it a unique constraint. MongoDB always has an _id primary key field initialized with an ObjectId by default, but we can provide our own unique identifier in place of MongoDB's default ObjectId.
``` php
public function up() {
    Schema::connection('mongodb')->create('laracoll', function ($collection) {
        $collection->unique('guid'); // Ensure the guid is unique since it will be used as a primary key.
    });
}
```
As previously, this migration `up()` function can be executed using the command:
`php artisan migrate --path=/database/migrations/2023_08_09_051124_create_customer_mongo_db_table.php`
If the 'laracoll' collection does not exist, it is created and an index is created for the 'guid' field. In the Atlas GUI, it looks like this:
#### Option #2: Create indexes with MongoDB's native API
The second option is to use the native MongoDB createIndex() function which might have new options not yet covered by the Laravel MongoDB package. Here's a simple example that creates an index with the 'guid' field as the unique constraint.
``` php
Route::get('/create_index/', function (Request $request) {
    $indexKeys = ["guid" => 1];
    $indexOptions = ["unique" => true];
    $result = DB::connection('mongodb')->getCollection('laracoll')->createIndex($indexKeys, $indexOptions);

    return ['msg' => 'executed', 'data' => $result ];
});
```
#### Option #3: Create indexes with the Atlas GUI
Finally, we can also create an index in the web Atlas GUI interface, using a visual builder or from JSON. The GUI interface is handy for experimenting. The same is true inside MongoDB Compass, our MongoDB GUI application.
## Conclusion
This article covered creating a back-end service with PHP/Laravel, powered by MongoDB, for a front-end web application. We've seen how easy it is for Laravel developers to leverage their existing skills with a MongoDB back end.
It also showed why the document model, associated with good data modeling, leads to higher database efficiency and scalability. We can fully use it with the native MongoDB Query API to unlock the full power of MongoDB to create better apps with less downtime.
Learn more about the Laravel MongoDB extension syntax by looking at the official documentation and repo's example tests on GitHub. For plain PHP MongoDB examples, look at the example tests of our PHP Library.
Consider taking the free Data Modeling course at MongoDB University or the overall PHP/MongoDB course, although it's not specific to Laravel.
We will build more PHP/Laravel content, so subscribe to our various channels, including YouTube and LinkedIn. Finally, join our official community forums! There's a PHP tag where fellow developers and MongoDB engineers discuss all things data and PHP.
| md | {
"tags": [
"PHP"
],
"pageDescription": "A tutorial on how to use MongoDB with Laravel Eloquent, but also with the native MongoDB Query API and Aggregation Pipeline, to access the new MongoDB features.",
"contentType": "Tutorial"
} | How To Build a Laravel + MongoDB Back End Service | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/global-read-write-concerns | created | # Set Global Read and Write Concerns in MongoDB 4.4
MongoDB is very flexible when it comes to both reading and writing data. When it comes to writing data, a MongoDB write concern allows you to set the level of acknowledgment for a desired write operation. Likewise, the read concern allows you to control the consistency and isolation properties of the data read from your replica sets. Finding the right values for the read and write concerns is pivotal as your application evolves, and with the latest release of MongoDB, setting global read isolation and write durability defaults is now possible.
MongoDB 4.4 is available in beta right now. You can try it out in MongoDB Atlas or download the development release. In this post, we are going to look at how we can set our read isolation and write durability defaults globally and also how we can override these global settings on a per client or per operation basis when needed.
## Prerequisites
For this tutorial, you'll need:
- MongoDB 4.4
- MongoDB shell
>Setting global read and write concerns is currently unavailable on MongoDB Atlas. If you wish to follow along with this tutorial, you'll need your own instance of MongoDB 4.4 installed.
## Read and Write Concerns
Before we get into how we can set these features globally, let's quickly examine what it is they actually do, what benefits they provide, and why we should even care.
We'll start with the MongoDB write concern functionality. By default, when you send a write operation to a MongoDB database, it has a write concern of `w:1`. What this means is that the write operation will be acknowledged as successful when the primary in a replica set has successfully executed the write operation.
Let's assume you're working with a 3-node replicate set, which is the default when you create a free MongoDB Atlas cluster. Sending a write command such as `db.collection('test').insertOne({name:"Ado"})` will be deemed successful when the primary has acknowledged the write. This ensures that the data doesn't violate any database constraints and has successfully been written to the database in memory. We can improve this write concern durability, by increasing the number of nodes we want to acknowledge the write.
Instead of `w:1`, let's say we set it to `w:2`. Now when we send a write operation to the database, we wouldn't hear back until both the primary, and one of the two secondary nodes acknowledged the write operation was successful. Likewise, we could also set the acknowledgement value to 0, i.e `w:0`, and in this instance we wouldn't ask for acknowledgement at all. I wouldn't recommend using `w:0` for any important data, but in some instances it can be a valid option. Finally, if we had a three member replica set and we set the w value to 3, i.e `w:3`, now the primary and both of the secondary nodes would need to acknowledge the write. I wouldn't recommend this approach either, because if one of the secondary members become unavailable, we wouldn't be able to acknowledge write operations, and our system would no longer be highly available.
Additionally, when it comes to write concern, we aren't limited to setting a numeric value. We can set the value of w to "majority" for example, which will wait for the write operation to propagate to a majority of the nodes or even write our own custom write concern.
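For example, sticking with the `test` collection from earlier, a write concern can be applied to a single operation directly in the shell; the values here are just for illustration:
``` bash
db.test.insertOne(
   { name: "Ado" },
   { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```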
MongoDB read concern allows you to control the consistency and isolation properties of the data read from replica sets and replica set shards. Essentially what this means is that when you send a read operation to the database such as a db.collection.find(), you can specify how durable the data that is returned must be. Note that read concern should not be confused with read preference, which specifies which member of a replica set you want to read from.
There are multiple levels of read concern including local, available, majority, linearizable, and snapshot. Each level is complex enough that it can be an article itself, but the general idea is similar to that of the write concern. Setting a read concern level will allow you to control the type of data read. Defaults for read concerns can vary and you can find what default is applied when here. Default read concern reads the most recent data, rather than data that's been majority committed.
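Likewise, an individual read operation can request a specific read concern level in the shell; the collection name here is just for illustration:
``` bash
db.test.find({ name: "Ado" }).readConcern("majority")
```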
Through the effective use of write concerns and read
concerns, you can adjust the level of consistency and availability defaults as appropriate for your application.
## Setting Global Read and Write Concerns
So now that we know a bit more about why these features exist and how they work, let's see how we can change the defaults globally. In MongoDB 4.4, we can use the db.adminCommand() to configure our isolation and durability defaults.
>Setting global read and write concerns is currently unavailable on MongoDB Atlas. If you wish to follow along with this tutorial, you'll need your own instance of MongoDB 4.4 installed.
We'll use the `db.adminCommand()` to set a default read and write concern of majority. In the MongoDB shell, execute the following command:
``` bash
db.adminCommand({
setDefaultRWConcern: 1,
defaultReadConcern: { level : "majority" },
defaultWriteConcern: { w: "majority" }
})
```
Note that to execute this command you need to have a replica set and the command will need to be sent to the primary node. Additionally, if you have a sharded cluster, the command will need to be run on the `mongos`. If you have a standalone node, you'll get an error. The final requirement to be able to execute the `setDefaultRWConcern` command is having the correct privilege.
When setting default read and write concerns, you don't have to set both a default read concern and a default write concern, you are allowed to set only a default read concern or a default write concern as you see fit. For example, say we only wanted to set a default write concern, it would look something like this:
``` bash
db.adminCommand({
setDefaultRWConcern: 1,
defaultWriteConcern: { w: 2 }
})
```
The above command would set just a default write concern of 2, meaning that the write would succeed when the primary and one secondary node acknowledged the write.
When it comes to default write concerns, in addition to specifying the acknowledgment, you can also set a `wtimeout` period for how long an operation has to wait for an acknowledgement. To set this, we can do the following:
``` bash
db.adminCommand({
setDefaultRWConcern: 1,
defaultWriteConcern: { w: 2, wtimeout: 5000 }
})
```
This will set a timeout of 5000ms, so if we don't get an acknowledgement within 5 seconds, the write operation will return a `writeConcern` timeout error.
To unset either a default read or write concern, you can simply pass into it an empty object.
``` bash
db.adminCommand({
setDefaultRWConcern: 1,
defaultReadConcern: { },
defaultWriteConcern: { }
})
```
This will return the read concern and the write concern to their MongoDB defaults. You can also easily check and see what defaults are currently set for your global read and write concerns using the getDefaultRWConcern command. When you run this command against the `admin` database like so:
``` bash
db.adminCommand({
getDefaultRWConcern: 1
})
```
You will get a response like the one below showing you your global settings:
```
{
"defaultWriteConcern" : {
"w" : "majority"
},
"defaultReadConcern" : {
"level" : "majority"
},
"updateOpTime" : Timestamp(1586290895, 1),
"updateWallClockTime" : ISODate("2020-04-07T20:21:41.849Z"),
"localUpdateWallClockTime" : ISODate("2020-04-07T20:21:41.862Z"),
"ok" : 1,
"$clusterTime" : { ... }
"operationTime" : Timestamp(1586290925, 1)
}
```
In the next section, we'll take a look at how we can override these global settings when needed.
## Overriding Global Read and Write Concerns
MongoDB is a very flexible database. The default read and write concerns allow you to set reasonable defaults for how clients interact with your database cluster-wide, but as your application evolves a specific client may need a different read isolation or write durability default. This can be accomplished using any of the MongoDB drivers.
We can override read and write concerns at:
- the client connection layer when connecting to the MongoDB database,
- the database level,
- the collection level,
- an individual operation or query.
However, note that MongoDB transactions can span multiple databases and collections, and since all operations within a transaction must use the same write concern, transactions have their own hierarchy of:
- the client connection layer,
- the session level,
- the transaction level.
A diagram showing this inheritance is presented below to help you understand what read and write concern takes precedence when multiple are declared:
We'll take a look at a couple of examples where we override the read and write concerns. For our examples we'll use the Node.js Driver.
Let's see an example of how we would override our read and write concerns in our Node.js application. The first example we'll look at is how to override read and write concerns at the database level. To do this, our code will look like this:
``` js
const MongoClient = require('mongodb').MongoClient;
const uri = "{YOUR-CONNECTION-STRING}";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
const options = {w:"majority", readConcern: {level: "majority"}};
const db = client.db("test", options);
});
```
When we specify the database we want to connect to, in this case the database is called `test`, we also pass an `options` object with the read and write concerns we wish to use. For our first example, we are using the **majority** concern for both read and write operations.
If we already set defaults globally, then overriding in this way may not make sense, but we may still run into a situation where we want a specific collection to execute read and write operations at a specific read or write concern. Let's declare a collection with a **majority** write concern and a read concern "majority."
``` js
const options = {w:"majority", readConcern: {level: "majority"}};
const collection = db.collection('documents', options);
```
Likewise we can even scope it down to a specific operation. In the following example we'll use the **majority** read concern for just one specific query.
``` js
const collection = db.collection('documents');
collection.insertOne({name:"Ado Kukic"}, {w:"majority", wtimeout: 5000})
```
The code above will execute a write query and try to insert a document that has one field titled **name**. For the query to be successful, the write operation will have to be acknowledged by the primary and one secondary, assuming we have a three member replica set.
Being able to set the default read and write concerns is important to providing developers the ability to set defaults that make sense for their use case, but also the flexibility to easily override those defaults when needed.
## Conclusion
Global read or write concerns allow developers to set default read isolation and write durability defaults for their database cluster-wide. As your application evolves, you are able to override the global read and write concerns at the client level ensuring you have flexibility when you need it and customized defaults when you don't. It is available in MongoDB 4.4, which is available in beta today.
>**Safe Harbor Statement**
>
>The development, release, and timing of any features or functionality described for MongoDB products remains at MongoDB's sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. Except as required by law, we undertake no obligation to update any forward-looking statements to reflect events or circumstances after the date of such statements. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to set global read isolation and write durability defaults in MongoDB 4.4.",
"contentType": "Article"
} | Set Global Read and Write Concerns in MongoDB 4.4 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/location-geofencing-stitch-mapbox | created | # Location Geofencing with MongoDB, Stitch, and Mapbox
>
>
>Please note: This article discusses Stitch. Stitch is now MongoDB Realm.
>All the same features and functionality, now with a new name. Learn more
>here. We will be updating this article
>in due course.
>
>
For a lot of organizations, when it comes to location, geofencing is
often a very desirable or required feature. In case you're unfamiliar, a
geofence can be thought of as a virtual perimeter for a geographic area.
Often, you'll want to know when something enters or exits that geofence
so that you can apply your own business logic. Such logic might include
sending a notification or updating something in your database.
MongoDB supports GeoJSON data and offers quite a few operators that make
working the location data easy.
When it comes to geofencing, why would you want to use a database like
MongoDB rather than defining boundaries directly within your
client-facing application? Sure, it might be easy to define and manage
one or two boundaries, but when you're working at scale, checking to see
if something has exited or entered one of many boundaries could be a
hassle.
In this tutorial, we're going to explore the
$near
and
$geoIntersects
operators within MongoDB to define geofences and see if we're within the
fences. For the visual aspect of things, we're going to make use of
Mapbox for showing our geofences and our
location.
To get an idea of what we're going to build, take a look at the
following animated image:
We're going to implement functionality where a map is displayed and
polygon shapes are rendered based on data from within MongoDB. When we
move the marker around on the map to simulate actual changes in
location, we're going to determine whether or not we've entered or
exited a geofence.
## The Requirements
There are a few moving pieces for this particular tutorial, so it is
important that the prerequisites are met prior to starting:
- Must have a Mapbox account with an access token generated.
- Must have a MongoDB Atlas cluster available.
Mapbox is a service, not affiliated with MongoDB. To render a map along
with shapes and markers, an account is necessary. For this example,
everything can be accomplished within the Mapbox free tier.
Because we'll be using MongoDB Stitch in connection with Mapbox, we'll
need to be using MongoDB Atlas.
>
>
>MongoDB Atlas can be used to deploy an M0
>sized cluster of MongoDB for FREE.
>
>
The MongoDB Atlas cluster should have a **location_services** database
with a **geofences** collection.
## Understanding the GeoJSON Data to Represent Fenced Regions
To use the geospatial functionality that MongoDB offers, the data stored
within MongoDB must be valid GeoJSON data. At the end of the day,
GeoJSON is still JSON, which plays very nicely with MongoDB, but there
is a specific schema that must be followed. To learn more about GeoJSON,
visit the specification documentation.
For our example, we're going to be working with Polygon and Point data.
Take the following document model:
``` json
{
"_id": ObjectId(),
"name": string,
"region": {
"type": string,
"coordinates":
[
[double]
]
]
}
}
```
In the above example, the `region` represents our GeoJSON data and
everything above it such as `name` represents any additional data that
we want to store for the particular document. A realistic example to the
above model might look something like this:
``` json
{
"_id": ObjectId("5ebdc11ab96302736c790694"),
"name": "tracy",
"region": {
"type": "Polygon",
"coordinates": [
[
[-121.56115581054638, 37.73644193427164],
[-121.33868266601519, 37.59729761382843],
[-121.31671000976553, 37.777700170855454],
[-121.56115581054638, 37.73644193427164]
]
]
}
}
```
We're naming any of our possible fenced regions. This could be useful to
a lot of organizations. For example, maybe you're a business with
several franchise locations. You could geofence the location and name it
something like the address, store number, etc.
To get the performance we need from our geospatial data and to be able
to use certain operators, we're going to need to create an index on our
collection. The index looks something like the following:
``` javascript
db.geofences.createIndex({ region: "2dsphere" })
```
The index can be created through Atlas, Compass, and with the CLI. The
goal here is to make sure the `region` field is a `2dsphere` index.
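With the index in place, the geofence check itself boils down to a query like the following; the coordinates are just an illustrative point near the fenced region shown earlier:
``` javascript
db.geofences.find({
    region: {
        $geoIntersects: {
            $geometry: {
                type: "Point",
                coordinates: [-121.43, 37.74]
            }
        }
    }
})
```
Any geofence documents whose `region` polygon contains that point will be returned, so an empty result means the point is outside every fence.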
## Configuring MongoDB Stitch for Client-Facing Application Interactions
Rather than creating a backend application to interact with the
database, we're going to make use of MongoDB Stitch. Essentially, the
client-facing application will use the Stitch SDK to authenticate before
interacting with the data.
Within the MongoDB Cloud, choose to create
a new Stitch application if you don't already have one that you wish to
use. Make sure that the application is using the cluster that has your
geofencing data.
Within the Stitch dashboard, choose the **Rules** tab and create a new
set of permissions for the **geofences** collection. For this particular
example, the **Users can only read all data** permission template is
fine.
Next, we'll want to choose an authentication mechanism. In the **Users**
tab, choose **Providers**, and enable the anonymous authentication
provider. In a more realistic production scenario, you'll likely want to
create geofences that have stricter users and rules design.
Before moving onto actually creating an application, make note of your
**App ID** within Stitch, as it will be necessary for connecting.
## Interacting with the Geofences using Mapbox and MongoDB Geospatial Queries
With all the configuration out of the way, we can move into the fun part
of creating an attractive client-facing application that queries the
geospatial data in MongoDB and renders it on a map.
On your computer, create an **index.html** file with the following
boilerplate code:
``` xml
```
In the above HTML, we're importing the Mapbox and MongoDB Stitch SDKs,
and we are defining an HTML container to hold our interactive map.
Interacting with MongoDB and the map will be done in the ` | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to use MongoDB geospatial queries and GeoJSON with Mapbox to create dynamic geofences.",
"contentType": "Tutorial"
} | Location Geofencing with MongoDB, Stitch, and Mapbox | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-cocoa-data-types | created | # New Realm Cocoa Data Types
In this blog post we will discover the new data types that Realm has to offer.
Over the past year we have worked hard to bring three new datatypes to the Realm SDK: `MutableSet`, `Map`, and `AnyRealmValue`.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## MutableSet
`MutableSet` allows you to store a collection of unique values in an unordered fashion. This is different to `List` which allows you to store duplicates and persist the order of items.
`MutableSet` has some methods that many will find useful for data manipulation and storage:
- `Intersect`
- Gets the common items between two `MutableSet`s.
- `Union`
- Combines elements from two `MutableSet`s, removing any duplicates.
- `Subtract`
- Removes elements from one `MutableSet` that are present in another given `MutableSet`.
- `isSubset`
- Checks to see if the elements in a `MutableSet` are children of a given super `MutableSet`.
So why would you use a `MutableSet` over a `List`?
- You require a distinct collection of elements.
- You do not rely on the order of items.
- You need to perform mathematical operations such as `Intersect`, `Union`, and `Subtract`.
- You need to test for membership in other Set collections using `isSubset` or `intersects`.
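Here's a minimal sketch of those set operations in practice. The `movieA`/`movieB` objects are hypothetical, and the method names (`formUnion`, `formIntersection`, `subtract`, `isSubset(of:)`) follow my reading of the RealmSwift `MutableSet` API, so treat it as an illustration rather than copy-paste code:
```swift
// movieA and movieB are hypothetical managed Movie objects (see the model below).
let sharesAllGenres = movieA.genres.isSubset(of: movieB.genres)

try realm.write {
    movieA.genres.formUnion(movieB.genres)        // union: merge, duplicates dropped
    movieA.genres.formIntersection(movieB.genres) // intersect: keep only the common genres
    movieA.genres.subtract(movieB.genres)         // subtract: drop genres also present in movieB
}
```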
### Practical example
Using our Movie object, we want to store and sync certain properties that will never contain duplicates and we don't care about ordering. Let's take a look below:
```swift
class Movie: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id: ObjectId
@Persisted var _partitionKey: String
// we will want to keep the order of the cast, so we will use a `List`
@Persisted var cast: List<String>
@Persisted var countries: MutableSet<String>
@Persisted var genres: MutableSet<String>
@Persisted var languages: MutableSet<String>
@Persisted var writers: MutableSet<String>
}
```
Straight away you can see the use case: we never want to have duplicate elements in the `countries`, `genres`, `languages`, and `writers` collections, nor do we care about their stored order. `MutableSet` does support sorting so you do have the ability to rearrange the order at runtime, but you can't persist the order.
You query a `MutableSet` the same way you would with List:
```swift
let danishMovies = realm.objects(Movie.self).filter("'Danish' IN languages")
```
### Under the hood
`MutableSet` is based on the `NSSet` type found in Foundation. From the highest level we mirror the `NSMutableSet / Set API` on `RLMSet / MutableSet`.
When a property is unmanaged the underlying storage type is deferred to `NSMutableSet`.
## Map
Our new `Map` data type is a Key-Value store collection type. It is similar to Foundation's `Dictionary` and shares the same call semantics. You use a `Map` when you are unsure of a schema and need to store data in a structureless fashion. NOTE: You should not use `Map` over an `Object` where a schema is known.
### Practical example
```swift
@Persisted var phoneNumbers: Map<String, String>
phoneNumbers["Charlie"] = "+353 86 123456789"
let charliesNumber = phoneNumbers["Charlie"] // "+353 86 123456789"
```
`Map` also supports aggregate functions so you can easily calculate data:
```swift
@Persisted var testScores: Map<String, Int>
testScores["Julio"] = 95
testScores["Maria"] = 95
testScores["John"] = 70
let averageScore = testScores.avg()
```
As well as filtering with NSPredicate:
```swift
@Persisted var dogMap: Map<String, Dog>
let spaniels = dogMap.filter(NSPredicate(format: "breed = 'Spaniel'")) // Returns `Results<Dog>`
```
You can observe a `Map` just like the other collection types:
```swift
let token = map.observe(on: queue) { change in
switch change {
case .initial(let map):
...
case let .update(map, deletions: deletions, insertions: insertions, modifications: modifications):
// `deletions`, `insertions` and `modifications` contain the modified keys in the Map
...
case .error(let error):
...
}
}
```
Combine is also supported for observation:
```swift
cancellable = map.changesetPublisher
.sink { change in
...
}
```
### Under the hood
`Map` is based on the `NSDictionary` type found in Foundation. From the highest level, we mirror the `NSMutableDictionary / Dictionary API` on `RLMDictionary / Map`.
When a property is unmanaged the underlying storage type is deferred to `NSMutableDictionary`.
## AnyRealmValue
Last but not least, a datatype we are very excited about: `AnyRealmValue`. No, this is not another collection type, but one that allows you to store various different types of data under one property. Think of it like `Any` or `AnyObject` in Swift, or a union in C.
To better understand how to use `AnyRealmValue`, let's see some practical examples.
Let's say we have a Settings class which uses a `Map` for storing user preferences. Because the types of values we want to store change all the time, we're keeping this schemaless for now:
```swift
class Settings: Object {
@Persisted(primaryKey: true) var _id: ObjectId
@Persisted var _partitionKey: String?
@Persisted var misc: Map<String, AnyRealmValue>
}
```
Usage:
```swift
misc["lastScreen"] = .string("home")
misc["lastOpened"] = .date(.now)
// To unwrap the values
if case let .string(lastScreen) = misc["lastScreen"] {
print(lastScreen) // "home"
}
```
Here we can store different variants of the value, so depending on the need of your application, you may find it useful to be able to switch between different types.
### Under the hood
We don't use any Foundation types for storing `AnyRealmValue`. Instead, the `AnyRealmValue` enum is converted to the Objective-C representation of the stored type. This is any type that conforms to `RLMValue`. You can see how that works here.
## Conclusion
I hope you found this insightful and have some great ideas with what to do with these data types! All of these new data types are fully compatible with MongoDB Realm Sync too, and are available in Objective-C as well as Swift. We will follow up with another post and presentation on data modelling with Realm soon.
Links to documentation:
- MutableSet
- Map
- AnyRealmValue | md | {
"tags": [
"Realm",
"Mobile"
],
"pageDescription": "In this blog post we will discover the new data types that Realm Cocoa has to offer.",
"contentType": "Article"
} | New Realm Cocoa Data Types | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-setup-crud-operations | created | # Getting Started with MongoDB and Java - CRUD Operations Tutorial
## Updates
The MongoDB Java quickstart repository is available on GitHub.
### February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
- Update the `preFlightChecks` method to support both MongoDB Atlas shared and dedicated clusters.
### November 14th, 2023
- Update to Java 17
- Update Java Driver to 4.11.1
- Update mongodb-crypt to 1.8.0
### March 25th, 2021
- Update Java Driver to 4.2.2.
- Added Client Side Field Level Encryption example.
### October 21st, 2020
- Update Java Driver to 4.1.1.
- The MongoDB Java Driver logging is now enabled via the popular SLF4J API, so I added logback
in the `pom.xml` and a configuration file `logback.xml`.
## Introduction
In this very first blog post of the Java Quick Start series, I will show you how to set up your Java project with Maven
and execute a MongoDB command in Java. Then, we will explore the most common operations — such as create, read, update,
and delete — using the MongoDB Java driver. I will also show you
some of the more powerful options and features available as part of the
MongoDB Java driver for each of these
operations, giving you a really great foundation of knowledge to build upon as we go through the series.
In future blog posts, we will move on and work through:
- Mapping MongoDB BSON documents directly to Plain Old Java Object (POJO)
- The MongoDB Aggregation Framework
- Change Streams
- Multi-document ACID transactions
- The MongoDB Java reactive streams driver
### Why MongoDB and Java?
Java is the most popular language in the IT industry at the
date of this blog post,
and developers voted MongoDB as their most wanted database four years in a row.
In this series of blog posts, I will be demonstrating how powerful these two great pieces of technology are when
combined and how you can access that power.
### Prerequisites
To follow along, you can use any environment you like and the integrated development environment of your choice. I'll
use Maven 3.8.7 and the Java OpenJDK 21, but it's fairly easy to update the code
to support older versions of Java, so feel free to use the JDK of your choice and update the Java version accordingly in
the pom.xml file we are about to set up.
For the MongoDB cluster, we will be using a M0 Free Tier MongoDB Cluster
from MongoDB Atlas. If you don't have one already, check out
my Get Started with an M0 Cluster blog post.
> Get your free M0 cluster on MongoDB Atlas today. It's free forever, and you'll
> be able to use it to work with the examples in this blog series.
Let's jump in and take a look at how well Java and MongoDB work together.
## Getting set up
To begin with, we will need to set up a new Maven project. You have two options at this point. You can either clone this
series' git repository or you can create and set up the Maven project.
### Using the git repository
If you choose to use git, you will get all the code immediately. I still recommend you read through the manual set-up.
You can clone the repository if you like with the following command.
``` bash
git clone [email protected]:mongodb-developer/java-quick-start.git
```
Or you
can download the repository as a zip file.
### Setting up manually
You can either use your favorite IDE to create a new Maven project for you or you can create the Maven project manually.
Either way, you should get the following folder architecture:
``` none
java-quick-start/
├── pom.xml
└── src
└── main
└── java
└── com
└── mongodb
└── quickstart
```
The pom.xml file should contain the following code:
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.mongodb</groupId>
    <artifactId>java-quick-start</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven-compiler-plugin.source>21</maven-compiler-plugin.source>
        <maven-compiler-plugin.target>21</maven-compiler-plugin.target>
        <maven-compiler-plugin.version>3.12.1</maven-compiler-plugin.version>
        <mongodb-driver-sync.version>5.0.0</mongodb-driver-sync.version>
        <mongodb-crypt.version>1.8.0</mongodb-crypt.version>
        <logback-classic.version>1.2.13</logback-classic.version>
        <exec-maven-plugin.version>3.1.1</exec-maven-plugin.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-driver-sync</artifactId>
            <version>${mongodb-driver-sync.version}</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongodb-crypt</artifactId>
            <version>${mongodb-crypt.version}</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback-classic.version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>${maven-compiler-plugin.version}</version>
                <configuration>
                    <source>${maven-compiler-plugin.source}</source>
                    <target>${maven-compiler-plugin.target}</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>${exec-maven-plugin.version}</version>
                <configuration>
                    <!-- the original tag name was lost during extraction; cleanupDaemonThreads is an assumption -->
                    <cleanupDaemonThreads>false</cleanupDaemonThreads>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```
To verify that everything works correctly, you should be able to create and run a simple "Hello MongoDB!" program.
In `src/main/java/com/mongodb/quickstart`, create the `HelloMongoDB.java` file:
``` java
package com.mongodb.quickstart;
public class HelloMongoDB {
public static void main(String[] args) {
System.out.println("Hello MongoDB!");
}
}
```
Then compile and execute it with your IDE or use the command line in the root directory (where the `src` folder is):
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.HelloMongoDB"
```
The result should look like this:
``` none
[INFO] Scanning for projects...
[INFO]
[INFO] --------------------< com.mongodb:java-quick-start >--------------------
[INFO] Building java-quick-start 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ java-quick-start ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.12.1:compile (default-compile) @ java-quick-start ---
[INFO] Nothing to compile - all classes are up to date.
[INFO]
[INFO] --- exec-maven-plugin:3.1.1:java (default-cli) @ java-quick-start ---
Hello MongoDB!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.634 s
[INFO] Finished at: 2024-02-19T18:12:22+01:00
[INFO] ------------------------------------------------------------------------
```
## Connecting with Java
Now that our Maven project works, we have resolved our dependencies, we can start using MongoDB Atlas with Java.
If you have imported the sample dataset as suggested in
the Quick Start Atlas blog post, then with the Java code we are about
to create, you will be able to see a list of the databases in the sample dataset.
The first step is to instantiate a `MongoClient` by passing a MongoDB Atlas connection string into
the `MongoClients.create()` static method. This will establish a connection
to MongoDB Atlas using the connection string. Then we can retrieve the list of
databases on this cluster and print them out to test the connection with MongoDB.
As per the recommended best practices, I'm also doing a "pre-flight check" using the `{ping: 1}` admin command.
In `src/main/java/com/mongodb`, create the `Connection.java` file:
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;
import org.bson.json.JsonWriterSettings;
import java.util.ArrayList;
import java.util.List;
public class Connection {
public static void main(String[] args) {
String connectionString = System.getProperty("mongodb.uri");
try (MongoClient mongoClient = MongoClients.create(connectionString)) {
System.out.println("=> Connection successful: " + preFlightChecks(mongoClient));
System.out.println("=> Print list of databases:");
List<Document> databases = mongoClient.listDatabases().into(new ArrayList<>());
databases.forEach(db -> System.out.println(db.toJson()));
}
}
static boolean preFlightChecks(MongoClient mongoClient) {
Document pingCommand = new Document("ping", 1);
Document response = mongoClient.getDatabase("admin").runCommand(pingCommand);
System.out.println("=> Print result of the '{ping: 1}' command.");
System.out.println(response.toJson(JsonWriterSettings.builder().indent(true).build()));
return response.get("ok", Number.class).intValue() == 1;
}
}
```
As you can see, the MongoDB connection string is retrieved from the *System Properties*, so we need to set this up. Once
you have retrieved your MongoDB Atlas connection string, you
can add the `mongodb.uri` system property into your IDE. Here is my configuration with IntelliJ for example.
Or if you prefer to use Maven in command line, here is the equivalent command line you can run in the root directory:
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.Connection" -Dmongodb.uri="mongodb+srv://username:[email protected]/test?w=majority"
```
> Note: Don't forget the double quotes around the MongoDB URI to avoid surprises from your shell.
The standard output should look like this:
``` none
{"name": "admin", "sizeOnDisk": 303104.0, "empty": false}
{"name": "config", "sizeOnDisk": 147456.0, "empty": false}
{"name": "local", "sizeOnDisk": 5.44731136E8, "empty": false}
{"name": "sample_airbnb", "sizeOnDisk": 5.761024E7, "empty": false}
{"name": "sample_geospatial", "sizeOnDisk": 1384448.0, "empty": false}
{"name": "sample_mflix", "sizeOnDisk": 4.583424E7, "empty": false}
{"name": "sample_supplies", "sizeOnDisk": 1339392.0, "empty": false}
{"name": "sample_training", "sizeOnDisk": 7.4801152E7, "empty": false}
{"name": "sample_weatherdata", "sizeOnDisk": 5103616.0, "empty": false}
```
## Insert operations
### Getting set up
In the Connecting with Java section, we created the classes `HelloMongoDB` and `Connection`. Now we will work on
the `Create` class.
If you didn't set up your free cluster on MongoDB Atlas, now is great time to do so. Get the directions
for creating your cluster.
### Checking the collection and data model
In the sample dataset, you can find the database `sample_training`, which contains a collection `grades`. Each document
in this collection represents a student's grades for a particular class.
Here is the JSON representation of a document in the MongoDB shell.
``` bash
MongoDB Enterprise Cluster0-shard-0:PRIMARY> db.grades.findOne({student_id: 0, class_id: 339})
{
"_id" : ObjectId("56d5f7eb604eb380b0d8d8ce"),
"student_id" : 0,
"scores" :
{
"type" : "exam",
"score" : 78.40446309504266
},
{
"type" : "quiz",
"score" : 73.36224783231339
},
{
"type" : "homework",
"score" : 46.980982486720535
},
{
"type" : "homework",
"score" : 76.67556138656222
}
],
"class_id" : 339
}
```
And here is the extended JSON representation of the
same student. You can retrieve it in MongoDB Compass, our free GUI tool, if
you want.
Extended JSON is the human-readable version of a BSON document without loss of type information. You can read more about
the Java driver and
BSON in the MongoDB Java driver documentation.
``` json
{
"_id": {
"$oid": "56d5f7eb604eb380b0d8d8ce"
},
"student_id": {
"$numberDouble": "0"
},
"scores": {
"type": "exam",
"score": {
"$numberDouble": "78.40446309504266"
}
}, {
"type": "quiz",
"score": {
"$numberDouble": "73.36224783231339"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "46.980982486720535"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "76.67556138656222"
}
}],
"class_id": {
"$numberDouble": "339"
}
}
```
As you can see, MongoDB stores BSON documents and for each key-value pair, the BSON contains the key and the value along
with its type. This is how MongoDB knows that `class_id` is actually a double and not an integer, which is not explicit
in the mongo shell representation of this document.
We have 10,000 students (`student_id` from 0 to 9999) already in this collection and each of them took 10 different
classes, which adds up to 100,000 documents in this collection. Let's say a new student (`student_id` 10,000) just
arrived in this university and received a bunch of (random) grades in his first class. Let's insert this new student
document using Java and the MongoDB Java driver.
In this university, the `class_id` varies from 0 to 500, so I can use any random value between 0 and 500.
### Selecting databases and collections
Firstly, we need to set up our `Create` class and access this `sample_training.grades` collection.
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
public class Create {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
}
}
}
```
### Create a BSON document
Secondly, we need to represent this new student in Java using the `Document` class.
``` java
Random rand = new Random();
Document student = new Document("_id", new ObjectId());
student.append("student_id", 10000d)
.append("class_id", 1d)
.append("scores", List.of(new Document("type", "exam").append("score", rand.nextDouble() * 100),
new Document("type", "quiz").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100)));
```
As you can see, we reproduced the same data model from the existing documents in this collection as we made sure
that `student_id`, `class_id`, and `score` are all doubles.
Also, the Java driver would have generated the `_id` field with an ObjectId for us if we didn't explicitly create one
here, but it's good practice to set the `_id` ourselves. This won't change our life right now, but it makes more sense
when we directly manipulate POJOs, and we want to create a clean REST API. I'm doing this in
my mapping POJOs post.
Note as well that we are inserting a document into an existing collection and database, but if these didn't already
exist, MongoDB would automatically create them the first time you go to insert a document into the collection.
### Insert document
Finally, we can insert this document.
``` java
gradesCollection.insertOne(student);
```
### Final code to insert one document
Here is the final `Create` class to insert one document in MongoDB with all the details I mentioned above.
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.bson.types.ObjectId;
import java.util.List;
import java.util.Random;
public class Create {
public static void main(String] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
Random rand = new Random();
Document student = new Document("_id", new ObjectId());
student.append("student_id", 10000d)
.append("class_id", 1d)
.append("scores", List.of(new Document("type", "exam").append("score", rand.nextDouble() * 100),
new Document("type", "quiz").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100)));
gradesCollection.insertOne(student);
}
}
}
```
You can execute this class with the following Maven command line in the root directory or using your IDE (see above for
more details). Don't forget the double quotes around the MongoDB URI to avoid surprises.
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.Create" -Dmongodb.uri="mongodb+srv://USERNAME:[email protected]/test?w=majority"
```
And here is the document I extracted from MongoDB
Compass.
``` json
{
"_id": {
"$oid": "5d97c375ded5651ea3462d0f"
},
"student_id": {
"$numberDouble": "10000"
},
"class_id": {
"$numberDouble": "1"
},
"scores": {
"type": "exam",
"score": {
"$numberDouble": "4.615256396625178"
}
}, {
"type": "quiz",
"score": {
"$numberDouble": "73.06173415145801"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "19.378205578990727"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "82.3089189278531"
}
}]
}
```
Note that the order of the fields is different from the initial document with `"student_id": 0`.
We could get exactly the same order if we wanted to by creating the document like this.
``` java
Random rand = new Random();
Document student = new Document("_id", new ObjectId());
student.append("student_id", 10000d)
.append("scores", List.of(new Document("type", "exam").append("score", rand.nextDouble() * 100),
new Document("type", "quiz").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100)))
.append("class_id", 1d);
```
But if you do things correctly, this should not have any impact on your code and logic as fields in JSON documents are
not ordered.
I'm quoting json.org for this:
> An object is an unordered set of name/value pairs.
### Insert multiple documents
Now that we know how to create one document, let's learn how to insert many documents.
Of course, we could just wrap the previous `insert` operation into a `for` loop. Indeed, if we loop 10 times on this
method, we would send 10 insert commands to the cluster and expect 10 insert acknowledgments. As you can imagine, this
would not be very efficient as it would generate a lot more TCP communications than necessary.
Instead, we want to wrap our 10 documents and send them in one call to the cluster and we want to receive only one
insert acknowledgement for the entire list.
Let's refactor the code. First, let's make the random generator a `private static final` field.
``` java
private static final Random rand = new Random();
```
Let's make a grade factory method.
``` java
private static Document generateNewGrade(double studentId, double classId) {
List<Document> scores = List.of(new Document("type", "exam").append("score", rand.nextDouble() * 100),
new Document("type", "quiz").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100));
return new Document("_id", new ObjectId()).append("student_id", studentId)
.append("class_id", classId)
.append("scores", scores);
}
```
And now we can use this to insert 10 documents all at once.
``` java
List<Document> grades = new ArrayList<>();
for (double classId = 1d; classId <= 10d; classId++) {
grades.add(generateNewGrade(10001d, classId));
}
gradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));
```
As you can see, we are now wrapping our grade documents into a list and we are sending this list in a single call with
the `insertMany` method.
By default, the `insertMany` method will insert the documents in order and stop if an error occurs during the process.
For example, if you try to insert a new document with the same `_id` as an existing document, you would get
a `DuplicateKeyException`.
Therefore, with an ordered `insertMany`, the last documents of the list would not be inserted and the insertion process
would stop and return the appropriate exception as soon as the error occurs.
As you can see here, this is not the behaviour we want because all the grades are completely independent from one
another. So, if one of them fails, we want to process all the other grades anyway and only then surface an exception for
the ones that failed.
This is why you see the second parameter `new InsertManyOptions().ordered(false)`: the `ordered` option is true by default.
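If you do want to report the failures after an unordered `insertMany`, you can catch the bulk write exception. Here is a minimal sketch (the printing logic is just illustrative):
``` java
// Requires: import com.mongodb.MongoBulkWriteException;
try {
    gradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));
} catch (MongoBulkWriteException e) {
    // Every valid document has still been inserted; inspect the ones that failed.
    e.getWriteErrors().forEach(error ->
            System.out.println("Document at index " + error.getIndex() + " failed: " + error.getMessage()));
}
```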
### The final code to insert multiple documents
Let's refactor the code a bit and here is the final `Create` class.
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.InsertManyOptions;
import org.bson.Document;
import org.bson.types.ObjectId;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class Create {
private static final Random rand = new Random();
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
insertOneDocument(gradesCollection);
insertManyDocuments(gradesCollection);
}
}
private static void insertOneDocument(MongoCollection<Document> gradesCollection) {
gradesCollection.insertOne(generateNewGrade(10000d, 1d));
System.out.println("One grade inserted for studentId 10000.");
}
private static void insertManyDocuments(MongoCollection<Document> gradesCollection) {
List<Document> grades = new ArrayList<>();
for (double classId = 1d; classId <= 10d; classId++) {
grades.add(generateNewGrade(10001d, classId));
}
gradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));
System.out.println("Ten grades inserted for studentId 10001.");
}
private static Document generateNewGrade(double studentId, double classId) {
List<Document> scores = List.of(new Document("type", "exam").append("score", rand.nextDouble() * 100),
new Document("type", "quiz").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100),
new Document("type", "homework").append("score", rand.nextDouble() * 100));
return new Document("_id", new ObjectId()).append("student_id", studentId)
.append("class_id", classId)
.append("scores", scores);
}
}
```
As a reminder, every write operation (create, replace, update, delete) performed on a **single** document
is ACID in MongoDB, which means `insertMany` is not ACID by default. But, good
news, since MongoDB 4.0, we can wrap this call in a multi-document ACID transaction to make it fully ACID. I explain
this in more detail in my blog
about multi-document ACID transactions.
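For reference, here is a minimal sketch of what wrapping those inserts in a transaction could look like. The callback body is illustrative only; see the transactions post for the full pattern and error handling:
``` java
// Requires: import com.mongodb.client.ClientSession;
try (ClientSession session = mongoClient.startSession()) {
    session.withTransaction(() -> {
        // Pass the session so the inserts are part of the transaction.
        gradesCollection.insertMany(session, grades);
        return null; // the transaction body must return a value
    });
}
```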
## Read documents
### Create data
We created the class `Create`. Now we will work in the `Read` class.
We wrote 11 new grades, one for the student with `{"student_id": 10000}` and 10 for the student
with `{"student_id": 10001}` in the `sample_training.grades` collection.
As a reminder, here are the grades of the `{"student_id": 10000}`.
``` javascript
MongoDB Enterprise Cluster0-shard-0:PRIMARY> db.grades.findOne({"student_id":10000})
{
"_id" : ObjectId("5daa0e274f52b44cfea94652"),
"student_id" : 10000,
"class_id" : 1,
"scores" :
{
"type" : "exam",
"score" : 39.25175977753478
},
{
"type" : "quiz",
"score" : 80.2908713167313
},
{
"type" : "homework",
"score" : 63.5444978481843
},
{
"type" : "homework",
"score" : 82.35202261582563
}
]
}
```
We also discussed BSON types, and we noted that `student_id` and `class_id` are doubles.
MongoDB treats some types as equivalent for comparison purposes. For instance, numeric types undergo conversion before
comparison.
So, don't be surprised if I filter with an integer number and match a document which contains a double number for
example. If you want to filter documents by value types, you can use
the `$type` operator.
You can read more
about type bracketing
and comparison and sort order in our
documentation.
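As a hedged illustration of that last point, the `Filters.type()` builder lets you restrict a match to a specific BSON type (the field and type below are just examples):
``` java
// Requires: import org.bson.BsonType; (plus the usual Filters static imports)
// eq("student_id", 10000) uses an int but still matches the stored double because
// numeric types are compared after conversion. Adding Filters.type() restricts the
// match to documents where the field is actually stored as a double.
Document doubleOnly = gradesCollection.find(
        and(eq("student_id", 10000), Filters.type("student_id", BsonType.DOUBLE))).first();
```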
### Read a specific document
Let's read the document above. To achieve this, we will use the method `find`, passing it a filter to help identify the
document we want to find.
Please create a class `Read` in the `com.mongodb.quickstart` package with this code:
``` java
package com.mongodb.quickstart;
import com.mongodb.client.*;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;
import static com.mongodb.client.model.Filters.*;
import static com.mongodb.client.model.Projections.*;
import static com.mongodb.client.model.Sorts.descending;
public class Read {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// find one document with new Document
Document student1 = gradesCollection.find(new Document("student_id", 10000)).first();
System.out.println("Student 1: " + student1.toJson());
}
}
}
```
Also, make sure you set up your `mongodb.uri` in your system properties using your IDE if you want to run this code in
your favorite IDE.
Alternatively, you can use this Maven command line in your root project (where the `src` folder is):
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.Read" -Dmongodb.uri="mongodb+srv://USERNAME:[email protected]/test?w=majority"
```
The standard output should be:
``` javascript
Student 1: {"_id": {"$oid": "5daa0e274f52b44cfea94652"},
"student_id": 10000.0,
"class_id": 1.0,
"scores": [
{"type": "exam", "score": 39.25175977753478},
{"type": "quiz", "score": 80.2908713167313},
{"type": "homework", "score": 63.5444978481843},
{"type": "homework", "score": 82.35202261582563}
]
}
```
The MongoDB driver comes with a few helpers to ease the writing of these queries. Here's an equivalent query using
the `Filters.eq()` method.
``` java
gradesCollection.find(eq("student_id", 10000)).first();
```
Of course, I used a static import to make the code as compact and easy to read as possible.
``` java
import static com.mongodb.client.model.Filters.eq;
```
### Read a range of documents
In the previous example, the benefit of these helpers is not obvious, but let me show you another example where I'm
searching all the grades with a *student_id* greater than or equal to 10,000.
``` java
// without helpers
gradesCollection.find(new Document("student_id", new Document("$gte", 10000)));
// with the Filters.gte() helper
gradesCollection.find(gte("student_id", 10000));
```
As you can see, I'm using the `$gte` operator to write this query. You can learn about all the
different query operators in the MongoDB documentation.
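To give a feel for a few more of these helpers, here is a small hedged sketch (the field values are illustrative):
``` java
// Requires: import org.bson.conversions.Bson; (plus the usual Filters static imports)
Bson inFilter = in("class_id", 1d, 5d, 10d);                        // $in
Bson existsFilter = exists("comments", true);                       // $exists
Bson combined = and(gte("student_id", 10000), ne("class_id", 2d));  // $gte + $ne
List<Document> results = gradesCollection.find(combined).into(new ArrayList<>());
```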
### Iterators
The `find` method returns an object that implements the interface `FindIterable`, which ultimately extends
the `Iterable` interface, so we can use an iterator to go through the list of documents we are receiving from MongoDB:
``` java
FindIterable<Document> iterable = gradesCollection.find(gte("student_id", 10000));
MongoCursor<Document> cursor = iterable.iterator();
System.out.println("Student list with cursor: ");
while (cursor.hasNext()) {
System.out.println(cursor.next().toJson());
}
```
### Lists
Lists are usually easier to manipulate than iterators, so we can also retrieve the results directly
into an `ArrayList`:
``` java
List<Document> studentList = gradesCollection.find(gte("student_id", 10000)).into(new ArrayList<>());
System.out.println("Student list with an ArrayList:");
for (Document student : studentList) {
System.out.println(student.toJson());
}
```
### Consumers
We could also use a `Consumer` which is a functional interface:
``` java
Consumer<Document> printConsumer = document -> System.out.println(document.toJson());
gradesCollection.find(gte("student_id", 10000)).forEach(printConsumer);
```
### Cursors, sort, skip, limit, and projections
As we saw above with the `Iterator` example, MongoDB
leverages cursors to iterate through your result set.
If you are already familiar with the cursors in the mongo shell, you know
that transformations can be applied to it. A cursor can
be sorted and the documents it contains can be
transformed using a projection. Also,
once the cursor is sorted, we can choose to skip a few documents and limit the number of documents in the output. This
is very useful to implement pagination in your frontend for example.
Let's combine everything we have learnt in one query:
``` java
List<Document> docs = gradesCollection.find(and(eq("student_id", 10001), lte("class_id", 5)))
.projection(fields(excludeId(),
include("class_id",
"student_id")))
.sort(descending("class_id"))
.skip(2)
.limit(2)
.into(new ArrayList<>());
System.out.println("Student sorted, skipped, limited and projected: ");
for (Document student : docs) {
System.out.println(student.toJson());
}
```
Here is the output we get:
``` javascript
{"student_id": 10001.0, "class_id": 3.0}
{"student_id": 10001.0, "class_id": 2.0}
```
Remember that documents are returned in
the natural order, so if you want your output
ordered, you need to sort your cursors to make sure there is no randomness in your algorithm.
### Indexes
If you want to make these queries (with or without sort) efficient,
**you need** indexes!
To make my last query efficient, I should create this index:
``` javascript
db.grades.createIndex({"student_id": 1, "class_id": -1})
```
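If you prefer to create that index from Java rather than the mongo shell, the driver's `Indexes` builders express the same compound index. A minimal sketch:
``` java
// Requires: import com.mongodb.client.model.Indexes;
String indexName = gradesCollection.createIndex(
        Indexes.compoundIndex(Indexes.ascending("student_id"), Indexes.descending("class_id")));
System.out.println("Created index: " + indexName); // student_id_1_class_id_-1
```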
When I run an explain on this query, this is the
winning plan I get:
``` javascript
"winningPlan" : {
"stage" : "LIMIT",
"limitAmount" : 2,
"inputStage" : {
"stage" : "PROJECTION_COVERED",
"transformBy" : {
"_id" : 0,
"class_id" : 1,
"student_id" : 1
},
"inputStage" : {
"stage" : "SKIP",
"skipAmount" : 2,
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"student_id" : 1,
"class_id" : -1
},
"indexName" : "student_id_1_class_id_-1",
"isMultiKey" : false,
"multiKeyPaths" : {
"student_id" : ],
"class_id" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"student_id" : [
"[10001.0, 10001.0]"
],
"class_id" : [
"[5.0, -inf.0]"
]
}
}
}
}
}
```
With this index, we can see that we have no *SORT* stage, so we are not doing a sort in memory as the documents are
already sorted "for free" and returned in the order of the index.
Also, we can see that we don't have any *FETCH* stage, so this is
a covered query, the most efficient type of
query you can run in MongoDB. Indeed, all the information we are returning at the end is already in the index, so the
index itself contains everything we need to answer this query.
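You can also ask for the query plan directly from the Java driver. Recent driver versions expose `explain()` on the find iterable; here is a quick sketch:
``` java
// Hedged sketch: runs the same query and prints the server's explain output.
Document explanation = gradesCollection.find(and(eq("student_id", 10001), lte("class_id", 5)))
                                       .projection(fields(excludeId(), include("class_id", "student_id")))
                                       .sort(descending("class_id"))
                                       .skip(2)
                                       .limit(2)
                                       .explain();
System.out.println(explanation.toJson());
```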
### The final code to read documents
``` java
package com.mongodb.quickstart;
import com.mongodb.client.*;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import static com.mongodb.client.model.Filters.*;
import static com.mongodb.client.model.Projections.*;
import static com.mongodb.client.model.Sorts.descending;
public class Read {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// find one document with new Document
Document student1 = gradesCollection.find(new Document("student_id", 10000)).first();
System.out.println("Student 1: " + student1.toJson());
// find one document with Filters.eq()
Document student2 = gradesCollection.find(eq("student_id", 10000)).first();
System.out.println("Student 2: " + student2.toJson());
// find a list of documents and iterate throw it using an iterator.
FindIterable<Document> iterable = gradesCollection.find(gte("student_id", 10000));
MongoCursor<Document> cursor = iterable.iterator();
System.out.println("Student list with a cursor: ");
while (cursor.hasNext()) {
System.out.println(cursor.next().toJson());
}
// find a list of documents and use a List object instead of an iterator
List<Document> studentList = gradesCollection.find(gte("student_id", 10000)).into(new ArrayList<>());
System.out.println("Student list with an ArrayList:");
for (Document student : studentList) {
System.out.println(student.toJson());
}
// find a list of documents and print using a consumer
System.out.println("Student list using a Consumer:");
Consumer<Document> printConsumer = document -> System.out.println(document.toJson());
gradesCollection.find(gte("student_id", 10000)).forEach(printConsumer);
// find a list of documents with sort, skip, limit and projection
List<Document> docs = gradesCollection.find(and(eq("student_id", 10001), lte("class_id", 5)))
.projection(fields(excludeId(), include("class_id", "student_id")))
.sort(descending("class_id"))
.skip(2)
.limit(2)
.into(new ArrayList<>());
System.out.println("Student sorted, skipped, limited and projected:");
for (Document student : docs) {
System.out.println(student.toJson());
}
}
}
}
```
## Update documents
### Update one document
Let's edit the document with `{student_id: 10000}`. To achieve this, we will use the method `updateOne`.
Please create a class `Update` in the `com.mongodb.quickstart` package with this code:
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.ReturnDocument;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;
import org.bson.conversions.Bson;
import org.bson.json.JsonWriterSettings;
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.*;
public class Update {
public static void main(String[] args) {
JsonWriterSettings prettyPrint = JsonWriterSettings.builder().indent(true).build();
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// update one document
Bson filter = eq("student_id", 10000);
Bson updateOperation = set("comment", "You should learn MongoDB!");
UpdateResult updateResult = gradesCollection.updateOne(filter, updateOperation);
System.out.println("=> Updating the doc with {\"student_id\":10000}. Adding comment.");
System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));
System.out.println(updateResult);
}
}
}
```
As you can see in this example, the method `updateOne` takes two parameters:
- The first one is the filter that identifies the document we want to update.
- The second one is the update operation. Here, we are setting a new field `comment` with the
value `"You should learn MongoDB!"`.
In order to run this program, make sure you set up your `mongodb.uri` in your system properties using your IDE if you
want to run this code in your favorite IDE (see above for more details).
Alternatively, you can use this Maven command line in your root project (where the `src` folder is):
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.Update" -Dmongodb.uri="mongodb+srv://USERNAME:[email protected]/test?w=majority"
```
The standard output should look like this:
``` javascript
=> Updating the doc with {"student_id":10000}. Adding comment.
{
"_id": {
"$oid": "5dd5c1f351f97d4a034109ed"
},
"student_id": 10000.0,
"class_id": 1.0,
"scores": [
{
"type": "exam",
"score": 21.580800815091415
},
{
"type": "quiz",
"score": 87.66967927111044
},
{
"type": "homework",
"score": 96.4060480668003
},
{
"type": "homework",
"score": 75.44966835508427
}
],
"comment": "You should learn MongoDB!"
}
AcknowledgedUpdateResult{matchedCount=1, modifiedCount=1, upsertedId=null}
```
### Upsert a document
An upsert is a mix between an insert operation and an update one. It happens when you want to update a document,
assuming it exists, but it actually doesn't exist yet in your database.
In MongoDB, you can set an option to create this document on the fly and carry on with your update operation. This is an
upsert operation.
In this example, I want to add a comment to the grades of my student 10002 for the class 10 but this document doesn't
exist yet.
``` java
filter = and(eq("student_id", 10002d), eq("class_id", 10d));
updateOperation = push("comments", "You will learn a lot if you read the MongoDB blog!");
UpdateOptions options = new UpdateOptions().upsert(true);
updateResult = gradesCollection.updateOne(filter, updateOperation, options);
System.out.println("\n=> Upsert document with {\"student_id\":10002.0, \"class_id\": 10.0} because it doesn't exist yet.");
System.out.println(updateResult);
System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));
```
As you can see, I'm using the third parameter of the update operation to set the option upsert to true.
I'm also using the static method `Updates.push()` to push a new value in my array `comments` which does not exist yet,
so I'm creating an array of one element in this case.
This is the output we get:
``` javascript
=> Upsert document with {"student_id":10002.0, "class_id": 10.0} because it doesn't exist yet.
AcknowledgedUpdateResult{matchedCount=0, modifiedCount=0, upsertedId=BsonObjectId{value=5ddeb7b7224ad1d5cfab3733}}
{
"_id": {
"$oid": "5ddeb7b7224ad1d5cfab3733"
},
"class_id": 10.0,
"student_id": 10002.0,
"comments": [
"You will learn a lot if you read the MongoDB blog!"
]
}
```
### Update many documents
The same way I was able to update one document with `updateOne()`, I can update multiple documents with `updateMany()`.
``` java
filter = eq("student_id", 10001);
updateResult = gradesCollection.updateMany(filter, updateOperation);
System.out.println("\n=> Updating all the documents with {\"student_id\":10001}.");
System.out.println(updateResult);
```
In this example, I'm using the same `updateOperation` as earlier, so I'm creating a new one-element array `comments` in
these 10 documents.
Here is the output:
``` javascript
=> Updating all the documents with {"student_id":10001}.
AcknowledgedUpdateResult{matchedCount=10, modifiedCount=10, upsertedId=null}
```
### The findOneAndUpdate method
Finally, we have one last very useful method available in the MongoDB Java Driver: `findOneAndUpdate()`.
In most web applications, when a user updates something, they want to see this update reflected on their web page.
Without the `findOneAndUpdate()` method, you would have to run an update operation and then fetch the document with a
find operation to make sure you are printing the latest version of this object in the web page.
The `findOneAndUpdate()` method allows you to combine these two operations in one.
``` java
// findOneAndUpdate
filter = eq("student_id", 10000);
Bson update1 = inc("x", 10); // increment x by 10. As x doesn't exist yet, x=10.
Bson update2 = rename("class_id", "new_class_id"); // rename variable "class_id" in "new_class_id".
Bson update3 = mul("scores.0.score", 2); // multiply the first score in the array by 2.
Bson update4 = addToSet("comments", "This comment is uniq"); // creating an array with a comment.
Bson update5 = addToSet("comments", "This comment is uniq"); // using addToSet so no effect.
Bson updates = combine(update1, update2, update3, update4, update5);
// returns the old version of the document before the update.
Document oldVersion = gradesCollection.findOneAndUpdate(filter, updates);
System.out.println("\n=> FindOneAndUpdate operation. Printing the old version by default:");
System.out.println(oldVersion.toJson(prettyPrint));
// but I can also request the new version
filter = eq("student_id", 10001);
FindOneAndUpdateOptions optionAfter = new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER);
Document newVersion = gradesCollection.findOneAndUpdate(filter, updates, optionAfter);
System.out.println("\n=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:");
System.out.println(newVersion.toJson(prettyPrint));
```
Here is the output:
``` javascript
=> FindOneAndUpdate operation. Printing the old version by default:
{
"_id": {
"$oid": "5dd5d46544fdc35505a8271b"
},
"student_id": 10000.0,
"class_id": 1.0,
"scores": [
{
"type": "exam",
"score": 69.52994626959251
},
{
"type": "quiz",
"score": 87.27457417188077
},
{
"type": "homework",
"score": 83.40970667948744
},
{
"type": "homework",
"score": 40.43663797673247
}
],
"comment": "You should learn MongoDB!"
}
=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:
{
"_id": {
"$oid": "5dd5d46544fdc35505a82725"
},
"student_id": 10001.0,
"scores": [
{
"type": "exam",
"score": 138.42535412437857
},
{
"type": "quiz",
"score": 84.66740178906916
},
{
"type": "homework",
"score": 36.773091359279675
},
{
"type": "homework",
"score": 14.90842128691825
}
],
"comments": [
"You will learn a lot if you read the MongoDB blog!",
"This comment is uniq"
],
"new_class_id": 10.0,
"x": 10
}
```
As you can see in this example, you can choose which version of the document you want to return using the appropriate
option.
I also used this example to show you a bunch of update operators:
- `set` will set a value.
- `inc` will increment a value.
- `rename` will rename a field.
- `mul` will multiply the value by the given number.
- `addToSet` is similar to push but will only push the value in the array if the value doesn't exist already.
There are a few other update operators. You can consult the entire list in
our documentation.
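As a quick hedged sketch of a few of those other builders (the field names reuse the ones from this example):
``` java
// A few more Updates builders combined into a single update. Values are illustrative.
Bson moreUpdates = combine(unset("comment"),             // $unset: removes the field entirely
                           min("x", 5),                  // $min: writes 5 only if it is lower than the current value
                           currentDate("lastModified")); // $currentDate: sets the field to the current date
UpdateResult result = gradesCollection.updateOne(eq("student_id", 10001), moreUpdates);
```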
### The final code for updates
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.ReturnDocument;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;
import org.bson.conversions.Bson;
import org.bson.json.JsonWriterSettings;
import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.*;
public class Update {
public static void main(String[] args) {
JsonWriterSettings prettyPrint = JsonWriterSettings.builder().indent(true).build();
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// update one document
Bson filter = eq("student_id", 10000);
Bson updateOperation = set("comment", "You should learn MongoDB!");
UpdateResult updateResult = gradesCollection.updateOne(filter, updateOperation);
System.out.println("=> Updating the doc with {\"student_id\":10000}. Adding comment.");
System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));
System.out.println(updateResult);
// upsert
filter = and(eq("student_id", 10002d), eq("class_id", 10d));
updateOperation = push("comments", "You will learn a lot if you read the MongoDB blog!");
UpdateOptions options = new UpdateOptions().upsert(true);
updateResult = gradesCollection.updateOne(filter, updateOperation, options);
System.out.println("\n=> Upsert document with {\"student_id\":10002.0, \"class_id\": 10.0} because it doesn't exist yet.");
System.out.println(updateResult);
System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));
// update many documents
filter = eq("student_id", 10001);
updateResult = gradesCollection.updateMany(filter, updateOperation);
System.out.println("\n=> Updating all the documents with {\"student_id\":10001}.");
System.out.println(updateResult);
// findOneAndUpdate
filter = eq("student_id", 10000);
Bson update1 = inc("x", 10); // increment x by 10. As x doesn't exist yet, x=10.
Bson update2 = rename("class_id", "new_class_id"); // rename variable "class_id" in "new_class_id".
Bson update3 = mul("scores.0.score", 2); // multiply the first score in the array by 2.
Bson update4 = addToSet("comments", "This comment is uniq"); // creating an array with a comment.
Bson update5 = addToSet("comments", "This comment is uniq"); // using addToSet so no effect.
Bson updates = combine(update1, update2, update3, update4, update5);
// returns the old version of the document before the update.
Document oldVersion = gradesCollection.findOneAndUpdate(filter, updates);
System.out.println("\n=> FindOneAndUpdate operation. Printing the old version by default:");
System.out.println(oldVersion.toJson(prettyPrint));
// but I can also request the new version
filter = eq("student_id", 10001);
FindOneAndUpdateOptions optionAfter = new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER);
Document newVersion = gradesCollection.findOneAndUpdate(filter, updates, optionAfter);
System.out.println("\n=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:");
System.out.println(newVersion.toJson(prettyPrint));
}
}
}
```
## Delete documents
### Delete one document
Let's delete the document above. To achieve this, we will use the method `deleteOne`.
Please create a class `Delete` in the `com.mongodb.quickstart` package with this code:
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;
import org.bson.conversions.Bson;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.gte;
public class Delete {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// delete one document
Bson filter = eq("student_id", 10000);
DeleteResult result = gradesCollection.deleteOne(filter);
System.out.println(result);
}
}
}
```
As you can see in this example, the method `deleteOne` only takes one parameter: a filter, just like the `find()`
operation.
In order to run this program, make sure you set up your `mongodb.uri` in your system properties using your IDE if you
want to run this code in your favorite IDE (see above for more details).
Alternatively, you can use this Maven command line in your root project (where the `src` folder is):
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.Delete" -Dmongodb.uri="mongodb+srv://USERNAME:[email protected]/test?w=majority"
```
The standard output should look like this:
``` javascript
AcknowledgedDeleteResult{deletedCount=1}
```
### FindOneAndDelete()
Are you emotionally attached to your document and want a chance to see it one last time before it's too late? We have
what you need.
The method `findOneAndDelete()` allows you to retrieve a document and delete it in a single atomic operation.
Here is how it works:
``` java
Bson filter = eq("student_id", 10002);
Document doc = gradesCollection.findOneAndDelete(filter);
System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));
```
Here is the output we get:
``` javascript
{
"_id": {
"$oid": "5ddec378224ad1d5cfac02b8"
},
"class_id": 10.0,
"student_id": 10002.0,
"comments": [
"You will learn a lot if you read the MongoDB blog!"
]
}
```
### Delete many documents
This time we will use `deleteMany()` instead of `deleteOne()` and we will use a different filter to match more
documents.
``` java
Bson filter = gte("student_id", 10000);
DeleteResult result = gradesCollection.deleteMany(filter);
System.out.println(result);
```
As a reminder, you can learn more about all the query selectors in our
documentation.
This is the output we get:
``` javascript
AcknowledgedDeleteResult{deletedCount=10}
```
### Delete a collection
Deleting all the documents from a collection will not delete the collection itself because a collection also contains
metadata like the index definitions or the chunk distribution if your collection is sharded for example.
If you want to remove the entire collection **and** all the metadata associated with it, then you need to use
the `drop()` method.
``` java
gradesCollection.drop();
```
### The final code for delete operations
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;
import org.bson.conversions.Bson;
import org.bson.json.JsonWriterSettings;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.gte;
public class Delete {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create(System.getProperty("mongodb.uri"))) {
MongoDatabase sampleTrainingDB = mongoClient.getDatabase("sample_training");
MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection("grades");
// delete one document
Bson filter = eq("student_id", 10000);
DeleteResult result = gradesCollection.deleteOne(filter);
System.out.println(result);
// findOneAndDelete operation
filter = eq("student_id", 10002);
Document doc = gradesCollection.findOneAndDelete(filter);
System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));
// delete many documents
filter = gte("student_id", 10000);
result = gradesCollection.deleteMany(filter);
System.out.println(result);
// delete the entire collection and its metadata (indexes, chunk metadata, etc).
gradesCollection.drop();
}
}
}
```
## Wrapping up
With this blog post, we have covered all the basic operations (create, read, update, and delete) and have also seen how we can
easily use powerful functions available in the Java driver for MongoDB. You can find the links to the other blog posts
of this series just below.
> If you want to learn more and deepen your knowledge faster, I recommend you check out the "MongoDB Java
> Developer Path" available for free on [MongoDB University.
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to use MongoDB with Java in this tutorial on CRUD operations with example code and walkthrough!",
"contentType": "Quickstart"
} | Getting Started with MongoDB and Java - CRUD Operations Tutorial | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-meetup-javascript-react-native | created | # Realm Meetup - Realm JavaScript for React Native Applications
Didn't get a chance to attend the Realm JavaScript for React Native applications Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
:youtube]{vid=6nqMCAR_v7U}
In this event, recorded on June 10th, Andrew Meyer, Software Engineer on the Realm JavaScript team, walks us through the React Native ecosystem as it relates to persisting data with Realm. We discuss things to consider when using React Native, best practices to implement and gotchas to avoid, as well as what's next for the JavaScript team at Realm.
In this 55-minute recording, Andrew spends about 45 minutes presenting
- React Native Overview & Benefits
- React Native Key Concepts and Architecture
- Realm Integration with React Native
- Realm Best Practices / Tips&Tricks with React Native
After this, we have about 10 minutes of live Q&A with Ian & Andrew and our community. For those of you who prefer to read, below we have a full transcript of the meetup too.
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
### Transcript
(*As this is verbatim, please excuse any typos or punctuation errors!*)
**Ian:**
I'm Ian Ward. I'm a product manager that focuses on the Realm SDKs. And with me today, I'm joined by Andrew Meyer, who is an engineer on our React Native development team, and who is focusing on a lot of the improvements we're looking to make for the React Native SDK in the future. And so I just went through a few slides here to just kick it off. So we've been running these user group sessions for a while now, we have some upcoming meetups next week we are going to be joined by a AWS engineer to talk about how to integrate MongoDB Realm, our serverless platform with AWS EventBridge. A couple of weeks after that, we will also be joined by the Swift team to talk about some of the new improvements they've made to the SDK and developer experience. So that's key path filtering as well as automatic open for Realms.
We also have MongoDB.live, which is happening on July 13th and 14th. This is a free virtual event and we will have a whole track set up for Realm and mobile development. So if you are interested in mobile development, which I presume you are, if you're here, you can sign up for that. No knowledge or experience with MongoDB is necessary to learn something from some of these sessions that we're going to have.
A little bit of housekeeping here. So we're using this Bevy platform. You'll see, in the web view here that there's a chat. If you have questions during the program, feel free to type in the question right there, and we'll look to answer it if we can in the chat, as well as we're going to have after Andrew goes through his presentation, we're going to have a Q&A session. So we'll go through some of those questions that have been accumulating across the presentation. And then at the end you can also ask other questions as well. We'll go through each one of those as well. If you'd like to get more connected we have our developer hub. This is our developer blog, We post a bunch of developer focused articles there. Please check that out at developer.mongodb.com, many of them are mobile focus.
So if you have questions on Swift UI, if you have questions on Kotlin multi-platform we have articles for you. If you have a question yourself come to forums.realm.io and ask a question, we patrol that regularly and answer a lot of those questions. And of course our Twitter @realm please follow us. And if you're interested in getting Swag please tweet about us. Let us know your comments, thoughts, especially about this program that you're watching right now. We would love to give away Swag and we'd love to see the community talk about us in the Twitter sphere. And without further ado, I'll stop sharing my screen here and pass it over to Andrew. Andrew, floor's yours.
**Andrew:**
Hello. Just one second, I'll go find my slides. Okay. I think that looks good. So hello. My name is Andrew Meyer, I'm a software engineer at MongoDB on the Realm-JS team, just joined in February. I have been working with React Native for the past four years. In my last job I worked for HORNBACH, which is one of the largest hardware stores in Germany making the HORNBACH shopping app. It also allowed you to scan barcodes in the store and you could fill up a cart and checkout in the store and everything. So I like React Native, I've been using it as I said for four years and I'm excited to talk to you all about it. So my presentation is called Realm JavaScript for React Native applications. I'm hoping that it inspires you if you haven't used React Native to give it a shot and hopefully use realm for your data and persistence.
Let's get started. So my agenda today, I'm going to go over React Native. I'm also going to go over some key concepts in React. We're going to go over how to integrate Realm with React Native, some best practices and tips when using Realm with React Native. And I'm also going to go over some upcoming changes to our API. So what is React Native? I think we've got a pretty mixed group. I'm not sure how many of you are actually React Native developers now or not. But I'm going to just assume that you don't know what React Native is and I'm going to give you a quick overview. So React Native is a framework made from Facebook. It's a cross platform app development library, you can basically use it for developing both Android and iOS applications, but it doesn't end there; there is also the ability to make desktop applications with React Native windows and React Native Mac OS.
It's pretty nice because with one team you can basically get your entire application development done. As it is written in JavaScript, if your backend is written in Node.JS, then you don't have a big context switch from jumping from front end development to back end development. So at my last job I think a lot of us started as front end developers, but by the end of a couple of years, we basically were full stack developers. So we were constantly going back and forth from front end to backend. And it was pretty easy, it's really a huge context switch when you have to jump into something like Ruby or Java, and then go back to JavaScript and yeah, it takes more time. So basically you just stay in one spot, but when you're using a JavaScript for the full stack you can hop back and forth really fast.
Another cool feature about React Native is fast refresh. This was implemented a few years ago, basically, as you develop your code, you can see the changes real time in your simulator, actually on your hardware as well. It can actually handle multiple simulators and hardware at the same time. I've tested Android, iOS phones in multiple languages and sizes and was able to see my front end changes happen in real time. So that's super useful if you've ever done a native development in iOS or an Android, you have to compile your changes and that takes quite a bit of time.
So this is an example of what a component looks like in React Native. If you're familiar with HTML and CSS it's really not a big jump to use React Native, basically you have a view and you apply styles to it, which is a JavaScript object that looks eerily similar to CSS except that it has camelCase instead of dash-case. One thing is if you do use React Native, you are going to want to become a very big friend of Flexbox. They use Flex quite adamantly, there's no CSS grid or anything like that, but I've been able to get pretty much anything I need to get done using Flexbox. So this is just a basic example of how that looks.
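(*For readers following along without the slides, a minimal sketch of the kind of component described here; the component and style names are illustrative, not taken from the talk.*)
```javascript
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

// A centered box styled with a JavaScript object that mirrors CSS, camelCased
const Card = () => (
  <View style={styles.container}>
    <Text style={styles.title}>Hello from React Native</Text>
  </View>
);

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center', // Flexbox does the layout work
    alignItems: 'center',
    backgroundColor: '#ffffff',
  },
  title: {
    fontSize: 18,
  },
});

export default Card;
```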
So, we're going to move on to React. So the React portion of React Native; it is using the React framework under the hood which is a front end web development framework. Key stomped concepts about React are JSX, that's what we just saw over here in the last example, this is JSX basically it's HTML and JavaScript. It resolves to basically a function call that will manipulate the DOM. If you're doing front end development and React Native world, it's actually going to bridge over into objective C for iOS and Java for Android. So that's one concept of it. The next is properties, pretty much every component you write is going to have properties and they're going to be managed by state. State is very important too. You can make basically say a to-do list and you need to have a state that's saving all its items and you need to be able to manipulate that. And if you manipulate that state, then it will re-render any changes through the properties that you pass down to sub components. And I'll show you an example of that now.
So this is an example of some React code. This is basically just a small piece of text to the button that allows you to change it from lowercase to uppercase. This is an example of a class component. There's actually two ways that you can make components in React, class components and functional components. So this is an example of how you do it with a class component, basically you make an instructor where you set your initial state then you have a rendering function that returns JSX. So this JSX reacts on that state. So in this case, we have a toUpper state with just a Boolean. If I change this toUpper Boolean to true, then that's going to change the text property that was passed in to uppercase or lowercase. And that'll be displayed here in the text. To set that state, I call this dot set state and basically just toggle that Boolean from true to false or false to true, depending on what state it's in.
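(*A rough sketch of the class component pattern described above; `ToggleText` and its props are illustrative.*)
```javascript
import React from 'react';
import { View, Text, Button } from 'react-native';

class ToggleText extends React.Component {
  constructor(props) {
    super(props);
    this.state = { toUpper: false }; // initial state lives in the constructor
  }

  render() {
    const { text } = this.props;
    return (
      <View>
        <Text>{this.state.toUpper ? text.toUpperCase() : text.toLowerCase()}</Text>
        <Button
          title="Toggle case"
          onPress={() => this.setState({ toUpper: !this.state.toUpper })}
        />
      </View>
    );
  }
}

export default ToggleText;
```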
So, as I said, this is class components. There's a lot more to this. Basically there's some of these life cycle methods that you had to override. You could basically before your component is mounted, make a network request and maybe initiate your state with some of that data. Or if you need to talk to a database that's where you would handle that. There's also a lot of... Yeah, the lifecycle methods get pretty confusing and that's why I'm going to move on to functional components, which are quite simpler. Before we could use this, but I think three years ago React introduced hooks, which is a way that we can do state management with functional programming. This gets rid of all those life cycle methods that are a bit confusing to know what's happening when.
So this is an example of what a functional component looks like. It's a lot less code, your state is actually being handled by a function and this function is called useState. Basically, you initialize it with some state and you get back that state and a function to set that state with. So in this case, I can look at that toUpper Boolean here and call this function to change that state. I want to go back real quick, that's how it was looking before, and that's how it is now. So I'm just going to go quickly through some of the hooks that are available to you because these are pretty much the basics of what you need to work with React and React Native. So as I talked about useState before this is just an example of showing a modal, but it's not too different than changing the case of a text.
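(*The same toggle, sketched as a functional component with `useState`; again, the names are illustrative.*)
```javascript
import React, { useState } from 'react';
import { View, Text, Button } from 'react-native';

const ToggleText = ({ text }) => {
  // useState returns the current value and a setter for it
  const [toUpper, setToUpper] = useState(false);

  return (
    <View>
      <Text>{toUpper ? text.toUpperCase() : text.toLowerCase()}</Text>
      <Button title="Toggle case" onPress={() => setToUpper(!toUpper)} />
    </View>
  );
};

export default ToggleText;
```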
So basically you'd be able to press a button and say, show this modal, you pass that in as a property to your modal. And you could actually pass that set modal, visible function to your modal components so that something inside of that modal can close that. And if you don't know what the modal is, it's basically an overlay that shows up on top of your app.
So then the next one is called useEffect. This is basically going to replace all your life cycle methods that I talked about before. And what you can do with useEffect is basically subscribe to changes that are happening. So that could be either in the state or some properties that are being passed down. There's an array at the end that you provide with the dependencies and every time something changes this function will be called. In this case, it's just an empty array, which basically means call this once and never call it again. This would be if you need to initialize your state with some data that's stored in this case in persistent storage then you'd be able to get that data out and store it to your state. We're going to see a lot more of this in the next slides.
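(*A minimal sketch of a one-time `useEffect`; `loadItemsFromStorage` is a placeholder for whatever persistence call your app uses.*)
```javascript
import React, { useEffect, useState } from 'react';
import { View, Text } from 'react-native';

const ItemCount = ({ loadItemsFromStorage }) => {
  const [items, setItems] = useState([]);

  useEffect(() => {
    // Empty dependency array: run once after the first render, never again
    setItems(loadItemsFromStorage());
  }, []);

  return (
    <View>
      <Text>{items.length} items loaded</Text>
    </View>
  );
};
```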
UseContext is super useful. It's a bit confusing, but this is showing how to use basically a provider pattern to apply a darker light mode to your application. So basically you would define the styles that you want to apply for your component. You create your context with the default state and that create gives you a context that you can call the provider on, and then you can set that value. So this one's basically overriding that light with dark, but maybe you have some sort of functionality or a switch that would change this value of state and change it on the fly. And then if you wrap your component or your entire application with this provider, then you can use the useContext hook to basically get that value out.
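(*A sketch of the provider pattern described here, using an assumed light/dark theme object; the names are illustrative.*)
```javascript
import React, { createContext, useContext } from 'react';
import { Text } from 'react-native';

const themes = {
  light: { backgroundColor: '#ffffff', color: '#000000' },
  dark: { backgroundColor: '#000000', color: '#ffffff' },
};

// The default value is used when no provider wraps the component
const ThemeContext = createContext(themes.light);

const ThemedLabel = ({ children }) => {
  const theme = useContext(ThemeContext); // reads the nearest provider's value
  return <Text style={theme}>{children}</Text>;
};

const App = () => (
  <ThemeContext.Provider value={themes.dark}>
    <ThemedLabel>Deep in the tree, still dark</ThemedLabel>
  </ThemeContext.Provider>
);

export default App;
```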
So this could be a very complex app tree and some button way deep down in that whole tree structure that can just easily get this theme value out and say, "Okay, what am I, dark or light?" Also, you can define your own hooks. So if you notice that one of your components is getting super complex or that you created a use effect that you're just using all over the place, then that's probably a good chance for you to do a little bit of dry coding and create your own hooks. So this one is basically one that will check a friend status. If you have some sort of chat API, so you'd be able to subscribe to any changes to that. And for trends it's Boolean from that to let you know that friends online or not. There's also a cool thing about useEffect. It has a tear down function. So if that component that's using this hook is removed from the tree, this function will be called so that those subscription handlers are not going to be called later on.
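(*A sketch of a custom hook with a teardown function; `chatAPI` and its subscribe/unsubscribe methods are stand-ins for whatever service you use.*)
```javascript
import { useEffect, useState } from 'react';

const useFriendStatus = (chatAPI, friendID) => {
  const [isOnline, setIsOnline] = useState(null);

  useEffect(() => {
    const handleStatusChange = (status) => setIsOnline(status.isOnline);
    chatAPI.subscribeToFriendStatus(friendID, handleStatusChange);

    // Teardown: runs when the component unmounts or the dependencies change,
    // so the handler is never invoked on a component that no longer exists
    return () => chatAPI.unsubscribeFromFriendStatus(friendID, handleStatusChange);
  }, [chatAPI, friendID]);

  return isOnline;
};

export default useFriendStatus;
```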
A couple of other hooks useCallback and useMemo. These are a bit nice, this is basically the concept of memorization. So if you have a component that's doing some sort of calculation like averaging an array of items and maybe they raise 5,000 items long. If you just call the function to do that in your component, then every time your component got re-rendered from a state change, then it would do that computation again, and every single time it got re-rendered. You actually only want to do that if something in that array changes. So basically if you use useMemo you can provide dependencies and then compute that expensive value. Basically it helps you not have an on performance app.
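(*A sketch of `useMemo` guarding an expensive calculation; the averaging logic is just an example.*)
```javascript
import React, { useMemo } from 'react';
import { Text } from 'react-native';

const Average = ({ values }) => {
  // Recomputed only when `values` changes, not on every re-render
  const average = useMemo(
    () => values.reduce((sum, v) => sum + v, 0) / values.length,
    [values]
  );

  return <Text>Average: {average}</Text>;
};

export default Average;
```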
UseCallback is similar, but this is on return to function, this function will basically... Well, this is important because if you were to give a function to a component as a property, and you didn't call useCallback to do that, then every time that function re-rendered any component that was using that function as a property would also be re-rendered. So this basically keeps that function reference static, basically makes sure it doesn't change all the time. We're going to go into that a little bit more on the next slides. UseRef is also quite useful in React Native. React Native has to sometimes have some components that take advantage of data features on your device, for instance, the camera. So you have a camera component that you're using typically there might be some functions you want to call on that component. And maybe you're not actually using properties to define that, but you actually have functions that you can call in this case, maybe something that turns the flashlight on.
In that case, you would basically define your reference using useRef and you would be able to basically get a reference from useRef and you can the useRef property on this component to get a reference of that. Sorry, it's a bit confusing. But if you click this button, then you'd be able to basically call function on that reference. Cool. And these are the rest of the hooks. I didn't want to go into detail on them, but there are other ones out there. I encourage you to take a look at them yourselves and see what's useful but the ones that I went through are probably the most used, you get actually really far with useState, useEffect, useContext.
So one thing I want to go over that's super important is if you're going to use a JavaScript object for state manipulation. So, basically objects in JavaScript are a bit strange. I'll go up here. So if I make an object, like say, we have this message here and I changed something on that object and set that state with that changed object, the reference doesn't change. So basically react doesn't detect that anything changed, it's not looking at did the values inside this object change it's actually looking at is the address value of this object different. So going back to that, let me go back to the previous slide. So basically that's what immutability is. Sorry, let me get back here. And the way we fix that is if you set that state with an object, you want to make a copy of it, and if you make a copy, then the address will change and then the state will be detected as new, and then you'll get your message where you rendered.
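(*A sketch of the reference-equality pitfall described above, with the copy-based fix; `message` and `setMessage` are assumed to come from a `useState` call.*)
```javascript
// Mutating the existing object keeps the same reference,
// so React sees "nothing changed" and skips the re-render
const onChangeTextBad = (text) => {
  message.text = text;
  setMessage(message);
};

// Copying into a new object changes the reference,
// so React detects new state and re-renders
const onChangeTextGood = (text) => {
  setMessage({ ...message, text });
};
```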
And you can either use object data sign, but basically the best method right now is to use the spread operator and what this does is basically take all the properties of that object and makes a copy of them here and then overrides that message with the texts that you entered into this text input. Cool. And back to that concept of memorization. There's actually a pretty cool function from React to that, it's very useful. When we had class components, there used to be a function you could override that compared the properties of what was changing inside of your component. And then you would be able to basically compare your previous properties with your next properties and decide, should I re-render this or not, should I return false, it's going to re-render. If you return true, then it's just going to stay the same.
To do that with functional components, we get a function called the React.memo. Basically, if you wrap your component React.memo it's going to automatically look at those base level properties and just check if they're equal to each other. With objects that becomes a little bit problematic, if it's just strings and Booleans, then it's going to successfully pull that off and only re-render that component if that string changes or that Boolean changes. So if you do wrap this and you're using objects, then you can actually make a function called... Well, in this case, it's equal or are equal, which is the second argument of this memo function. And that will give you the access to previous prompts and next prompts. So if you're coming into the hooks world and you already have a class component, this is a way to basically get that functionality back. Otherwise hooks is just going to re-render all the time.
So I have an example of this right here. So basically if I have in like my example before that text input if I wrap that in memo and I have this message and my setMessage, you state functions past in here, then this will only re-render if those things change. So back to this, the setMessage, if this was defined by me, not from useState, this is something that you definitely want to wrap with useCallback by the way making sure that this doesn't potentially always re-render, just like objects, functions are also... Actually functions in JavaScript are objects. So if you change a function or re-render the definition of a function, then its address is going to change and thus your component is going to be unnecessarily re-rendering.
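(*A sketch of `React.memo` with a custom comparison function; the props and comparison shown here are illustrative, and `setMessage` is assumed to be the stable setter returned by `useState`.*)
```javascript
import React, { memo } from 'react';
import { TextInput } from 'react-native';

const MessageInput = memo(
  ({ message, setMessage }) => (
    <TextInput
      value={message.text}
      onChangeText={(text) => setMessage({ ...message, text })}
    />
  ),
  // Custom comparison: return true to skip the re-render, false to re-render
  (prevProps, nextProps) => prevProps.message.text === nextProps.message.text
);

export default MessageInput;
```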
So let's see if there's any questions at the moment, nothing. Okay, cool. So that brings us to Realm. Basically, how do you persist state? We have our hooks, like useState that's all great, but you need a way to be able to save that state and persist it when you close your app and open it again. And that's where Realm comes in. Realm has been around for 10 years, basically started as an iOS library and has moved on to Native Android and .NET and React Native finally. It's a very fast database it's actually written in C++, so that's why it's easily cross-platform. Its offline first, so most data that you usually have in an application is probably going to be talking to a server and getting that data off that, you can actually just store the data right on the phone.
We actually do offer a synchronization feature, it's not free, but if you do want to have cloud support, we do offer that as well. I'm not going to go over that in this presentation, but if that's something that you're really interested, in I encourage you to take a look at that and see if that's right for your application. And most databases, you have to know some sort of SQL language or some sort of query language to do anything. I don't know if anybody has MySQL or a PostgreSQL. Yeah, that's how I started learning what the DOM is. And you don't need to know a query language to basically use Realm. It's just object oriented data model. So you'll be using dots and texts and calling functions and using JavaScript objects to basically create and manipulate your data. It's pretty easy to integrate, if you want to add Realm to your React Native application either use npm or Yarn, whatever your flavor is to install Realm, and then just update your pods.
This is a shortcut, if anybody wanted to know how to install your pods without jumping into the iOS directory, if you're just getting to React Native, you'll know what I'm talking about later. So there's a little bit of an introduction around, so basically if you want to get started using Realm, you need to start modeling your data. Realm has schemas to do that, basically any model you have needs to have a schema. You provide a name for that schema, you define properties for this. Properties are typically defined with just a string to picking their type. This also accepts an object with a few other properties. This would be, if we were using the objects INTAX, then this would be type colon object ID, and then you could also make this the primary key if you wanted to, or provide some sort of default value.
We also have a bit of TypeScript support. So here's an example of how you would define a class using this syntax to basically make sure that you have that TypeScript support and whatever you get back from your Realm queries is going to be properly typed. Basically, so this is example of a journal from my previous schema definition here. And what's important to notice is that you have to add this exclamation point, basically, this is just telling TypeScript that something else is going to be defining how these properties are being set, which Realm is going to be doing for you. It's important to know that Realm objects, their properties are actually pointers to a memory address. So those will be automatically propagated as soon as you connect this to Realm.
In this example, I created a generate function. This is basically just a nice syntax, where if you wanted to basically define an object that you can use to create Realm objects you can do that here and basically provide some values, you'll see what I mean in a second how that works. So once you have your schema defined then you can put that into a configuration and open the Realm, and then you get this Realm object. When you create a Realm, then it's going to actually create that database on your phone. If you close it, then it'll just make sure that that's saved and everything's good to go. So I'm going to show you some tips on how to keep that open and close using hooks here in a second.
Another thing that's pretty useful though, is when you're starting to get getting started with defining your definitions, your schema definitions, you're getting started with your app, it's pretty useful to put this deleteRealmMigrationNeeded to true. Basically that's if you're adding new properties to your Realm in development, and it's going to yell at you because it needs to have a migration path. If you've put this to true, then it's just going to ignore that, it's going to delete all that data and start from scratch. So this is pretty useful to have in development when you're constantly tweaking changes and all that to your data models.
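(*A sketch of opening a Realm with a configuration and the development-only `deleteRealmIfMigrationNeeded` flag; `JournalSchema` refers to the illustrative schema above.*)
```javascript
import Realm from 'realm';

const config = {
  schema: [JournalSchema],
  // Development only: wipe local data instead of requiring a migration
  // whenever the schema changes
  deleteRealmIfMigrationNeeded: true,
};

const openJournalRealm = async () => {
  const realm = await Realm.open(config);
  return realm; // remember to call realm.close() when you are done with it
};
```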
Here's some examples about how you can basically create, edit and delete anything in Realm. So that Realm object that you get basically any sort of manipulation you have to put inside of a right transaction that basically ensures that you're not going to have any sort of problems with concurrency. So if I do realm.write that takes a call back and within that callback, you can start manipulating data. So this is an example of how I would create something using that journal class. So if I give this thing, that journal class, it's going to actually already be typed for me, and I'm going to call that generate function. I could actually just give this a plain JavaScript object as well. And if I provide this journal in the front then it'll start type checking that whatever I'm providing as a second argument.
If you want to change anything, say that display journal in this case, it's just the journal that I'm working on in some component, then if I wrap this in a right transaction, I can immediately manipulate that that property and it'll automatically be written to the database. I'll show you how to manage state with that in a second because it's a bit tricky. And then if you want to delete something, then basically you just provide what's coming back from realm.object creation into this, or realm.query into this delete function and then it'll remove that from the database. In this example, I'm just grabbing a journal by the ID primary key.
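(*A sketch of create, edit, and delete inside write transactions; `realm`, `displayJournal`, and `someId` are assumed to exist in the surrounding component.*)
```javascript
// Create
realm.write(() => {
  realm.create('Journal', {
    _id: new Realm.BSON.ObjectId(),
    title: 'My first entry',
    createdAt: new Date(),
  });
});

// Edit: mutate a live object inside a write transaction
realm.write(() => {
  displayJournal.title = 'My renamed entry';
});

// Delete
realm.write(() => {
  const journal = realm.objectForPrimaryKey('Journal', someId);
  realm.delete(journal);
});
```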
And last but not least how to read data. There's two main functions I basically use to get data out of Realm, one is using realm.objects. Oops, I have a little bit of code there. If you call realm.objects and journal and forget about this filtered part basically it'll just get everything in the database for that model that you defined. If you want to filter it by something, say if you have an author field and it's got a name, then you could say, I just want everything that was authored by Andrew then this filter would basically return a model that's filtered and then you can also sort it. But you can chain these as well as you see, you can just be filtered or realm.object.filtered.sorted, that'd be the better syntax, but for readability sake, I kept it on one line. And if you want to get a single object, you can use object for primary and provide that ID.
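(*A sketch of the read operations described here; the filter string and sort field are illustrative.*)
```javascript
// Everything in the collection
const allJournals = realm.objects('Journal');

// Filtered and sorted; `filtered` takes a Realm query string
const myJournals = realm
  .objects('Journal')
  .filtered('title BEGINSWITH $0', 'My')
  .sorted('createdAt', true); // true = descending

// A single object by primary key
const oneJournal = realm.objectForPrimaryKey('Journal', someId);
```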
So I'm going to go through a few best practices and tips to basically combine this knowledge of Realm and hooks, it's a lot, so bear with me. So if you have an app and you need to access Realm you could either use a singleton or something to provide that, but I prefer to make sure to just provide it once and I found that using useContext is the best way to do that. So if you wanted to do that, you could write your own Realm provider, basically this is a component, it's going to be wrapping. So if you make any sort of component, that's wrapping other components, you have to give children and you have to access the children property and make sure that what you're returning is implementing those children otherwise you won't have an app it'll just stop here. So this Realm provider is going to have children and it's going to have a configuration just like where you defined in the previous slide.
And basically I have a useEffect that basically detects changes on the configuration and opens the Realm and then it sets that Realm to state and adds that to that provider value. And then if you do that, you'll be able to use that useContext to retrieve that realm at any point in your app or any component. So if you wrap that component with Realm provider, then you'll be able to get that Realm. I would recommend making a hook for this called useRealm or something similar where you can have error checking and any sort of extra logic that you need when you're accessing that Realm here and have that return that context for you to use.
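(*A rough sketch of such a Realm provider and a `useRealm` hook; this is one possible shape, not the exact code from the talk.*)
```javascript
import React, { createContext, useContext, useEffect, useState } from 'react';
import Realm from 'realm';

const RealmContext = createContext(null);

const RealmProvider = ({ children, config }) => {
  const [realm, setRealm] = useState(null);

  useEffect(() => {
    let openedRealm;
    let cancelled = false;
    Realm.open(config).then((r) => {
      openedRealm = r;
      if (!cancelled) setRealm(r);
    });
    // Close the Realm if the provider unmounts or the config changes
    return () => {
      cancelled = true;
      openedRealm?.close();
    };
  }, [config]);

  return <RealmContext.Provider value={realm}>{children}</RealmContext.Provider>;
};

const useRealm = () => {
  // null until Realm.open resolves; callers should handle that case
  return useContext(RealmContext);
};

export { RealmProvider, useRealm };
```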
So another thing, initializing data. So if you have a component and it's the very first time your app is opened you might want to initialize it with some data. The way I recommend doing that is making an effect for it, basically calling realm.objects and setting that to your state, having this useEffect listen for that state and just check, do we have any entries? If we don't have any entries then I would initialize some data and then set that journal up. So going on the next slide. And another very important thing is subscribing to changes. Yeah, basically if you are making changes to your collection in Realm, it's not going to automatically re-render. So I recommend using useState to do that and keeping a copy of that realm.object in state and updating with set state. And basically all you need to do is create an effect with a handle change function. This handle change function can be given to these listeners and basically it will be called any time any change happens to that Realm collection.
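(*A sketch of keeping a Realm collection in React state with a change listener; the change-set field names follow recent Realm JS versions (older releases expose `modifications` instead of `newModifications`), and `realm` is assumed to come from the provider above.*)
```javascript
const [journals, setJournals] = useState([]);

useEffect(() => {
  if (!realm) return;
  const collection = realm.objects('Journal');

  const handleChange = (newCollection, changes) => {
    // The listener fires once immediately with empty change arrays;
    // skip that event, otherwise setting state here can loop forever
    const changeCount =
      changes.insertions.length +
      changes.deletions.length +
      changes.newModifications.length;
    if (changeCount > 0) {
      setJournals([...newCollection]);
    }
  };

  setJournals([...collection]); // seed the initial state
  collection.addListener(handleChange);

  return () => collection.removeListener(handleChange);
}, [realm]);
```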
You want to make sure though that you do check if there are any modifications before you start setting state especially if you're subscribing to changes to that collection, because you could find yourself into an infinite loop. Because as soon as you call ad listener, there will be an initial event that fires and the length of all the changes is zero. So this is pretty important, make sure you check that there actually are changes before you set that state. So here's an example of basically providing or using a FlatList to display around data. FlatList is one of the main components from React Native that I've used to basically display any list of data. FlatList basically takes an array of data, in our case, it'll also take a Realm collection, which is almost an array. It works like an array. So it works in this case. So you can provide that collection.
I recommend sorting it because one thing about Realm collections is the order is not guaranteed. So you should sort it by some sort of timestamp or something to make sure that when you add new entries, it's not just showing up in some random spot in the list. It's just showing up in this case at the creation date. And then it's also recommended to use a key extractor and do not set it to the index of the array. That's a bad idea, set it to something that is that's unique. In this case, the idea that we were using for our Realm is object ID, in the future we'll have a UUID property coming out, but in the meantime, object ID is our best option for providing that for you to have basically a unique ID that you can define your data with. And if you use that, I recommend using that. You can call it the two check string function on here because key extractor wants a string. He's not going to work with an object. And then basically this will make sure that your items are properly rendered and not rerunning the whole list all the time.
Also, using React.memo is going to help with that as well, which I'm going to show you how to do that. This item in this case is actually a React.memo. I recommend instead of just passing that whole item as a property to maybe just get what you need out of it and passing that down and that way you'll avoid any necessary re-renders. I did intentionally put a mistake in here. ID is an object, so you will have to be careful if you do it like this and I'll show you how that's done. you could just set it to string and then you wouldn't have to provide this extra function that on purpose I decided to put the object and to basically show you how you can check the properties and, and update this. So this is using React.memo and basically it will only render once. It will only render if that title changes or if that ID changes, which it shouldn't change.
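(*A sketch of the FlatList setup described here; `journals` is assumed to be a Realm results collection, so `.sorted` is available, and the row component is memoized.*)
```javascript
import React, { memo } from 'react';
import { FlatList, Text } from 'react-native';

// memo keeps rows from re-rendering unless the title they display changes
const JournalRow = memo(({ title }) => <Text>{title}</Text>);

const JournalList = ({ journals }) => (
  <FlatList
    data={journals.sorted('createdAt')}
    keyExtractor={(item) => item._id.toString()} // stable unique key, never the index
    renderItem={({ item }) => <JournalRow title={item.title} />}
  />
);

export default JournalList;
```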
Basically, this guy will look at is title different? Are the IDs different? If they're not return true, if any of them changed return false. And that'll basically cause a re-render. So I wrote quite a bit of sample code to basically make these slides, if you want to check that out, my GitHub is Takameyer, T-A-K-A-M-E-Y-E-R. And I have a Realm and React Native example there. You can take a look there and I'll try to keep that updated with some best practices and things, but a lot of the sample code came from there. So I recommend checking that out. So that's basically my overview on React and Realm. I'll just want to take an opportunity to show up what's coming up for these upcoming features. Yeah, you just saw there was quite a lot of boiler plate in setting up those providers and schemas and things.
And yeah, if you're setting up TypeScript types, you got to set up your schemers, you got to set up your types and you're doing that in multiple places. So I'm going to show you some things that I'm cooking up in the near future. Well, I'm not sure when they're coming out, but things that are going to make our lives a little bit easier. One goal I had is I wanted to have a single source of truth for your types. So we are working on some decorators. This is basically a feature of a JavaScript. It's basically a Boolean that you have to hit in TypeScript or in Babel to get working. And basically what that does is allow you to add some more context to classes and properties on that class. So in this case this one is going to allow you to define your models without a schema. And to do that, you provide a property, a decorator to your attributes. And that property is basically as an argument taking those configuration values I talked about before.
So basically saying, "Hey, this description is a type string, or this ID is primary key and type object ID." My goal eventually when TypeScript supports it, I would like to infer the types from the TypeScript types that you're defining here. So at the moment we're probably going to have to live with just defining it twice, but at least they're not too far from each other and you can immediately see if they're not lining up. I didn't go over relations, but you can set up relations between Realms models. And that's what I'm going to revive with this link from property, this is bit easier, send texts, get that done. You can take a look at our documentation to see how you do that with normal schemas. But basically this is saying I'm linking lists from todoLists because a TodoItem on the items property from todoList link from todoList items, reads kind of nice.
Yeah, so those are basically how we're going to define schemas in the future. And we're also going to provide some mutator functions for your business logic in your classes. So basically if you define the mutator, it'll basically wrap this in a right transaction for you. So I'm running out of time, so I'm just going to go for the next things quick. We have Realm context generator. This is basically going to do that whole provider pattern for you. You call createRealmContext, give it your schemas, he's going to give you a context object back, you can call provider on that, but you can also use that thing to get hooks. I'm going to provide some hooks, so you don't have to do any sort of notification handling or anything like that. You basically call the hook on that context. You give it that Realm class.
And in this case use object, he's just going to be looking at the primary key. You'll get that object back and you'll be able to render that and display it and subscribe to updates. UseQuery is also similar. That'll provide a sorting and filter function for you as well. And that's how you'd be able to get lists of items and display that. And then obviously you can just call, useRealm to get your Realm and then you can do all your right transactions. So that's coming up and that's it for me. Any questions?
**Ian:**
Yeah. Great. Well, thank you, Andrew. We don't have too many questions, but we'll go through the ones we have. So there's one question around the deleteRealmIfMigrationNeeded and the user said this should only be used in dev. And I think, yes, we would agree with that, that this is for iterating your schema while you're developing your application. Is that correct Andrew?
**Andrew:**
Yeah, definitely. You don't want to be using that in production at all. That's just for development. So Yeah.
**Ian:**
Definitely. Next question here is how has Realm integrated with static code analyzers in order to give better dev experience and show suggestions like if a filtered field doesn't exist? I presume this is for maybe you're using Realm objects or maybe using regular JavaScript objects and filtered wouldn't exist on those, right? It's the regular filter expression.
**Andrew:**
Yeah. If you're using basically that syntax I showed to you, you should still see the filtered function on all your collections. If you are looking at using filtered in that string, we don't have any sort of static analysis for those query strings yet, but definitely for the future, we could look at that in the future.
**Ian:**
Yeah, I think the Vs code is definitely plugin heavy. And as we start to replatform the JavaScript SDK, especially for React Native, some of these new features that Andrew showed we definitely want to get into creating a plugin that is Realm specific. That'll help you create some of your queries and give you suggestions. So that's definitely something to look forward to in the future. Please give us feedback on some of these new features and APIs that you're looking to come out with, especially as around hooks? Because we interviewed quite a few users of Realm and React Native, and we settled on this, but if you have some extra feedback, we are a community driven product, so please we're looking for the feedback and if it could work for you or if it needed an extra parameter to take your use case into account, we're in the stage right now where we're designing it and we can add more functionality as we come.
Some of the other things we're looking to develop for the React Native SDK is we're replatforming it to take advantage of the new Hermes JavaScripts VM, right interpreter, so that not just using JavaScript core, but also using Hermes with that. Once we do that, we'll also be able to get a new debugger right now, the debugging experience with Realm and React Native is a little bit... It's not great because of the way that we are a C++ database that runs on the device itself. And so with the Chrome debugger, right, it wants to run all your JavaScript code in Chrome. And so there's no Native context for it to attach to, and had to write this RPC layer to work around that. But with our new Hermes integration, we'll be able to get in much better debugging experience. And we think we'll actually look to Flipper as the debugger in the future, once we do that.
Okay, great. What is your opinion benefit, would you say makes a difference better to use than the likes of PouchDB? So certainly noted here that this is a SDK for React Native. We're depending on the hard drive to be available in a mobile application. So for PouchDB, it's more used in web browsers. This SDK you can't use it in a web browser. We do have a Realm web SDK that is specific for querying data from Atlas, and it gives some convenience methods for logging into our serverless platform. But I will say that we are doing a spike right now to allow for compilation of our Realm core database into. And if we do that, we'll be able to then integrate into browsers and have the ability to store and persist data into IndexedDB, which is a browser available or is the database available in browsers. Right. And so you can look forward to that because then we could then be integrated into PWAs for instance in the web.
Other Question here, is there integration, any suggestions talk around Realm sync? Is there any other, I guess, tips and tricks that we can suggest the things may be coming in the future API regarding a React Native application for Realm sync? I know one of the things that came out in our user interviews was partitions. And being able to open up multiple Realms in a React Native SDK, I believe we were looking to potentially add this to our provider pattern, to put in multiple partition key values. Maybe you can talk a little bit to that.
**Andrew:**
Yeah. Basically that provider you'd be able to actually provide that configuration as properties as well to the provider. So if you initiate your context with the configuration and something needs to change along the line based on some sort of state, or maybe you open a new screen and it's like a detailed view. And that parameter, that new screen is taking an ID, then you'd be able to basically set the partition to that ID and base the data off that partition ID.
**Ian:**
Yeah, mostly it's our recommendation here to follow a singleton pattern where you put everything in the provider and that when you call that in a new view, it basically gives you an already open Realm reference. So you can boot up the app, you open up all the rounds that you need to, and then depending on the view you're on, you can call to that provider to get the Realm reference that you'd need.
**Andrew:**
Right. Yeah. That's another way to do it as well. So you can do it as granular as you want. And so you can use your provider on a small component on your header of your app, or you could wrap the whole app with it. So many use cases. So I would like to go a little bit more into detail someday about how to like use Realm with React navigation and multiple partitions and Realms and stuff like that. So maybe that's something we could look at in the future.
**Ian:**
Yeah. Absolutely. Great. Are there any other questions from anyone here? Just to let everyone know this will be recorded, so we've recorded this and then we'll post this on YouTube later, so you can watch it from there, but if there's any other questions, please ask them now, otherwise we'll close early. Are there any issues with multiple independently installed applications accessing the same database? So I think it's important to note here that with Realm, we do allow multi-process access. We do have a way, we have like a lot file and so there is the ability to have Realm database be used and access by multiple applications if you are using a non-sync Realm. With sync Realms, we don't have multi-process support, it is something we'll look to add in the future, but for right now we don't have it. And that's just from the fact that our synchronization runs in a background thread. And it's hard for us to tell when that thread has done to at work or not.
Another question is the concept behind partitions. We didn't cover this. I'd certainly encourage you to go to docs.mongodb.com/realm we have a bunch of documentation around our sync but a partition corresponds to the Realm file on the client side. So what you can do with the Realm SDK is if you enable sync, you're now sinking into a MongoDB Atlas server or cluster. This is the database as a service managed offering that is for the cloud version of MongoDB. And you can have multiple collections within this MongoDB instance. And you could have, let's say a hundred thousand documents. Those a hundred thousand documents are for the amalgamation of all of your Realm clients. And so a partition allows you to specify which documents are for which clients. So you can boot up and say, "Okay, I logged in. My user ID is Ian Ward. Therefore give me all documents that are for Ian Ward." And that's where you can segment your data that's all stored together in MongoDB. Interesting Question.
**Andrew:**
Yeah. I feel like a simple application, it's probably just going to be partitioned by a user ID, but if you're making an app for a logistics company that has multiple warehouses and you have an app that has the inventory for all those warehouses, then you might probably want to partition on those warehouses, the warehouse that you're in. So that'd be a good example of where you could use that partition in a more complex environment.
**Ian:**
Yeah, definitely. Yeah. It doesn't need to be user ID. It could also be store ID. We have a lot of logistics customer, so it could be driver ID, whatever packages that driver is supposed to deliver on that day will be part of their partition. Great. Well if there's no other... Oops, got another one in, can we set up our own sync on existing Realm database and have it sync existing data i.e. the user used the app without syncing, but later decides to sync the data after signing up? So right now the file format for a Realm database using non-sync and a syncing database is different basically because with sync, we need to keep track of the operations that are happening when you're occurring offline. So it keeps a queue of those operations.
**Ian:**
And then once you connect back online, it automatically sends those operations to the service side to apply the state. Right now, if you wanted to move to a synchronized brown, you would need to copy that data from the non-sync Realm to the sync Realm. We do have a project that I hope to get to in the next quarter for automatically doing that conversion for you. So you'll basically be able to write all the data and copy it over to the sync and make it a lot easier for developers to do that if they wish to. But it is something that we get some requests for. So we would like to make it easier. Okay. Well, thank you very much, everyone. Thank you, Andrew. I really appreciate it. And thank you everyone for coming. I hope you found this valuable and please reach out to us if you have any further questions. Okay. Thanks everyone. Bye.
| md | {
"tags": [
"Realm",
"JavaScript",
"React"
],
"pageDescription": "In this event, recorded on June 10th, Andrew Meyer, Software Engineer, on the Realm JavaScript team, walks us through the React Native ecosystem as it relates to persisting data with Realm. We discuss things to consider when using React Native, best practices to implement and gotcha's to avoid, as well as what's next for the JavaScript team at Realm.\n\n",
"contentType": "Article"
} | Realm Meetup - Realm JavaScript for React Native Applications | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/designing-developing-analyzing-new-mongodb-shell | created | # Designing, Developing, and Analyzing with the New MongoDB Shell
There are many methods available for interacting with MongoDB and depending on what you're trying to accomplish, one way to work with MongoDB might be better than another. For example, if you're a power user of Visual Studio Code, then the MongoDB Extension for Visual Studio Code might make sense. If you're constantly working with infrastructure and deployments, maybe the MongoDB CLI makes the most sense. If you're working with data but prefer a command line experience, the MongoDB Shell is something you'll be interested in.
The MongoDB Shell gives you a rich experience to work with your data through syntax highlighting, intelligent autocomplete, clear error messages, and the ability to extend and customize it however you'd like.
In this article, we're going to look a little deeper at the things we can do with the MongoDB Shell.
## Syntax Highlighting and Intelligent Autocomplete
If you're like me, looking at a wall of code or text in a single color is mind-numbing. It makes it difficult to spot things and creates strain, which can hurt productivity. Most development IDEs don't have this problem because they offer proper syntax highlighting, but command line tools commonly lack this luxury. That's no longer true for the MongoDB Shell, because it is a command line tool that has syntax highlighting.
When you write commands and view results, you'll see colors that match your command line setup as well as pretty-print formatting that is readable and easy to process.
Formatting and colors are only part of the battle that typical command line advocates encounter. The other common pain-point, that the MongoDB Shell fixes, is around autocomplete. Most IDEs have autocomplete functionality to save you from having to memorize every little thing, but it is less common in command line tools.
As you're using the MongoDB Shell, simply pressing the "Tab" key on your keyboard will bring up valuable suggestions related to what you're trying to accomplish.
Syntax highlighting, formatting, and autocomplete are just a few small things that can go a long way towards making the developer experience significantly more pleasant.
## Error Messages that Actually Make Sense
How many times have you used a CLI, gotten some errors you didn't understand, and then either wasted half your day finding a missing comma or rage quit? It's happened to me too many times because of poor error reporting in whatever tool I was using.
With the MongoDB Shell, you'll get significantly better error reporting than a typical command line tool.
In the above example, I've forgotten a comma, something I do regularly along with colons and semi-colons, and the shell told me so, along with a general idea of where the comma should go. That's a lot better than something like "Generic Runtime Error 0x234223."
## Extending the MongoDB Shell with Plugins Known as Snippets
If you use the MongoDB Shell enough, you'll probably reach a point in time where you wish it did something specific to your needs on a repetitive basis. Should this happen, you can always extend the tool with snippets, which are similar to plugins.
To get an idea of some of the official MongoDB Shell snippets, execute the following from the MongoDB Shell:
```bash
snippet search
```
The above command searches the snippets found in a repository on GitHub.
You can always define your own repository of snippets, but if you wanted to use one of the available but optional snippets, you could run something like this:
```bash
snippet install analyze-schema
```
The above snippet allows you to analyze any collection that you specify. So in the example of my "recipes" collection, I could do the following:
```bash
use food;
schema(db.recipes);
```
The results of the schema analysis, at least for my collection, is the following:
```
┌─────────┬───────────────┬───────────┬────────────┐
│ (index) │ 0 │ 1 │ 2 │
├─────────┼───────────────┼───────────┼────────────┤
│ 0 │ '_id ' │ '100.0 %' │ 'ObjectID' │
│ 1 │ 'ingredients' │ '100.0 %' │ 'Array' │
│ 2 │ 'name ' │ '100.0 %' │ 'String' │
└─────────┴───────────────┴───────────┴────────────┘
```
Snippets aren't the only way to extend functionality within the MongoDB Shell. You can also use Node.js in all its glory directly within the MongoDB Shell using custom scripts.
## Using Node.js Scripts within the MongoDB Shell
So let's say you've got a data need that you can't easily accomplish with the MongoDB Query API or an aggregation pipeline. If you can accomplish what you need using Node.js, you can accomplish what you need in the MongoDB Shell.
Let's take this example.
Say you need to consume some data from a remote service and store it in MongoDB. Typically, you'd probably write an application, download the data, maybe store it in a file, and load it into MongoDB or load it with one of the programming drivers. You can skip a few steps and make your life a little easier.
Try this.
When you are connected to the MongoDB Shell, execute the following commands:
```bash
use pokemon
.editor
```
The first will switch to a database—in this case, "pokemon"—and the second will open the editor. From the editor, paste in the following code:
```javascript
async function getData(url) {
    const fetch = require("node-fetch");

    // Request the remote resource and parse the JSON response
    const results = await fetch(url).then(response => response.json());

    // Store the parsed document in the "creatures" collection of the current database
    db.creatures.insertOne(results);
}
```
The above function will make use of the node-fetch package from NPM. Then, using the package, we can make a request to a provided URL and store the results in a "creatures" collection.
You can execute this function simply by doing something like the following:
```bash
getData("https://pokeapi.co/api/v2/pokemon/pikachu");
```
If it ran successfully, your collection should have new data in it.
In regards to the NPM packages, you can either install them globally or to your current working directory. The MongoDB Shell will pick them up when you need them.
If you'd like to use your own preferred editor rather than the one that the MongoDB Shell provides you, execute the following command prior to attempting to open an editor:
```bash
config.set("editor", "vi");
```
The above command will make VI the default editor to use from within the MongoDB Shell. More information on using an external editor can be found in the documentation.
## Conclusion
You can do some neat things with the MongoDB Shell, and while it isn't for everyone, if you're a power user of the command line, it will certainly improve your productivity with MongoDB.
If you have questions, stop by the MongoDB Community Forums! | md | {
"tags": [
"MongoDB",
"Bash",
"JavaScript"
],
"pageDescription": "Learn about the benefits of using the new MongoDB Shell for interacting with databases, collections, and the data inside.",
"contentType": "Article"
} | Designing, Developing, and Analyzing with the New MongoDB Shell | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/getting-started-unity-creating-2d-game | created | # Getting Started with Unity for Creating a 2D Game
If you've been keeping up with the content on the MongoDB Developer Portal, you'll know that a few of us at MongoDB (Nic Raboy, Adrienne Tacke, Karen Huaulme) have been working on a game titled Plummeting People, a Fall Guys: Ultimate Knockout tribute game. Up until now we've focused on game planning and part of our backend infrastructure with a user profile store.
As part of the natural progression in our development of the game and part of this tutorial series, it makes sense to get started with the actual gaming aspect, and that means diving into Unity, our game development framework.
In this tutorial, we're going to get familiar with some of the basics behind Unity and get a sprite moving on the screen as well as handling collisions. If you're looking for how we plan to integrate the game into MongoDB, that's going to be saved for another tutorial.
An example of what we want to accomplish can be seen in the following animated image:
The framerate in the image is a little stuttery, but the actual result is quite smooth.
## The Requirements
Before we get started, it's important to understand the requirements for creating the game.
- Unity 2020+
- Image to be used for player
- Image to be used for the background
I'm using Unity 2020.1.6f1, but any version around this particular version should be fine. You can download Unity at no cost for macOS and Windows, but make sure you understand the licensing model if you plan to sell your game.
Since the goal of this tutorial is around moving a game object and handling collisions with another game object, we're going to need images. I'm using a 1x1 pixel image for my player, obstacle, and background, all scaled differently within Unity, but you can use whatever images you want.
## Creating a New Unity Project with Texture and Script Assets
To keep things easy to understand, we're going to start with a fresh project. Within the **Unity Hub** application that becomes available after installing Unity, choose to create a new project.
You'll want to choose **2D** from the available templates, but the name and project location doesn't matter as long as you're comfortable with it.
The project might take a while to generate, but when it's done, you should be presented with something that looks like the following:
As part of the first steps, we need to make the project a little more development ready. Within the **Project** tree, right click on **Assets** and choose to create a new folder for **Textures** as well as **Scripts**.
Any images that we plan to use in our game will end up in the **Textures** folder and any game logic will end up as a script within the **Scripts** folder. If you have your player, background, and obstacle images, place them within the **Textures** directory now.
As of right now there is a single scene for the game titled **SampleScene**. The name for this scene doesn't properly represent what the scene will be responsible for. Instead, let's rename it to **GameScene** as it will be used for the main gaming component for our project. A scene for a game is similar to a scene in a television show or movie. You'll likely have more than one scene, but each scene is responsible for something distinct. For example, in a game you might have a scene for the menu that appears when the user starts the game, a scene for game-play, and a scene for what happens when they've gotten game over. The use cases are limitless.
With the scene named appropriately, it's time to add game objects for the player, background, and obstacle. Within the project hierarchy panel, right click underneath the **Main Camera** item (if your hierarchy is expanded) or just under **GameScene** (if not expanded) and choose **Create Empty** from the list.
We'll want to create a game object for each of the following: the player, background, and obstacle. The name isn't too important, but it's probably a good idea to give them names based around their purpose.
To summarize what we've done, double-check the following:
- Created a **Textures** and **Scripts** directory within the **Assets** directory.
- Added an image that represents a player, an obstacle, and a background to the **Textures** directory.
- Renamed **SampleScene** to **GameScene**.
- Created a **Player** game object within the scene.
- Created an **Obstacle** game object within the scene.
- Created a **Background** game object within the scene.
At this point in time we have the project properly laid out.
## Adding Sprite Renders, Physics, Collision Boxes, and Scripts to a Game Object
We have our game objects and assets ready to go and are now ready to configure them. This means adding images to the game object, physics properties, and any collision related data.
With the player game object selected from the project hierarchy, choose **Add Component** and search for **Sprite Renderer**.
The **Sprite Renderer** allows us to associate an image to our game object. Click the circle icon next to the **Sprite** property's input box. A panel will pop up that allows you to select the image you want to associate to the selected game object. You're going to want to use the image that you've added to the **Textures** directory. Follow the same steps for the obstacle and the background.
You may or may not notice that the layering of your sprites is not correct, in the sense that some images are in the background and some are in the foreground when they shouldn't be. To fix the layering, we need to add a **Sorting Layer** to the game objects.
Rather than using the default sorting layer, choose to **Add Sorting Layer...** so we can use our own strategy. Create two new layers titled **Background** and **GameObject** and make sure that **Background** sits above **GameObject** in the list. The list represents the rendering order so higher in the list gets rendered first and lower in the list gets rendered last. This means that the items rendering last appear at the highest level of the foreground. Think about it as layers in Adobe Photoshop, only reversed in terms of which layers are most visible.
With the sorting layers defined, set the correct **Sorting Layer** for each of the game objects in the scene.
For clarity, the background game object should have the **Background** sorting layer applied and the obstacle as well as the player game object should have the **GameObject** sorting layer applied. We are doing it this way because based on the order of our layers, we want the background game object to truly sit behind the other game objects.
The next step is to add physics and collision box data to the game objects that should have such data. Select the player game object and search for a **Rigidbody 2D** component.
Since this is a 2D game that has no sense of flooring, the **Gravity Scale** for the player should be zero. This will prevent the player from falling off the screen as soon as the game starts. The player is the only game object that will need a rigid body because it is the only game object where physics might be important.
In addition to a rigid body, the player will also need a collision box. Add a new **Box Collider 2D** component to the player game object.
The **Box Collider 2D** component should be added to the obstacle as well. The background, since it has no interaction with the player or obstacle, does not need any additional components added to it.
The final configuration step for the game objects is adding the scripts for the game logic.
Right click on the **Scripts** directory and choose to create a new **C# Script**. You'll want to rename the script to something that represents the game object that it will be a part of. For this particular script, it will be associated to the player game object.
After selecting the game object for the player, drag the script file to the **Add Component** area of the inspector to add it to the game object.
At this point in time everything for this particular game is configured. However, before we move onto the next step, let's confirm the components added to each of the game objects in the scene.
- Background has one sprite renderer with a **Background** sorting layer.
- Player has one sprite renderer, one rigid body, and one box collider with the **GameObject** sorting layer.
- Obstacle has one sprite renderer, and one box collider with the **GameObject** sorting layer.
The next step is to apply some game logic.
## Controlling a Game Object with a Unity C# Script
In Unity, everything in a scene is controlled by a script. These scripts exist on game objects, which makes it easy to separate the bits and pieces that make up a game. For example, the player might have a script with logic. The obstacles might have a different script with logic. Heck, even the grass within your scene might have a script. It's totally up to you how you want to script every part of your scene.
In this particular game example, we're only going to add logic to the player object script.
The script should already be associated to a player object, so open the script file and you should see the following code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Player : MonoBehaviour
{
void Start()
{
// ...
}
void Update()
{
// ...
}
}
```
To move the player we have a few options. We could transform the position of the game object directly, transform the position of the rigid body, or apply physics force to the rigid body. Each will give us different results, with the force option being the most unique.
Because we do have physics, let's look at the latter two options, starting with the movement through force.
Within your C# script, change your code to the following:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Player : MonoBehaviour
{
public float speed = 1.5f;
private Rigidbody2D rigidBody2D;
void Start()
{
rigidBody2D = GetComponent<Rigidbody2D>();
}
void Update()
{
}
void FixedUpdate() {
float h = 0.0f;
float v = 0.0f;
if (Input.GetKey("w")) { v = 1.0f; }
if (Input.GetKey("s")) { v = -1.0f; }
if (Input.GetKey("a")) { h = -1.0f; }
if (Input.GetKey("d")) { h = 1.0f; }
rigidBody2D.AddForce(new Vector2(h, v) * speed);
}
}
```
We're using a `FixedUpdate` because we're using physics on our game object. Had we not been using physics, the `Update` function would have been fine.
When any of the WASD keys is pressed (not the arrow keys), force is applied to the rigid body in a certain direction at a certain speed. If you ran the game and tried to move the player, you'd notice that it moves with a kind of sliding-on-ice effect. Rather than moving at a constant speed, the player picks up speed as it builds momentum and then gradually slows down when you release the movement keys. This is because of the physics and the applied force.
Moving the player into the obstacle will result in the player stopping. We didn't even need to add any code to make this possible.
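If you ever want to react to that collision in code, for example to end the game or play a sound, Unity can notify the Player script through a collision callback. The following is a minimal sketch, not something this tutorial requires; the "Obstacle" tag is an assumption and would need to be assigned to the obstacle game object in the inspector:

``` csharp
// Add to the Player script: Unity calls this when the player's
// Box Collider 2D starts touching another (non-trigger) collider.
void OnCollisionEnter2D(Collision2D collision) {
    // "Obstacle" is a hypothetical tag set on the obstacle game object.
    if (collision.gameObject.CompareTag("Obstacle")) {
        Debug.Log("Player bumped into an obstacle!");
    }
}
```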
So let's look at moving the player without applying force. Change the `FixedUpdate` function to the following:
``` csharp
void FixedUpdate() {
float h = 0.0f;
float v = 0.0f;
if (Input.GetKey("w")) { v = 1.0f; }
if (Input.GetKey("s")) { v = -1.0f; }
if (Input.GetKey("a")) { h = -1.0f; }
if (Input.GetKey("d")) { h = 1.0f; }
rigidBody2D.MovePosition(rigidBody2D.position + (new Vector2(h, v) * speed * Time.fixedDeltaTime));
}
```
Instead of using the `AddForce` method, we are using the `MovePosition` method. We are now translating our rigid body, which will also translate our game object's position. We have to multiply by `fixedDeltaTime`; otherwise, the distance moved each step would depend on how often `FixedUpdate` executes rather than on elapsed time.
If you run the game, you shouldn't get the moving on ice effect, but instead nice smooth movement that stops as soon as you let go of the keys.
In both examples, the movement was limited to the letter keys on the keyboard.
If you want to move based on the typical WASD letter keys and the arrow keys, you could do something like this instead:
``` csharp
void FixedUpdate() {
float h = Input.GetAxis("Horizontal");
float v = Input.GetAxis("Vertical");
rigidBody2D.MovePosition(rigidBody2D.position + (new Vector2(h, v) * speed * Time.fixedDeltaTime));
}
```
The above code will generate a value between -1.0 and 1.0 depending on whether the corresponding letter key or arrow key is pressed. Note that `Input.GetAxis` smooths keyboard input over time, so the value ramps toward -1.0 or 1.0 rather than jumping there instantly.
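If you'd rather receive the raw -1.0, 0.0, or 1.0 values without that smoothing, Unity also provides `Input.GetAxisRaw`. A quick sketch of that variant, assuming the same `rigidBody2D` and `speed` fields from earlier:

``` csharp
void FixedUpdate() {
    // GetAxisRaw skips the smoothing and returns exactly -1.0, 0.0, or 1.0.
    float h = Input.GetAxisRaw("Horizontal");
    float v = Input.GetAxisRaw("Vertical");
    rigidBody2D.MovePosition(rigidBody2D.position + (new Vector2(h, v) * speed * Time.fixedDeltaTime));
}
```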
Just like with the `AddForce` method, when using the `MovePosition` method, the collisions between the player and the obstacle still happen.
## Conclusion
You just saw how to get started with Unity and building a simple 2D game. Of course what we saw in this tutorial wasn't an actual game, but it has all of the components that can be applied towards a real game. This was discussed by Karen Huaulme and myself (Nic Raboy) in the fourth part of our game development Twitch stream.
The player movement and collisions will be useful in the Plummeting People game as players will not only need to dodge other players, but obstacles as well as they race to the finish line. | md | {
"tags": [
"C#",
"Unity"
],
"pageDescription": "Learn how to get started with Unity for moving an object on the screen with physics and collisions.",
"contentType": "Tutorial"
} | Getting Started with Unity for Creating a 2D Game | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/adding-realm-as-dependency-ios-framework | created | # Adding Realm as a dependency to an iOS Framework
## Introduction
In this post we’ll review how we can add RealmSwift as a dependency to our libraries, using two different methods: Xcode assistants and the Swift Package Manager.
## The Problem
I have a nice little binary tree library. I know that I will use it for a later project, and that it'll be used in at least a macOS and an iOS app. Maybe also a Vapor web app. So I decided to create a Framework to hold this code. But some of my model classes there need to be persisted locally on the phone later. The Realm library is perfect for this, as I can start working with regular objects and store them locally and, later, if I need a ~~no code~~ really simple & quick to implement backend solution, I can use Atlas Device Sync.
But the problem is, how do we add Realm as a dependency in our Frameworks?
## Solution 1: Use Xcode to Create the Framework and Add Realm with SPM
The first way to create the Framework is just to create a new Xcode Project. Start Xcode and select `File > New > Project`. In this case I’ll change to the iOS tab, scroll down to the Framework & Library section, then select Framework. This way I can share this Framework between my iOS app and its extensions, for instance.
Now we have a new project that holds our code. This project has two targets, one to build the Framework itself and a second one to run our Unit Tests. Every time we write code we should test it, but this is especially important for reusable code, as one bug can propagate to multiple places.
To add Realm/Swift as a dependency, open your project file in the File Navigator. Then click on the Project Name and change to the Swift Packages tab. Finally click on the + button to add a new package.
In this case, we’ll add Realm Cocoa, a package that contains two libraries. We’re interested in Realm Swift: https://github.com/realm/realm-cocoa. We want one of the latest versions, so we’ll choose “Up to major version” 10.0.0. Once the resolution process is done, we can select RealmSwift.
Nice! Now that the package is added to our Framework we can compile our code containing Realm Objects without any problems!
## Solution 2: Create the Framework Using SPM and Add the Dependency Directly in Package.swift
The other way to author a framework is to create it using the Swift Package Manager. We need to add a Package Manifest (the Package.swift file), and follow a certain folder structure. We have two options here to create the package:
* Use the Terminal
* Use Xcode
### Creating the Package from Terminal
* Open Terminal / CLI
* Create a folder with `mkdir yourframeworkname`
* Enter that folder with `cd yourframeworkname`
* Run `swift package init`
* Once created, you can open the package with `open Package.swift`
### Creating the Package using Xcode
You can also use Xcode to do all this for you. Just go to `File > New > Swift Package`, give it a name and you’ll get your package with the same structure.
### Adding Realm as a dependency
So we have our Framework, with our library code, and we can distribute it easily using the Swift Package Manager. Now, we need to add Realm Swift. We don't have the nice assistant that Xcode shows when you create the Framework using Xcode, so we need to add it manually to `Package.swift`.
The complete `Package.swift` file:
```swift
let package = Package(
name: "BinaryTree",
platforms: [
.iOS(.v14)
],
products: [
// Products define the executables and libraries a package produces, and make them visible to other packages.
.library(
name: "BinaryTree",
targets: ["BinaryTree"]),
],
dependencies: [
// Dependencies declare other packages that this package depends on.
.package(name: "Realm", url: "https://github.com/realm/realm-cocoa", from: "10.7.0")
],
targets: [
// Targets are the basic building blocks of a package. A target can define a module or a test suite.
// Targets can depend on other targets in this package, and on products in packages this package depends on.
.target(
name: "BinaryTree",
dependencies: [.product(name: "RealmSwift", package: "Realm")]),
.testTarget(
name: "BinaryTreeTests",
dependencies: ["BinaryTree"]),
]
)
```
Here, we declare a package named “BinaryTree”, supporting iOS 14:
```swift
let package = Package(
name: "BinaryTree",
platforms: [
.iOS(.v14)
],
```
As this is a library, we declare the products we’re going to build; in this case, it’s just one library product called `BinaryTree`.
```swift
products: [
// Products define the executables and libraries a package produces, and make them visible to other packages.
.library(
name: "BinaryTree",
targets: ["BinaryTree"]),
],
```
Now, the important part: we declare Realm as a dependency in our library. We’re giving this dependency the short name “Realm” so we can refer to it in the next step.
```swift
dependencies: [
// Dependencies declare other packages that this package depends on.
.package(name: "Realm", url: "https://github.com/realm/realm-cocoa", from: "10.7.0")
],
```
In our target, we use the previously defined `Realm` dependency.
```swift
.target(
name: "BinaryTree",
dependencies: [.product(name: "RealmSwift", package: "Realm")]),
```
And that’s all! Now our library can be used as a normal Swift Package, and it will automatically include Realm.
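For example, a model class inside the package might look like the following. This is just a sketch to show that `RealmSwift` is now importable from the package's source files; the `TreeNode` class and its properties are hypothetical:

```swift
import Foundation
import RealmSwift

// Hypothetical model: any Object subclass defined in this package
// can be persisted with Realm by apps that depend on the framework.
public class TreeNode: Object {
    @objc public dynamic var value = 0
    @objc public dynamic var createdAt = Date()
}
```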
## Recap
In this post we’ve seen different ways to create a Framework, directly from Xcode or as a Swift Package, and how to add `Realm` as a dependency to that Framework. This way, we can write code that uses Realm and distribute it quickly using SPM.
In our next post in this series we’ll document this library using the new Documentation Compiler (DocC) from Apple. Stay tuned and thanks for reading!
If you have questions, please head to our developer community website where the Realm engineers and the Realm/MongoDB community will help you build your next big idea with Realm and MongoDB.
| md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "Adding Realm to a Project is how we usually work. But sometimes we want to create a Framework (could be the data layer of a bigger project) that uses Realm. So... how do we add Realm as a dependency to said Framework? ",
"contentType": "Tutorial"
} | Adding Realm as a dependency to an iOS Framework | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/nodejs-python-ruby-atlas-api | created | # Calling the MongoDB Atlas Administration API: How to Do it from Node, Python, and Ruby
The real power of a cloud-hosted, fully managed service like MongoDB Atlas is that you can create whole new database deployment architectures automatically, using the service's API. Getting to the MongoDB Atlas Administration API is relatively simple and, once unlocked, it opens up a massive opportunity to integrate and automate the management of database deployments from creation to deletion. The API itself is an extensive REST API. There's role-based access control, and you can have user or app-specific credentials to access it.
There is one tiny thing that can trip people up, though. The credentials have to be passed over using the digest authentication mechanism, not the more common basic authentication or an issued token. Digest authentication, at its simplest, waits for an HTTP `401 Unauthorized` response from the web endpoint. That response comes with a challenge, and the client then sends back a hashed (digested) form of the username, password, and challenge, which the server verifies.
And that's why we’re here today: to show you how to do that with the least fuss in Python, Node, and Ruby. In each example, we'll try to access the base URL of the Atlas Administration API, which returns a JSON document about the underlying application's name, build, and other facts.
You can find all code samples in the dedicated Github repository.
## Setup
To use the Atlas Administration API, you need… a MongoDB Atlas cluster! If you don’t have one already, follow the Get Started with Atlas guide to create your first cluster.
The next requirement is the organization API key. You can set it up in two steps:

1. Create an API key in your Atlas organization. Make sure the key has the Organization Owner permission.
2. Add your IP address to the API Access List for the API key.
Then, open a new terminal and export the following environment variables, where `ATLAS_USER` is your public key and `ATLAS_USER_KEY` is your private key.
```
export ATLAS_USER=<your-public-key>
export ATLAS_USER_KEY=<your-private-key>
```
You’re all set up! Let’s see how we can use the Admin API with Python, Node, and Ruby.
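Before jumping into the language examples, you can optionally sanity-check your credentials with curl, which supports digest authentication out of the box. This is just a quick test and assumes the environment variables above are set:

``` bash
curl --digest --user "$ATLAS_USER:$ATLAS_USER_KEY" \
  "https://cloud.mongodb.com/api/atlas/v1.0/"
```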
## Python
We start with the simplest and most self-contained example: Python.
In the Python version, we lean on the `requests` library for most of the heavy lifting. We can install it with `pip`:
``` bash
python -m pip install requests
```
The implementation of the digest authentication itself is the following:
``` python
import os
import requests
from requests.auth import HTTPDigestAuth
import pprint
base_url = "https://cloud.mongodb.com/api/atlas/v1.0/"
auth = HTTPDigestAuth(
os.environ["ATLAS_USER"],
os.environ["ATLAS_USER_KEY"]
)
response = requests.get(base_url, auth = auth)
pprint.pprint(response.json())
```
As well as importing `requests`, we also bring in `HTTPDigestAuth` from requests' `auth` module to handle digest authentication. The `os` import is just there so we can get the environment variables `ATLAS_USER` and `ATLAS_USER_KEY` as credentials, and the `pprint` import is just to format our results.
The critical part is the addition of `auth = HTTPDigestAuth(...)` to the `requests.get()` call. This installs the code needed to respond to the server when it asks for the digest.
If we now run this program...
*(Screenshot: terminal output after running the Python script, showing a successful request.)*
…we have our API response.
## Node.js
For Node.js, we’ll take advantage of the `urllib` package which supports digest authentication.
``` bash
npm install urllib
```
The code for the Node.js HTTP request is the following:
``` javascript
const urllib = require('urllib');
const baseUrl = 'https://cloud.mongodb.com/api/atlas/v1.0/';
const { ATLAS_USER, ATLAS_USER_KEY } = process.env;
const options = {
digestAuth: `${ATLAS_USER}:${ATLAS_USER_KEY}`,
};
urllib.request(baseUrl, options, (error, data, response) => {
if (error || response.statusCode !== 200) {
console.error(`Error: ${error}`);
console.error(`Status code: ${response.statusCode}`);
} else {
console.log(JSON.parse(data));
}
});
```
Taking it from the top… we first require the `urllib` package. Then, we extract the `ATLAS_USER` and `ATLAS_USER_KEY` variables from the process environment and use them to build the `digestAuth` option. Finally, we send the request and handle the response in the passed callback.
And we’re ready to run:
On to our final language...
## Ruby
HTTParty is a gem widely used by the Ruby and Rails community to perform HTTP operations. It also, luckily, supports digest authentication. So, to get the party started:
``` bash
gem install httparty
```
There are two ways to use HTTParty. One is defining a class that includes HTTParty and abstracts the calls away, while the other is calling methods directly on HTTParty itself. For brevity, we'll do the latter. Here's the code:
``` ruby
require 'httparty'
require 'json'
base_url = 'https://cloud.mongodb.com/api/atlas/v1.0/'
options = {
:digest_auth => {
:username=>ENV['ATLAS_USER'],
:password=>ENV['ATLAS_USER_KEY']
}
}
result = HTTParty.get(base_url, options)
pp JSON.parse(result.body())
```
We require the HTTParty and JSON gems first. We then create a hash with our username and key, structured the way HTTParty expects for digest authentication, and set a variable to hold the base URL. We're ready to do our GET request now, and in `options` (the second parameter of the GET request), the `:digest_auth` entry switches on the digest support. We wrap up by JSON parsing the resulting body and pretty printing it. Put it all together and run it and we get:
*(Screenshot: terminal output after running the Ruby script, showing a successful request.)*
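As mentioned earlier, HTTParty can also be wrapped in a class that abstracts the calls away. A rough sketch of that style, assuming the same environment variables (the `AtlasClient` class name is made up for illustration):

``` ruby
require 'httparty'

# Hypothetical wrapper: including HTTParty gives us class-level helpers.
class AtlasClient
  include HTTParty
  base_uri 'https://cloud.mongodb.com/api/atlas/v1.0'
  digest_auth ENV['ATLAS_USER'], ENV['ATLAS_USER_KEY']
end

pp AtlasClient.get('/').parsed_response
```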
## Next Stop - The API
In this article, we learned how to call the MongoDB Atlas Administration API using digest authentication. We took advantage of the vast library ecosystems of Python, Node.js, and Ruby, and used the following open-source community libraries:
- Requests for Python
- urllib for JavaScript
- httparty for Ruby
If your project requires it, you can implement digest authentication yourself by following the official specification. You can draw inspiration from the implementations in the aforementioned libraries.
Additionally, you can find all code samples from the article in Github.
With the authentication taken care of, just remember to be fastidious with your API key security and make sure you revoke unused keys. You can now move on to explore the API itself. Start in the documentation and see what you can automate today.
| md | {
"tags": [
"Atlas",
"Ruby",
"Python",
"Node.js"
],
"pageDescription": "Learn how to use digest authentication for the MongoDB Atlas Administration API from Python, Node.js, and Ruby.",
"contentType": "Tutorial"
} | Calling the MongoDB Atlas Administration API: How to Do it from Node, Python, and Ruby | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/node-transactions-3-3-2 | created | # How to Use MongoDB Transactions in Node.js
Developers who move from relational databases to MongoDB commonly ask, "Does MongoDB support ACID transactions? If so, how do you create a transaction?" The answer to the first question is, "Yes!"
Beginning in 4.0, MongoDB added support for multi-document ACID transactions, and, beginning in 4.2, MongoDB added support for distributed ACID transactions. If you're not familiar with what ACID transactions are or if you should be using them in MongoDB, check out my earlier post on the subject.
>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
We're over halfway through the Quick Start with MongoDB and Node.js series. We began by walking through how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. Then we jumped into more advanced topics like the aggregation framework.
The code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.
Now let's dive into that second question developers ask—let's discover how to create a transaction!
>Want to see transactions in action? Check out the video below! It covers the same topics you'll read about in this article.
>
>:youtube[]{vid=bdS03tgD2QQ}

>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
## Creating an Airbnb Reservation
As you may have experienced while working with MongoDB, most use cases do not require you to use multi-document transactions. When you model your data using our rule of thumb **Data that is accessed together should be stored together**, you'll find that you rarely need to use a multi-document transaction. In fact, I struggled a bit to think of a use case for the Airbnb dataset that would require a multi-document transaction.
After a bit of brainstorming, I came up with a somewhat plausible example. Let's say we want to allow users to create reservations in the `sample_airbnb` database.
We could begin by creating a collection named `users`. We want users to be able to easily view their reservations when they are looking at their profiles, so we will store the reservations as embedded documents in the `users` collection. For example, let's say a user named Leslie creates two reservations. Her document in the `users` collection would look like the following:
``` json
{
"_id": {"$oid":"5dd589544f549efc1b0320a5"},
"email": "[email protected]",
"name": "Leslie Yepp",
"reservations":
{
"name": "Infinite Views",
"dates": [
{"$date": {"$numberLong":"1577750400000"}},
{"$date": {"$numberLong":"1577836800000"}}
],
"pricePerNight": {"$numberInt":"180"},
"specialRequests": "Late checkout",
"breakfastIncluded": true
},
{
"name": "Lovely Loft",
"dates": [
{"$date": {"$numberLong": "1585958400000"}}
],
"pricePerNight": {"$numberInt":"210"},
"breakfastIncluded": false
}
]
}
```
When browsing Airbnb listings, users need to know if the listing is already booked for their travel dates. As a result, we want to store the dates the listing is reserved in the `listingsAndReviews` collection. For example, the "Infinite Views" listing that Leslie reserved should be updated to list her reservation dates.
``` json
{
"_id": {"$oid":"5dbc20f942073d6d4dabd730"},
"name": "Infinite Views",
"summary": "Modern home with infinite views from the infinity pool",
"property_type": "House",
"bedrooms": {"$numberInt": "6"},
"bathrooms": {"$numberDouble":"4.5"},
"beds": {"$numberInt":"8"},
"datesReserved": [
{"$date": {"$numberLong": "1577750400000"}},
{"$date": {"$numberLong": "1577836800000"}}
]
}
```
Keeping these two records in sync is imperative. If we were to create a reservation in a document in the `users` collection without updating the associated document in the `listingsAndReviews` collection, our data would be inconsistent. We can use a multi-document transaction to ensure both updates succeed or fail together.
## Setup
As with all posts in this MongoDB and Node.js Quick Start series, you'll need to ensure you've completed the prerequisite steps outlined in the **Set up** section of the [first post in this series.
**Note**: To utilize transactions, MongoDB must be configured as a replica set or a sharded cluster. Transactions are not supported on standalone deployments. If you are using a database hosted on Atlas, you do not need to worry about this as every Atlas cluster is either a replica set or a sharded cluster. If you are hosting your own standalone deployment, follow these instructions to convert your instance to a replica set.
We'll be using the "Infinite Views" Airbnb listing we created in a previous post in this series. Hop back to the post on Creating Documents if your database doesn't currently have the "Infinite Views" listing.
The Airbnb sample dataset only has the `listingsAndReviews` collection by default. To help you quickly create the necessary collection and data, I wrote usersCollection.js. Download a copy of the file, update the `uri` constant to reflect your Atlas connection info, and run the script by executing `node usersCollection.js`. The script will create three new users in the `users` collection: Leslie Yepp, April Ludfence, and Tom Haverdodge. If the `users` collection does not already exist, MongoDB will automatically create it for you when you insert the new users. The script also creates an index on the `email` field in the `users` collection. The index requires that every document in the `users` collection has a unique `email`.
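If you're curious how such a unique index is created with the Node.js driver, it boils down to a single `createIndex()` call. This is just a sketch for reference, not an extra step you need to perform (the script already handles it), and it assumes `client` is an already-connected `MongoClient`:

``` js
// Sketch: enforce unique email addresses in the users collection.
await client
    .db("sample_airbnb")
    .collection("users")
    .createIndex({ email: 1 }, { unique: true });
```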
## Create a Transaction in Node.js
Now that we are set up, let's implement the functionality to store Airbnb reservations.
### Get a Copy of the Node.js Template
To make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.
1. Download a copy of template.js.
2. Open `template.js` in your favorite code editor.
3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.
4. Save the file as `transaction.js`.
You can run this file by executing `node transaction.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.
### Create a Helper Function
Let's create a helper function. This function will generate a reservation document that we will use later.
1. Paste the following function in `transaction.js`:
``` js
function createReservationDocument(nameOfListing, reservationDates, reservationDetails) {
// Create the reservation
let reservation = {
name: nameOfListing,
dates: reservationDates,
}
// Add additional properties from reservationDetails to the reservation
for (let detail in reservationDetails) {
reservation[detail] = reservationDetails[detail];
}
return reservation;
}
```
To give you an idea of what this function is doing, let me show you an example. We could call this function from inside of `main()`:
``` js
createReservationDocument("Infinite Views",
[new Date("2019-12-31"), new Date("2020-01-01")],
{ pricePerNight: 180, specialRequests: "Late checkout", breakfastIncluded: true });
```
The function would return the following:
``` json
{
name: 'Infinite Views',
dates: [ 2019-12-31T00:00:00.000Z, 2020-01-01T00:00:00.000Z ],
pricePerNight: 180,
specialRequests: 'Late checkout',
breakfastIncluded: true
}
```
### Create a Function for the Transaction
Let's create a function whose job is to create the reservation in the database.
1. Continuing to work in `transaction.js`, create an asynchronous function named `createReservation`. The function should accept a `MongoClient`, the user's email address, the name of the Airbnb listing, the reservation dates, and any other reservation details as parameters.
``` js
async function createReservation(client, userEmail, nameOfListing, reservationDates, reservationDetails) {
}
```
2. Now we need to access the collections we will update in this function. Add the following code to `createReservation()`.
``` js
const usersCollection = client.db("sample_airbnb").collection("users");
const listingsAndReviewsCollection = client.db("sample_airbnb").collection("listingsAndReviews");
```
3. Let's create our reservation document by calling the helper function we created in the previous section. Paste the following code in `createReservation()`.
``` js
const reservation = createReservationDocument(nameOfListing, reservationDates, reservationDetails);
```
4. Every transaction and its operations must be associated with a session. Beneath the existing code in `createReservation()`, start a session.
``` js
const session = client.startSession();
```
5. We can choose to define options for the transaction. We won't get into the details of those here. You can learn more about these options in the driver documentation. Paste the following beneath the existing code in `createReservation()`.
``` js
const transactionOptions = {
readPreference: 'primary',
readConcern: { level: 'local' },
writeConcern: { w: 'majority' }
};
```
6. Now we're ready to start working with our transaction. Beneath the existing code in `createReservation()`, open a `try { }` block, follow it with a `catch { }` block, and finish it with a `finally { }` block.
``` js
try {
} catch(e){
} finally {
}
```
7. We can use ClientSession's withTransaction() to start a transaction, execute a callback function, and commit (or abort on error) the transaction. `withTransaction()` requires us to pass a function that will be run inside the transaction. Add a call to `withTransaction()` inside of `try { }` . Let's begin by passing an anonymous asynchronous function to `withTransaction()`.
``` js
const transactionResults = await session.withTransaction(async () => {}, transactionOptions);
```
8. The anonymous callback function we are passing to `withTransaction()` doesn't currently do anything. Let's start to incrementally build the database operations we want to call from inside of that function. We can begin by adding a reservation to the `reservations` array inside of the appropriate `user` document. Paste the following inside of the anonymous function that is being passed to `withTransaction()`.
``` js
const usersUpdateResults = await usersCollection.updateOne(
{ email: userEmail },
{ $addToSet: { reservations: reservation } },
{ session });
console.log(`${usersUpdateResults.matchedCount} document(s) found in the users collection with the email address ${userEmail}.`);
console.log(`${usersUpdateResults.modifiedCount} document(s) was/were updated to include the reservation.`);
```
9. Since we want to make sure that an Airbnb listing is not double-booked for any given date, we should check if the reservation date is already listed in the listing's `datesReserved` array. If so, we should abort the transaction. Aborting the transaction will rollback the update to the user document we made in the previous step. Paste the following beneath the existing code in the anonymous function.
``` js
const isListingReservedResults = await listingsAndReviewsCollection.findOne(
{ name: nameOfListing, datesReserved: { $in: reservationDates } },
{ session });
if (isListingReservedResults) {
await session.abortTransaction();
console.error("This listing is already reserved for at least one of the given dates. The reservation could not be created.");
console.error("Any operations that already occurred as part of this transaction will be rolled back.");
return;
}
```
10. The final thing we want to do inside of our transaction is add the reservation dates to the `datesReserved` array in the `listingsAndReviews` collection. Paste the following beneath the existing code in the anonymous function.
``` js
const listingsAndReviewsUpdateResults = await listingsAndReviewsCollection.updateOne(
{ name: nameOfListing },
{ $addToSet: { datesReserved: { $each: reservationDates } } },
{ session });
console.log(`${listingsAndReviewsUpdateResults.matchedCount} document(s) found in the listingsAndReviews collection with the name ${nameOfListing}.`);
console.log(`${listingsAndReviewsUpdateResults.modifiedCount} document(s) was/were updated to include the reservation dates.`);
```
11. We'll want to know if the transaction succeeds. If `transactionResults` is defined, we know the transaction succeeded. If `transactionResults` is undefined, we know that we aborted it intentionally in our code. Beneath the definition of the `transactionResults` constant, paste the following code.
``` js
if (transactionResults) {
console.log("The reservation was successfully created.");
} else {
console.log("The transaction was intentionally aborted.");
}
```
12. Let's log any errors that are thrown. Paste the following inside of `catch(e){ }`:
``` js
console.log("The transaction was aborted due to an unexpected error: " + e);
```
13. Regardless of what happens, we need to end our session. Paste the following inside of `finally { }`:
``` js
await session.endSession();
```
At this point, your function should look like the following:
``` js
async function createReservation(client, userEmail, nameOfListing, reservationDates, reservationDetails) {
const usersCollection = client.db("sample_airbnb").collection("users");
const listingsAndReviewsCollection = client.db("sample_airbnb").collection("listingsAndReviews");
const reservation = createReservationDocument(nameOfListing, reservationDates, reservationDetails);
const session = client.startSession();
const transactionOptions = {
readPreference: 'primary',
readConcern: { level: 'local' },
writeConcern: { w: 'majority' }
};
try {
const transactionResults = await session.withTransaction(async () => {
const usersUpdateResults = await usersCollection.updateOne(
{ email: userEmail },
{ $addToSet: { reservations: reservation } },
{ session });
console.log(`${usersUpdateResults.matchedCount} document(s) found in the users collection with the email address ${userEmail}.`);
console.log(`${usersUpdateResults.modifiedCount} document(s) was/were updated to include the reservation.`);
const isListingReservedResults = await listingsAndReviewsCollection.findOne(
{ name: nameOfListing, datesReserved: { $in: reservationDates } },
{ session });
if (isListingReservedResults) {
await session.abortTransaction();
console.error("This listing is already reserved for at least one of the given dates. The reservation could not be created.");
console.error("Any operations that already occurred as part of this transaction will be rolled back.");
return;
}
const listingsAndReviewsUpdateResults = await listingsAndReviewsCollection.updateOne(
{ name: nameOfListing },
{ $addToSet: { datesReserved: { $each: reservationDates } } },
{ session });
console.log(`${listingsAndReviewsUpdateResults.matchedCount} document(s) found in the listingsAndReviews collection with the name ${nameOfListing}.`);
console.log(`${listingsAndReviewsUpdateResults.modifiedCount} document(s) was/were updated to include the reservation dates.`);
}, transactionOptions);
if (transactionResults) {
console.log("The reservation was successfully created.");
} else {
console.log("The transaction was intentionally aborted.");
}
} catch(e){
console.log("The transaction was aborted due to an unexpected error: " + e);
} finally {
await session.endSession();
}
}
```
## Call the Function
Now that we've written a function that creates a reservation using a transaction, let's try it out! Let's create a reservation for Leslie at the "Infinite Views" listing for the nights of December 31, 2019 and January 1, 2020.
1. Inside of `main()` beneath the comment that says
`Make the appropriate DB calls`, call your `createReservation()`
function:
``` js
await createReservation(client,
"[email protected]",
"Infinite Views",
[new Date("2019-12-31"), new Date("2020-01-01")],
{ pricePerNight: 180, specialRequests: "Late checkout", breakfastIncluded: true });
```
2. Save your file.
3. Run your script by executing `node transaction.js` in your shell.
4. The following output will be displayed in your shell.
``` none
1 document(s) found in the users collection with the email address [email protected].
1 document(s) was/were updated to include the reservation.
1 document(s) found in the listingsAndReviews collection with the name Infinite Views.
1 document(s) was/were updated to include the reservation dates.
The reservation was successfully created.
```
Leslie's document in the `users` collection now contains the
reservation.
``` js
{
"_id": {"$oid":"5dd68bd03712fe11bebfab0c"},
"email": "[email protected]",
"name": "Leslie Yepp",
"reservations": [
{
"name": "Infinite Views",
"dates": [
{"$date": {"$numberLong":"1577750400000"}},
{"$date": {"$numberLong":"1577836800000"}}
],
"pricePerNight": {"$numberInt":"180"},
"specialRequests": "Late checkout",
"breakfastIncluded": true
}
]
}
```
The "Infinite Views" listing in the `listingsAndReviews` collection now
contains the reservation dates.
``` js
{
"_id": {"$oid": "5dbc20f942073d6d4dabd730"},
"name": "Infinite Views",
"summary": "Modern home with infinite views from the infinity pool",
"property_type": "House",
"bedrooms": {"$numberInt":"6"},
"bathrooms": {"$numberDouble":"4.5"},
"beds": {"$numberInt":"8"},
"datesReserved": [
{"$date": {"$numberLong": "1577750400000"}},
{"$date": {"$numberLong": "1577836800000"}}
]
}
```
## Wrapping Up
Today, we implemented a multi-document transaction. Transactions are really handy when you need to make changes to more than one document as an all-or-nothing operation.
Be sure you are using the correct read and write concerns when creating a transaction. See the MongoDB documentation for more information.
When you use relational databases, related data is commonly split between different tables in an effort to normalize the data. As a result, transaction usage is fairly common.
When you use MongoDB, data that is accessed together should be stored together. When you model your data this way, you will likely find that you rarely need to use transactions.
This post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.
Now you're ready to try change streams and triggers. Check out the next post in this series to learn more!
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
## Additional Resources
- MongoDB official documentation: Transactions
- MongoDB documentation: Read Concern/Write Concern/Read Preference
- Blog post: What's the deal with data integrity in relational databases vs MongoDB?
- Informational page with videos and links to additional resources: ACID Transactions in MongoDB
- Whitepaper: MongoDB Multi-Document ACID Transactions
| md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Discover how to implement multi-document transactions in MongoDB using Node.js.",
"contentType": "Quickstart"
} | How to Use MongoDB Transactions in Node.js | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-automation-index-autopilot | created | # Database Automation Series - Automated Indexes
Managing databases can be difficult, but it doesn't have to be. Most
aspects of database management can be automated, and with a platform
such as MongoDB Atlas, the tools are not only available, but they're
easy to use. In this series, we'll chat with Rez
Kahn, Lead Product Manager at
MongoDB, to learn about some of the ways Atlas automates the various
tasks associated with deploying, scaling, and ensuring efficient
performance of your databases. In this first part of the series, we'll
focus on a feature built into Atlas, called Index Autopilot.
:youtube[]{vid=8feWYX0KQ9M}
*Nic Raboy (00:56):* Rez, thanks for taking the time to be on this
podcast episode. Before we get into the core material of actually
talking about database management and the things that you can automate,
let's take a step back, and why don't you tell us a little bit about
yourself?
*Rez Kahn (01:10):* Cool. Happy to be here, Nick. My name's Rez. I am a
lead product manager at MongoDB, I work out of the New York office. My
team is roughly responsible for making sure that the experience of our
customers are as amazing as possible after they deploy their first
application in MongoDB, which means we work on problems such as how we
monitor MongoDB. How we make sure our customers can diagnose issues and
fix issues that may come up with MongoDB, and a whole host of other
interesting areas, which we're going to dive into throughout the
podcast.
*Nic Raboy (01:55):* So, when you're talking about the customer success,
after they've gotten started on MongoDB, are you referencing just Atlas?
Are you referencing, say, Realm or some of the other tooling that
MongoDB offers as well? You want to shed some light?
*Rez Kahn (02:10):* Yeah, that's a really good question. Obviously, the
aspiration of the team is to help with all the products which MongoDB
supports today, and will eventually support in the future. But for the
time being, our focus has been on how do we crush the Atlas experience
and make it as magical of an experience as possible after a user
\[inaudible 00:02:29\] the first application.
*Michael Lynn (02:30):* How long have you been with MongoDB and what
were you doing prior to coming on board?
*Rez Kahn (02:35):* Well, I've been with MongoDB for a couple of years
now. Before joining MongoDB, I used to work in a completely different
industry advertising technology. I spent five years at a New York
startup called AppNexus, which eventually got sold to AT&T, and at
AppNexus, I was a product manager as well. But instead of building
databases, or helping manage databases better, I built products to help
our customers buy ads on the internet more effectively. A lot of it was
machine learning-based products. So, this would be systems to help
optimize how you spend your advertising dollars.
*Rez Kahn (03:18):* The root of the problem we're trying to solve is
figuring out which ads customers would click on and eventually purchase
a product based out of. How do we do that as effectively and as
efficiently as possible? Prior to AppNexus, I actually spent a number of
years in the research field trying to invent new types of materials to
build microchips. So, it was even more off-base compared to what I'm
doing today, but it's always very interesting to see the relationship
between research and product management. Eventually, I found it was
actually a very good background to have to be a good product manager.
*Michael Lynn (03:59):* Yeah, I would imagine you've got to be pretty
curious to work in that space, looking for new materials for chips.
That's pretty impressive. So, you've been on board with MongoDB for five
years. Did you go right into the Atlas space as a product manager?
*Rez Kahn (04:20):* So, I've been with MongoDB for two years, and yes-
*Michael Lynn (04:22):* Oh, two years, sorry.
*Rez Kahn (04:23):* No worries. I got hired as a, I think, as the second
product or third product manager for Atlas, and have been very lucky to
work with Atlas when it was fairly small to what is arguably a very
large part of MongoDB today.
*Michael Lynn (04:42):* Yeah. It's huge. I remember when I started,
Atlas didn't exist, and I remember when they made the first initial
announcements, internal only, about this product that was going to be in
the cloud. I just couldn't picture it. It's so funny to now see it, it's
become the biggest, I think arguably, the biggest part of our business.
I remember watching the chart as it took more and more of a percentage
of our gross revenue. Today, it's a phenomenal story and it's helping so
many people. One of the biggest challenges I had even before coming on
board at MongoDB was, how do you deploy this?
*Michael Lynn (05:22):* How do you configure it? If you want high
availability, it's got it. MongoDB has it built in, it's built right in,
but you've got to configure it and you've got to maintain it and you've
got to scale it, and all of those things can take hours, and obviously,
effort associated with that. So, to see something that's hit the stage
and people have just loved it, it's just been such a success. So,
that's, I guess, a bit of congratulations on Atlas and the success that
you've experienced. But I wonder if you might want to talk a little bit
about the problem space that Atlas lives in. Maybe touch a little bit
more on the elements of frustration that DBAs and developers face that
Atlas can have an impact on.
*Rez Kahn (06:07):* Yeah, totally. So, my experience with MongoDB is
actually very similar to yours, Mike. I think I first started, I first
used it at a hackathon back in 2012. I remember, while getting started
with it was very easy, it took us 10 minutes, I think, to run the first
query and get data from MongoDB. But once we had to deploy that app into
production and manage MongoDB, things became a little bit more tricky.
It takes us a number of hours to actually set things up, which is a lot
at the hackathon because you got only two hours to build the whole
thing. So, I did not think too much about the problem that I experienced
in my hackathon day, when I was doing the hackathon till I came to
MongoDB.
*Rez Kahn (06:58):* Then, I learned about Atlas and I saw my manager
deploy an Atlas cluster and show me an app, and the whole experience of
having an app running on a production version of MongoDB within 20
minutes was absolutely magical. Digging deeper into it, the problem we
were trying to solve is this, we know that the experience of using
MongoDB is great as a developer. It's very easy and fast to build
applications, but once you want to deploy an application, there is a
whole host of things you need to think about. You need to think about
how do I configure the MongoDB instance to have multiple nodes so that
if one of those nodes go down, you'll have a database available.
*Rez Kahn (07:50):* How do I configure a backup of my data so that
there's a copy of my data always available in case there's a
catastrophic data loss? How do I do things like monitor the performance
of MongoDB, and if there's a performance degradation, get alerted that
there's a performance degradation? Once I get alerted, what do I need to
do to make sure that I can fix the problem? If the way to fix the
problem is I need to have a bigger machine running MongoDB, how do I
upgrade a machine while making sure my database doesn't lose
connectivity or go down? So, those are all like not easy problems to
solve.
*Rez Kahn (08:35):* In large corporations, you have teams of DBS who do
that, in smaller startups, you don't have DBS. You have software
engineers who have to spend valuable time from their day to handle all
of these operational issues. If you really think about it, these
operational issues are not exactly value added things to be working on,
because you'd rather be spending the time building, differentiating
features in your application. So, the value which Atlas provided is we
handle all these operational issues for you. It's literally a couple of
clicks before you have a production into the MongoDB running, with
backup, monitoring, alerting, all those magically set up for you.
*Rez Kahn (09:20):* If you need to upgrade MongoDB instances, or need to
go to a higher, more powerful instance, all those things are just one
click as well, and it takes only a few minutes for it to be configured
for you. So, in other words, we're really putting a lot of time back
into the hands of our customers so that they can focus on building,
writing code, which differentiates their business as opposed to spending
time doing ops work.
*Michael Lynn (09:45):* Amazing. I mean, truly magical. So, you talked
quite a bit about the space there. You mentioned high availability, you
mentioned monitoring, you mentioned initial deployment, you mentioned
scalability. I know we talked before we kicked the podcast off, I'd love
for this to be the introduction to database management automation.
Because there's just so much, we could probably make four or five
episodes alone, but, Nick, did you have a question?
*Nic Raboy (10:20):* Yeah, I was just going to ask, so of all the things
that Atlas does for us, I was just going to ask, is there anything that
still does require user intervention after they've deployed an Atlas
cluster? Or, is it all automated? This is on the topic of automation,
right?
*Rez Kahn (10:37):* Yeah. I wish everything was automated, but if it
were, I would not have a job. So, there's obviously a lot of work to do.
The particular area, which is of great interest to me and the company
is, once you deploy an application and the application is scaling, or is
being used by lots of users, and you're making changes to the
application, how do we make sure that the performance of MongoDB itself
is as awesome as possible? Now, that's a very difficult problem to
solve, because you could talk about performance in a lot of different
ways. One of the more obvious proxies of performance is how much time it
takes to get responses to a query back.
*Rez Kahn (11:30):* You obviously want it to be as low as possible. Now,
the way to get a very low latency on your queries is you can have a very
powerful machine backing that MongoDB instance, but the consequence of
having a very powerful machine backing that MongoDB instance is it can
be very costly. So, how do you manage, how do you make sure costs are
manageable while getting as great of a performance as possible is a very
difficult problem to solve. A lot of people get paid a lot of money to
solve that particular problem. So, we have to characterize that problem
for us, sort of like track the necessary metrics to measure costs,
measure performance.
*Rez Kahn (12:13):* Then, we need to think about how do I detect when
things are going bad. If things are going bad, what are the strategies I
can put in place to solve those problems? Luckily, with MongoDB, there's
a lot of strategies they can put in place. For example, one of the
attributes of MongoDB is you could have multiple, secondary indexes, and
those indexes can be as complex or as simple as you want, but when do
you put indexes? What indexes do you put? When do you keep indexes
around versus when do you get rid of it? Those are all decisions you
need to make because making indexes is something which is neither cheap,
and keeping them is also not cheap.
*Rez Kahn (12:57):* So, you have to do an optimization in your head on
when you make indexes, when you get rid of them. Those are the kind of
problems that we believe our expertise and how MongoDB works. The
performance data we are capturing from you using Mongo DB can help us
provide you in a more data-driven recommendations. So, you don't have to
worry about making these calculations in your head yourself.
*Michael Lynn (13:22):* The costs that you mentioned, there are costs
associated with implementing and maintaining indexes, but there are also
costs if you don't, right? If you're afraid to implement indexes,
because you feel like you may impact your performance negatively by
having too many indexes. So, just having the tool give you visibility
into the effectiveness of your indexes and your index strategy. That's
powerful as well.
*Nic Raboy (13:51):* So, what kind of visibility would you exactly get?
I want to dig a little deeper into this. So, say I've got my cluster and
I've got tons and tons of data, and quite a few indexes created. Will it
tell me about indexes that maybe I haven't used in a month, for example,
so that way I could remove them? How does it relay that information to
you effectively?
*Rez Kahn (14:15):* Yeah. What the system would do is it keeps a record
of all indexes that you had made. It will track how much you're using
certain indexes. It will also track whether there are overlaps between
those indexes, which might make one index redundant compared to the
other. Then, we do some heuristics in the background to look at each
index and make an evaluation, like whether it's possible, or whether
it's a good idea to remove that particular index based on how often it
has been used over the past X number of weeks. Whether they are overlaps
with other indexes, and all those things you can do by yourself.
*Rez Kahn (14:58):* But these are things you need to learn about MongoDB
behavior, which you can, but why do it if it can just tell you that this
is something which is inefficient, and these are the things you need to
make it more efficient.
*Michael Lynn (15:13):* So, I want to be cognizant that not all of the
listeners of our podcast are going to be super familiar with even
indexes, the concept of indexes. Can you maybe give us a brief
introduction to what indexes are and why they're so important?
*Rez Kahn (15:26):* Yeah, yeah. That's a really good question. So, when
you're working with a ... So, indexes are not something which is unique
to MongoDB, all other databases also have indexes. The way to look at
indexes, it's a special data structure which stores the data you need in
a manner that makes it very fast to get that data back. So, one good
example is, let's say you have a database with names and phone numbers.
You want to query the database with a name and get that person's phone
number.
*Rez Kahn (16:03):* Now, if you don't have an index, what the database
software would do is it would go through every record of name and phone
number so that it finds the name you're looking for, and then, it will
give you back the phone number for that particular name. Now, that's a
very expensive process because if you have a database with 50 million
names and phone numbers, that would take a long time. But one of the
things you can do with index is you can create an index of names, which
would organize the data in a manner where it wouldn't have to go through
all the names to find the relevant name that you care about.
*Rez Kahn (16:38):* It can quickly go to that record and return back the
phone number that you care about. So, instead of going through 50
million rows, you might have to go through a few hundred rows of data in
order to get the information that you want. Suddenly, your queries are
significantly faster than what it would have been if you had not created
an index. Now, the challenge for our users is, like you said, Mike, a
lot of people might not know what an index is, but people generally know
what an index is. The problem is, what is the best possible thing you
could do for MongoDB?
*Rez Kahn (17:18):* There's some stuff you need to learn. There's some
analysis you need to do such as you need to look at the queries you're
running to figure out like which queries are the most important. Then
for those queries, you need to figure out what the best index is. Again,
you can think about those things by yourself if you want to, but there
is an analytical, logical way of crunching these numbers and just
telling you that this is the index which is the best index for you at
this given point in time, these are the reasons why, and these are the
benefits it would give you.
*Michael Lynn (17:51):* So, okay. Well, indexes sounded like I need
them, because I've got an application and I'm looking up phone numbers
and I do have a lot of phone numbers. So, I'm just going to index
everything in my database. How does that sound?
*Rez Kahn (18:05):* It might be fine, actually. It depends on how many
indexes you create. The thing which is tricky is because indexes are a
special data structure, it does take up storage space in the database
because you're storing, in my example from before, names in a particular
way. So, you're essentially copying the data that you already have, but
storing it in a particular way. Now, that might be fine, but if you have
a lot of these indexes, you have to create lots of copies of your data.
So, it does use up space, which could actually go to storing new data.
*Rez Kahn (18:43):* It also has another effect where if you're writing a
lot of data into a database, every time you write a new record, you need
to make sure all those indexes are updated. So, writes can take longer
because you have indexes now. So, you need to strike a happy balance
between how many indexes do I need to get great read performance, but
not have too many indexes so my write performance is hurt? That's a
balancing act that you need to do as a user, or you can use our tools
and we can do it for you.
*Michael Lynn (19:11):* There you go, yeah. Obviously, playing a little
devil's advocate there, but it is important to have the right balance-
*Rez Kahn (19:17):* Absolutely.
*Michael Lynn (19:17):* ... and base the use of your index on the
read-write profile of your application. So, knowing as much about the
read-write profile, how many reads versus how many writes, how big are
each is incredibly important. So, that's the space that this is in. Is
there a tagline or a product within Atlas that you refer to when you're
talking about this capability?
*Rez Kahn (19:41):* Yeah. So, there's a product called Performance
Advisor, which you can use via the API, or you can use it with the UI.
When you use Performance Advisor, it will scan the queries that ran on
your database and give you a ranked list of indexes that you should be
building based on importance. It will tell you why a particular index is
important. So, we have this very silly name called the impact score. It
would just tell you that this is the percentage impact of having this
index built, and it would rank index recommendations based on that.
*Rez Kahn (20:21):* One of the really cool things we are building is, so
we've had Performance Advisor for a few years, and it's a fairly popular
product amongst our customers. Our customers who are building an
application on MongoDB Atlas, or if they're changing an application, the
first thing that they do after deploying is they would go to Performance
Advisor and check to see if there are index recommendations. If there
are, then, they would go and build it, and magically the performance of
their queries become better.
>
>
>So, when you deploy an Atlas cluster, you can say, "I want indexes to be
>built automatically." ... as we detect queries, which doesn't have an
>index and is important and causing performance degradation, we can
>automatically figure out like what the index ought to be.
>
>
*Rez Kahn (20:51):* Because we have had good success with the product,
what we have decided next is, why do we even make people go into Atlas and
look at the recommendations, decide which they want to keep, and create
the index manually? Why not just automate that for them? So, when you
deploy an Atlas cluster, you can say, "I want indexes to be built
automatically." If you have it turned on, then we will be proactively
analyzing your queries behind the scenes for you, and as soon as we
detect queries, which doesn't have an index and is important and causing
performance degradation, we can automatically figure out like what the
index ought to be.
*Rez Kahn (21:36):* Then, build that index for you behind the scenes in
a manner that it's performant. That's a product which we're calling
autopilot mode for indexing, which is coming in the next couple of
months.
*Nic Raboy (21:46):* So, I have a question around autopilot indexing.
So, you're saying that it's a feature you can enable to allow it to do
it for you on a needed basis. So, around that, will it also remove
indexes for you that are below the percent threshold, or can you
actually even set the threshold on when an index would be created?
*Rez Kahn (22:08):* So, I'll answer the first question, which is can it
delete indexes for you. Today, it can't. So, we're actually releasing
another product within Performance Advisor called Index Removal
Recommendations, where you can see recommendations of which indexes you
need to remove. The general product philosophy that we have in the
company is, we build recommendations first. If the recommendations are
good, then we can use those recommendations to automate things for our
customers. So, the plan is, over the next six months to provide
recommendations on when indexes ought to be removed.
*Rez Kahn (22:43):* If we get good user feedback, and if it's actually
useful, then we will incorporate that in autopilot mode for indexing and
have that system also do the indexes for you. Regarding your second
question of, are the thresholds of when indexes are built configurable?
That's a good question, because we did spend a lot of time thinking
about whether we want to give users those thresholds. It's a difficult
question to answer because on one hand, having knobs, and dials, and
buttons is attractive because you can, as a user, can control how the
system behaves.
*Rez Kahn (23:20):* On the other hand, if you don't know what you're
doing, you could create a lot of problems for yourself, and we want to
be cognizant of that. So, what we have decided to do instead is we're
not providing a lot of knobs and dials in the beginning for our users.
We have selected some defaults on how the system behaves based on
analysis that we have done on thousands of customers, and hoping that
would be enough. But we have a window to add those knobs and dials back
if there are special use cases for our users, but we will do it if it
makes sense, obviously.
*Nic Raboy (23:58):* The reason why I asked is because you've got the
category of developers who probably are under-indexed, right? Then, to
flip that switch, and now, is there a risk of over-indexing now, in that
sense?
*Rez Kahn (24:12):* That's a great question. The way we built the
system, we built some fail-safes into it, where the risk of
over-indexing is very limited. So, we do a couple of really cool things.
One of the things we do is, when we detect that there's an index that we
can build, we try to predict things such as how long an index would take
to be built. Then, based on that we can make a decision, whether we'll
automatically build it, or we'll give user the power to say, yay or nay
on building that index. Because we're cognizant of how much time and
resources that index build might take. We also have fail-safes in the
background to prevent runaway index build.
*Rez Kahn (24:59):* I think we have this configurable threshold of, I
forget the exact number, like 10 or 20 indexes for collections that can
be auto-built. After that, it's up to the users to decide to build more
things. The really cool part is, once we have the removal
recommendations out and, assuming it works well and
users like it, we could use that as a signal to automatically remove
indexes, if you're building too many indexes. Like a very neat, closed
loop system, where we build indexes and observe how it works. If it does
work well, we'll keep it. If it doesn't work well, we'll remove it. You
can be as hands off as you want.
*Michael Lynn (25:40):* That sounds incredibly exciting. I think I have
a lot of fear around that though, especially because of the speed at
which a system like Atlas, with an application running against it, the
speed to make those types of changes can be onerous, right. To
continually get feedback and then act on that feedback. I'm just
curious, is that one of the challenges that you faced in implementing a
system like this?
*Rez Kahn (26:12):* Yeah. One of the big challenges is, we talked about
this a lot during the R&D phase, is that we think there are two strategies
for index creation. There is what we call reactive, and then there is
proactive. Reactive generally is you make a change in your application
and you add a query which has no index, and it's a very expensive query.
You want to make the index as soon as possible in order to protect the
MongoDB instance from a performance problem. The question is, what is
soon? How do you know that this particular query would be used for a
long time versus just used once?
*Rez Kahn (26:55):* It could be a query made by an analyst and it's
expensive, but it's only going to be used once. So, it doesn't make
sense to build an index for it. That's a very difficult problem to
solve. So, in the beginning, our approach has been, let's be
conservative. Let's wait six hours and observe like what a query does
for six hours. That gives us an adequate amount of confidence that this
is a query which is actually going to be there for a while and hence an
index makes sense. Does that make sense, Mike?
*Michael Lynn (27:28):* Yeah, it does. Yeah. Perfect sense. I'm thinking
about the increased flexibility associated with leveraging MongoDB in
Atlas. Now, obviously, MongoDB is an open database. You can download it,
install it on your laptop and you can use it on servers in your data
center. Will any of these automation features appear in the non-Atlas
product set?
*Rez Kahn (27:58):* That's a really good question. We obviously want to
make it available to as many of our customers as possible, because it is
very valuable to have systems like this. There are some practical
realities that make it difficult. One of the realities is, when you're
using Atlas, the underlying machines, which are backing your database, are
something that we can quickly configure and add very easily because
MongoDB is the one which is managing those machines for you, because
it's a service that we provide. The infrastructure is hidden from you,
which means that automation features, where we need to change the
underlying machines very quickly, is only possible in Atlas because we
control those machines.
*Rez Kahn (28:49):* So, a good example of that is, and we should talk
about this at some point, we have auto scaling, where we can
automatically scale a machine up or down in order to manage your load.
Even if we wanted to, we can't actually give that feature to our customers
using MongoDB on premises because we don't have access to the machines,
but in Atlas we do. For automatic indexing, it's a little bit easier
because it's more of a software configuration. So, it's easier for us to
give it to other places where MongoDB is used.
*Rez Kahn (29:21):* We definitely want to do that. We're just starting
with Atlas because it's faster and easier to do, and we have a lot of
customers there. So, it's a lot of customers to test and give us
feedback about the product.
*Michael Lynn (29:31):* That's a great answer. It helps me to draw it
out in my head in terms of an architecture. So, it sounds like there's a
layer above ... MongoDB is a server process. You connect to it to
interface with and to manage your data. But in Atlas, there's an
additional layer that is on top of the database, and through that layer,
we have access to all of the statistics associated with how you're
accessing your database. So, that layer does not exist in the
downloadable MongoDB today, anyway.
*Rez Kahn (30:06):* It doesn't. Yeah, it doesn't.
*Michael Lynn (30:08):* Yeah.
*Rez Kahn (30:09):* Exactly.
*Michael Lynn (30:09):* Wow, so that's quite a bit in the indexing
space, but that's just one piece of the puzzle, right? Folks that are
leveraging the database are struggling across a whole bunch of areas.
So, what else can we talk about in this space where you're solving these
problems?
*Rez Kahn (30:26):* Yeah. There is so much, like you mentioned, indexing
is just one strategy for performance optimization, but there are so many
others. One of the very common ones (or uncommon, depending on who you ask)
is what should the schema of your data be and how do you optimize the
schema for optimal performance? That's a very interesting problem space.
We have done a lot of thinking on that and we have a couple of products
to help you do that as well.
*Rez Kahn (30:54):* Another problem is, how do we project out, how do we
forecast what your future workload would be in order to make sure that
we are provisioning the right amount of machine power behind your
database, so that you get the best performance, but don't pay extra
because you're over-provisioned? When is the best time to have a
shard versus scale up vertically, and what is the best shard key to use?
That is also another interesting problem space for us to tackle. So,
there's a lot to talk about \[crosstalk 00:31:33\] we should, at some
point.
*Michael Lynn (31:36):* These are all facets of the product that you
manage?
*Rez Kahn (31:39):* These are all facets of the product that I manage,
yeah. One thing which I would love to mention to our users listening to the
podcast: like I mentioned before, we're building this tool called
Autopilot Mode for Indexing to automatically create indexes for you.
It's in heavy engineering development right now, and we're hoping to
release it in the next couple of months. We're going to be doing a private preview program for that particular product, trying to get around
a hundred users to use that product and get early access to it. I would
encourage you guys to think about that and give that a shot.
*Michael Lynn (32:21):* Who can participate, who are you looking to get
their hands on this?
*Rez Kahn (32:26):* In theory, it should be anyone, anyone who is
spending a lot of time building indexes would be perfect candidates for
it. All of our MongoDB users spend a lot of time building indexes. So,
we are open to any type of companies, or use cases, and we're very
excited to work with you to see how we can make the product successful
for you, and use your feedback to build the next version of the product.
*Michael Lynn (32:51):* Great. Well, this has been a phenomenal
introduction to database automation, Rez. I want to thank you for taking
the time to talk with us. Nic, before we close out, any other questions
or things you think we should cover?
*Nic Raboy (33:02):* No, not for this episode. If anyone has any
questions after listening to this episode, please jump into our
community. So, this is a community forum board, Rez, Mike, myself, we're
all active in it. It's community.mongodb.com. It's a great way to get
your questions answered about automation.
*Michael Lynn (33:21):* Absolutely. Rez, you're in there as well. You've
taken a look at some of the questions that come across from users.
*Rez Kahn (33:27):* Yeah, I do that occasionally. Not as much as I
should, but I do that.
*Michael Lynn (33:32):* Awesome.
*Nic Raboy (33:32):* Well, if there's a question that pops up, we'll pull
you in.
*Michael Lynn (33:34):* Yeah, if we get some more questions in there,
we'll get you in there.
*Rez Kahn (33:37):* Sounds good.
*Michael Lynn (33:38):* Awesome. Well, terrific, Rez. Thanks once again
for taking the time to talk with us. I'm going to hold you to that.
We're going to take this in a series approach. We're going to break all
of these facets of database automation down, and we're going to cover
them one by one. Today's been an introduction and a little bit about
autopilot mode for indexing. Next one, what do you think? What do you
think you want to cover next?
*Rez Kahn (34:01):* Oh, let's do scaling.
*Nic Raboy (34:02):* I love it.
*Michael Lynn (34:03):* Scaling and auto scalability. I love it.
Awesome. All right, folks, thanks.
*Rez Kahn (34:08):* Thank you.
## Summary
An important part of ensuring efficient application performance is
modeling the data in your documents, but once you've designed the
structure of your documents, it's absolutely critical that you continue
to review the read/write profile of your application to ensure that
you've properly indexed the data elements most frequently read. MongoDB
Atlas' automated index management can help as the profile of your
application changes over time.
Be sure you check out the links below for suggested reading around
performance considerations. If you have questions, visit us in the
Community Forums.
In our next episodes in this series, we'll build on the concept of
automating database management to discuss automating the scaling of your
database to ensure that your application has the right mix of resources
based on its requirements.
Stay tuned for Part 2. Remember to subscribe to the
Podcast to make sure that you don't miss
a single episode.
## Related Links
Check out the following resources for more information:
- MongoDB Docs: Remove Unnecessary
Indexes
- MongoDB Docs: Indexes
- MongoDB Docs: Compound Indexes —
Prefixes
- MongoDB Docs: Indexing
Strategies
- MongoDB Docs: Data Modeling
Introduction
- MongoDB University M320: Data
Modeling
- MongoDB University M201: MongoDB
Performance
- MongoDB Docs: Performance
Advisor
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn about database automation with Rez Kahn - Part 1 - Index Autopilot",
"contentType": "Podcast"
} | Database Automation Series - Automated Indexes | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/typescript/type-safety-with-prisma-and-mongodb | created | # Type Safety with Prisma & MongoDB
Did you know that Prisma now supports MongoDB? In this article, we'll take a look at how to use Prisma to connect to MongoDB.
## What is Prisma?
Prisma is an open source ORM (Object Relational Mapper) for Node.js. It supports both JavaScript and TypeScript. It really shines when using TypeScript, helping you to write code that is both readable and type-safe.
> If you want to hear from Nikolas Burk and Matthew Meuller of Prisma, check out this episode of the MongoDB Podcast.
>
>
## Why Prisma?
Schemas help developers avoid data inconsistency issues over time. While you can define a schema at the database level within MongoDB, Prisma lets you define a schema at the application level. When using the Prisma Client, a developer gets the aid of auto-completing queries, since the Prisma Client is aware of the schema.
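To see what that means in practice, here's a rough sketch of a query against the `User` model we'll define below. The exact shape of your client depends on your own schema, but once the client is generated, the result is fully typed without any manual annotations:

```js
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function listUsersWithPosts() {
  // `users` is typed from the schema, so editors can auto-complete
  // fields like `email` or `posts[0].title` as you type.
  const users = await prisma.user.findMany({
    include: { posts: true },
  })
  return users
}
```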
## Data modeling
Generally, data that is accessed together should be stored together in a MongoDB database. Prisma supports using embedded documents to keep data together.
However, there may be use cases where you'll need to store related data in separate collections. To do that in MongoDB, you can include one document’s `_id` field in another document. In this instance, Prisma can assist you in organizing this related data and maintaining referential integrity of the data.
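To make that concrete, here's an illustrative schema fragment (not part of the example project we'll build below) showing both approaches side by side. The hypothetical `Address` type is embedded directly in the user document, while `posts` references documents stored in a separate collection:

```js
// Illustrative only: `Address` is a hypothetical embedded (composite) type.
type Address {
  street String
  city   String
}

model User {
  id      String   @id @default(auto()) @map("_id") @db.ObjectId
  email   String   @unique
  address Address? // embedded in the same document
  posts   Post[]   // references documents in a separate collection
}
```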
## Prisma & MongoDB in action
We are going to take an existing example project from Prisma’s `prisma-examples` repository.
One of the examples is a blog content management platform. This example uses a SQLite database. We'll convert it to use MongoDB and then seed some dummy data.
If you want to see the final code, you can find it in the dedicated GitHub repository.
### MongoDB Atlas configuration
In this article, we’ll use a MongoDB Atlas cluster. To create a free account and your first forever-free cluster, follow the Get Started with Atlas guide.
### Prisma configuration
We'll first need to set up our environment variable to connect to our MongoDB Atlas database. I'll add my MongoDB Atlas connection string to a `.env` file.
Example:
```js
DATABASE_URL="mongodb+srv://<username>:<password>@<cluster>.mongodb.net/prisma?retryWrites=true&w=majority"
```
> You can get your connection string from the Atlas dashboard.
Now, let's edit the `schema.prisma` file.
> If you are using Visual Studio Code, be sure to install the official Prisma VS Code extension to help with formatting and auto-completion.
>
> While you’re in VS Code, also install the official MongoDB VS Code extension to monitor your database right inside VS Code!
In the `datasource db` object, we'll set the provider to "mongodb" and the url to our environment variable `DATABASE_URL`.
For the `User` model, we'll need to update the `id`. Instead of an `Int`, we'll use `String`. We'll set the default to `auto()`. Since MongoDB names the `id` field `_id`, we'll map the `id` field to `_id`. Lastly, we'll tell Prisma to use the data type of `ObjectId` for the `id` field.
We'll do the same for the `Post` model `id` field. We'll also change the `authorId` field to `String` and set the data type to `ObjectId`.
```js
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "mongodb"
url = env("DATABASE_URL")
}
model User {
id String @id @default(auto()) @map("_id") @db.ObjectId
email String @unique
name String?
posts Post[]
}
model Post {
id String @id @default(auto()) @map("_id") @db.ObjectId
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
title String
content String?
published Boolean @default(false)
viewCount Int @default(0)
author User @relation(fields: [authorId], references: [id])
authorId String @db.ObjectId
}
```
This schema will result in a separate `User` and `Post` collection in MongoDB. Each post will have a reference to a user.
Now that we have our schema set up, let's install our dependencies and generate our schema.
```bash
npm install
npx prisma generate
```
> If you make any changes later to the schema, you'll need to run `npx prisma generate` again.
### Create and seed the MongoDB database
Next, we need to seed our database. The repo comes with a `prisma/seed.ts` file with some dummy data.
So, let's run the following command to seed our database:
```bash
npx prisma db seed
```
This also creates the `User` and `Post` collections that are defined in `prisma/schema.prisma`.
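If you're curious what a seed script looks like, it's roughly a small program that calls the Prisma Client. The snippet below is a simplified sketch with made-up data, not the exact contents of the repo's `prisma/seed.ts`:

```js
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Hypothetical record; the repo's seed file ships its own dummy data.
  await prisma.user.create({
    data: {
      email: '[email protected]',
      name: 'Alice',
      posts: {
        create: [{ title: 'Hello MongoDB', published: true }],
      },
    },
  })
}

main()
  .catch((e) => {
    console.error(e)
    process.exit(1)
  })
  .finally(async () => {
    await prisma.$disconnect()
  })
```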
### Other updates to the example code
Because we made some changes to the `id` data type, we'll need to update some of the example code to reflect these changes.
The updates are in the `pages/api/post/[id].ts` and `pages/api/publish/[id].ts` files.
Here's one example. We need to remove the `Number()` call from the reference to the `id` field since it is now a `String`.
```js
// BEFORE
async function handleGET(postId, res) {
const post = await prisma.post.findUnique({
where: { id: Number(postId) },
include: { author: true },
})
res.json(post)
}
// AFTER
async function handleGET(postId, res) {
const post = await prisma.post.findUnique({
where: { id: postId },
include: { author: true },
})
res.json(post)
}
```
### Awesome auto complete & IntelliSense
Notice in this file, when hovering over the `post` variable, VS Code knows that it is of type `Post`. If we just wanted a specific field from this, VS Code automatically knows which fields are included. No guessing!
### Run the app
Those are all of the updates needed. We can now run the app and we should see the seed data show up.
```bash
npm run dev
```
We can open the app in the browser at `http://localhost:3000/`.
From the main page, you can click on a post to see it. From there, you can delete the post.
We can go to the Drafts page to see any unpublished posts. When we click on any unpublished post, we can publish it or delete it.
The "Signup" button will allow us to add a new user to the database.
And, lastly, we can create a new post by clicking the "Create draft" button.
All of these actions are performed by the Prisma client using the API routes defined in our application.
Check out the `pages/api` folder to dive deeper into the API routes.
## Conclusion
Prisma makes dealing with schemas in MongoDB a breeze. It especially shines when using TypeScript by making your code readable and type-safe. It also helps to manage multiple collection relationships by aiding with referential integrity.
I can see the benefit of defining your schema at the application level and will be using Prisma to connect to MongoDB in the future.
Let me know what you think in the MongoDB community. | md | {
"tags": [
"TypeScript",
"MongoDB",
"JavaScript"
],
"pageDescription": "In this article, we’ll explore Prisma, an Object Relational Mapper (ODM) for MongoDB. Prisma helps developers to write code that is both readable and type-safe.",
"contentType": "Tutorial"
} | Type Safety with Prisma & MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/flask-python-mongodb | created | # Build a RESTful API with Flask, MongoDB, and Python
>This is the first part of a short series of blog posts called "Rewrite it in Rust (RiiR)." It's a tongue-in-cheek title for some posts that will investigate the similarities and differences between the same service written in Python with Flask, and Rust with Actix-Web.
This post will show how I built a RESTful API for a collection of cocktail recipes I just happen to have lying around. The aim is to show an API server with some complexity, so although it's a small example, it will cover important factors such as:
- Data transformation between the database and a JSON representation.
- Data validation.
- Pagination.
- Error-handling.
## Prerequisites
- Python 3.8 or above
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your database username, password, and connection string as you will need those later.
This is an *advanced* guide, so it'll cover a whole bunch of different libraries which can be brought together to build a declarative Restful API server on top of MongoDB. I won't cover repeating patterns in the codebase, so if you want to build the whole thing, I recommend checking out the source code, which is all on GitHub.
It won't cover the basics of Python, Flask, or MongoDB, so if that's what you're looking for, I recommend checking out the following resources before tackling this post:
- Think Python
- The Python & MongoDB Quickstart Series
- Flask Tutorial
- Pydantic Documentation
## Getting Started
Begin by cloning the sample code source from GitHub. There are four top-level directories:
- actix-cocktail-api: You can ignore this for now.
- data: This contains an export of my cocktail data. You'll import this into your cluster in a moment.
- flask-cocktail-api: The code for this blog post.
- test_scripts: A few shell scripts that use curl to test the HTTP interface of the API server.
There are more details in the GitHub repo, but the basics are: Install the project with your virtualenv active:
``` shell
pip install -e .
```
Next, you should import the data into your cluster. Set the environment variable `$MONGO_URI` to your cluster URI. This environment variable will be used in a moment to import your data, and also by the Flask app. I use `direnv` to configure this, and put the following line in my `.envrc` file in my project's directory:
``` shell
export MONGO_URI="mongodb+srv://USERNAME:PASSWORD@YOURCLUSTER.mongodb.net/cocktails?retryWrites=true&w=majority"
```
Note that your database must be called "cocktails," and the import will create a collection called "recipes." After checking that `$MONGO_URI` is set correctly, run the following command:
``` shell
mongoimport --uri "$MONGO_URI" --file ./recipes.json
```
Now you should be able to run the Flask app from the
`flask-cocktail-api` directory:
``` shell
FLASK_DEBUG=true FLASK_APP=cocktailapi flask run
```
(You can run `make run` if you prefer.)
Check the output to ensure it is happy with the configuration, and then in a different terminal window, run the `list_cocktails.sh` script in the `test_scripts` directory. It should print something like this:
``` json
{
"_links": {
"last": {
"href": "http://localhost:5000/cocktails/?page=5"
},
"next": {
"href": "http://localhost:5000/cocktails/?page=5"
},
"prev": {
"href": "http://localhost:5000/cocktails/?page=3"
},
"self": {
"href": "http://localhost:5000/cocktails/?page=4"
}
},
"recipes":
{
"_id": "5f7daa198ec9dfb536781b0d",
"date_added": null,
"date_updated": null,
"ingredients": [
{
"name": "Light rum",
"quantity": {
"unit": "oz",
}
},
{
"name": "Grapefruit juice",
"quantity": {
"unit": "oz",
}
},
{
"name": "Bitters",
"quantity": {
"unit": "dash",
}
}
],
"instructions": [
"Pour all of the ingredients into an old-fashioned glass almost filled with ice cubes",
"Stir well."
],
"name": "Monkey Wrench",
"slug": "monkey-wrench"
},
]
...
```
## Breaking it All Down
The code is divided into three submodules.
- `__init__.py` contains all the Flask setup code, and defines all the HTTP routes.
- `model.py` contains all the Pydantic model definitions.
- `objectid.py` contains a Pydantic field definition that I stole from the Beanie object-data mapper for MongoDB.
I mentioned earlier that this code makes use of several libraries:
- PyMongo and Flask-PyMongo handle the connection to the database. Flask-PyMongo specifically wraps the database collection object to provide a convenient `find_one_or_404` method.
- Pydantic manages data validation, and some aspects of data transformation between the database and a JSON representation.
- `jsonable_encoder`, a single function borrowed from FastAPI, handles encoding documents into JSON-compatible data.
## Data Validation and Transformation
When building a robust API, it's important to validate all the data passing into the system. It would be possible to do this using a stack of `if/else` statements, but it's much more effective to define a schema declaratively, and to allow that to programmatically validate the data being input.
I used a technique that I learned from Beanie, a new and neat ODM that I unfortunately couldn't practically use on this project, because Beanie is async, and Flask is a blocking framework.
Beanie uses Pydantic to define a schema, and adds a custom Field type for ObjectId.
``` python
# model.py
class Cocktail(BaseModel):
    id: Optional[PydanticObjectId] = Field(None, alias="_id")
slug: str
name: str
ingredients: List[Ingredient]
instructions: List[str]
date_added: Optional[datetime]
date_updated: Optional[datetime]
def to_json(self):
return jsonable_encoder(self, exclude_none=True)
def to_bson(self):
data = self.dict(by_alias=True, exclude_none=True)
if data["_id"] is None:
data.pop("_id")
return data
```
This `Cocktail` schema defines the structure of a `Cocktail` instance, which will be validated by Pydantic when instances are created. It includes another embedded schema for `Ingredient`, which is defined in a similar way.
I added convenience functions to export the data in the `Cocktail` instance to either a JSON-compatible `dict` or a BSON-compatible `dict`. The differences are subtle, but BSON supports native `ObjectId` and `datetime` types, for example, whereas when encoding as JSON, it's necessary to encode ObjectId instances in some other way (I prefer a string containing the hex value of the id), and datetime objects are encoded as ISO8601 strings.
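To make the difference concrete, here's a rough sketch of what the two methods return for the same instance, run for example in a REPL with the model imported (values abridged):

``` python
from datetime import datetime
from bson import ObjectId

doc = {
    "_id": ObjectId("5f7daa198ec9dfb536781b0d"),
    "slug": "monkey-wrench",
    "name": "Monkey Wrench",
    "ingredients": [],
    "instructions": ["Stir well."],
    "date_added": datetime(2020, 10, 7, 12, 0, 0),
}
cocktail = Cocktail(**doc)

cocktail.to_bson()["_id"]         # ObjectId('5f7daa198ec9dfb536781b0d') - native BSON type
cocktail.to_json()["_id"]         # '5f7daa198ec9dfb536781b0d' - hex string
cocktail.to_json()["date_added"]  # '2020-10-07T12:00:00' - ISO8601 string
```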
The `to_json` method makes use of a function imported from FastAPI, which recurses through the instance data, encoding all values in a JSON-compatible form. It already handles `datetime` instances correctly, but to get it to handle ObjectId values, I extracted some custom field code from Beanie, which can be found in `objectid.py`.
The `to_bson` method doesn't need to pass the `dict` data through `jsonable_encoder`. All the types used in the schema can be directly saved with PyMongo. It's important to set `by_alias` to `True`, so that the key for `_id` is just that, `_id`, and not the schema's `id` without an underscore.
``` python
# objectid.py
class PydanticObjectId(ObjectId):
"""
ObjectId field. Compatible with Pydantic.
"""
@classmethod
def __get_validators__(cls):
yield cls.validate
@classmethod
def validate(cls, v):
return PydanticObjectId(v)
@classmethod
def __modify_schema__(cls, field_schema: dict):
field_schema.update(
type="string",
examples="5eb7cf5a86d9755df3a6c593", "5eb7cfb05e32e07750a1756a"],
)
ENCODERS_BY_TYPE[PydanticObjectId] = str
```
This approach is neat for this particular use-case, but I can't help feeling that it would be limiting in a more complex system. There are many patterns for storing data in MongoDB. These often result in storing data in a form that is optimal for writes or reads, but not necessarily the representation you would wish to export in an API.
>**What is a Slug?**
>
>Looking at the schema above, you may have wondered what a "slug" is ... well, apart from a slimy garden pest.
>
>A slug is a unique, URL-safe, mnemonic used for identifying a document. I picked up the terminology as a Django developer, where this term is part of the framework. A slug is usually derived from another field. In this case, the slug is derived from the name of the cocktail, so if a cocktail was called "Rye Whiskey Old-Fashioned," the slug would be "rye-whiskey-old-fashioned."
>
>In this API, that cocktail could be accessed by sending a `GET` request to the `/cocktails/rye-whiskey-old-fashioned` endpoint.
>
>I've kept the unique `slug` field separate from the auto-assigned `_id` field, but I've provided both because the slug could change if the name of the cocktail was tweaked, in which case the `_id` value would provide a constant identifier to look up an exact document.
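If you're wondering how a slug like that gets generated: the sample data already ships with slugs, but a minimal, illustrative implementation (not part of the example project) could be as simple as:

``` python
import re

def slugify(name):
    # Lower-case the name and collapse any run of non-alphanumeric
    # characters into a single hyphen.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

slugify("Rye Whiskey Old-Fashioned")  # 'rye-whiskey-old-fashioned'
```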
In the Rust version of this code, I was nudged to use a different approach. It's a bit more verbose, but in the end I was convinced that it would be more powerful and flexible as the system grew.
## Creating a New Document
Now I'll show you what a single endpoint looks like, first focusing on the "Create" endpoint, that handles a POST request to `/cocktails` and creates a new document in the "recipes" collection. It then returns the document that was stored, including the newly unique ID that MongoDB assigned as `_id`, because this is a RESTful API, and that's what RESTful APIs do.
``` python
@app.route("/cocktails/", methods="POST"])
def new_cocktail():
raw_cocktail = request.get_json()
raw_cocktail["date_added"] = datetime.utcnow()
cocktail = Cocktail(**raw_cocktail)
insert_result = recipes.insert_one(cocktail.to_bson())
cocktail.id = PydanticObjectId(str(insert_result.inserted_id))
print(cocktail)
return cocktail.to_json()
```
This endpoint modifies the incoming JSON directly, to add a `date_added` item with the current time. It then passes it to the constructor for our Pydantic schema. At this point, if the schema failed to validate the data, an exception would be raised and displayed to the user.
After validating the data, `to_bson()` is called on the `Cocktail` to convert it to a BSON-compatible dict, and this is directly passed to PyMongo's `insert_one` method. There's no way to get PyMongo to return the document that was just inserted in a single operation (although an upsert using `find_one_and_update` is similar to just that).
After inserting the data, the code then updates the local object with the newly-assigned `id` and returns it to the client.
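For comparison, the upsert-style alternative mentioned above would look something like the following sketch. This is not what the example code does; it's just to illustrate getting the stored document back in a single round trip:

``` python
from pymongo import ReturnDocument

inserted_doc = recipes.find_one_and_update(
    {"slug": cocktail.slug},
    {"$set": cocktail.to_bson()},
    upsert=True,
    return_document=ReturnDocument.AFTER,
)
```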
## Reading a Single Cocktail
Thanks to `Flask-PyMongo`, the endpoint for looking up a single cocktail is even more straightforward:
``` python
@app.route("/cocktails/", methods=["GET"])
def get_cocktail(slug):
recipe = recipes.find_one_or_404({"slug": slug})
return Cocktail(**recipe).to_json()
```
This endpoint will abort with a 404 if the slug can't be found in the collection. Otherwise, it simply instantiates a Cocktail with the document from the database, and calls `to_json` to convert it to a dict that Flask will automatically encode correctly as JSON.
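With the app running, you can try it out with a slug from the sample data:

``` shell
curl http://localhost:5000/cocktails/monkey-wrench
```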
## Listing All the Cocktails
This endpoint is a monster, and it's because of pagination, and the links for pagination. In the sample data above, you probably noticed the `_links` section:
``` json
"_links": {
"last": {
"href": "http://localhost:5000/cocktails/?page=5"
},
"next": {
"href": "http://localhost:5000/cocktails/?page=5"
},
"prev": {
"href": "http://localhost:5000/cocktails/?page=3"
},
"self": {
"href": "http://localhost:5000/cocktails/?page=4"
}
},
```
This `_links` section is specified as part of the HAL (Hypertext Application
Language) specification. It's a good idea to follow a standard for pagination data, and I didn't feel like inventing something myself!
And here's the code to generate all this. Don't freak out.
``` python
@app.route("/cocktails/")
def list_cocktails():
"""
GET a list of cocktail recipes.
The results are paginated using the `page` parameter.
"""
page = int(request.args.get("page", 1))
per_page = 10 # A const value.
# For pagination, it's necessary to sort by name,
# then skip the number of docs that earlier pages would have displayed,
# and then to limit to the fixed page size, ``per_page``.
cursor = recipes.find().sort("name").skip(per_page * (page - 1)).limit(per_page)
cocktail_count = recipes.count_documents({})
links = {
"self": {"href": url_for(".list_cocktails", page=page, _external=True)},
"last": {
"href": url_for(
".list_cocktails", page=(cocktail_count // per_page) + 1, _external=True
)
},
}
# Add a 'prev' link if it's not on the first page:
if page > 1:
links"prev"] = {
"href": url_for(".list_cocktails", page=page - 1, _external=True)
}
# Add a 'next' link if it's not on the last page:
if page - 1 < cocktail_count // per_page:
links["next"] = {
"href": url_for(".list_cocktails", page=page + 1, _external=True)
}
return {
"recipes": [Cocktail(**doc).to_json() for doc in cursor],
"_links": links,
}
```
Although there's a lot of code there, it's not as complex as it may first appear. Two requests are made to MongoDB: one for a page-worth of cocktail recipes, and the other for the total number of cocktails in the collection. Various calculations are done to work out how many documents to skip, and how many pages of cocktails there are. Finally, some links are added for "prev" and "next" pages, if appropriate (i.e.: the current page isn't the first or last.) Serialization of the cocktail documents is done in the same way as the previous endpoint, but in a loop this time.
The update and delete endpoints are mainly repetitions of the code I've already included, so I'm not going to include them here. Check them out in the GitHub repo if you want to see how they work.
## Error Handling
Nothing irritates me more than using a JSON API which returns HTML when an error occurs, so I was keen to put in some reasonable error handling to avoid this happening.
After Flask set-up code, and before the endpoint definitions, the code registers two error-handlers:
``` python
@app.errorhandler(404)
def resource_not_found(e):
"""
An error-handler to ensure that 404 errors are returned as JSON.
"""
return jsonify(error=str(e)), 404
@app.errorhandler(DuplicateKeyError)
def resource_not_found(e):
"""
An error-handler to ensure that MongoDB duplicate key errors are returned as JSON.
"""
return jsonify(error=f"Duplicate key error."), 400
```
The first error-handler intercepts any endpoint that fails with a 404 status code and ensures that the error is returned as a JSON dict.
The second error-handler intercepts a `DuplicateKeyError` raised by any endpoint, and does the same thing as the first error-handler, but sets the HTTP status code to "400 Bad Request."
As I was writing this post, I realised that I've missed an error-handler to deal with invalid Cocktail data. I'll leave implementing that as an exercise for the reader! Indeed, this is one of the difficulties with writing robust Python applications: Because exceptions can be raised from deep in your stack of dependencies, it's very difficult to comprehensively predict what exceptions your application may raise in different circumstances.
This is something that's very different in Rust, and even though, as you'll see, error-handling in Rust can be verbose and tricky, I've started to love the language for its insistence on correctness.
## Wrapping Up
When I started writing this post, I thought it would end up being relatively straightforward. As I added the requirement that the code should not just be a toy example, some of the inherent difficulties with building a robust API on top of any database became apparent.
In this case, Flask may not have been the right tool for the job. I recently wrote a blog post about building an API with Beanie. Beanie and FastAPI are a match made in heaven for this kind of application and will handle validation, transformation, and pagination with much less code. On top of that, they're self-documenting and can provide the data's schema in open formats, including OpenAPI Spec and JSON Schema!
If you're about to build an API from scratch, I strongly recommend you check them out, and you may enjoy reading Aaron Bassett's posts on the FARM (FastAPI, React, MongoDB) Stack.
I will shortly publish the second post in this series, *Build a Cocktail API with Actix-Web, MongoDB, and Rust*, and then I'll conclude with a third post, *I Rewrote it in Rust—How Did it Go?*, where I'll evaluate the strengths and weaknesses of the two experiments.
Thank you for reading. Keep a look out for the upcoming posts!
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"Flask"
],
"pageDescription": "Build a RESTful API with Flask, MongoDB, and Python",
"contentType": "Tutorial"
} | Build a RESTful API with Flask, MongoDB, and Python | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/real-time-card-fraud-solution-accelerator-databricks | created | # Real-Time Card Fraud Solution Accelerator with MongoDB and Databricks
Card fraud is a significant problem, and a significant fear, for both consumers and businesses. Despite its seriousness, there are solutions that can be implemented to prevent card fraud. Financial institutions have various processes and technical solutions in place to detect and prevent card fraud, such as monitoring transactions for suspicious activity, implementing know-your-customer (KYC) procedures, and a combination of controls based on static rules or machine learning models. These can all help, but they are not without their own challenges.
Financial institutions with legacy fraud prevention systems can find themselves fighting against their own data infrastructure. These challenges can include:
* **Incomplete data**: Legacy systems may not have access to all relevant data sources, leading to a lack of visibility into fraud patterns and behaviors.
* **Latency**: Fraud prevention systems need to execute fast enough to be able to be deployed as part of a real-time payment approval process. Legacy systems often lack this capability.
* **Difficulty to change**: Legacy systems have been designed to work within specific parameters, and changing them to meet new requirements is often difficult and time-consuming.
* **Weak security**: Legacy systems may have outdated security protocols that leave organizations vulnerable to cyber attacks.
* **Operational overheads due to technical sprawl**: Existing architectures often pose operational challenges due to diverse technologies that have been deployed to support the different access patterns required by fraud models and ML training. This technical sprawl in the environment requires significant resources to maintain and update.
* **High operation costs**: Legacy systems can be costly to operate, requiring significant resources to maintain and update.
* **No collaboration between application and data science teams**: Technical boundaries between the operational platform and the data science platform are stopping application developers and data science teams from working collaboratively, leading to longer time to market and higher overheads.
These data issues can be detrimental to a financial institution trying desperately to keep up with the demands of customer expectations, user experience, and fraud. As technology advances rapidly, so does card fraud, which is becoming increasingly sophisticated. This has naturally led to an absolute need for real-time solutions to detect and prevent card fraud effectively. Anything less than that is unacceptable. So, how can financial institutions today meet these demands? The answer is simple: fraud detection big data analytics should shift left to the application itself.
What does this look like in practice? Application-driven analytics for fraud detection is the solution for the very real challenges financial institutions face today, as mentioned above.
## Solution overview
To break down what this looks like, we will demonstrate how easy it is to build an ML-based fraud solution using MongoDB and Databricks. The functional and nonfunctional features of this proposed solution include:
* **Data completeness**: To address the challenge of incomplete data, the system will be integrated with external data sources to ensure complete and accurate data is available for analysis.
* **Real-time processing**: The system will be designed to process data in real time, enabling the timely detection of fraudulent activities.
* **AI/ML modeling and model use**: Organizations can leverage AI/ML to enhance their fraud prevention capabilities. AI/ML algorithms can quickly identify and flag potential fraud patterns and behaviors.
* **Real-time monitoring**: Organizations should aim to enable real-time monitoring of the application, allowing for real-time processing and analysis of data.
* **Model observability**: Organizations should aim to improve observability in their systems to ensure that they have full visibility into fraud patterns and behaviors.
* **Flexibility and scalability**: The system will be designed with flexibility and scalability in mind, allowing for easy changes to be made to accommodate changing business needs and regulatory requirements.
* **Security**: The system will be designed with robust security measures to protect against potential security breaches, including encryption, access control, and audit trails.
* **Ease of operation**: The system will be designed with ease of operation in mind, reducing operational headaches and enabling the fraud prevention team to focus on their core responsibilities.
* **Application development and data science team collaboration**: Organizations should aim to enable collaboration between application development and data science teams to ensure that the goals and objectives are aligned, and cooperation is optimized.
* **End-to-end CI/CD pipeline support**: Organizations should aim to have end-to-end CI/CD pipeline support to ensure that their systems are up-to-date and secure.
## Solution components
The functional features listed above can be implemented by a few architectural components. These include:
1. **Data sourcing**
1. **Producer apps**: The producer mobile app simulates the generation of live transactions.
2. **Legacy data source**: The SQL external data source is used for customer demographics.
3. **Training data**: Historical transaction data needed for model training is sourced from cloud object storage - Amazon S3 or Microsoft Azure Blob Storage.
2. **MongoDB Atlas**: Serves as the Operational Data Store (ODS) for card transactions and processes transactions in real time. The solution leverages MongoDB Atlas aggregation framework to perform in-app analytics to process transactions based on pre-configured rules and communicates with Databricks for advanced AI/ML-based fraud detection via a native Spark connector.
3. **Databricks**: Hosts the AI/ML platform to complement MongoDB Atlas in-app analytics. The fraud detection algorithm used in this example is a notebook inspired by Databricks' fraud framework. MLflow is used to manage the MLOps lifecycle for this model. The trained model is exposed as a REST endpoint.
Now, let’s break down these architectural components in greater detail below, one by one.
***Figure 1**: MongoDB for event-driven and shift-left analytics architecture*
### 1. Data sourcing
The first step in implementing a comprehensive fraud detection solution is aggregating data from all relevant data sources. As shown in **Figure 1** above, an event-driven federated architecture is used to collect and process data from real-time sources such as producer apps, batch legacy data sources such as SQL databases, and historical training data sets from offline storage. This approach enables data sourcing from various facets such as transaction summaries, customer demographics, merchant information, and other relevant sources, ensuring data completeness.
Additionally, the proposed event-driven architecture provides the following benefits:
* Real-time transaction data unification, which allows for the collection of card transaction event data such as transaction amount, location, time of the transaction, payment gateway information, payment device information, etc., in **real-time**.
* Helps re-train monitoring models based on live event activity to combat fraud as it happens.
The producer application for the demonstration purpose is a Python script that generates live transaction information at a predefined rate (transactions/sec, which is configurable).
***Figure 2**: Transaction collection sample document*
### 2. MongoDB for event-driven, shift-left analytics architecture
MongoDB Atlas is a managed data platform that offers several features that make it the perfect choice as the datastore for card fraud transaction classification. It supports flexible data models and can handle various types of data, high scalability to meet demand, advanced security features to ensure compliance with regulatory requirements, real-time data processing for fast and accurate fraud detection, and cloud-based deployment to store data closer to customers and comply with local data privacy regulations.
The MongoDB Spark Streaming Connector integrates Apache Spark and MongoDB. Apache Spark, hosted by Databricks, allows the processing and analysis of large amounts of data in real time. The Spark Connector translates MongoDB data into Spark data frames and supports real-time Spark streaming.
***Figure 3**: MongoDB for event-driven and shift-left analytics architecture*
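As a rough sketch of what that looks like from the Databricks side, the snippet below assumes the 10.x Spark connector, a notebook where `spark` and `connection_uri` are already defined, and hypothetical database, collection, and field names:

```python
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Hypothetical minimal schema for incoming transaction documents.
txn_schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("timestamp", TimestampType()),
])

txn_stream = (
    spark.readStream.format("mongodb")
    .option("spark.mongodb.connection.uri", connection_uri)
    .option("spark.mongodb.database", "fraud_demo")      # hypothetical names
    .option("spark.mongodb.collection", "transactions")
    .schema(txn_schema)
    .load()
)
```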
The App Services features offered by MongoDB allow for real-time processing of data through change streams and triggers. Because MongoDB Atlas can store and process various types of data, and offers streaming capabilities and trigger functionality, it is well suited for use in an event-driven architecture.
In the demo, we used both the rich connector ecosystem of MongoDB and App Services to process transactions in real time. An App Services trigger function invokes a REST call to an AI/ML model hosted through the Databricks MLflow framework.
***Figure 4**: The processed and “features of transaction” MongoDB sample document*
***Figure 5**: Processed transaction sample document*
**Note**: *A combined view of the collections, as mentioned earlier, can be visually represented using **MongoDB Charts** to help better understand and observe the changing trends of fraudulent transactions. For advanced reporting purposes, materialized views can help.*
The example solution manages rules-based fraud prevention by storing user-defined payment limits and information in a user settings collection, as shown below. This includes maximum dollar limits per transaction, the number of transactions allowed per day, and other user-related details. By filtering transactions based on these rules before invoking expensive AI/ML models, the overall cost of fraud prevention is reduced.
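A user settings document along these lines (the field names here are hypothetical) is enough to drive that pre-filtering step:

```json
{
  "user_id": "u-1029",
  "max_amount_per_transaction": 500,
  "max_transactions_per_day": 20
}
```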
### 3. Databricks as an AI/ML ops platform
Databricks is a powerful AI/ML platform for developing models that identify fraudulent transactions. One of the key features of Databricks is its support for real-time analytics. As discussed above, real-time analytics is a key feature of modern fraud detection systems.
Databricks includes MLFlow, a powerful tool for managing the end-to-end machine learning lifecycle. MLFlow allows users to track experiments, reproduce results, and deploy models at scale, making it easier to manage complex machine learning workflows. MLFlow offers model observability, which allows for easy tracking of model performance and debugging. This includes access to model metrics, logs, and other relevant data, which can be used to identify issues and improve the accuracy of the model over time. Additionally, these features can help in the design of modern fraud detection systems using AI/ML.
## Demo artifacts and high-level description
Workflows needed for processing and building models for validating the authenticity of transactions are done through the Databricks AI/ML platform. There are mainly two workflow sets to achieve this:
1: The **Streaming workflow**, which runs in the background continuously to consume incoming transactions in real-time using the MongoDB Spark streaming connector. Every transaction first undergoes data preparation and a feature extraction process; the transformed features are then streamed back to the MongoDB collection with the help of a Spark streaming connector.
***Figure 7**: Streaming workflow*
2: The **Training workflow** is a scheduled process that performs three main tasks/notebooks, as mentioned below. This workflow can be triggered either manually or through Git CI/CD (webhooks).
***Figure 8**: Training workflow stages*
>A step-by-step breakdown of how the example solution works can be accessed at this GitHub repository, and an end-to-end solution demo is available.
## Conclusion
Modernizing legacy fraud prevention systems using MongoDB and Databricks can provide many benefits, such as improved detection accuracy, increased flexibility and scalability, enhanced security, reduced operational headaches, reduced cost of operation, early pilots and quick iteration, and enhanced customer experience.
Modernizing legacy fraud prevention systems is essential to handling the challenges posed by modern fraud schemes. By incorporating advanced technologies such as MongoDB and Databricks, organizations can improve their fraud prevention capabilities, protect sensitive data, and reduce operational headaches. With the solution proposed, organizations can take a step forward in their fraud prevention journey to achieve their goals.
Learn more about how MongoDB can modernize your fraud prevention system, and contact the MongoDB team. | md | {
"tags": [
"MongoDB",
"AI"
],
"pageDescription": "In this article, we'll demonstrate how easy it is to build an ML-based fraud solution using MongoDB and Databricks.",
"contentType": "Article"
} | Real-Time Card Fraud Solution Accelerator with MongoDB and Databricks | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/serverless-development-kotlin-aws-lambda-mongodb-atlas | created | # Serverless Development with Kotlin, AWS Lambda, and MongoDB Atlas
As seen in a previous tutorial, creating a serverless function for AWS Lambda with Java and MongoDB isn't too complicated of a task. In fact, you can get it done with around 35 lines of code!
However, maybe your stack doesn't consist of Java, but instead Kotlin. What needs to be done to use Kotlin for AWS Lambda and MongoDB development? The good news is not much will be different!
In this tutorial, we'll see how to create a simple AWS Lambda function. It will use Kotlin as the programming language and it will use the MongoDB Kotlin driver for interacting with MongoDB.
## The requirements
There are a few prerequisites that must be met in order to be successful with this particular tutorial:
- Must have a Kotlin development environment installed and configured on your local computer.
- Must have a MongoDB Atlas instance deployed and configured.
- Must have an Amazon Web Services (AWS) account.
The easiest way to develop with Kotlin is through IntelliJ, but it is a matter of preference. The requirement is that you can build Kotlin applications with Gradle.
For the purpose of this tutorial, any MongoDB Atlas instance will be sufficient whether it be the M0 free tier, the serverless pay-per-use tier, or something else. However, you will need to have the instance properly configured with user rules and network access rules. If you need help, use our MongoDB Atlas tutorial as a starting point.
## Defining the project dependencies with the Gradle Kotlin DSL
Assuming you have a project created using your tooling of choice, we need to properly configure the **build.gradle.kts** file with the correct dependencies for AWS Lambda with MongoDB.
In the **build.gradle.kts** file, include the following:
```kotlin
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile
plugins {
kotlin("jvm") version "1.9.0"
application
id("com.github.johnrengelman.shadow") version "7.1.2"
}
application {
mainClass.set("example.Handler")
}
group = "org.example"
version = "1.0-SNAPSHOT"
repositories {
mavenCentral()
}
dependencies {
testImplementation(kotlin("test"))
implementation("com.amazonaws:aws-lambda-java-core:1.2.2")
implementation("com.amazonaws:aws-lambda-java-events:3.11.1")
implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.1")
implementation("org.mongodb:bson:4.10.2")
implementation("org.mongodb:mongodb-driver-kotlin-sync:4.10.2")
}
tasks.test {
useJUnitPlatform()
}
tasks.withType<KotlinCompile> {
kotlinOptions.jvmTarget = "1.8"
}
```
There are a few noteworthy items in the above configuration.
Looking at the `plugins` first, you'll notice the use of Shadow:
```kotlin
plugins {
kotlin("jvm") version "1.9.0"
application
id("com.github.johnrengelman.shadow") version "7.1.2"
}
```
AWS Lambda expects a ZIP or a JAR. By using the Shadow plugin, we can use Gradle to build a "fat" JAR, which includes both the application and all required dependencies. When using Shadow, the main class must be defined.
To define the main class, we have the following:
```kotlin
application {
mainClass.set("example.Handler")
}
```
The above assumes that all our code will exist in a `Handler` class in an `example` package. Yours does not need to match, but note that this particular class and package will be referenced throughout the tutorial. You should swap names wherever necessary.
The next item to note is the `dependencies` block:
```kotlin
dependencies {
testImplementation(kotlin("test"))
implementation("com.amazonaws:aws-lambda-java-core:1.2.2")
implementation("com.amazonaws:aws-lambda-java-events:3.11.1")
implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.1")
implementation("org.mongodb:bson:4.10.2")
implementation("org.mongodb:mongodb-driver-kotlin-sync:4.10.2")
}
```
In the above block, we are including the various AWS Lambda SDK packages as well as the MongoDB Kotlin driver. These dependencies will allow us to use MongoDB with Kotlin and AWS Lambda.
If you wanted to, you could run the following command:
```bash
./gradlew shadowJar
```
As long as the main class exists, it should build a JAR file for you.
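If you want to double-check where Gradle put the archive, list the build output. The exact file name depends on your project name and version, but the Shadow plugin writes the fat JAR to **build/libs** with an `-all` suffix by default:

```bash
ls build/libs/
# Example output (your project name and version will differ):
# my-project-1.0-SNAPSHOT-all.jar
```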
## Developing a serverless function with Kotlin and MongoDB
With the configuration items out of the way, we can focus on the development of our serverless function. Open the project's **src/main/kotlin/example/Handler.kt** file and include the following boilerplate code:
```kotlin
package example
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.mongodb.client.model.Filters
import com.mongodb.kotlin.client.MongoClient
import com.mongodb.kotlin.client.MongoCollection
import com.mongodb.kotlin.client.MongoDatabase
import org.bson.Document
import org.bson.conversions.Bson
import org.bson.BsonDocument
class Handler : RequestHandler<Map<String, String>, Void> {
    override fun handleRequest(input: Map<String, String>, context: Context): Void? {
        return null
    }
}
```
The above code won't do much of anything if you tried to execute it on AWS Lambda, but it is a starting point. Let's start by establishing a connection to MongoDB.
Within the `Handler` class, add the following:
```kotlin
class Handler : RequestHandler<Map<String, String>, Void> {
    private val mongoClient: MongoClient = MongoClient.create(System.getenv("MONGODB_ATLAS_URI"))
    override fun handleRequest(input: Map<String, String>, context: Context): Void? {
        val database: MongoDatabase = mongoClient.getDatabase("sample_mflix")
        val collection: MongoCollection<Document> = database.getCollection<Document>("movies")
        return null
    }
}
```
First, you'll notice that we are creating a `mongoClient` variable to hold the information about our connection. This client will be created using a MongoDB Atlas URI that we plan to store as an environment variable. It is strongly recommended that you use environment variables to store this information so your credentials don't get added to your version control.
In case you're unsure what the MongoDB Atlas URI looks like, it looks like the following:
```
mongodb+srv://<username>:<password>@<cluster-name>.dmhrr.mongodb.net/?retryWrites=true&w=majority
```
You can find your exact connection string using the MongoDB Atlas CLI or through the MongoDB Atlas dashboard.
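If you have the MongoDB Atlas CLI installed and authenticated, one way to look the connection string up from a terminal is shown below. The cluster name `Cluster0` is a placeholder, so substitute your own:

```bash
# List the clusters in the current project to confirm the cluster name.
atlas clusters list

# Print the connection strings for that cluster (replace Cluster0 with yours).
atlas clusters connectionStrings describe Cluster0
```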
Within the `handleRequest` function, we get a reference to the database and collection that we want to use:
```kotlin
val database: MongoDatabase = mongoClient.getDatabase("sample_mflix")
val collection: MongoCollection<Document> = database.getCollection<Document>("movies")
```
For this particular example, we are using the `sample_mflix` database and the `movies` collection, both of which are part of the optional MongoDB Atlas sample dataset. Feel free to use a database and collection that you already have.
Now we can focus on interactions with MongoDB. Make a few changes to the `Handler` class so it looks like this:
```kotlin
package example
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.mongodb.client.model.Filters
import com.mongodb.kotlin.client.MongoClient
import com.mongodb.kotlin.client.MongoCollection
import com.mongodb.kotlin.client.MongoDatabase
import org.bson.Document
import org.bson.conversions.Bson
import org.bson.BsonDocument
class Handler : RequestHandler<Map<String, String>, List<Document>> {
    private val mongoClient: MongoClient = MongoClient.create(System.getenv("MONGODB_ATLAS_URI"))
    override fun handleRequest(input: Map<String, String>, context: Context): List<Document> {
        val database: MongoDatabase = mongoClient.getDatabase("sample_mflix")
        val collection: MongoCollection<Document> = database.getCollection<Document>("movies")
        var filter: Bson = BsonDocument()
        if (input.containsKey("title") && !input.get("title").isNullOrEmpty()) {
            filter = Filters.eq("title", input.get("title"))
        }
        val results: List<Document> = collection.find(filter).limit(5).toList()
        return results
    }
}
```
Instead of using `Void` in the `RequestHandler` and `Void?` as the return type for the `handleRequest` function, we are now using `List<Document>` because we plan to return a list of documents to the requesting client.
This brings us to the following:
```kotlin
var filter: Bson = BsonDocument()
if(input.containsKey("title") && !input.get("title").isNullOrEmpty()) {
filter = Filters.eq("title", input.get("title"))
}
val results: List<Document> = collection.find(filter).limit(5).toList()
return results
```
Instead of executing a fixed query when the function is invoked, we are accepting input from the user. If the user provides a `title` field with the invocation, we construct a filter for it. In other words, we will be looking for movies with a title that matches the user input. If no `title` is provided, we just query for all documents in the collection.
For the actual `find` operation, rather than risking the return of more than a thousand documents, we are limiting the result set to five and are converting the response from a cursor to a list.
At this point in time, our simple AWS Lambda function is complete. We can focus on the building and deployment of the function now.
## Building and deploying a Kotlin function to AWS Lambda
Before we worry about AWS Lambda, let's build the project using Shadow. From the command line, IntelliJ, or with whatever tool you're using, execute the following:
```bash
./gradlew shadowJar
```
Find the JAR file, which is probably in the **build/libs** directory unless you specified otherwise.
Everything we do next will be done in the AWS portal. There are three main items that we want to take care of during this process:
1. Add the environment variable with the MongoDB Atlas URI to the Lambda function.
2. Rename the "Handler" information in Lambda to reflect the actual project.
3. Upload the JAR file to AWS Lambda.
Within the AWS Lambda dashboard for your function, click the "Configuration" tab followed by the "Environment Variables" navigation item. Add `MONGODB_ATLAS_URI` along with the appropriate connection string when prompted. Make sure the connection string reflects your instance with the proper username and password.
You can now upload the JAR file from the "Code" tab of the AWS Lambda dashboard. When this is done, we need to tell AWS Lambda what the main class is and the function that should be executed.
In the "Code" tab, look for "Runtime Settings" and choose to edit it. In our example, we had **example** as the package and **Handler** as the class. We also had our function logic in the **handleRequest** function.
With all this in mind, change the "Handler" within AWS Lambda to **example.Handler::handleRequest** or whatever makes sense for your project.
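If you would rather script these console steps, the AWS CLI can do the same three things. The function name, JAR file name, and connection string below are placeholders for your own values:

```bash
# Upload the fat JAR produced by the Shadow plugin.
aws lambda update-function-code \
    --function-name my-kotlin-function \
    --zip-file fileb://build/libs/my-project-1.0-SNAPSHOT-all.jar

# Point Lambda at the handler and provide the MongoDB Atlas URI.
aws lambda update-function-configuration \
    --function-name my-kotlin-function \
    --handler example.Handler::handleRequest \
    --environment "Variables={MONGODB_ATLAS_URI=<your connection string>}"
```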
At this point, you should be able to test your function.
On the "Test" tab of the AWS Lambda dashboard, choose to run a test as is. You should get a maximum of five results back. Next, try using the following input criteria:
```json
{
"title": "The Terminator"
}
```
Your response will now look different because of the filter.
## Conclusion
Congratulations! You created your first AWS Lambda function in Kotlin and that function supports communication with MongoDB!
While this example was intended to be short and simple, you could add significantly more logic to your functions that engage with other functionality of MongoDB, such as aggregations and more.
If you'd like to see how to use Java to accomplish the same thing, check out my previous tutorial on the subject titled Serverless Development with AWS Lambda and MongoDB Atlas Using Java. | md | {
"tags": [
"Atlas",
"Kotlin",
"Serverless"
],
"pageDescription": "Learn how to use Kotlin and MongoDB to create performant and scalable serverless functions on AWS Lambda.",
"contentType": "Tutorial"
} | Serverless Development with Kotlin, AWS Lambda, and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/securing-mongodb-with-tls | created | # Securing MongoDB with TLS
Hi! I'm Carl from Smallstep. We make it easier to use TLS everywhere. In this post, I’m going to make a case for using TLS/SSL certificates to secure your self-managed MongoDB deployment, and I’ll take you through the steps to enable various TLS features in MongoDB.
MongoDB has very strong support for TLS that can be granularly controlled. At minimum, TLS will let you validate and encrypt connections into your database or between your cluster member nodes. But MongoDB can also be configured to authenticate users using TLS client certificates instead of a password. This opens up the possibility for more client security using short-lived (16-hour) certificates. The addition of Smallstep step-ca, an open source certificate authority, makes it easy to create and manage MongoDB TLS certificates.
## The Case for Certificates
TLS certificates come with a lot of benefits:
* Most importantly, TLS makes it possible to require *authenticated encryption* for every database connection—just like SSH connections.
* Unlike SSH keys, certificates expire. You can issue ephemeral (e.g., five-minute) certificates to people whenever they need to access your database, and avoid having long-lived key material (like SSH keys) sitting around on people's laptops.
* Certificates allow you to create a trust domain around your database. MongoDB can be configured to refuse connections from clients who don’t have a certificate issued by your trusted Certificate Authority (CA).
* Certificates can act as user login credentials in MongoDB, replacing passwords. This lets you delegate MongoDB authentication to a CA. This opens the door to further delegation via OpenID Connect, so you can have Single Sign-On MongoDB access.
When applied together, these benefits offer a level of security comparable to an SSH tunnel—without the need for SSH.
## MongoDB TLS
Here’s an overview of TLS features that can be enabled in MongoDB:
* **Channel encryption**: The traffic between clients and MongoDB is encrypted. You can enable channel encryption using self-signed TLS certificates. Self-signed certificates are easy to create, but they will not offer any client or server identity validation, so you will be vulnerable to man-in-the-middle attacks. This option only makes sense within a trusted network.
* **Identity validation**: To enable identity validation on MongoDB, you’ll need to run an X.509 CA that can issue certificates for your MongoDB hosts and clients. Identity validation happens on both sides of a MongoDB connection:
* **Client identity validation**: Client identity validation means that the database can ensure all client connections are coming from *your* authorized clients. In this scenario, the client has a certificate and uses it to authenticate itself to the database when connecting.
* **Server identity validation**: Server identity validation means that MongoDB clients can ensure that they are talking to your MongoDB database. The server has an identity certificate that all clients can validate when connecting to the database.
* **Cluster member validation**: MongoDB can require all members of a cluster to present valid certificates when they join the cluster. This encrypts the traffic between cluster members.
* **X.509 User Authentication**: Instead of passwords, you can use X.509 certificates as login credentials for MongoDB users.
* **Online certificate rotation**: Use short-lived certificates and MongoDB online certificate rotation to automate operations.
To get the most value from TLS with your self-managed MongoDB deployment, you need to run a CA (the fully-managed MongoDB Atlas comes with TLS features enabled by default).
Setting up a CA used to be a difficult, time-consuming hassle requiring deep domain knowledge. Thanks to emerging protocols and tools, it has become a lot easier for any developer to create and manage a simple private CA in 2021. At Smallstep, we’ve created an open source online CA called step-ca that’s secure and easy to use, either online or offline.
## TLS Deployment with MongoDB and Smallstep step-ca
Here are the main steps required to secure MongoDB with TLS. If you’d like to try it yourself, you can find a series of blog posts on the Smallstep website detailing the steps:
* Set up a CA. A single step-ca instance is sufficient. When you run your own CA and use short-lived certificates, you can avoid the complexity of managing CRL and OCSP endpoints by using passive revocation. With passive revocation, if a key is compromised, you simply block the renewal of its certificate in the CA.
* For server validation, issue a certificate and private key to your MongoDB server and configure server TLS (see the sketch after this list).
* For client validation, issue certificates and private keys to your clients and configure client-side TLS.
* For cluster member validation, issue certificates and keys to your MongoDB cluster members and configure cluster TLS.
* Deploy renewal mechanisms for your certificates. For example, certificates used by humans could be renewed manually when a database connection is needed. Certificates used by client programs or service accounts can be renewed with a scheduled job.
* To enable X.509 user authentication, you’ll need to add X.509-authenticated users to your database, and configure your clients to attempt X.509 user authentication when connecting to MongoDB.
* Here’s the icing on the cake: Once you’ve set all of this up, you can configure step-ca to allow users to get MongoDB certificates via an identity provider, using OpenID Connect. This is a straightforward way to enable Single Sign-on for MongoDB.
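To make the server-validation step a little more concrete, here is a rough sketch using the step CLI. The host name and file paths are placeholders, and your step-ca deployment may use different provisioners or certificate lifetimes:

```bash
# Download the root CA certificate so servers and clients can validate peers.
step ca root ca.crt

# Issue a leaf certificate and key for the MongoDB server.
# "mongo.internal.example.com" is a placeholder host name.
step ca certificate "mongo.internal.example.com" server.crt server.key

# mongod expects the certificate and its private key in a single PEM file.
cat server.crt server.key > server.pem
```

From there, `server.pem` is what you point `net.tls.certificateKeyFile` at in the mongod configuration, and `ca.crt` is what you use for `net.tls.CAFile`.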
Finally, it’s important to note that it’s possible to stage the migration of an existing MongoDB cluster to TLS: You can make TLS connections to MongoDB optional at first, and only require client validation once you’ve migrated all of your clients.
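As a sketch of what that staging can look like in a mongod configuration file (the file paths are placeholders), you might begin in a permissive mode and tighten it once every client has a certificate from your CA:

```yaml
net:
  tls:
    # Stage 1: accept TLS but don't require it yet; switch to requireTLS
    # once every client connects over TLS.
    mode: preferTLS
    certificateKeyFile: /etc/mongodb/server.pem
    CAFile: /etc/mongodb/ca.crt
    # Stage 1 only: let clients connect without presenting a certificate.
    # Remove this once all clients hold certificates issued by your CA.
    allowConnectionsWithoutCertificates: true
```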
Ready to get started? In this Smallstep series of tutorials, we’ll take you through this process step-by-step. | md | {
"tags": [
"MongoDB",
"TLS"
],
"pageDescription": "Learn how to secure your self-managed MongoDB TLS deployment with certificates using the Smallstep open source online certificate authority.",
"contentType": "Article"
} | Securing MongoDB with TLS | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-federation-control-access-analytics-node | created | # Using Atlas Data Federation to Control Access to Your Analytics Node
MongoDB replica sets, analytics nodes, and read preferences are powerful tools that can help you ensure high availability, optimize performance, and control how your applications access and query data in a MongoDB database. This blog will cover how to use Atlas Data Federation to control access to your analytics node, customize read preferences, and set tag sets for a more seamless and secure data experience.
## How do MongoDB replica sets work?
MongoDB deployed in replica sets is a strategy to achieve high availability. This strategy provides automatic failover and data redundancy for your applications. A replica set is a group of MongoDB servers that contain the same information, with one server designated as the primary node and the others as secondary nodes.
The primary node is the leader of a replica set and is the only node that can receive write operations, while the secondary nodes, on the other hand, continuously replicate the data from the primary node and can be used to serve read operations. If the primary node goes down, one of the secondaries can then be promoted to be the primary, allowing the replica set to continue to operate without downtime. This is often referred to as automatic failover. In this case, the new primary is chosen through an "election" process, which involves the nodes in the replica set voting for the new primary.
However, in some cases, you may not want your secondary node to become the primary node in your replica set. For example, imagine that you have a primary node and two secondary nodes in your database cluster. The primary node is responsible for handling all of the write operations and the secondary nodes are responsible for handling read operations. Now, suppose you have a heavy query that scans a large amount of data over a long period of time on one of the secondary nodes. This query will require the secondary node to do a lot of work because it needs to scan through a large amount of data to find the relevant results. If the primary were to fail while this query is running, the secondary node that is running the query could be promoted to primary. However, since the node is busy running the heavy query, it may struggle to handle the additional load of write operations now that it is the primary. As a result, the performance of the database may suffer, or the newly promoted node might fail entirely.
This is where Analytics nodes come in…
## Using MongoDB’s analytics nodes to isolate workloads
If a database performs complex or long-running operations, such as ETL or reporting, you may want to isolate these queries from the rest of your operational workload by running them on analytics nodes which are completely dedicated to this kind of operation.
Analytics nodes are a type of secondary node in a MongoDB replica set that can be designated to handle special read-only workloads, and importantly, they cannot be promoted to primary. They can be scaled independently to handle complex analytical queries that involve large amounts of data. When you offload read-intensive workloads from the primary node in a MongoDB replica set, you are directing read operations to other nodes in the replica set, rather than to the primary node. This can help to reduce the load on the primary node and ensure it does not get overwhelmed.
To use analytics nodes, you must configure your cluster with additional nodes that are designated as “Analytic Nodes.” This is done in the cluster configuration setup flow. Then, in order to have your client application utilize the Analytic Nodes, you must utilize tag sets when connecting. Utilizing these tag sets enables you to direct all read operations to the analytics nodes.
## What are read preferences and tag sets in MongoDB?
Read preferences in MongoDB allow you to control what node, within a standard cluster, you are connecting to and want to read from.
MongoDB supports several read preference types that you can use to specify which member of a replica set you want to read from. Here are the most commonly used read preference types:
1. **Primary**: Read operations are sent to the primary node. This is the default read preference.
2. **PrimaryPreferred**: Read operations are sent to the primary node if it is available. Otherwise, they are sent to a secondary node.
3. **Secondary**: Read operations are sent to a secondary node.
4. **SecondaryPreferred**: Read operations are sent to a secondary node, if one is available. Otherwise, they are sent to the primary node.
5. **Nearest**: Read operations are sent to the member of the replica set with the lowest network latency, regardless of whether it is the primary or a secondary node.
*It's important to note that read preferences are only used for read operations, not write operations.
Tag sets allow you to control even more details about which node you read. MongoDB tag sets are a way to identify specific nodes in a replica set. You can think of them as labels. This allows the calling client application to specify which nodes in a replica set you want to use for read operations, based on the tags that have been applied to them.
MongoDB Atlas clusters are automatically configured with predefined tag sets for different member types depending on how you’ve configured your cluster. You can utilize these predefined replica set tags to direct queries from specific applications to your desired node types and regions. Here are some examples (a connection string sketch follows the list):
1. **Provider**: Cloud provider on which the node is provisioned
1. {"provider" : "AWS"}
2. {"provider" : "GCP"}
3. {"provider" : "AZURE"}
2. **Region**: Cloud region in which the node resides
1. {"region" : "US_EAST_2"}
3. **Node**: Node type
1. {"nodeType" : "ANALYTICS"}
2. {"nodeType" : "READ_ONLY"}
3. {"nodeType" : "ELECTABLE"}
4. **Workload Type**: Tag to distribute your workload evenly among your non-analytics (electable or read-only) nodes.
1. {"workloadType" : "OPERATIONAL"}
## Customer challenge
Read preferences and tag sets can be helpful in controlling which node gets utilized for a specific query. However, they may not be sufficient on their own to protect against certain types of risks or mistakes. For example, if you are concerned about other users or developers accidentally accessing the primary node of the cluster, read preferences and tag sets may not provide enough protection, as anyone with a database user account can forget to set the read preference or choose not to use a tag set. In this case, you might want to use additional measures to ensure that certain users or applications only have access to specific nodes of your cluster.
MongoDB Atlas Data Federation can be used as a view on top of your data that is tailored to the specific needs of the user or application. You can create database users in Atlas that are only provisioned to connect to specific clusters or federated database instances. Then, when you provide the endpoints for the federated database instances and the necessary database users, you can be sure that the end user is only able to connect to the nodes you want them to have access to. This can help to "lock down" a user or application to a specific node, allowing them to better control which data is accessible to them and ensuring that your data is being accessed in a safe and secure way.
## How does Atlas Data Federation fit in?
Atlas Data Federation is an on-demand query engine that allows you to query, transform, and move data across multiple data sources, as if it were all in the same place and format. With Atlas Data Federation, you can create virtual collections that refer to your underlying Atlas cluster collections and lock them to a specific read preference or tag set. You can then restrict database users to only be able to connect to the federated database instance, thereby giving partners within your business live access to your cluster data, while not having any risk that they connect to the primary. This allows you to isolate different workloads and reduce the risk of contention between them.
For example, you could create a separate endpoint for analytics queries that is locked down to read-only access and restrict queries to only run on analytics nodes, while continuing to use the cluster connection for your operational application queries. This would allow you to run analytics queries with access to real-time data without affecting the performance of the cluster.
To do this, you would create a virtual collection, choose the source of a specific cluster collection, and specify a tag set for the analytics node. Then, a user can query their federated database instance, knowing it will always query the analytics node and that their primary cluster won’t be impacted. The only way to make a change would be in the storage configuration of the federated database instance, which you can prevent, ensuring that no mistakes happen.
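As a rough sketch, the store definition inside the federated database instance's storage configuration can pin the read preference and tag set like this. The store and cluster names are placeholders, and you should check the current storage configuration reference for the exact fields:

```json
{
  "stores": [
    {
      "name": "analyticsNodeStore",
      "provider": "atlas",
      "clusterName": "Cluster0",
      "projectId": "<your project ID>",
      "readPreference": {
        "mode": "secondary",
        "tagSets": [
          [
            { "name": "nodeType", "value": "ANALYTICS" }
          ]
        ]
      }
    }
  ]
}
```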
In addition to restricting the federated database instance to only read from the analytics node, the database manager can also place restrictions on the user to only read from that specific federated database instance. Now, not only do you have a connection string for your federated database instance that will never query your primary node, but you can also ensure that your users are assigned the correct roles, and they can’t accidentally connect to your cluster connection string.
By locking down an analytics node to read-only access, you can protect your most sensitive workloads and improve security while still sharing access to your most valuable data.
## How to lock down a user to access the analytics node
The following steps will allow you to set your read-preferences in Atlas Data Federation to use the analytics node:
Step 1: Log into MongoDB Atlas.
Step 2: Select the Data Federation option on the left-hand navigation.
Step 3: Click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
Step 4: *Repeat this step for each of your data sources.* Select the dataset for your federated database instance from the Data Sources section.
4a. Select your cluster and collection.
4b. Click “Cluster Read Preference.”
* Select your “Read Preference Mode.” Data Federation enables ‘nearest’ as its default.
* Type in your TagSets. For example:
* [ { "name": "nodeType", "value": "ANALYTICS" } ] ]
*Data Federation UI showing the selection of a data source and an example of setting your read preferences and TagSets.*
4c. Select “Next.”
Step 5: Map your datasets from the Data Sources pane on the left to the Federated Database Instance pane on the right.
Step 6: Click “Save” to create the federated database instance.
*To connect to your federated database instance, continue to follow the instructions outlined in our documentation.
**Note: If you have many databases and collections in your underlying cluster, you can use our “wildcard syntax” along with read preference to easily expose all your databases and collections from your cluster without enumerating each one. This can be set after you’ve configured read preference by going to the JSON editor view.**
```
"databases" :
{
"name" : "*",
"collections" : [
{
"name" : "*",
"dataSources" : [
{
"storeName" : ""
}
]
}
]
}
]
```
## How to manage database access in Atlas and assign roles to users
You must create a database user to access your deployment. For security purposes, Atlas requires clients to authenticate as MongoDB database users to access federated database instances. To add a database user to your cluster, perform the following steps:
Step 1: In the Security section of the left navigation, click “Database Access.”
1a. Make sure it shows the “Database Users” tab display.
1b. Click “+ Add New Database User.”
*Add a new database user to assign roles.*
Step 2: Select “Password” and enter user information.
Step 3: Assign user privileges, such as read/write access.
3a. Select a built-in role from the “Built-in Role” dropdown menu. You can select one built-in role per database user within the Atlas UI. If you delete the default option, you can click “Add Built-in Role” to select a new built-in role.
3b. If you have any custom roles defined, you can expand the “Custom Roles” section and select one or more roles from the “Custom Roles” dropdown menu. Click “Add Custom Role” to add more custom roles. You can also click the “Custom Roles” link to see the custom roles for your project.
3c. Expand the “Specific Privileges” section and select one or more privileges from the “Specific Privileges” dropdown menu. Click “Add Specific Privilege” to add more privileges. This assigns the user specific privileges on individual databases and collections.
Step 4: *Optional*: Specify the resources in the project that the user can access.
*By default, database users can access all the clusters and federated database instances in the project. You can restrict database users to have access to specific clusters and federated database instances by doing the following:
* Toggle “Restrict Access to Specific Clusters/Federated Database Instances” to “ON.”
* Select the clusters and federated database instances to grant the user access to from the “Grant Access To” list.
Step 5: Optional: Save as a temporary user.
Step 6: Click “Add User.”
By following these steps, you can control access management using the analytics node with Atlas Data Federation. This can be a useful way to ensure that only authorized users have access to the analytics node, and that the data on the node is protected.
Overall, setting read preferences and using analytics nodes can help you to better manage access to your data and improve the performance and scalability of your application.
To learn more about Atlas Data Federation and whether it would be the right solution for you, check out our documentation and tutorials.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to use Atlas Data Federation to control access to your analytics node and customize read preferences and tag sets for a more seamless and secure data experience. ",
"contentType": "Article"
} | Using Atlas Data Federation to Control Access to Your Analytics Node | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-swift-query-api | created | # Goodbye NSPredicate, hello Realm Swift Query API
## Introduction
I'm not a fan of writing code using pseudo-English text strings. It's a major context switch when you've been writing "native" code. Compilers don't detect errors in the strings, whether syntax errors or mismatched types, leaving you to learn of your mistakes when your app crashes.
I spent more than seven years working at MySQL and Oracle, and still wasn't comfortable writing anything but the simplest of SQL queries. I left to join MongoDB because I knew that the object/document model was the way that developers should work with their data. I also knew that idiomatic queries for each programming language were the way to go.
That's why I was really excited when MongoDB acquired Realm—a leading mobile **object** database. You work with Realm objects in your native language (in this case, Swift) to manipulate your data.
However, there was one area that felt odd in Realm's Swift SDK. You had to use `NSPredicate` when searching for Realm objects that match your criteria. `NSPredicate`s are strings with variable substitution. 🤦♂️
`NSPredicate`s are used when searching for data in Apple's Core Data database, and so it was a reasonable design decision. It meant that iOS developers could reuse the skills they'd developed while working with Core Data.
But, I hate writing code as strings.
The good news is that the Realm SDK for Swift has added the option to use type-safe queries through the Realm Swift Query API. 🥳.
You now have the option whether to filter using `NSPredicate`s:
```swift
let predicate = NSPredicate(format: "isSoft == %@", NSNumber(value: wantSoft)
let decisions = unfilteredDecisions.filter(predicate)
```
or with the new Realm Swift Query API:
```swift
let decisions = unfilteredDecisions.where { $0.isSoft == wantSoft }
```
In this article, I'm going to show you some examples of how to use the Realm Swift Query API. I'll also show you an example where wrangling with `NSPredicate` strings has frustrated me.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## Prerequisites
- Realm-Cocoa 10.19.0+
## Using The Realm Swift Query API
I have a number of existing Realm iOS apps using `NSPredicate`s. When I learnt of the new query API, the first thing I wanted to do was try to replace some of "legacy" queries. I'll start by describing that experience, and then show what other type-safe queries are possible.
### Replacing an NSPredicate
I'll start with the example I gave in the introduction (and how the `NSPredicate` version had previously frustrated me).
I have an app to train you on what decisions to make in Black Jack (based on the cards you've been dealt and the card that the dealer is showing). There are three different decision matrices based on the cards you've been dealt:
- Whether you have the option to split your hand (you've been dealt two cards with the same value)
- Your hand is "soft" (you've been dealt an ace, which can take the value of either one or eleven)
- Any other hand
All of the decision-data for the app is held in `Decisions` objects:
```swift
class Decisions: Object, ObjectKeyIdentifiable {
@Persisted var decisions = List()
@Persisted var isSoft = false
@Persisted var isSplit = false
...
}
```
`SoftDecisionView` needs to find the `Decisions` object where `isSoft` is set to `true`. That requires a simple `NSPredicate`:
```swift
struct SoftDecisionView: View {
@ObservedResults(Decisions.self, filter: NSPredicate(format: "isSoft == YES")) var decisions
...
}
```
But, what if I'd mistyped the attribute name? There's no Xcode auto-complete to help when writing code within a string, and this code builds with no errors or warnings:
```swift
struct SoftDecisionView: View {
@ObservedResults(Decisions.self, filter: NSPredicate(format: "issoft == YES")) var decisions
...
}
```
When I run the code, it works initially. But, when I'm dealt a soft hand, I get this runtime crash:
```
Terminating app due to uncaught exception 'Invalid property name', reason: 'Property 'issoft' not found in object of type 'Decisions''
```
Rather than having a dedicated view for each of the three types of hand, I want to experiment with having a single view to handle all three.
SwiftUI doesn't allow me to use variables (or even named constants) as part of the filter criteria for `@ObservedResults`. This is because the `struct` hasn't been initialized until after the `@ObservedResults` is defined. To live within SwiftUI's constraints, the filtering is moved into the view's body:
```swift
struct SoftDecisionView: View {
@ObservedResults(Decisions.self) var unfilteredDecisions
let isSoft = true
var body: some View {
let predicate = NSPredicate(format: "isSoft == %@", isSoft)
let decisions = unfilteredDecisions.filter(predicate)
...
}
```
Again, this builds, but the app crashes as soon as I'm dealt a soft hand. This time, the error is much more cryptic:
```
Thread 1: EXC_BAD_ACCESS (code=1, address=0x1)
```
It turns out that, you need to convert the boolean value to an `NSNumber` before substituting it into the `NSPredicate` string:
```swift
struct SoftDecisionView: View {
@ObservedResults(Decisions.self) var unfilteredDecisions
let isSoft = true
var body: some View {
let predicate = NSPredicate(format: "isSoft == %@", NSNumber(value: isSoft))
let decisions = unfilteredDecisions.filter(predicate)
...
}
```
Who knew? OK, StackOverflow did, but it took me quite a while to find the solution.
Hopefully, this gives you a feeling for why I don't like writing strings in place of code.
This is the same code using the new (type-safe) Realm Swift Query API:
```swift
struct SoftDecisionView: View {
@ObservedResults(Decisions.self) var unfilteredDecisions
let isSoft = true
var body: some View {
let decisions = unfilteredDecisions.where { $0.isSoft == isSoft }
...
}
```
The code's simpler, and (even better) Xcode won't let me use the wrong field name or type, flagging the error before I even try running the code.
### Experimenting With Other Sample Queries
In my RCurrency app, I was able to replace this `NSPredicate`-based code:
```swift
struct CurrencyRowContainerView: View {
@ObservedResults(Rate.self) var rates
let baseSymbol: String
let symbol: String
var rate: Rate? {
rates.filter(NSPredicate(format: "query.from = %@ AND query.to = %@", baseSymbol, symbol)).first
}
...
}
```
With this:
```swift
struct CurrencyRowContainerView: View {
@ObservedResults(Rate.self) var rates
let baseSymbol: String
let symbol: String
var rate: Rate? {
rates.where { $0.query.from == baseSymbol && $0.query.to == symbol }.first
}
...
}
```
Again, I find this more Swift-like, and bugs will get caught as I type/build rather than when the app crashes.
I'll use this simple `Task` `Object` to show a few more example queries:
```swift
class Task: Object, ObjectKeyIdentifiable {
@Persisted var name = ""
@Persisted var isComplete = false
@Persisted var assignee: String?
@Persisted var priority = 0
@Persisted var progressMinutes = 0
}
```
All in-progress tasks assigned to name:
```swift
let myStartedTasks = realm.objects(Task.self).where {
($0.progressMinutes > 0) && ($0.assignee == name)
}
```
All tasks where the `priority` is higher than `minPriority`:
```swift
let highPriorityTasks = realm.objects(Task.self).where {
$0.priority >= minPriority
}
```
All tasks that have a `priority` that's an integer between `-1` and `minPriority`:
```swift
let lowPriorityTasks = realm.objects(Task.self).where {
$0.priority.contains(-1...minPriority)
}
```
All tasks where the `assignee` name string includes `namePart`:
```swift
let tasksForName = realm.objects(Task.self).where {
$0.assignee.contains(namePart)
}
```
### Filtering on Sub-Objects
You may need to filter your Realm objects on values within their sub-objects. Those sub-objects may be `EmbeddedObject`s or part of a `List`.
I'll use the `Project` class to illustrate filtering on the attributes of sub-documents:
```swift
class Project: Object, ObjectKeyIdentifiable {
@Persisted var name = ""
@Persisted var tasks: List<Task>
}
```
All projects that include a task that's in-progress, and is assigned to a given user:
```swift
let myActiveProjects = realm.objects(Project.self).where {
($0.tasks.progressMinutes >= 1) && ($0.tasks.assignee == name)
}
```
### Including the Query When Creating the Original Results (SwiftUI)
At the time of writing, this feature wasn't released, but it can be tested using this PR.
You can include the where modifier directly in your `@ObservedResults` call. That avoids the need to refine your results inside your view's body:
```swift
@ObservedResults(Decisions.self, where: { $0.isSoft == true }) var decisions
```
Unfortunately, SwiftUI rules still mean that you can't use variables or named constants in your `where` block for `@ObservedResults`.
## Conclusion
Realm type-safe queries provide a simple, idiomatic way to filter results in Swift. If you have a bug in your query, it should be caught by Xcode rather than at run-time.
You can find more information in the docs. If you want to see hundreds of examples, and how they map to equivalent `NSPredicate` queries, then take a look at the test cases.
For those that prefer working with `NSPredicate`s, you can continue to do so. In fact the Realm Swift Query API runs on top of the `NSPredicate` functionality, so they're not going anywhere soon.
Please provide feedback and ask any questions in the Realm Community Forum. | md | {
"tags": [
"Realm",
"Swift"
],
"pageDescription": "New type-safe queries in Realm's Swift SDK",
"contentType": "News & Announcements"
} | Goodbye NSPredicate, hello Realm Swift Query API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/connectors/deploying-across-multiple-kubernetes-clusters | created | # Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti
This article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators.
- Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud
- Mastering MongoDB Ops Manager
- Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti
With the latest version of the MongoDB Enterprise Kubernetes Operator, you can deploy MongoDB resources across multiple Kubernetes clusters! By running your MongoDB replica set across different clusters, you can ensure that your deployment remains available even in the event of a failure or outage in one of them. The MongoDB Enterprise Kubernetes Operator's Custom Resource Definition (CRD), MongoDBMulti, makes it easy to run MongoDB replica sets across different Kubernetes environments and provides a declarative approach to deploying MongoDB, allowing you to specify the desired state of your deployment and letting the operator handle the details of achieving that state.
> ⚠️ Support for multi-Kubernetes-cluster deployments of MongoDB is a preview feature and not yet ready for Production use. The content of this article is meant to provide you with a way to experiment with this upcoming feature, but should not be used in production as breaking changes may still occur. Support for this feature during preview is direct with the engineering team and on a best-efforts basis, so please let us know if trying this out at [email protected]. Also feel free to get in touch with any questions, or if this is something that may be of interest once fully released.
## Overview of MongoDBMulti CRD
Developed by MongoDB, MongoDBMulti Custom Resource allows for the customization of resilience levels based on the needs of the enterprise application.
- Single region (Multi A-Z) consists of one or more Kubernetes clusters where each cluster has nodes deployed in different availability zones in the same region. This type of deployment protects MongoDB instances backing your enterprise applications against zone and Kubernetes cluster failures.
- Multi Region consists of one or more Kubernetes clusters where you deploy each cluster in a different region, and within each region, deploy cluster nodes in different availability zones. This gives your database resilience against the loss of a Kubernetes cluster, a zone, or an entire cloud region.
By leveraging the native capabilities of Kubernetes, the MongoDB Enterprise Kubernetes Operator performs the following tasks to deploy and operate a multi-cluster MongoDB replica set:
- Creates the necessary resources, such as Configmaps, secrets, service objects, and StatefulSet objects, in each member cluster. These resources are in line with the number of replica set members in the MongoDB cluster, ensuring that the cluster is properly configured and able to function.
- Identifies the clusters where the MongoDB replica set should be deployed using the corresponding MongoDBMulti Custom Resource spec. It then deploys the replica set on the identified clusters.
- Watches for the creation of the MongoDBMulti Custom Resource spec in the central cluster.
- Uses a mounted kubeconfig file to communicate with member clusters. This allows the operator to access the necessary information and resources on the member clusters in order to properly manage and configure the MongoDB cluster.
- Watches for events related to the CentralCluster and MemberCluster in order to confirm that the multi-Kubernetes-cluster deployment is in the desired state.
You should start by constructing a central cluster. This central cluster will host the Kubernetes Operator, MongoDBMulti Custom Resource spec, and act as the control plane for the multi-cluster deployment. If you deploy Ops Manager with the Kubernetes Operator, the central cluster may also host Ops Manager.
You will also need a service mesh. I will be using Istio, but any service mesh that provides a fully qualified domain name resolution between pods across clusters should work.
Communication between replica set members happens via the service mesh, which means that your MongoDB replica set doesn't need the central cluster to function. Keep in mind that if the central cluster goes down, you won't be able to use the Kubernetes Operator to modify your deployment until you regain access to this cluster.
## Using the MongoDBMulti CRD
Alright, let's get started using the operator and build something! For this tutorial, we will need the following tools:
- gcloud
- gke-cloud-auth-plugin
- Go v1.17 or later
- Helm
- kubectl
- kubectx
- Git.
We need to set up a master Kubernetes cluster to host the MongoDB Enterprise Multi-Cluster Kubernetes Operator and the Ops Manager. You will need to create a GKE Kubernetes cluster by following the instructions in Part 1 of this series. Then, we should install the MongoDB Multi-Cluster Kubernetes Operator in the `mongodb` namespace, along with the necessary CRDs. This will allow us to utilize the operator to effectively manage and operate our MongoDB multi cluster replica set. For instructions on how to do this, please refer to the relevant section of Part 1. Additionally, we will need to install the Ops Manager, as outlined in Part 2 of this series.
### Creating the clusters
After master cluster creation and configuration, we need three additional GKE clusters, distributed across three different regions: `us-west2`, `us-central1`, and `us-east1`. Those clusters will host MongoDB replica set members.
```bash
CLUSTER_NAMES=(mdb-cluster-1 mdb-cluster-2 mdb-cluster-3)
ZONES=(us-west2-a us-central1-a us-east1-b)
for ((i=0; i<${#CLUSTER_NAMES[@]}; i++)); do
gcloud container clusters create "${CLUSTER_NAMES[$i]}" \
--zone "${ZONES[$i]}" \
--machine-type n2-standard-2 --cluster-version="${K8S_VERSION}" \
--disk-type=pd-standard --num-nodes 1
done
```
The clusters have been created, and we need to obtain the credentials for them.
```bash
for ((i=0; i<${#CLUSTER_NAMES[@]}; i++)); do
gcloud container clusters get-credentials "${CLUSTER_NAMES[$i]}" \
--zone "${ZONES[$i]}"
done
```
After successfully creating the Kubernetes master and MongoDB replica set clusters, and installing the Ops Manager and all required software, we can check them using `kubectx`.
```bash
kubectx
```
You should see all your Kubernetes clusters listed here. Make sure that you only have the clusters you just created and remove any other unnecessary clusters using `kubectx -d ` for the next script to work.
```bash
gke_lustrous-spirit-371620_us-central1-a_mdb-cluster-2
gke_lustrous-spirit-371620_us-east1-b_mdb-cluster-3
gke_lustrous-spirit-371620_us-south1-a_master-operator
gke_lustrous-spirit-371620_us-west2-a_mdb-cluster-1
```
We need to create the required variables: `MASTER` for a master Kubernetes cluster, and `MDB_1`, `MDB_2`, and `MDB_3` for clusters which will host MongoDB replica set members. Important note: These variables should contain the full Kubernetes cluster names.
```bash
KUBECTX_OUTPUT=($(kubectx))
CLUSTER_NUMBER=0
for context in "${KUBECTX_OUTPUT[@]}"; do
if [[ $context == *"master"* ]]; then
MASTER="$context"
else
CLUSTER_NUMBER=$((CLUSTER_NUMBER+1))
eval "MDB_$CLUSTER_NUMBER=$context"
fi
done
```
Your clusters are now configured and ready to host the MongoDB Kubernetes Operator.
### Installing Istio
Install Istio (I'm using v1.16.1) in multi-primary mode on different networks, using the install_istio_separate_network script. To learn more about it, see the multicluster Istio documentation. I have prepared a script that downloads `install_istio_separate_network.sh` and updates its variables to the values we currently need, such as the full Kubernetes cluster names and the Istio version.
```bash
REPO_URL="https://github.com/mongodb/mongodb-enterprise-kubernetes.git"
SUBDIR_PATH="mongodb-enterprise-kubernetes/tools/multicluster"
SCRIPT_NAME="install_istio_separate_network.sh"
ISTIO_VERSION="1.16.1"
git clone "$REPO_URL"
for ((i = 1; i <= ${#CLUSTER_NAMES[@]}; i++)); do
eval mdb="\$MDB_${i}"
eval k8s="CTX_CLUSTER${i}"
sed -i'' -e "s/export ${k8s}=.*/export CTX_CLUSTER${i}=${mdb}/" "$SUBDIR_PATH/$SCRIPT_NAME"
done
sed -i'' -e "s/export VERSION=.*/export VERSION=${ISTIO_VERSION}/" "$SUBDIR_PATH/$SCRIPT_NAME"
```
Install Istio in a multi-primary mode on different Kubernetes clusters via the following command.
```bash
yes | "$SUBDIR_PATH/$SCRIPT_NAME"
```
Execute the multi-cluster kubeconfig creator tool. By default, the Kubernetes Operator is scoped to the `mongodb` namespace, although it can be installed in a different namespace as well. Navigate to the directory where you cloned the Kubernetes Operator repository in an earlier step, and run the tool. Go to the Multi-Cluster CLI documentation to learn more about the multi-cluster CLI.
```bash
CLUSTERS=$MDB_1,$MDB_2,$MDB_3
cd "$SUBDIR_PATH"
go run main.go setup \
-central-cluster="${MASTER}" \
-member-clusters="${CLUSTERS}" \
-member-cluster-namespace="mongodb" \
-central-cluster-namespace="mongodb"
```
### Verifying cluster configurations
Let's check the configurations we have made so far. I will switch the context to cluster #2.
```bash
kubectx $MDB_2
```
You should see something like this in your terminal.
```bash
Switched to context "gke_lustrous-spirit-371620_us-central1-a_mdb-cluster-2"
```
We can see `istio-system` and `mongodb` namespaces created by the scripts
```bash
kubectl get ns
NAME STATUS AGE
default Active 62m
istio-system Active 7m45s
kube-node-lease Active 62m
kube-public Active 62m
kube-system Active 62m
mongodb Active 41s
```
and the MongoDB Kubernetes operator service account is ready.
```bash
kubectl -n mongodb get sa
default 1 55s
mongodb-enterprise-operator-multi-cluster 1 52s
```
Next, execute the following command, specifying the context for each of the member clusters in the deployment. The command adds the label `istio-injection=enabled` to the `mongodb` namespace on each member cluster. This label activates Istio's injection webhook, which allows a sidecar to be added to any pods created in this namespace.
```bash
CLUSTER_ARRAY=($MDB_1 $MDB_2 $MDB_3)
for CLUSTER in "${CLUSTER_ARRAY[@]}"; do
kubectl label --context=$CLUSTER namespace mongodb istio-injection=enabled
done
```
### Installing the MongoDB multi cluster Kubernetes operator
Now the MongoDB Multi Cluster Kubernetes operator must be installed on the master-operator cluster and be aware of the all Kubernetes clusters which are part of the Multi Cluster. This step will add the multi cluster Kubernetes operator to each of our clusters.
First, switch context to the master cluster.
```bash
kubectx $MASTER
```
The `mongodb-operator-multi-cluster` operator needs to be made aware of the newly created Kubernetes clusters by updating the operator config through Helm. This procedure was tested with `mongodb-operator-multi-cluster` version `1.16.3`.
```bash
helm upgrade --install mongodb-enterprise-operator-multi-cluster mongodb/enterprise-operator \
--namespace mongodb \
--set namespace=mongodb \
--version="${HELM_CHART_VERSION}" \
--set operator.name=mongodb-enterprise-operator-multi-cluster \
--set "multiCluster.clusters={${CLUSTERS}}" \
--set operator.createOperatorServiceAccount=false \
--set multiCluster.performFailover=false
```
Check if the MongoDB Enterprise Operator multi cluster pod on the master cluster is running.
```bash
kubectl -n mongodb get pods
```
```bash
NAME READY STATUS RESTARTS AGE
mongodb-enterprise-operator-multi-cluster-688d48dfc6 1/1 Running 0 8s
```
It's now time to link all those clusters together using the MongoDB Multi CRD. The Kubernetes API has already been extended with a MongoDB-specific object - `mongodbmulti`.
```bash
kubectl -n mongodb get crd | grep multi
```
```bash
mongodbmulti.mongodb.com
```
After the installation, you should also review the operator logs and ensure that there are no issues or errors.
```bash
POD=$(kubectl -n mongodb get po|grep operator|awk '{ print $1 }')
kubectl -n mongodb logs -f po/$POD
```
We are almost ready to create a multi cluster MongoDB Kubernetes replica set! We need to configure the required service accounts for each member cluster.
```bash
for CLUSTER in "${CLUSTER_ARRAY[@]}"; do
helm template --show-only templates/database-roles.yaml mongodb/enterprise-operator --namespace "mongodb" | kubectl apply -f - --context=${CLUSTER} --namespace mongodb;
done
```
Also, let's generate Ops Manager API keys and add our IP addresses to the Ops Manager access list. Get the URL of the Ops Manager instance (created as described in Part 2). Make sure you switch the context to master.
```bash
kubectx $MASTER
URL=http://$(kubectl -n "${NAMESPACE}" get svc ops-manager-svc-ext -o jsonpath='{.status.loadBalancer.ingress[0].ip}:{.spec.ports[0].port}')
echo $URL
```
Log in to Ops Manager, and generate public and private API keys. When you create API keys, don't forget to add your current IP address to API Access List.
To do so, log in to the Ops Manager and go to `ops-manager-db` organization.
*Ops Manager provides an organizations and projects hierarchy to help you manage your Ops Manager deployments; an organization can contain many projects.*
Click `Access Manager` on the left-hand side, choose Organization Access, and then choose `Create API Key` in the top right corner.
The key must have a name (I use `mongodb-blog`), and permissions must be set to `Organization Owner`.
When you click Next, you will see your `Public Key` and `Private Key`. Copy those values and save them --- you will not be able to see the private key again. Also, make sure you added your current IP address to the API access list.
Get the public and private keys generated by the API key creator and paste them into the Kubernetes secret.
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: my-credentials   # illustrative name -- reference it from your MongoDBMulti resource
  namespace: mongodb
stringData:
  publicKey: <your public key>
  privateKey: <your private key>
EOF
```
You also need an `Organization ID`. You can see the organization ID by clicking on the gear icon in the top left corner.
Copy the `Organization ID` and paste to the Kubernetes config map below.
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project   # illustrative name -- reference it from your MongoDBMulti resource
  namespace: mongodb
data:
  baseUrl: <your Ops Manager URL>
  orgId: <your organization ID>
EOF
```
The Ops Manager instance has been configured, and you have everything needed to add the MongoDBMulti CRD to your cluster.
### Using the MongoDBMulti CRD
Finally, we can create a MongoDB replica set that is distributed across three Kubernetes clusters in different regions. I have updated the Kubernetes manifest with the full names of the Kubernetes clusters. Let's apply it now!
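A hedged sketch of the general shape of that manifest is below. The resource name is arbitrary, `my-credentials` and `my-project` must match the secret and ConfigMap created earlier, and the exact `clusterSpecList` layout should be checked against the sample resources published for your operator version, since the CRD is still in preview:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDBMulti
metadata:
  name: multi-replica-set
  namespace: mongodb
spec:
  version: "6.0.2-ent"
  type: ReplicaSet
  duplicateServiceObjects: true
  credentials: my-credentials        # API key secret created earlier
  opsManager:
    configMapRef:
      name: my-project               # ConfigMap with baseUrl and orgId
  clusterSpecList:
    clusterSpecs:
      - clusterName: <full name of member cluster 1>   # e.g. the value of $MDB_1
        members: 1
      - clusterName: <full name of member cluster 2>   # $MDB_2
        members: 1
      - clusterName: <full name of member cluster 3>   # $MDB_3
        members: 1
```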
```bash
MDB_VERSION=6.0.2-ent
kubectl apply -f - < | md | {
"tags": [
"Connectors",
"Kubernetes"
],
"pageDescription": "Learn how to deploy MongoDB across multiple Kubernetes clusters using the operator and the MongoDBMulti CRD.",
"contentType": "Tutorial"
} | Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-meetup-jwt-authentication | created | # Easy Realm JWT Authentication with CosyncJWT
Didn't get a chance to attend the Easy Realm JWT Authentication with CosyncJWT Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
:youtube[]{vid=k5ZcrOW-leY}
In this meetup, Richard Krueger, CEO of Cosync, will focus on the benefits of JWT authentication and how to easily implement CosyncJWT within a Realm application. CosyncJWT is a JWT authentication service specifically designed for MongoDB Realm applications. It supports RSA public/private key third-party email authentication and a number of features for onboarding users to a Realm application. These features include signup and invite email confirmation, two-factor verification through the Google Authenticator and SMS through Twilio, and configurable metadata through the JWT standard. CosyncJWT offers both a cloud implementation where Cosync hosts the application/user authentication data, and will soon be releasing a self-hosted version of the service, where developers can save their user data to their own MongoDB Atlas cluster.
In this 60-minute recording, Richard spends about 40 minutes presenting an overview of Cosync, and then dives straight into a live coding demo. After this, we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
### Transcript
Shane:
So, you're very, very welcome. We have a great guest here speaker today, Richard Krueger's joined us, which is brilliant to have. But just before Richard get started into the main event, I just wanted to do introductions and a bit of housekeeping and a bit of information about our upcoming events too. My name is Shane McAllister. I look after developer advocacy for Realm, for MongoDB. And we have been doing these meetups, I suppose, steadily since the beginning of this year, this is our fifth meetup and we're delighted that you can all attend. We're delighted to get an audience on board our platform. And as we know in COVID, our events and conferences are few and far between and everything has moved online. And while that is still the case, this is going to be a main channel for our developer community that we're trying to build up here in Realm at MongoDB.
We are going to do these regularly. We are featuring talkers and speakers from both the Realm team, our SDK leads, our advocacy team, number of them who are joining us here today as well too, our users and also our partners. And that's where Richard comes in as well too. So I do want to share with you a couple of future meetups that we have coming as well to show you what we have in store. We have a lot coming on the horizon very, very soon. So just next week we have Klaus talking about Realm Kotlin Multiplatform, followed a week or so later by Jason who's done these meetups before. Jason is our lead for our Coco team, our Swift team, and he's on June 2nd. He's talking about SwiftUI testing and Realm with projections. And then June 10th, a week later again, we have Kræn, who's talking about Realm JS for react native applications.
But that's not the end. June 17th, we have Igor from Amazon Web Services talking about building a serverless event driven application with MongoDB in Realm. And that will also be done with Andrew Morgan who's one of our developer advocates. We've built, and you can see that on our developer hub, we've built a very, very neat application integrating with Slack. And then Jason, a glutton for punishment is back at the end of June and joining us again for a key path filtering and auto open. We really are pushing forward with Swift and SwiftUI with Realm. And we see great uptake within our community. On top of all of that in July is mongodb.live. This is our key MongoDB event. It's on July 13th and 14th, fully online. And we do hope that if you're not registered already, you will sign up, just search for mongodb.live, sign up and register. It's free. And over the two days, we will have a number of talks, a number of sessions, a number of live coding sessions, a number tutorials and an interactive elements as well too. So, it's where we're announcing our new products, our roadmap for the year, and engage in across everything MongoDB, including Realm. We have a number of Realm's specific sessions there as well too. So, just a little bit of housekeeping. We're using this bevy platform, for those of you familiar with Zoom, and who've been here before to meet ups, you're very familiar. We have the chat. Thank you so much on the right-hand side, we have the chats. Thank you for joining there, letting us know where you're all from. We've got people tuning in from India, Sweden, Spain, Germany. So that's brilliant. It's great to see a global audience and I hope this time zone suits all of you.
We're going to take probably about, I think roughly, maybe 40 minutes for both the presentation and Richard's brave enough to do some live coding as well too. So we very much look forward to that. We will be having a Q&A at the end. So, by all means, please ask any questions in the chat during Richard's presentation. We have some people, Kurt and others here, and who'll be able to answer some questions on Cosync. We also have some of our advocates, Diego and Mohit who joined in and answer any questions that you have on Realm as well too. So, we can have the chat in the sidebar. But what happens in this, what happened before at other meetups is that if you have some questions at the end and you're very comfortable, we can open up your mic and your video and allow you to join in in this meetup.
It is a meetup after all, and the more the merrier. So, if you're comfortable, let me know, make a note or a DM in the chats, and you can ask your question directly to Richard or myself at the end as well too. The other thing then really with regard to the housekeeping is, do get connected. This is our meetup, this is our forums. This is our channels. And that we're on as well too. So, developer.mongodb.com is our forums and our developer hub. We're creating articles there weekly and very in-depth tutorials, demos, links to repos, et cetera. That's where our advocates hang out and create content there. And around global community, you're obviously familiar with that because you've ended up here, right? But do spread the word. We're trying to get more and more people joining that community.
The reason being is that you will be first to know about the future events that we're hosting in our Realm global community if you're signed up and a member there. As soon as we add them, you'll automatically get an email, simple button inside the email to RSVP and to join future events as well too. And as always, we're really active on Twitter. We really like to engage with our mobile community there on Twitter. So, please follow us, DM us and get in touch there as well too. And if you do, and especially for this event now, I'm hoping that you will ... We have some prizes, you can win some swag.
It's not for everybody, but please post comments and your thoughts during the presentation or later on today, and we'll pick somebody at random and we send them a bunch of nice swag, as you can see, happily models there by our Realm SDK engineers, and indeed by Richard and myself as well too. So, I won't keep you much longer, essentially, we should get started now. So I would like to introduce Richard Krueger who's the CEO of Cosync. I'm going to stop sharing my screen. Richard, you can swap over to your screen. I'll still be here. I'll be moderating the chat. I'm going to jump back in at the end as well too. So, Richard, really looking forward to today. Thank you so much. We're really happy to have you here.
Richard:
Sounds good. Okay. I'm Richard Krueger, I'm the CEO of Cosync, and I'm going to be presenting a JWT authentication system, which we've built and as we're adding more features to it as we speak. So let me go ahead and share my screen here. And I'm going to share the screen right. Okay. Do you guys see my screen?
Shane:
We see double of your screen at the moment there.
Richard:
Oh, okay. Let me take this away. Okay, there you go.
Shane:
We can see that, if you make that full screen, we should be good and happier. I'd say, are you going to move between windows because you're doing-
Richard:
Yeah, I will. There we go. Let me just ... I could make this full screen right now. I might toggle between full screen and non-full screen. So, what is a little bit about myself, I've been a Realm programmer for now almost six years. I was a very early adopter of the very first object database which I used for ... I've been doing kind of cloud synchronization programs. So my previous employer, Needley we used that extensively, that was before there was even a cloud version of Realm. So, in order to build collaborative apps, one, back in the day would have to use something like Parse and Realm or Firebase and Realm. And it was kind of hybrid systems. And then about 2017, Realm came out with its own cloud version, the Realm Cloud and I was a very early adopter and enthusiast for that system.
I was so enthusiastic. I started a company that would build some add on tools for it. The way I see Realm is as kind of a seminole technology for doing full collaborative computing, I don't think there's any technology out there. The closest would be Firebase but that is still very server centric. What I love about Realm is that it kind of grew out of the client first and then kind of synchronizes client-side database with a mirrored copy on a server automatically. So, what Realm gives you is kind of an offline first capability and that's just absolutely huge. So you could be using your local app and you could be in a non-synced environment or non-connected environment. Then later when you connect everything automatically synchronizes to a server, copy all the updates.
And I think it scales well. And I think this is really seminal to develop collaborative computing apps. So one of the things we decided to do was, and this was about a year ago was build an authentication system. We first did it on the old Realm cloud system. And then in June of last year, Mongo, actually two years ago, Mongo acquired Realm and then merged the Atlas infrastructure with the Realm front end. And that new product was released last June and called MongoDB Realm. And which I actually think is a major improvement even on Realm sync, which I was very happy with, but I think the Apple infrastructures is significantly more featured than the Realm cloud infrastructure was. And they did a number of additional support capabilities on the authentication side.
So, what we did is we retargeted, co-synced JWT as an authentication system for the new MongoDB Realm. So, what is JWT? That stands for Java Script Web Tokens. So it's essentially a mechanism by which a third party can authenticate users for an app and verify their identity. And it's secure because the technology that's used, that underlies JWT's public private key encryption, it's the same technology that's behind Bitcoin. So you have a private key that encrypts the token or signs it, and then a public key that can verify the signature that can verify that a trusted party actually authenticated the user. And so why would you want to separate these two? Well, because very often you may want to do additional processing on your users. And a lot of the authentication systems that are right now with MongoDB Realm, you have anonymous authentication, or you have email password, but you may want to get more sophisticated than that.
You may want to attach metadata. You may want to have a single user that authenticates the same way across multiple apps. And so it was to kind of deal with these more complex issues in a MongoDB Realm environment that we developed this product. Currently, this product is a SaaS system. So, we actually host the authentication server, but the summer we're going to release a self hosted version. So you, the developer can host your own users on your own MongoDB Atlas cluster, and you run a NodeJS module called CosyncJWT server, and you will basically provide your own rest API to your own application. The only thing Cosync portal will do will be to manage that for you to administrate it.
So let me move on to the next slide here. Realm allows you to build better apps faster. So the big thing about Realm is that it works in an offline mode first. And that to me is absolutely huge because if anybody has ever developed synchronized software, often you require people to be connected or just doesn't work at all. Systems like Slack come to mind or most chat programs. But with Realm you can work completely offline. And then when you come back online, your local Realm automatically syncs up to your background Realm. So what we're going to do here is kind of show you how easy it is to implement a JWT server for a MongoDB Realm app. And so what I'm going to go ahead and do is we're going to kind of create an app from scratch and we're going to first create the MongoDB Realm app.
And so what I'm going to go here, I've already created this Atlas cluster. I'm going to go ahead and create an app called, let's call it CosyncJWT test. And this is I'm inside the MongoDB Realm portal right now. And I'm just going to go ahead and create this app. And then I'm going to set up its sync parameters, all of the MongoDB Realm developers are familiar with this. And so we're going to go to is we'll give it a partition key called partition, and we will go ahead and give it a database called CosyncJWT TestDB. And then we will turn our development mode on. Wait, what happened here?
What is the problem there? Okay. Review and deploy. Okay. Let me go ahead and deploy this. Okay. So, now this is a complete Realm app. It's got nothing on it whatsoever. And if I look at its authentication providers, all I have is anonymous login. I don't have JWT set at all. And so what we're going to do is show you how easy it is to configure a JWT token. But the very first thing we need to do is create what I call an API key, and an API key enables a third party program to manipulate programmatically your MongoDB Realm app. And so for that, what we'll do is go into the access manager and for this project, we'll go ahead and create an API key. So let me go ahead and create an API key. And I'm going to call this CosyncJWT test API key, and let's give it some permissions.
I'll be the project owner and let's go ahead and create it. Okay. So that will create both a public key and a private cake. So the very first thing you need to do when you do this is you need to save all of your keys to a file, which your private key, you have to be very careful because the minute somebody has this, go in and programmatically monkey with your stuff. So, save this away securely, not the way I'm doing it now, but write it down or save it to a zip drive. So let me copy the private key here. For the purpose of this demo and let me copy the public key.
Okay. Let me turn that. Not bold. Okay. Now the other thing we need is the project ID, and that's very easy to get, you just hit this little menu here and you go to project settings and you'll have your project ID here. So I'm going to, also, I'll need that as well. And lastly, what we need is the Realm app ID. So, let's go back to Realm here and go into the Realm tab there, and you can always get your app ID here. That's so unique, that uniquely identifies your app to Realm and you'll need that both the cursing portal level and at your app level. Okay, so now we've retrieved all of our data there. So what we're going to go ahead and do now is we're going to go into our Cosync portal and we're going to go ahead and create a Cosync app that mirrors this.
So I'm going to say create new app and I'll say Cosync. And by the way, to get to the Cosync portal, just quick note, to get to the Cosync portal, all you have to do is go to our Cosync website, which is here and then click on sign in, and then you're in your Cosync. I've already signed in. So, you can register yourself with Cosync. So we're going to go ahead and create a new app called Cosync JWT test and I'm going to go ahead and create it here. And close this. And it's initializing there, just takes a minute to create it on our server. Okay. Right. Something's just going wrong here. You go back in here.
Shane:
Such is the world of live demos!
Richard:
That's just the world of live demos. It always goes wrong the very second. Okay, here we go. It's created.
Shane:
There you go.
Richard:
Yeah. Okay. So, now let me explain here. We have a bunch of tabs and this is basically a development app. We either provide free development apps up to 50 users. And after that they become commercial apps and we charge a dollar for 1,000 users per month. So, if you have an app with 10,000 users, that would cost you $10 per month. And let me go, and then there's Realm tab to initialize your Realm. And we'll go into that in a minute. And then there's a JWT tab that kind of has all of the parameters that regulate JWT. So, one of the things I want to do is talk about metadata and for this demo, we can attach some metadata to the JWT token.
So the metadata we're going to attach as a first name and a last name, just to show you how that works. So, I'm going to make this a required field. And I'll say we're going to have a first name, this actually gets attached to the user object. So this will be its path, user data dot name dot first. And then this is the field name that gets attached to the user object. And there'll be first name and let's set another field, which is user data dot name dot last. And that will be last name. Okay. And so we have our metadata defined, let's go ahead and save it. There's also some invite metadata. So, if you want to do an invitation, you could attach a coupon to an invitation. So these are various onboarding techniques.
We support two types of onboarding, which is either invitation or sign up. You could have a system of the invitation only where a user would ... the free masons or something where somebody would have to know you, and then you could only get in if you were invited. Okay. So, now what we're going to go ahead and do is initialize our instance. So that's pretty easy. Let's go take our Realm app ID here, and we paste that in and let's go ahead and initialize our Kosik JWT, our token expiration will be 24 hours. So let's go ahead and initialize this. I'll put in my project ID.
All right. My project ID here, and then I will put in my public key, and I will put in my private key here. Okay. Let's go ahead and do this. Okay. And it's successfully initialized it, and we can kind of see that it did. If we go back over here to authentication, we're going to actually see that now we have cosynced JWT authentication. If we go in, it'll actually have set the signing algorithm to RS256, intellectually, have set the public key. So the Cosync, I mean, the MongoDB Realm app will hold onto the public key so that it knows that only this provider which holds onto the private key has the ability to sign. And then it also is defined metadata fields, which are first name, last name and email. Okay. So, anytime you sign up, those metadata fields will be kind of cemented into your user object.
And we also provide APIs to be able to change the metadata at runtime. So if you need to change it, you can. But it's important to realize that this metadata doesn't reside in Realm, it resides with the provider itself. And that's kind of the big difference there. So you could have another database that only had your user data. That was not part of your MongoDB Realm database, and you could mine that database for just your user stuff. So, that's the idea there. So the next step, what we're going to do is we're going to go ahead and run this kind of sample app. So the sample, we provide a number of sample apps. If you go to our docs here and you go down to sample application, we provide a good hub project called Cosync samples, which has samples for both our Cosync storage product, which we're not talking about here today, and our CosyncJWT project.
Cosync storage basically maps Amazon as three assets onto a MongoDB Realm app. So CosyncJWT has different directories. So, we have a Swift directory, a Kotlin directory and a ReactNative. Today I'm primarily just showing the Swift, but we also have ReactNative binding as well that works fine with this example. Okay. So what happens is you go ahead and clone this. You would go ahead and clone this, Github project here and install it. And then once you've installed it, let me bring it up here, here we go, this is what you would get. We have a sample app called CosyncJWT iOS. Now, that has three packages that depends on. One is a package called CosyncJWT Swift, which wrappers around our arrest API that uses NSURL.
And then we depend on the Realm packages. And so this little sample app will do nothing, but allow you to sign up a user to CosyncJWT, and logging in. And it'll also do things like two factor verification. We support both phones two factor verification if you have a Twilio account and we support the Google two-factor authentication, which is free, and even more secure than a phone. So, that gives you an added level of security, and I'll just show you how easy it is too. So, in order to kind of customize this, you need to set two constants. You need to set your Realm app ID and your wrap token. So, that's very easy to do. I can go ahead, and let me just copy this Realm app ID, which I copied from the Realm portal.
And I'll stick that here. Let me go ahead and get the app token, which itself is a JWT token because the Cosync, this token enables your client side app to use the CosyncJWT rust API and identify you as the client is belonging to the sound. And so if we actually looked at that token, we could go to utilities that have used JWT. You always use jwt.io, and you can paste any JWT token in the world into this little thing. And you'll see that this is this app token is in fact itself, a JWT token, and it's signed with CosyncJWT, and that will enable your client side to use the rest API.
So, let's go ahead and paste that in here, and now we're ready to go. So, at this point, if I just run this app, it should connect to the MongoDB Realm instance that we just previously created, and it should be able to connect to the CosyncJWT service for authentication. There are no users by the way in the system yet. So, let me go ahead and build and run this app here, and comes up, [inaudible 00:29:18] an iPhone 8+ simulator. And what we'll do is we'll sign up a user. So if we actually go to the JWT users, you'll see we have no users in our system at all. So, what we're going to go ahead and do is sign up a user. It'll just come up in a second.
Shane:
Simulators are always slow, Richard, especially-
Richard:
I know.
Shane:
... when you try to enable them. There you go.
Richard:
Right. There we go. Okay. So I would log in here. This is just simple SwiftUI. The design is Apple, generic Apple stuff. So, this was our signup. Now, if I actually look at the code here, I have a logged out view, and this is the actual calls here. I would have a sign up where I would scrape the email, the password, and then some metadata. So what I'm going to go ahead and do is I'm going to go ahead and put a break point right there and let's go ahead and sign myself up as [email protected], give it a password and let's go ahead and let's say Richard Krueger. So, at this point, we're right here. So, if we look at ... Let me just make this a little bit bigger.
Shane:
Yeah. If you could a little bit, because some of this obviously bevy adjusts itself by your connection and sometimes-
Richard:
Right away.
Shane:
... excavated in code. Thank you.
Richard:
Yeah. Okay. So if we look at the ... We have an email here, which is, I think we might be able to see it. I'm not sure. Okay, wait. Self.email. So, for some reason it's coming out empty there, but I'm pretty sure it's not empty. It's just the debugger is not showing the right stuff, but that's the call. I would just make a call to CosyncJWT sign up. I pass in an email, I pass in a password, pass in the metadata and it'll basically come back with it signed in. So, if I just run it here, it came back and then should not be ... there's no error. And it's now going to ask me to verify my code. So, the next step after that will be ... So, at this point I should get an email here. Let's run. So, it's not going to be prompting me for a code. So I just got this email, which says let me give it a code. And I'll make another call, Russ call to verify the code. And this should let me in.
Yeah. Which it did log me in. So, the call to verify the code. We also have things where you can just click on a link. So, by the way, let me close this. How your signup flow, you can either have code, link or none. So, you might have an app that doesn't need purification. So then you would just turn it on to none. If you don't want to enter a code, you would have them click on a link and all of these things themselves can be configured. So, the emails that go out like this particular email looks very generic. But I can customize the HTML of that email with these email templates. So, the email verification, the password reset email, all of these emails can be customized to 50 branding of the client itself.
So, you wouldn't have the words cosync in there. Anyways, so that kind of shows you. So now let me go ahead and log out and I can go ahead and log back in if I wanted to. Let me go ahead and the show you where the log in is. So, this is going to call user manager, which will have a log in here. And that we'll call Realm manage ... Wait a minute, log out, log in this right here. So, let's go put a break point on log in and I'm going to go ahead and say Richard@[email protected]. I'm going to go ahead and log in here. And I just make a call to CosyncJWT rest. And again, I should be able to just come right back.
And there I am. Often, by the way, you'll see this dispatch main async a lot of times when you make Rest calls, you come back on a different thread. The thing to remember, I wrote an article on Medium about this, but the thing to remember about Realm and threads is this, what happens on a thread? It's the Vegas rule. What happens on a thread must stay on a thread. So with Realm does support multithreading very, very well except for the one rule. If you open a Realm on a thread, you have to write it on the same thread and read it from the same thread. If you try and open a Realm on one thread and then try and read it from another thread, you'll cause an exception. So, often what I do a lot is force it back on the main thread.
And that's what this dispatch queue main async is. So, this went ahead and there's no error and it should just go ahead and log me in. So, what this is doing here, by the way, let me step into this. You'll see that that's going to go ahead and now issue a Realm log in. So that's an actual Realm call app.login.credentials, and then I pass it the JWT token that was returned to me by CosyncJWT. So by the way, if you don't want to force your user to go through the whole authentication procedure, every time he takes this app out of process, you can go ahead and save that JWT token to your key chain, and then just redo this this way.
So you could bypass that whole step, but this is a demo app, so I'd put it in there. So this will go ahead and log me in and it should transition, let me see. Yeah, and it did. Okay. So, that kind of shows you that. We also have capabilities for example, if you wanted to change your password, I could. So, I could change my password. Let me give my existing password and then I'll change it to a new password and let me change my password. And it did that. So, that itself is a function called change password.
It's right here, Cosync change password, is passing your new password, your old password, and that's another Rest call. We also have forgotten password, the same kind of thing. And we have two factor phone verification, which I'm not going to go into just because of time right now, or on two factor Google authentication. So, this was kind of what we're working on. It's a system that you can use today as a SaaS system. I think it's going to get very interesting this summer, once we release the self hosted version, because then, we're very big believers in open source, all of the code that you have here result released under the Apache open source license. And so anything that you guys get as developers you can modify and it's the same way that Realm has recently developed, Andrew Morgan recently developed a great chat app for Realm, and it's all equally under the Apache license.
So, if you need to implement chat functionality, I highly recommend to go download that app. And they show you very easily how to build a chat app using the new Swift combine nomenclature which was absolutely phenomenal in terms of opaque ... I mean, in terms of terseness. I actually wrote a chat program recently called Tinychat and I'd say MongoDB Realm app, and it's a cloud hosted chat app that is no more than 70 lines of code. Just to give you an idea how powerful the MongoDB Realm stuff and I'm going to try and get a JWT version of that posted in the next few days. And without it, yes, we probably should take some questions because we're coming up at quarter to the hour here. Shane.
Shane:
Excellent. No, thank you, Richard. Definitely, there's been some questions in the sidebar. Kurt has been answering some of them there, probably no harm to revisit a couple of them. So, Gigan, I hope I'm pronouncing that correctly as well too, was asking about changing the metadata at the beginning, when you were showing first name, last name, can you change that in future? Can you modify it?
Richard:
Yeah. So, if I want to add to the metadata, so what I could do is if I want to go ahead and add another field, so let's go ahead and add another field a year called user data coupon, and I'll just call this guy coupon. I can go ahead and add that. Now if I add something that's required, that could be a problem if I already have users without a required piece of metadata. So, we may actually have to come up with some migration techniques there. You don't want to delete metadata, but yeah, you could go ahead and add things.
Shane:
And is there any limits to how much metadata? I mean, obviously you don't want-
Richard:
Not really.
Shane:
... fields for users to fill in, but is there any strict limit at all?
Richard:
I mean, I don't think you want to store image data even if it's 64 encoded. If you were to store an avatar as metadata I'd store the link to the image somewhere, you might store that avatar on Amazon, that's free, and then you would store the link to it in the metadata. So, it's got normally JWT tokens pretty sparse. It's something supposed to be a 10 HighQ object, but the metadata I find is one of the powers of this thing because ... and all of this metadata gets rolled into the user objects. So, if you get the Realm user object, you can get access to all the metadata once you log in.
Shane:
I mean, the metadata can reside with the provider. That's obviously really important for, look, we see data breaches and I break, so you can essentially have that metadata elsewhere as well too.
Richard:
Right.
Shane:
It's very important for the likes of say publications and things like that.
Richard:
Right. Yeah, exactly. And by the way, this was a big feature MongoDB Realm added, because metadata was not part of the JWT support in the old Realm cloud. So, it was actually a woman on the forum. So MongoDB employee that tuned me into this about a year ago. And I think it was Shakuri I think is her name. And that's why it was after some discussion on the forums. By the way, these forums are fantastic. If you have any, you meet people there, you have great discussions. If you have a problem, you can just post it. If I know an issue, I try to answer it. I would say there it's much better than flashed off. And then it's the best place to get Realm questions answered okay much better than Stack Overflow. So, [inaudible 00:44:20]. Right?
Shane:
I know in our community, especially for Realm are slightly scattered all rights as well too. Our advocates look at questions on Stack Overflow, also get help comments and in our forum as well too. And I know you're an active member there, which is great. Just on another question then that came up was the CosyncJWT. You mentioned it was with Swift and ReactNative by way of examples. Have you plans for other languages?
Richard:
We have, I don't think we've published it yet, but we have a Kotlin example. I've just got to dig that up. I mean, if we like to hear more, I think Swift and Kotlin and React Native are the big ones. And I've noticed what's going on is it seems that people feel compelled to have a Native iOS, just because that's the cache operating system. And then what they do is they'll do an iOS version and then they'll do a ReactNative version to cover desktop and Android. And I haven't bumped into that many people that are pure Android, purest or the iOS people tend to be more purest than the Android people. I know...
Shane:
... partly down to Apple's review process with apps as well too can be incredibly stringent. And so you want to by the letter of the law, essentially try and put two things as natively as possible. Or as we know, obviously with Google, it's much more open, it's much freer to use whatever frameworks you want. Right?
Richard:
Right. I would recommend though, if you're an iOS developer, definitely go with SwiftUI for a number ... Apple is putting a huge amount of effort into that. And I have the impression that if you don't go there, you'll be locked out of a lot of features. And then more importantly, it's like Jason Flax who's a MongoDB employee has done a phenomenal job on getting these MongoDB Realm combined primitives working that make it just super easy to develop a SwiftUI app. I mean, it's gotten to the point where one of our developer advocate, Kurt Libby, is telling me that his 12 year old could
use Jason Flax's stuff. That was like, normally two years ago to use something like Realm required a master's degree, but it's gone from a master's degree to a twelve-year-old. It's just in simplification right now.
Shane:
Yeah. We're really impressed with what we've seen in SwiftUI. It's one of the areas we see a lot of innovation, a huge amount of traction, I suppose. Realm, historically, was seen as a leader in the Swift space as well too. Not only did we have Realm compatible with Swift, but we talked about swift a lot outside of, we led one of the largest Swift meetup groups in San Francisco at the time. And we see the same happening again with SwiftUI. Some people, look, dyed in the wool, developers are saying, "Oh, it's not ready for real time commercial apps," but it's 95% there. I think you can build an app wholly with SwiftUI. There's a couple of things that you might want to do, and kind of using UI kit and other things as well too, it's all right, but that's going to change quickly. Let's see what's in store at DC as well for us coming up.
Richard:
Yeah, exactly.
Shane:
Right. Excellent. I know, does anybody, I said at the beginning, we can open up the mic and the cameras to anybody who'd like to come on and ask a question directly of Richard or myself. If you want to do that, please make a comment in the chat. And I can certainly do that, if not just ask the questions in the chat there as well too. While we're waiting for that, you spoke about Google two factor and also Twilio. Your example there was with the code with the Google email, how much more work is involved in the two factor side of things either
Richard:
So, the two factor stuff, what you have to do, when you go here, you can turn on two factor verification. So, if you select Google you would have to put in your ... Let me just see what my ... You would have to put in the name of your Google app. And then if you did phone ... Yes, change it, you'd have to put your Twilio account SI, your off the token from Twilio and your Twilio phone number. Now, Twilio, it looks cheap. It's just like a penny a message. It adds up pretty fast.
Richard:
My previous company I worked with, Needley, we had crypto wallet for EOS and we released it and we had 15,000 users within two weeks. And then our Twilio bill was $4,000 within the week. It just added up very quickly. So it's the kind of thing that ... it doesn't cost much, but if you start sending out machine gunning out these SMS messages, it can start adding up. But if you're a banking app, you don't really care. You're more interested in providing the security for your ... Anyways, I guess that would answer that question. Are there any other questions here?
Shane:
There's been a bit of, I think it was a comment that was funny while you were doing the demo there, Richard, with regards to working on the main thread. And you were saying that there was issues. Now, look, Realm, we have frozen objects as well too, if you need to pass objects rights, but they are frozen. So maybe you might want to just maybe clarify your thoughts on that a little bit there. There was one or two comments in the sidebar.
Richard:
Well, with threading in Realm, this is what I tend to do. If you have a background, one of the problems you bump into is the way threading in SwiftUI works is you have your main thread that's a little bit like you're Sergeant major. And then you have all your secondary threads that are more like your privates. And the Sergeant major says, "Go do this, go clean the latrine, or go peel some potatoes." And he doesn't really care which private goes off and doesn't, just the system in the background will go assign some private to go clean the little train. But when Realm, you have to be careful because if you do an async open on a particular thread, particular worker thread, then all the other subsequent things, all the writes and the reads should be done on that same thread.
Richard:
So, what I found is I go ahead and create a worker thread at the beginning that will kind of handle requests. And then I make sure I can get back there and to that particular thread. There was an article I wrote on Medium about how to do this, because you obviously you don't want to burden your main thread with all your Realm rights. You don't want to do that because it will start eating ... I mean, your main threads should be for SwiftUI and nothing more. And you want to then have a secondary thread that can process that, and having just one secondary thread that's working in the background is sufficient. And then that guy handles the Realm request in a sense. That was the strategy seemed to work best I found.
Richard:
But you could open a Realm on your primary thread. You can also open the same Realm on a background thread. You just have to be careful when you're doing the read better beyond the Realm that was opened on the thread that it was opened on that the read is taking place from. Otherwise, you just got an exception. That's what I've found. But I can't say that I'm a complete expert at it, but in general, with most of my programming, I've always had to eventually revert to kind of multi-threading just to get the performance up because otherwise you'll just be sitting there just waiting and waiting and waiting sometimes.
Shane:
Yeah, no, that's good. And I think everybody has a certain few points on this. Sebastian asked the question originally, I know both Mohit and Andrew who are developer advocates here at Realm have chimed in on that as well too. And it is right by best practices and finding the effect on what might happen depending on where you are trying to read and write.
Richard:
Right. Well, this particular example, I was just forcing it back on the main thread, because I think that's where I had to do the Rest calls from. There was an article I wrote, I think it was about three months ago, Multithreading and MongoDB Realm, because I was messing around with it for some imaging out that there was writing and we needed to get the performance out of it. And so anyways, that was ... But yeah, I hope that answers that question.
Shane:
Yeah, yeah. Look, we could probably do a whole session on this as well. That's the reality of it. And maybe we might do that. I'm conscious of everybody's time. It'd be mindful of that. And didn't see anything else pop up in the questions. Andrew's linked your Medium articles there as well too. We've published them on Realm, also writes on Medium. We publish a lot of the content, we create on dev up to Medium, but we do and we are looking for others who are writing about Realm that who may be writing Medium to also contribute. So if you are, please reach out to us on Medium there to add to that or ping us on the forums or at Realm. I look after a lot of our Twitter content on that Realm as we [crosstalk 00:56:12] there. I've noticed during this, that nobody wants T-shirts and face masks, nobody's tweeted yet at Realm. Please do. We'll keep that open towards the end of the day as well. If there's no other questions, I first of all want to say thank you very much, Richard.
Richard:
Well, thank you for having me.
Shane:
No, we're delighted. I think this is a thing that we want to do ongoing. Yes, we are running our own meetups with our own advocates and engineers, but we also want, at least perhaps once a month, maybe more if we could fit it in to invite guests along to share their experience of using MongoDB Realm as well too. So, this is the first one of those. As we saw at the beginning, we do have Igor in AWS during the presentation in June as well too. But really appreciate the attendance here today. Do keep an eye. We are very busy. You saw it's pretty much once week for the next four or five weeks, these meetups. Please share amongst your team as well too.
Shane:
And above all, join us. As you said, Richard, look, I know you're a contributor in our forums and we do appreciate that. We have a lot of active participants in our forums. We like to, I suppose, let the community answer some of those questions themselves before the engineers and the advocates dive in. It's a slow growth obviously, but we're seeing that happen as well too, so we do appreciate it. So communicate with us via forums, via @realm and go to our dev hub, consume those articles. The articles Richard mentioned about the chat app is on our dev hub by Andrew. If you go look there and select actually the product category, you can select just mobile and see all our mobile articles. Since certainly November of last year, I think there's 24, 25 articles there now. So, they are relatively recent and relatively current. So, I don't know, Richard, have you any parting words? I mean, where do people ... you said up to 50 users it's free, right? And all that.
Richard:
Right. So, up to 50 users it's free. And then after that you would be charged a dollar for 1,000 users per month.
Shane:
That's good.
Richard:
Well, what we're going to try and do is push once we get the self hosted version. We're actually going to try and push developers into that option, we don't know the price of it yet, but it will be equally as affordable. And then you basically host your own authentication server on your own servers and you'll save all your users to your own Atlas cluster. Because one of the things we have bumped into is people go, "Well, I don't really know if I want to have all my user data hosted by you," and which is a valid point. It's very sensitive data.
Shane:
Sure.
Richard:
And so that was why we wanted to build an option so your government agency, you can't share your user data, then you would host, we would just provide the software for you to do that and nothing more. And so that's where the self hosted version of CosyncJWT would do them.
Shane:
Excellent. It sounds great. And look, you mentioned then your storage framework that you're building at the moment as well too. So hopefully, Richard, we can have you back in a couple of months when that's ready.
Richard:
Great. Okay. Sounds good.
Shane:
Excellent.
Richard:
Thanks, Shane.
Shane:
No problem at all. Well, look, thank you everybody for tuning in. This is recorded. So, it will end up on YouTube as well too and we'll send that link to the group once that's ready. We'll also end up on the developer hub where we've got a transcript of the content that Richard's presented here as well. That'd be perfect. Richard, you have some pieces in your presentation too that we can share in our community as well too later?
Richard:
Yeah, yeah. That's fine. Go ahead and share.
Shane:
Excellent. We'll certainly do that.
Richard:
Yeah.
Shane:
So, thank you very much everybody for joining, and look forward to seeing you at the future meetups, as I said, five of them over the next six weeks or so. Very, very [inaudible 01:00:34] time for us. And thank you so much, Richard. Really entertaining, really informative and great to see the demo of the live coding.
Richard:
Okay. Thanks Shane. Excellent one guys.
Shane:
Take care, everybody. Bye.
Richard:
Bye. | md | {
"tags": [
"Realm"
],
"pageDescription": "This meetup talk will focus on the benefits of JWT authentication and how to easily implement CosyncJWT within a Realm application.",
"contentType": "Article"
} | Easy Realm JWT Authentication with CosyncJWT | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-data-parquet | created | # How to Get MongoDB Data into Parquet in 10 Seconds or Less
For those of you not familiar with Parquet, it’s an amazing file format that does a lot of the heavy lifting to ensure blazing fast query performance on data stored in files. This is a popular file format in the Data Warehouse and Data Lake space as well as for a variety of machine learning tasks.
One thing we frequently see users struggle with is getting NoSQL data into Parquet, as it is a columnar format. Historically, you would have to write some custom code to get the data out of the database, transform it into an appropriate structure, and then probably utilize a third-party library to write it to Parquet. Fortunately, with MongoDB Atlas Data Federation's $out to cloud object storage (Amazon S3 or Microsoft Azure Blob Storage), you can now convert MongoDB data into Parquet with little effort.
In this blog post, I’m going to walk you through the steps necessary to write data from your Atlas Cluster directly to cloud object storage in the Parquet format and then finish up by reviewing some things to keep in mind when using Parquet with NoSQL data. I’m going to use a sample data set that contains taxi ride data from New York City.
## Prerequisites
In order to follow along with this tutorial yourself, you will need the following:
- An Atlas cluster with some data in it. (It can be the sample data.)
- An AWS account with privileges to create IAM Roles and cloud object storage buckets (to give us access to write data to your cloud object storage bucket).
## Create a Federated Database Instance and Connect to cloud object storage
The first thing you'll need to do is navigate to the "Data Federation" tab on the left hand side of your Atlas Dashboard and then click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
Then, you need to connect your cloud object storage bucket to your Federated Database Instance. This is where we will write the Parquet files. The setup wizard should guide you through this pretty quickly but you will need access to your credentials for AWS. (Be sure to give Atlas Data Federation “Read and Write” access to the bucket so it can write the Parquet files there.)
Once you’ve connected your cloud object storage bucket, we’re going to create a simple data source to query the data in cloud object storage so we can verify we’ve written the data to cloud object storage at the end of this tutorial. Our new setup tool makes it easier than ever to configure your Federated Database Instance to take advantage of the partitioning of data in cloud object storage. Partitioning allows us to only select the relevant data to process in order to satisfy your query. (I’ve put a sample file in there for this test that will fit how we’re going to partition the data by \_cab\_type).
``` bash
mongoimport --uri "mongodb+srv://<USERNAME>:<PASSWORD>@<CLUSTER_HOST>/<DATABASE>" --collection <COLLECTION> --type json --file <FILENAME>
```
## Connect Your Federated Database Instance to an Atlas Cluster
Now we’re going to connect our Atlas cluster, so we can write data from it into the Parquet files. This involves picking the cluster from a list of clusters in your Atlas project and then selecting the databases and collections you’d like to create Data Sources from and dragging them into your Federated Database Instance.
## $out to cloud object storage in Parquet
Now we’re going to connect to our Federated Database Instance using the mongo shell and execute the following command. This is going to do quite a few things, so I’m going to explain the important ones.
- First, you can use the ‘filename’ field of the $out stage to have your Federated Database Instance partition files by “_cab_type”, so all the green cabs will go in one set of files and all the yellow cabs will go in another.
- Then in the format, we’re going to specify parquet and determine a maxFileSize and maxRowGroupSize.
  - maxFileSize is going to determine the maximum size each partition will be.
  - maxRowGroupSize is going to determine how records are grouped inside of the Parquet file in “row groups,” which will impact performance when querying your Parquet files, similarly to file size.
- Lastly, we’re using a special Atlas Data Federation aggregation “background: true” which simply tells the Federated Database Instance to keep executing the query even if the client disconnects. (This is handy for long running queries or environments where your network connection is not stable.)
``` js
db.getSiblingDB("clusterData").getCollection("trips").aggregate(
{
"$out" : {
"s3" : {
"bucket" : "ben.flast",
"region" : "us-east-1",
"filename" : {
"$concat" : [
"taxi-trips/",
"$_cab_type",
"/"
]
},
"format" : {
"name" : "parquet",
"maxFileSize" : "10GB",
"maxRowGroupSize" : "100MB"
}
}
}
}
], {
background: true
})
```
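Once the background `$out` job finishes, a quick sanity check is to query the newly written Parquet files back through the data source you configured on the Federated Database Instance earlier. The virtual database and collection names below are placeholders rather than values from this tutorial; substitute whatever names you chose when you set up that data source.

``` js
// Placeholders: use the virtual database/collection that map to your S3 bucket path
db.getSiblingDB("<virtual-database>").getCollection("<virtual-collection>").findOne()
```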
## Blazing Fast Queries on Parquet Files
Now, to give you some idea of the potential performance improvements for object store data, I’ve written three sets of data, each with 10 million documents: one in Parquet, one in uncompressed JSON, and another in compressed JSON. And I ran a count command on each of them with the following results.
*db.trips.count()*
10,000,000
| Type | Data Size (GB) | Count Command Latency (Seconds) |
| ---- | -------------- | ------------------------------- |
| JSON (Uncompressed) | \~16.1 | 297.182 |
| JSON (Compressed) | \~1.1 | 78.070 |
| Parquet | \~1.02 | 1.596 |
## In Review
So, what have we done and what have we learned?
- We saw how quickly and easily you can create a Federated Database Instance in MongoDB Atlas.
- We connected an Atlas cluster to our Federated Database Instance.
- We used our Federated Database Instance to write Atlas cluster data to cloud object storage in Parquet format.
- We demonstrated how fast and space-efficient Parquet is when compared to JSON.
## A Couple of Things to Remember About Atlas Data Federation
- Parquet is a super fast columnar format that can be read and written with Atlas Data Federation.
- Atlas Data Federation takes advantage of various pieces of metadata contained in Parquet files, not just the maxRowGroupSize. For instance, if your first stage in an aggregation pipeline was $project: {fieldA: 1, fieldB: 1}, we would only read the two columns from the Parquet file, which results in faster performance and lower costs as we are scanning less data.
- Atlas Data Federation writes Parquet files flexibly so if you have polymorphic data, we will create union columns so you can have ‘Column A - String’ and ‘Column A - Int’. Atlas Data Federation will read union columns back in as one field but other tools may not handle union types. So if you’re going to be using these Parquet files with other tools, you should transform your data before the $out stage to ensure no union columns.
- Atlas Data Federation will also write files with different schemas if it encounters data with varying schemas throughout the aggregation. It can handle different schemas across files in one collection, but other tools may require a consistent schema across files. So if you’re going to be using these Parquet files with other tools, you should do a $project with $convert before the $out stage to ensure a consistent schema across generated files.
- Parquet is a great format for your MongoDB data when you need to use columnar oriented tools like Tableau for visualizations or machine learning frameworks that use data frames. Parquet can be quickly and easily converted into Pandas data frames in Python. | md | {
"tags": [
"Atlas",
"Parquet"
],
"pageDescription": "Learn how to transform MongoDB data to Parquet with Atlas Data Federation.",
"contentType": "Tutorial"
} | How to Get MongoDB Data into Parquet in 10 Seconds or Less | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/semantic-search-mongodb-atlas-vector-search | created | # How to Do Semantic Search in MongoDB Using Atlas Vector Search
Have you ever been looking for something but don’t quite have the words? Do you remember some characteristics of a movie but can’t remember the name? Have you ever been trying to get another sweatshirt just like the one you had back in the day but don’t know how to search for it? Are you using large language models, but they only know information up until 2021? Do you want it to get with the times?! Well then, vector search may be just what you’re looking for.
## What is vector search?
Vector search is a capability that allows you to do semantic search where you are searching data based on meaning. This technique employs machine learning models, often called encoders, to transform text, audio, images, or other types of data into high-dimensional vectors. These vectors capture the semantic meaning of the data, which can then be searched through to find similar content based on vectors being “near” one another in a high-dimensional space. This can be a great complement to traditional keyword-based search techniques but is also seeing an explosion of excitement because of its relevance to augment the capabilities of large language models (LLMs) by providing ground truth outside of what the LLMs “know.” In search use cases, this allows you to find relevant results even when the exact wording isn't known. This technique can be useful in a variety of contexts, such as natural language processing and recommendation systems.
Note: As you probably already know, MongoDB Atlas has supported full-text search since 2020, allowing you to do rich text search on your MongoDB data. The core difference between vector search and text search is that vector search queries on meaning instead of explicit text and therefore can also search data beyond just text.
## Benefits of vector search
- Semantic understanding: Rather than searching for exact matches, vector search enables semantic searching. This means that even if the query words aren't present in the index, but the meanings of the phrases are similar, they will still be considered a match.
- Scalable: Vector search can be done on large datasets, making it perfect for use cases where you have a lot of data.
- Flexible: Different types of data, including text but also unstructured data like audio and images, can be semantically searched.
## Benefits of vector search with MongoDB
- Efficiency: By storing the vectors together with the original data, you avoid the need to sync data between your application database and your vector store at both query and write time.
- Consistency: Storing the vectors with the data ensures that the vectors are always associated with the correct data. This can be important in situations where the vector generation process might change over time. By storing the vectors, you can be sure that you always have the correct vector for a given piece of data.
- Simplicity: Storing vectors with the data simplifies the overall architecture of your application. You don't need to maintain a separate service or database for the vectors, reducing the complexity and potential points of failure in your system.
- Scalability: With the power of MongoDB Atlas, vector search on MongoDB scales horizontally and vertically, allowing you to power the most demanding workloads.
> Want to experience Vector Search with MongoDB quickly and easily? Check out this automated demo on GitHub as you walk through the tutorial.
## Set up a MongoDB Atlas cluster
Now, let's get into setting up a MongoDB Atlas cluster, which we will use to store our embeddings.
**Step 1: Create an account**
To create a MongoDB Atlas cluster, first, you need to create a MongoDB Atlas account if you don't already have one. Visit the MongoDB Atlas website and click on “Register.”
**Step 2: Build a new cluster**
After creating an account, you'll be directed to the MongoDB Atlas dashboard. You can create a cluster in the dashboard, or using our public API, CLI, or Terraform provider. To do this in the dashboard, click on “Create Cluster,” and then choose the shared clusters option. We suggest creating an M0 tier cluster.
If you need help, check out our tutorial demonstrating the deployment of Atlas using various strategies.
**Step 3: Create your collections**
Now, we’re going to create your collections in the cluster so that we can insert our data. They need to be created now so that you can create an Atlas trigger that will target them.
For this tutorial, you can create your own collection if you have data to use. If you’d like to use our sample data, you need to first create an empty collection in the cluster so that we can set up the trigger to embed them as they are inserted. Go ahead and create a “sample_mflix” database and “movies” collection now using the UI, if you’d like to use our sample data.
## Setting up an Atlas trigger
We will create an Atlas trigger to call the OpenAI API whenever a new document is inserted into the cluster.
To proceed to the next step using OpenAI, you need to have set up an account on OpenAI and created an API key.
If you don't want to embed all the data in the collection, you can use the "sample_mflix.embedded_movies" collection instead, which already has embeddings generated by OpenAI. In that case, just create an index and run vector search queries.
**Step 1: Create a trigger**
To create a trigger, navigate to the “Triggers” section in the MongoDB Atlas dashboard, and click on “Add Trigger.”
**Step 2: Set up secrets and values for your OpenAI credentials**
Go over to “App Services” and select your “Triggers” application.
Click “Values.”
You’ll need your OpenAI API key, which you can create on their website:
Create a new value. Select “Secret” and then paste in your OpenAI API key.
Then, create another value (this time of type “Value”) and link it to your secret. This is how you will securely reference this API key in your trigger.
Now, you can go back to the “Data Services” tab and into the triggers menu. If the trigger you created earlier does not show up, just add a new trigger. It will be able to utilize the values you set up in App Services earlier.
**Step 3: Configure the trigger**
Select the “Database” type for your trigger. Then, link the source cluster and set the “Trigger Source Details” to be the database and collection to watch for changes. For this tutorial, we are using the “sample_mflix” database and the “movies” collection. Set the Operation Type to the “Insert,” “Update,” and “Replace” operations. Check the “Full Document” flag and, in the Event Type, choose “Function.”
In the Function Editor, use the code snippet below, replacing DB Name and Collection Name with the database and collection names you’d like to use, respectively.
This trigger will fire when a new document is created or updated in this collection. Once that happens, it will make a call to the OpenAI API to create an embedding of the desired field, and then it will insert that vector embedding into the document under a new field name.
```javascript
exports = async function(changeEvent) {
// Get the full document from the change event.
const doc = changeEvent.fullDocument;
// Define the OpenAI API url and key.
const url = 'https://api.openai.com/v1/embeddings';
// Use the name you gave the value of your API key in the "Values" utility inside of App Services
const openai_key = context.values.get("openAI_value");
try {
console.log(`Processing document with id: ${doc._id}`);
// Call OpenAI API to get the embeddings.
let response = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
// The field inside your document that contains the data to embed, here it is the "plot" field from the sample movie data.
input: doc.plot,
model: "text-embedding-ada-002"
})
});
// Parse the JSON response
let responseData = EJSON.parse(response.body.text());
// Check the response status.
if(response.statusCode === 200) {
console.log("Successfully received embedding.");
const embedding = responseData.data[0].embedding;
// Use the name of your MongoDB Atlas Cluster
const collection = context.services.get("").db("sample_mflix").collection("movies");
// Update the document in MongoDB.
const result = await collection.updateOne(
{ _id: doc._id },
// The name of the new field you'd like to contain your embeddings.
{ $set: { plot_embedding: embedding }}
);
if(result.modifiedCount === 1) {
console.log("Successfully updated the document.");
} else {
console.log("Failed to update the document.");
}
} else {
console.log(`Failed to receive embedding. Status code: ${response.statusCode}`);
}
} catch(err) {
console.error(err);
}
};
```
## Configure index
Now, head over to Atlas Search and create an index. Use the JSON index definition and insert the following, replacing the embedding field name with the field of your choice. If you are using the sample_mflix database, it should be “plot_embedding”, and give it a name. I’ve used “moviesPlotIndex” for my setup with the sample data.
First, click the “Atlas Search” tab on your cluster.
![Databases Page for a Cluster with an arrow pointing at the Search tab][1]
Then, click “Create Search Index.”
![Search tab within the Cluster page with an arrow pointing at Create Search Index]
Select “JSON Editor.”
Then, select your database and collection on the left, and drop in the code snippet below for your index definition.
```json
{
"type": "vectorSearch",
"fields": {
"path": "plot_embedding",
"dimensions": 1536,
"similarity": "cosine",
"type": "vector"
}]
}
```
## Insert your data
Now, you need to insert your data. As your data is inserted, it will be embedded by the trigger and then indexed using the vector search index we just created.
If you have your own data, you can insert it now using something like mongoimport.
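For example, here is a hedged sketch of a mongoimport command; the connection string, file name, and collection name are placeholders you would replace with your own values.

```bash
# Hypothetical example: replace the URI, collection, and file name with your own.
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/sample_mflix" \
  --collection movies \
  --file movies.json \
  --jsonArray
```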
If you’re going to use the sample movie data, you can just go to the cluster, click the … menu, and load the sample data. If everything has been set up correctly, the sample_mflix database and movies collection will have plot embeddings created from the “plot” field and added to a new “plot_embedding” field.
## Now, to query your data with JavaScript
Once the documents in your collection have their embeddings generated, you can perform a query. But because this is using vector search, your query needs to be transformed into an embedding. This is an example script of how you could add a function to get both an embedding of the query and a function to use that embedding inside of your application.
```javascript
const axios = require('axios');
const MongoClient = require('mongodb').MongoClient;
async function getEmbedding(query) {
// Define the OpenAI API url and key.
const url = 'https://api.openai.com/v1/embeddings';
const openai_key = 'your_openai_key'; // Replace with your OpenAI key.
// Call OpenAI API to get the embeddings.
let response = await axios.post(url, {
input: query,
model: "text-embedding-ada-002"
}, {
headers: {
'Authorization': `Bearer ${openai_key}`,
'Content-Type': 'application/json'
}
});
if(response.status === 200) {
return response.data.data[0].embedding;
} else {
throw new Error(`Failed to get embedding. Status code: ${response.status}`);
}
}
async function findSimilarDocuments(embedding) {
const url = 'your_mongodb_url'; // Replace with your MongoDB url.
const client = new MongoClient(url);
try {
await client.connect();
const db = client.db(''); // Replace with your database name.
const collection = db.collection(''); // Replace with your collection name.
// Query for similar documents.
const documents = await collection.aggregate([
{"$vectorSearch": {
"queryVector": embedding,
"path": "plot_embedding",
"numCandidates": 100,
"limit": 5,
"index": "moviesPlotIndex",
}}
]).toArray();
return documents;
} finally {
await client.close();
}
}
async function main() {
const query = 'your_query'; // Replace with your query.
try {
const embedding = await getEmbedding(query);
const documents = await findSimilarDocuments(embedding);
console.log(documents);
} catch(err) {
console.error(err);
}
}
main();
```
This script first transforms your query into an embedding using the OpenAI API, and then queries your MongoDB cluster for documents with similar embeddings.
> Support for the '$vectorSearch' aggregation pipeline stage is available with MongoDB Atlas 6.0.11 and 7.0.2.
Remember to replace 'your_openai_key', 'your_mongodb_url', and 'your_query' with your actual OpenAI key, MongoDB URL, and query, and fill in the empty database and collection name strings with your own database and collection names.
And that's it! You've successfully set up a MongoDB Atlas cluster and Atlas trigger which calls the OpenAI API to embed documents when they get inserted into the cluster, and you’ve performed a vector search query.
> If you prefer learning by watching, check out the video version of this article!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb8503d464e800c36/65a1bba2d6cafb29fbf758da/Screenshot_2024-01-12_at_4.45.14_PM.png | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"Serverless"
],
"pageDescription": "Learn how to get started with Vector Search on MongoDB while leveraging the OpenAI.",
"contentType": "Tutorial"
} | How to Do Semantic Search in MongoDB Using Atlas Vector Search | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/atlas-flask-azure-container-apps | created | # Building a Flask and MongoDB App with Azure Container Apps
For those who want to focus on creating scalable containerized applications without having to worry about managing any environments, this is the tutorial for you! We are going to be hosting a dockerized version of our previously built Flask and MongoDB Atlas application on Azure Container Apps.
Azure Container Apps truly simplifies not only the deployment but also the management of containerized applications and microservices on a serverless platform. This Microsoft service also offers a huge range of integrations with other Azure platforms, making it easy to scale or improve your application over time. The combination of Flask, Atlas, and Container Apps allows for developers to build applications that are capable of handling large amounts of data and traffic, while being extremely accessible from any machine or environment.
The specifics of this tutorial are as follows: We will be cloning our previously built Flask application that provides CRUD (create, read, update, and delete) functionality for a “bookshelf” created in MongoDB Atlas. Once the app is up and running and we connect to it with Postman or cURL, we can add new books, read back all the books in our database, update (exchange) a book, and even delete books. From here, we will dockerize our application and then host the dockerized image on Azure Container Apps. Once this is done, anyone anywhere can access our application!
The success of following this tutorial requires a handful of prerequisites:
* Access and clone our Flask and MongoDB application from the GitHub repository if you would like to follow along.
* View the completed repository for this demo.
* MongoDB Atlas
* Docker Desktop
* Microsoft Azure subscription.
* Python 3.9+.
* Postman Desktop (or another way to test our functions).
### Before we dive in...
Before we continue on to containerizing our application, please ensure you have a proper understanding of our program through this article: Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service. It goes into a lot of detail on how to properly build and connect a MongoDB Atlas database to our application along with the intricacies of the app itself. If you are a beginner, please ensure you completely understand the application prior to containerizing it through Docker.
#### Insight into our database
Before moving on in our demo, if you’ve followed our previous demo linked above, this is how your Atlas database can look. These books were added in at the end of the previous demo using our endpoint. If you’re using your own application, an empty collection will be supported. But if you have existing documents, they need to support our schema or an error message will appear:
We are starting with four novels with various pages. Once properly connected and hosted in Azure Container Apps, when we connect to our `/books` endpoint, these novels will show up.
### Creating a Dockerfile
Once you have a cloned version of the application, it’s time to create our Dockerfile. A Dockerfile is important because it contains all of the information and commands to assemble an image. From the commands in a Dockerfile, Docker can actually build the image automatically with just one command from your CLI.
In your working directory, create a new file called `Dockerfile` and put in these commands:
```
FROM python:3.9-slim-buster
WORKDIR /azurecontainerappsdemo
COPY ./config/requirements.txt /azurecontainerappsdemo/
RUN pip install -r requirements.txt
COPY . /azurecontainerappsdemo/
ENV FLASK_APP=app.py
EXPOSE 5000
CMD "flask", "run", "--host=0.0.0.0"]
```
Please ensure your `requirements.txt` file is placed under a new folder called `/config`. This is so we can be certain our `requirements.txt` file is located and properly copied in with our Dockerfile since it is crucial for our demo.
Our base image `python:3.9-slim-buster` is essential because it provides a starting point for creating a new container image. In Docker, base images contain all the necessary components to successfully build and run an application. The rest of the commands copy over all the files in our working directory, expose Flask’s default port 5000, and specify how to run our application while allowing network access from anywhere. It is crucial to include the `--host=0.0.0.0` because otherwise, when we attempt to host our app on Azure, it will not connect properly.
In our app.py file, please make sure to add the following two lines at the very bottom of the file:
```
if __name__ == '__main__':
app.run(host='0.0.0.0', debug=True)
```
This once again allows Flask to run the application, ensuring it is accessible from any network.
###### Optional
You can test and make sure your app is properly dockerized with these commands:
Build: `docker build --tag azurecontainerappsdemo . `
Run: `docker run -d -p 5000:5000 -e "CONNECTION_STRING=" azurecontainerappsdemo`
You should see your app up and running on your local host.
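If you want to sanity-check the API before deploying, you could hit the `/books` route of the locally running container (a hypothetical smoke test, assuming the port mapping from the run command above):

```
curl http://localhost:5000/books
```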
![app hosted on local host]
Now, we can use Azure Container Apps to run our containerized application without worrying about infrastructure on a serverless platform. Let’s go over how to do this.
### Creating an Azure Container Registry
We are going to be building and pushing our image to our Azure Container Registry in order to successfully host our application. To do this, please make sure that you are logged into Azure. There are multiple ways to create an Azure Container Registry: through the user interface, the command line, or even through the VSCode extension. For simplicity, this tutorial will show how to do it through the user interface.
Our first step is to log into Azure and access the Container Registry service. Click to create a new registry and you will be taken to this page:
Choose which Resource Group you want to use, along with a Registry Name (this will serve as your login URL) and Location. Make a note of these because when we start our Container App, all these need to be the same. After these are in place, press the Review and Create button. Once configured, your registry will look like this:
Now that you have your container registry in place, let’s access it in VSCode. Make sure that you have the Docker extension installed. Go to registries, log into your Azure account, and connect your registry. Mine is called “anaiyaregistry” and when set up, looks like this:
Now, log into your ACR using this command:
`docker login <your-registry-login-server>`
As an example, mine is:
`docker login anaiyaregistry.azurecr.io`
You will have to go to Access Keys inside of your Container Registry and click on Admin Access. Then, use that username and password to log into your terminal when prompted. If you are on a Windows machine, please make sure to right-click to paste. Otherwise, an error will appear:
When you’ve successfully logged in to your Azure subscription and Azure Registry, we can move on to building and pushing our image.
### Building and pushing our image to Azure Container Registry
We need to now build our image and push it to our Azure Container Registry.
If you are using an M1 Mac, we need to reconfigure our image so that it is using `amd64` instead of the configured `arm64`. This is because at the moment, Azure Container Apps only supports `linux/amd64` container images, and with an M1 machine, your image will automatically be built as `arm`. To get around this, we will be utilizing Buildx, a Docker plugin that allows you to build and push images for various platforms and architectures.
If you are not using an M1 Mac, please skip to our “Non-M1 Machines” section.
#### Install Buildx
To install `buildx` on your machine, please put in the following commands:
`docker buildx install`
To enable `buildx` to use the Docker CLI, please type in:
`docker buildx create --use`
Once this runs and a randomized container name appears in your terminal, you’ll know `buildx` has been properly installed.
#### Building and pushing our image
The command to build our image is as follows:
`docker buildx build --platform linux/amd64 -t <registry-name>.azurecr.io/<image-name>:<tag> --output type=docker .`
As an example, my build command is:
`docker buildx build --platform linux/amd64 -t anaiyaregistry.azurecr.io/azurecontainerappsdemo:latest --output type=docker .`
Specifying the platform you want your image to run on is the most important part. Otherwise, when we attempt to host it on Azure, we are going to get an error.
Once this has succeeded, we need to push our image to our registry. We can do this with the command:
`docker push <registry-name>.azurecr.io/<image-name>:<tag>`
As an example, my push command is:
`docker push anaiyaregistry.azurecr.io/azurecontainerappsdemo:latest`
#### Non-M1 Mac machines
If you have a non-M1 machine, please follow the above steps but feel free to ignore installing `buildx`. For example, your build command will be:
`docker build -t <registry-name>.azurecr.io/<image-name>:<tag> --output type=docker .`
Your push command will be:
`docker push <registry-name>.azurecr.io/<image-name>:<tag>`
#### Windows machines
For Windows machines, please use the following build command:
`docker build -t <registry-name>.azurecr.io/<image-name>:<tag> .`
And use the following push command:
`docker push <registry-name>.azurecr.io/<image-name>:<tag>`
Once your push has been successful, let’s ensure we can properly see it in our Azure user interface. Access your Container Registries service and click on your registry. Then, click on Repositories.
Click again on your repository and there will be an image named `latest` since that is what we tagged our image with when we pushed it. This is the image we are going to host on our Container App service.
### Creating our Azure container app
We are going to be creating our container app through the Azure user interface.
Access your Container Apps service and click Create.
Now access your Container Apps in the UI. Click Create and fill in the “Basics” like this:
**If this is the first time creating an Azure Container App, please ensure an environment is created when the app is created.** A Container Apps environment is crucial as it creates a secure boundary around the various container apps that exist on the same virtual network. To check on your Container Apps environments, you can find them under “Container Apps Environments” in your Azure portal.
As we can see, the environment we chose from above is available under our Container Apps Environments tab.
Please ensure your Region and Resource Group are identical to the options picked while creating your Registry in a previous step. Once you’re finished putting in the “Basics,” click App Settings and uncheck “Use quickstart image.” Under “Image Source,” click on Azure Container Registry and put in your image information.
At the bottom, enter your environment variable (your connection string; it's located in both Atlas and your .env file if you're copying over from the previous demo):
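For reference, here is a minimal sketch of how the Flask app might read that environment variable at startup. It assumes the variable is named CONNECTION_STRING (as in the docker run command earlier) and uses an illustrative database name; the app from the previous tutorial may structure this differently.

```
# Hypothetical sketch: read the Atlas connection string from the environment.
import os
from pymongo import MongoClient

connection_string = os.environ.get("CONNECTION_STRING")
client = MongoClient(connection_string)
db = client["bookshelf"]  # assumed database name, for illustration only
```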
Under that, hit “Enabled” for ingress and fill in the rest like this:
When done, hit Review and Create at the very bottom of the screen.
You’ll see this page when your deployment is successful. Hit Go to Resource.
Click on the “Application URL” on the right-hand side.
You’ll be taken to your app.
If we change the URL to incorporate our ‘/books’ route, we will see all the books from our Atlas database!
### Conclusion
Our Flask and MongoDB Atlas application has been successfully containerized and hosted on Azure Container Apps! Throughout this article, we’ve gone over how to create a Dockerfile and an Azure Container Registry, along with how to create and host our application on Azure Container Apps.
Grab more details on MongoDB Atlas or Azure Container Apps.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8640927974b6af94/6491be06c32681403e55e181/containerapps1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta91a08c67a4a22c5/6491bea250d8ed2c592f2c2b/containerapps2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt04705caa092b7b97/6491da12359ef03a0860ef58/containerapps3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc9e46e8cf38d937b/6491da9c83c7fb0f375f56e2/containerapps4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd959e0e4e09566fb/6491db1d2429af7455f493d4/containerapps5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa57e9ac66da7827/6491db9e595392cc54a060bb/containerapps6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf60949e01ec5cba1/6491dc67359ef00b4960ef66/containerapps7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted59e4c31f184cc4/6491dcd40f2d9b48bbed67a6/containerapps8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb71c98cdf43724e3/6491dd19ea50bc8939bea30e/containerapps9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38985f7d3ddeaa6a/6491dd718b23a55598054728/containerapps10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7a9d4d12404bd90b/6491deb474d501e28e016236/containerapps11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b2b6dd5b0b8d228/6491def9ee654933dccceb84/containerapps12.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6593fb8e678ddfcd/6491dfc0ea50bc4a92bea31f/containerapps13.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c03d1267f75bd58/6491e006f7411b5c2137dd1a/containerapps14.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt471796e40d6385b5/6491e04b0f2d9b22fded67c1/containerapps15.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1fb382a03652147d/6491e074b9a076289492814a/containerapps16.png | md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "This tutorial explains how to host your MongoDB Atlas application on Azure Container Apps for a scalable containerized solution.\n",
"contentType": "Article"
} | Building a Flask and MongoDB App with Azure Container Apps | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/building-multi-environment-continuous-delivery-pipeline-mongodb-atlas | created | # Building a Multi-Environment Continuous Delivery Pipeline for MongoDB Atlas
## Why CI/CD?
To increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. However, continuous delivery for databases is often a manual process.
Adopting continuous integration and continuous delivery (CI/CD) for managing the lifecycle of a database has the following benefits:
* An automated multi-environment setup enables you to move faster and focus on what really matters.
* The confidence level of the changes applied increases.
* The process is easier to reproduce.
* All changes to database configuration will be traceable.
### Why CI/CD for MongoDB Atlas?
MongoDB Atlas is a multi-cloud developer data platform, providing an integrated suite of cloud database and data services to accelerate and simplify how you build with data. MongoDB Atlas also provides a comprehensive API, making CI/CD for the actual data platform itself possible.
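For instance, you can call the Atlas Administration API directly. The hedged example below lists the projects visible to an organization-level API key using HTTP digest authentication; the key values are placeholders.

```
# Hypothetical example: list Atlas projects with the Administration API.
curl --user "<PUBLIC_KEY>:<PRIVATE_KEY>" --digest \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups"
```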
In this blog, we’ll demonstrate how to set up CI/CD for MongoDB Atlas, in a typical production setting. The intended audience is developers, solutions architects, and database administrators with knowledge of MongoDB Atlas, AWS, and Terraform.
## Our CI/CD Solution Requirements
* Ensure that each environment (dev, test, prod) is isolated to minimize blast radius in case of a human error or from a security perspective. MongoDB Atlas Projects and API Keys will be utilized to enable environment isolation.
* All services used in this solution will be managed services. This is to minimize the time spent managing infrastructure.
* Minimize commercial agreements required. Use as much as possible from AWS and the Atlas ecosystem so that there is no need to purchase external tooling, such as HashiCorp Vault.
* Minimize time spent on installing local dev tooling, such as git and Terraform. The solution will provide a docker image, with all tooling required to run provisioning of Terraform templates. The same image will be used to also run the pipeline in AWS CodeBuild.
## Implementation
Enough talk—let’s get to the action. As developers, we love working examples as a way to understand how things work. So, here’s how we did it.
### Prerequisites
First off, we need to have at least an Atlas account to provision Atlas and then somewhere to run our automation. You can get an Atlas account for free at mongodb.com. If you want to take this demo for a spin, take the time and create your Atlas account now. Next, you’ll need to create an organization-level API key. If you or your org already have an Atlas account you’d like to use, you’ll need the organization owner to create the organization-level API key.
Second, you’ll need an AWS account. For more information on how to create an AWS account, see How do I create an AWS account? For this demo, we’ll be using some for-pay services like S3, but you get 12 months free.
You will also need to have Docker installed as we are using a docker container to run all provisioning. For more information on how to install Docker, see Get Started with Docker. We are using Docker as it will make it easier for you to get started, as all the tooling is packaged in the container—such as AWS cli, mongosh, and Terraform.
### What You Will Build
* MongoDB Atlas Projects for dev, test, prod environments, to minimize blast radius in case of a human error and from a security perspective.
* MongoDB Atlas Cluster in each Atlas project (dev, test, prod). MongoDB Atlas is a fully managed data platform for modern applications. Storing data the way it is accessed as documents makes developers more productive. It provides a document-based database that is cost-efficient and resizable while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It allows you to focus on your applications by providing the foundation of high performance, high availability, security, and compatibility they need.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
* CodePipeline orchestrates the CI/CD database migration stages.
* IAM roles and policies allow cross-account access to applicable AWS resources.
* CodeCommit creates a repo to store the Terraform configuration used when provisioning the Atlas environments.
* Amazon S3 creates a bucket to store pipeline artifacts.
* CodeBuild creates a project that applies the Terraform templates to provision each environment.
* VPC security groups ensure the secure flow of traffic between a CodeBuild project deployed within a VPC and MongoDB Atlas. AWS Private Link will also be provisioned.
* AWS Parameter Store stores secrets securely and centrally, such as the Atlas API keys and database username and password.
* Amazon SNS notifies you by email when a developer pushes changes to the CodeCommit repo.
### Step 1: Bootstrap AWS Resources
Next, we’ll fire off the script to bootstrap our AWS environment and Atlas account as shown in Diagram 1 using Terraform.
You will need to use programmatic access keys for your AWS account and the Atlas organisation-level API key that you have created, as described in the prerequisites. This is also the only time you’ll need to handle the keys manually.
```
# Set your environment variables
# You'll find this in your Atlas console as described in prerequisites
export ATLAS_ORG_ID=60388113131271beaed5
# The public part of the Atlas Org key you created previously
export ATLAS_ORG_PUBLIC_KEY=l3drHtms
# The private part of the Atlas Org key you created previously
export ATLAS_ORG_PRIVATE_KEY=ab02313b-e4f1-23ad-89c9-4b6cbfa1ed4d
# Pick a username, the script will create this database user in Atlas
export DB_USER_NAME=demouser
# Pick a project base name, the script will appended -dev, -test, -prod depending on environment
export ATLAS_PROJECT_NAME=blogcicd6
# The AWS region you want to deploy into
export AWS_DEFAULT_REGION=eu-west-1
# The AWS public programmatic access key
export AWS_ACCESS_KEY_ID=AKIAZDDBLALOZWA3WWQ
# The AWS private programmatic access key
export AWS_SECRET_ACCESS_KEY=nmarrRZAIsAAsCwx5DtNrzIgThBA1t5fEfw4uJA
```
Once all the parameters are defined, you are ready to run the script that will create your CI/CD pipeline.
```
# Clone solution code repository
$ git clone https://github.com/mongodb-developer/atlas-cicd-aws
$ cd atlas-cicd
# Start the docker container, which contains all the tooling, e.g., Terraform, mongosh, and others
$ docker container run -it --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION -e ATLAS_ORG_ID -e ATLAS_ORG_PUBLIC_KEY -e ATLAS_ORG_PRIVATE_KEY -e DB_USER_NAME -e ATLAS_PROJECT_NAME -v ${PWD}/terraform:/terraform piepet/cicd-mongodb:46
$ cd terraform
# Bootstrap AWS account and Atlas Account
$ ./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base apply
```
When deploy_baseline.sh is invoked, provisioning of AWS resources starts, using Terraform templates. The resources created are shown in Diagram 1.
From here on, you'll be able to operate your Atlas infrastructure without using your local docker instance. If you want to blaze through this guide, including cleaning it all up, you might as well keep the container running, though. The final step of tearing down the AWS infrastructure requires an external point like your local docker instance.
Until you’ve committed anything, the pipeline will have a failed Source stage. This is because it tries to check out a branch that does not exist in the code repository. After you’ve committed the Terraform code you want to execute, you’ll see that the Source stage will restart and proceed as expected. You can find the pipeline in the AWS console at this url: https://eu-west-1.console.aws.amazon.com/codesuite/codepipeline/pipelines?region=eu-west-1
### Step 2: Deploy Atlas Cluster
Next is to deploy the Atlas cluster (projects, users, API keys, etc). This is done by pushing a configuration into the new AWS CodeCommit repo.
If you’re like me and want to see how provisioning of the Atlas cluster works before setting up IAM properly, you can push the original github repo to AWS CodeCommit directly inside the docker container (inside the Terraform folder) using a bit of a hack. By pushing to the CodeCommit repo, AWS CodePipeline will be triggered and provisioning of the Atlas cluster will start.
```
cd /terraform
# Push default settings to AWS Codecommit
./git_push_terraform.sh
```
To set up access to the CodeCommit repo properly, for use that survives stopping the docker container, you’ll need a proper git CodeCommit user. Follow the steps in the AWS documentation to create and configure your CodeCommit git user in AWS IAM. Then clone the AWS CodeCommit repository that was created in the bootstrapping, outside your docker container, perhaps in another tab in your shell, using your IAM credentials. If you did not use the “hack” to initialize it, it’ll be empty, so copy the Terraform folder that is provided in this solution, to the root of the cloned CodeCommit repository, then commit and push to kick off the pipeline. Now you can use this repo to control your setup! You should now see in the AWS CodePipeline console that the pipeline has been triggered. The pipeline will create Atlas clusters in each of the Atlas Projects and configure AWS PrivateLink.
Let’s dive into the stages defined in this Terraform pipeline file.
**Deploy-Base**
This is basically re-applying what we did in the bootstrapping. This stage ensures we can improve on the AWS pipeline infrastructure itself over time.
This stage creates the projects in Atlas, including Atlas project API keys, Atlas project users, and database users.
**Deploy-Dev**
This stage creates the corresponding Private Link and MongoDB cluster.
**Deploy-Test**
This stage creates the corresponding Private Link and MongoDB cluster.
**Deploy-Prod**
This stage creates the corresponding Private Link and MongoDB cluster.
**Gate**
Approving means we think it all looks good. Perhaps counterintuitively, but great for demos, it proceeds to teardown. This might be one of the first behaviours you’ll change. :)
**Teardown**
This decommissions the dev, test, and prod resources we created above. To decommission the base resources, including the pipeline itself, we recommend you run that externally—for example, from the Docker container on your laptop. We’ll cover that later.
As you advance towards the Gate stage, you’ll see the Atlas clusters build out. Below is an example where the Test stage is creating a cluster. Approving the Gate will undeploy the resources created in the dev, test, and prod stages, but keep projects and users.
### Step 3: Make a Change!
Assuming you took the time to set up IAM properly, you can now work with the infrastructure as code directly from your laptop outside the container. If you just deployed using the hack inside the container, you can continue interacting using the repo created inside the Docker container, but at some point, the container will stop and that repo will be gone. So, beware.
Navigate to the root of the clone of the CodeCommit repo. For example, if you used the script in the container, you’d run, also in the container:
```
cd /${ATLAS_PROJECT_NAME}-base-repo/
```
Then you can edit, for example, the MongoDB version by changing 4.4 to 5.0 in `terraform/environment/dev/variables.tf`.
```
variable "cluster_mongodbversion" {
description = "The Major MongoDB Version"
default = "5.0"
}
```
Then push (git add, commit, push) and you’ll see a new run initiated in CodePipeline.
### Step 4: Clean Up Base Infrastructure
Now, that was interesting. Time for cleaning up! To decommission the full environment, you should first approve the Gate stage to execute the teardown job. When that’s been done, only the base infrastructure remains. Start the container again as in Step 1 if it’s not running, and then execute deploy_baseline.sh, replacing the word ***apply*** with ***destroy***:
```
# inside the /terraform folder of the container
# Clean up AWS and Atlas Account
./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base destroy
```
## Lessons Learned
In this solution, we have separated the creation of AWS resources and the Atlas cluster, as the changes to the Atlas cluster will be more frequent than the changes to the AWS resources.
When implementing infrastructure as code for a MongoDB Atlas cluster, you have to consider not just the cluster creation but also a strategy for how to separate dev, qa, and prod environments and how to store secrets. This is to minimize the blast radius.
We also noticed how useful resource tagging is to make Terraform scripts portable. By setting tags on AWS resources, the script does not need to know the names of the resources but can look them up by tag instead.
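As an illustration (not taken from this solution's templates), a Terraform data source can look up an existing AWS resource by tag rather than by a hard-coded identifier; the tag value below is assumed.

```
# Hypothetical sketch: look up a VPC by its Name tag instead of a hard-coded ID.
data "aws_vpc" "pipeline_vpc" {
  tags = {
    Name = "atlas-cicd-vpc" # assumed tag value, for illustration
  }
}
```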
## Conclusion
By using CI/CD automation for Atlas clusters, you can speed up deployments and increase the agility of your software teams.
MongoDB Atlas offers a powerful API that, in combination with AWS CI/CD services and Terraform, can support continuous delivery of MongoDB Atlas clusters, and version-control the database lifecycle. You can apply the same pattern with other CI/CD tools that aren’t specific to AWS.
In this blog, we’ve offered an exhaustive, reproducible, and reusable deployment process for MongoDB Atlas, including traceability. A devops team can use our demonstration as inspiration for how to quickly deploy MongoDB Atlas, automatically embedding organisation best practices. | md | {
"tags": [
"Atlas",
"AWS",
"Docker"
],
"pageDescription": "In this blog, we’ll demonstrate how to set up CI/CD for MongoDB Atlas, in a typical production setting.",
"contentType": "Tutorial"
} | Building a Multi-Environment Continuous Delivery Pipeline for MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/swiftui-previews | created | # Making SwiftUI Previews Work For You
## Introduction
Canvas previews are an in-your-face feature of SwiftUI. When you create a new view, half of the boilerplate code is for the preview. A third of your Xcode real estate is taken up by the preview.
Despite the prominence of the feature, many developers simply delete the preview code from their views and rely on the simulator.
In past releases of Xcode (including the Xcode 13 betas), a reluctance to use previews was understandable. They'd fail for no apparent reason, and the error messages were beyond cryptic.
I've stuck with previews from the start, but at times, they've felt like more effort than they're worth. But, with Xcode 13, I think we should all be using them for all views. In particular, I've noticed:
- They're more reliable.
- The error messages finally make sense.
- Landscape mode is supported.
I consider previews a little like UI unit tests for your views. Like with unit tests, there's some extra upfront effort required, but you get a big payback in terms of productivity and quality.
In this article, I'm going to cover:
- What you can check in your previews (think light/dark mode, different devices, landscape mode, etc.) and how to do it.
- Reducing the amount of boilerplate code you need in your previews.
- Writing previews for stateful apps. (I'll be using Realm, but the same approach can be used with Core Data.)
- Troubleshooting your previews.
One feature I won't cover is using previews as a graphical way to edit views. One of the big draws of SwiftUI is writing everything in code rather than needing storyboards and XML files. Using a drag-and-drop view builder for SwiftUI doesn't appeal to me.
95% of the examples I use in this article are based on a BlackJack training app. You can find the final version in the repo.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now: deploy a sample for free!
## Prerequisites
- Xcode 13+
- iOS 15+
- Realm-Cocoa 10.17.0+
Note:
- I've used Xcode 13 and iOS 15, but most of the examples in this post will work with older versions.
- Previewing in landscape mode is new in Xcode 13.
- The `buttonStyle` modifier is only available in iOS 15.
- I used Realm-Cocoa 10.17.0, but earlier 10.X versions are likely to work.
## Working with previews
Previews let you see what your view looks like without running it in a simulator or physical device. When you edit the code for your view, its preview updates in real time.
This section shows what aspects you can preview, and how it's done.
### A super-simple preview
When you create a new Xcode project or SwiftUI view, Xcode adds the code for the preview automatically. All you need to do is press the "Resume" button (or CMD-Alt-P).
The preview code always has the same structure, with the `View` that needs previewing (in this case, `ContentView`) within the `previews` `View`:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
```
### Views that require parameters
Most of your views will require that the enclosing view pass in parameters. Your preview must do the same—you'll get a build error if you forget.
My `ResetButton` view requires that the caller provides two values—`label` and `resetType`:
```swift
struct ResetButton: View {
var label: String
var resetType: ResetType
...
}
```
The preview code needs to pass in those values, just like any embedding view:
```swift
struct ResetButton_Previews: PreviewProvider {
static var previews: some View {
ResetButton(label: "Reset All Matrices",
resetType: .all)
}
}
```
### Views that require `Binding`s
In a chat app, I have a `LoginView` that updates the `username` binding that's passed from the enclosing view:
```swift
struct LoginView: View {
@Binding var username: String
...
}
```
The simplest way to create a binding in your preview is to use the `constant` function:
```swift
struct LoginView_Previews: PreviewProvider {
static var previews: some View {
LoginView(username: .constant("Billy"))
}
}
```
### `NavigationView`s
In your view hierarchy, you only add a `NavigationView` at a single level. That `NavigationView` then wraps all subviews.
When previewing those subviews, you may or may not care about the `NavigationView` functionality. For example, you'll only see titles and buttons in the top nav bar if your preview wraps the view in a `NavigationView`.
If I preview my `PracticeView` without adding a `NavigationView`, then I don't see the title:
To preview the title, my preview code needs to wrap `PracticeView` in a `NavigationView`:
```swift
struct PracticeView_Previews: PreviewProvider {
static var previews: some View {
NavigationView {
PracticeView()
}
}
}
```
### Smaller views
Sometimes, you don't need to preview your view in the context of a full device screen. My `CardView` displays a single playing card. Previewing it in a full device screen just wastes desk space:
We can add the `previewLayout` modifier to indicate that we only want to preview an area large enough for the view. It often makes sense to add some `padding` as well:
```swift
struct CardView_Previews: PreviewProvider {
static var previews: some View {
CardView(card: Card(suit: .heart))
.previewLayout(.sizeThatFits)
.padding()
}
}
```
### Light and dark modes
It can be quite a shock when you finally get around to testing your app in dark mode. If you've not thought about light/dark mode when implementing each of your views, then the result can be ugly, or even unusable.
Previews to the rescue!
Returning to `CardView`, I can preview a card in dark mode using the `preferredColorScheme` view modifier:
```swift
struct CardView_Previews: PreviewProvider {
static var previews: some View {
CardView(card: Card(suit: .heart))
.preferredColorScheme(.dark)
.previewLayout(.sizeThatFits)
.padding()
}
}
```
That seems fine, but what if I previewed a spade instead?
That could be a problem.
Adding a white background to the view fixes it:
### Preview multiple view instances
Sometimes, previewing a single instance of your view doesn't paint the full picture. Just look at the surprise I got when enabling dark mode for my card view. Wouldn't it be better to simultaneously preview both hearts and spades in both dark and light modes?
You can create multiple previews for the same view using the `Group` view:
```swift
struct CardView_Previews: PreviewProvider {
static var previews: some View {
Group {
CardView(card: Card(suit: .heart))
CardView(card: Card(suit: .spade))
CardView(card: Card(suit: .heart))
.preferredColorScheme(.dark)
CardView(card: Card(suit: .spade))
.preferredColorScheme(.dark)
}
.previewLayout(.sizeThatFits)
.padding()
}
}
```
### Composing views in a preview
A preview of a single view in isolation might look fine, but what will they look like within a broader context?
Previewing a single `DecisionCell` view looks great:
```swift
struct DecisionCell_Previews: PreviewProvider {
static var previews: some View {
DecisionCell(
decision: Decision(handValue: 6, dealerCardValue: .nine, action: .hit), myHandValue: 8, dealerCardValue: .five)
.previewLayout(.sizeThatFits)
.padding()
}
}
```
But, the app will never display a single `DecisionCell`. They'll always be in a grid. Also, the text, background color, and border vary according to state. To create a more realistic preview, I created some sample data within the view and then composed multiple `DecisionCell`s using vertical and horizontal stacks:
```swift
struct DecisionCell_Previews: PreviewProvider {
static var previews: some View {
let decisions: [Decision] = [
Decision(handValue: 6, dealerCardValue: .nine, action: .split),
Decision(handValue: 6, dealerCardValue: .nine, action: .stand),
Decision(handValue: 6, dealerCardValue: .nine, action: .double),
Decision(handValue: 6, dealerCardValue: .nine, action: .hit)
]
return Group {
VStack(spacing: 0) {
ForEach(decisions) { decision in
HStack (spacing: 0) {
DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .three)
DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .three)
DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .nine)
DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .nine)
}
}
}
VStack(spacing: 0) {
ForEach(decisions) { decision in
HStack (spacing: 0) {
DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .three)
DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .three)
DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .nine)
DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .nine)
}
}
}
.preferredColorScheme(.dark)
}
.previewLayout(.sizeThatFits)
.padding()
}
```
I could then see that the black border didn't work too well in dark mode:
![Dark border around selected cells is lost in front of the dark background]
Switching the border color from `black` to `primary` quickly fixed the issue:
### Landscape mode
Previews default to portrait mode. Use the `previewInterfaceOrientation` modifier to preview in landscape mode instead:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
.previewInterfaceOrientation(.landscapeRight)
}
}
```
### Device type
Previews default to the simulator device that you've selected in Xcode. Chances are that you want your app to work well on multiple devices. Typically, I find that there's extra work needed to make an app I designed for the iPhone work well on an iPad.
The `previewDevice` modifier lets us specify the device type to use in the preview:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
.previewDevice(PreviewDevice(rawValue: "iPad (9th generation)"))
}
}
```
You can find the names of the available devices from Xcode's simulator menu, or from the terminal using `xcrun simctl list devices`.
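For example, you could narrow the list down from the terminal (piping through grep is just a convenience, not a requirement):

```bash
# List available simulator devices and filter for iPhones.
xcrun simctl list devices available | grep "iPhone"
```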
### Pinning views
In the bottom-left corner of the preview area, there's a pin button. Pressing this "pins" the current preview so that it's still shown when you browse to the code for other views:
This is useful to observe how a parent view changes as you edit the code for the child view:
### Live previews
At the start of this article, I made a comparison between previews and unit testing. Live previews mean that you really can test your views in isolation (to be accurate, the view you're testing plus all of the views it embeds or links to).
Press the play button above the preview to enter live mode:
You can now interact with your view:
## Getting rid of excess boilerplate preview code
As you may have noticed, some of my previews now have more code than the actual views. This isn't necessarily a problem, but there's a lot of repeated boilerplate code used by multiple views. Not only that, but you'll be embedding the same boilerplate code into previews in other projects.
To streamline my preview code, I've created several view builders. They all follow the same pattern—receive a `View` and return a new `View` that's built from that `View`.
I start the name of each view builder with `_Preview` to make it easy to take advantage of Xcode's code completion feature.
### Light/dark mode
`_PreviewColorScheme` returns a `Group` of copies of the view. One is in light mode, the other dark:
```swift
struct _PreviewColorScheme: View {
private let viewToPreview: Value
init(_ viewToPreview: Value) {
self.viewToPreview = viewToPreview
}
var body: some View {
Group {
viewToPreview
viewToPreview.preferredColorScheme(.dark)
}
}
}
```
To use this view builder in a preview, simply pass in the `View` you're previewing:
```swift
struct CardView_Previews: PreviewProvider {
static var previews: some View {
_PreviewColorScheme(
VStack {
ForEach(Suit.allCases, id: \.rawValue) { suit in
CardView(card: Card(suit: suit))
}
}
.padding()
.previewLayout(.sizeThatFits)
)
}
}
```
### Orientation
`_PreviewOrientation` returns a `Group` containing the original `View` in portrait and landscape modes:
```swift
struct _PreviewOrientation<Value: View>: View {
private let viewToPreview: Value
init(_ viewToPreview: Value) {
self.viewToPreview = viewToPreview
}
var body: some View {
Group {
viewToPreview
viewToPreview.previewInterfaceOrientation(.landscapeRight)
}
}
}
```
To use this view builder in a preview, simply pass in the `View` you're previewing:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
_PreviewOrientation(
ContentView()
)
}
}
```
### No device
`_PreviewNoDevice` returns a view built by adding the `previewLayout` modifier and `padding` to the input view:
```swift
struct _PreviewNoDevice<Value: View>: View {
private let viewToPreview: Value
init(_ viewToPreview: Value) {
self.viewToPreview = viewToPreview
}
var body: some View {
Group {
viewToPreview
.previewLayout(.sizeThatFits)
.padding()
}
}
}
```
To use this view builder in a preview, simply pass in the `View` you're previewing:
```swift
struct CardView_Previews: PreviewProvider {
static var previews: some View {
_PreviewNoDevice(
CardView(card: Card())
)
}
}
```
### Multiple devices
`_PreviewDevices` returns a `Group` containing a copy of the `View` for each device type. You can modify `devices` in the code to include the devices you want to see previews for:
```swift
struct _PreviewDevices<Value: View>: View {
let devices = [
"iPhone 13 Pro Max",
"iPhone 13 mini",
"iPad (9th generation)"
]
private let viewToPreview: Value
init(_ viewToPreview: Value) {
self.viewToPreview = viewToPreview
}
var body: some View {
Group {
ForEach(devices, id: \.self) { device in
viewToPreview
.previewDevice(PreviewDevice(rawValue: device))
.previewDisplayName(device)
}
}
}
}
```
I'd be cautious about adding too many devices as it will make any previews using this view builder slow down and consume resources.
To use this view builder in a preview, simply pass in the `View` you're previewing:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
_PreviewDevices(
ContentView()
)
}
}
```
![The same view previewed on 3 different device types]
### Combining multiple view builders
Each view builder receives a view and returns a new view. That means that you can compose the functions by passing the results of one view builder to another. In the extreme case, you can use up to three on the same view preview:
```swift
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
_PreviewOrientation(
_PreviewColorScheme(
_PreviewDevices(ContentView())
)
)
}
}
```
This produces 12 views to cover all permutations of orientation, appearance, and device.
For each view, you should consider which modifiers add value. For the `CardView`, it makes sense to use `_PreviewNoDevice` and `_PreviewColorScheme`, but previewing on different devices and orientations wouldn't add any value.
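As a sketch based on the builders defined above, a `CardView` preview combining just those two could look like this:

```swift
struct CardView_Previews: PreviewProvider {
    static var previews: some View {
        // Only the builders that add value for this view:
        // no device frame, but both light and dark appearances.
        _PreviewNoDevice(
            _PreviewColorScheme(
                CardView(card: Card(suit: .heart))
            )
        )
    }
}
```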
## Previewing stateful views (Realm)
Often, a SwiftUI view will fetch state from a database such as Realm or Core Data. For that to work, there needs to be data in that database.
Previews are effectively running on embedded iOS simulators. That helps explain how they are both slower and more powerful than you might expect from a "preview" feature. That also means that each preview also contains a Realm database (assuming that you're using the Realm-Cocoa SDK). The preview can store data in that database, and the view can access that data.
In the BlackJack training app, the action to take for each player/dealer hand combination is stored in Realm. For example, `DefaultDecisionView` uses `@ObservedResults` to access data from Realm:
```swift
struct DefaultDecisionView: View {
@ObservedResults(Decisions.self,
filter: NSPredicate(format: "isSoft == NO AND isSplit == NO")) var decisions
```
To ensure that there's data for the previewed view to find, the preview checks whether the Realm database already contains data (`Decisions.areDecisionsPopulated`). If not, then it adds the required data (`Decisions.bootstrapDecisions()`):
```swift
struct DefaultDecisionView_Previews: PreviewProvider {
static var previews: some View {
if !Decisions.areDecisionsPopulated {
Decisions.bootstrapDecisions()
}
return _PreviewOrientation(
_PreviewColorScheme(
Group {
NavigationView {
DefaultDecisionView(myHandValue: 6, dealerCardValue: .nine)
}
NavigationView {
DefaultDecisionView(myHandValue: 6, dealerCardValue: .nine, editable: true)
}
}
.navigationViewStyle(StackNavigationViewStyle())
)
)
}
}
```
`DefaultDecisionView` is embedded in `DecisionMatrixView` and so the preview for `DecisionMatrixView` must also conditionally populate the Realm data. In turn, `DecisionMatrixView` is embedded in `PracticeView`, and `PracticeView` in `ContentView`—and so, they too need to bootstrap the Realm data so that it's available further down the view hierarchy.
This is the implementation of the bootstrap functions:
```swift
extension Decisions {
static var areDecisionsPopulated: Bool {
do {
let realm = try Realm()
let decisionObjects = realm.objects(Decisions.self)
return decisionObjects.count >= 3
} catch {
print("Error, couldn't read decision objects from Realm: \(error.localizedDescription)")
return false
}
}
static func bootstrapDecisions() {
do {
let realm = try Realm()
let defaultDecisions = Decisions()
let softDecisions = Decisions()
let splitDecisions = Decisions()
defaultDecisions.bootstrap(defaults: defaultDefaultDecisions, handType: .normal)
softDecisions.bootstrap(defaults: defaultSoftDecisions, handType: .soft)
splitDecisions.bootstrap(defaults: defaultSplitDecisions, handType: .split)
try realm.write {
realm.delete(realm.objects(Decision.self))
realm.delete(realm.objects(Decisions.self))
realm.add(defaultDecisions)
realm.add(softDecisions)
realm.add(splitDecisions)
}
} catch {
print("Error, couldn't read decision objects from Realm: \(error.localizedDescription)")
}
}
}
```
### Partitioned, synced realms
The BlackJack training app uses a standalone Realm database. But what happens if the app is using Realm Sync?
One option could be to have the SwiftUI preview sync data with your backend Realm service. I think that's a bit too complex, and it breaks my paradigm of treating previews like unit tests for views.
I've found that the simplest solution is to make the view aware of whether it's been created by a preview or by a running app. I'll explain how that works.
`AuthorView` from the RChat app fetches data from Realm:
```swift
struct AuthorView: View {
@ObservedResults(Chatster.self) var chatsters
...
}
```
Its preview code bootstraps the embedded realm:
```swift
struct AuthorView_Previews: PreviewProvider {
static var previews: some View {
Realm.bootstrap()
return AppearancePreviews(AuthorView(userName: "[email protected]"))
.previewLayout(.sizeThatFits)
.padding()
}
}
```
The app adds bootstrap as an extension to Realm:
```swift
extension Realm: Samplable {
static func bootstrap() {
do {
let realm = try Realm()
try realm.write {
realm.deleteAll()
realm.add(Chatster.samples)
realm.add(User(User.sample))
realm.add(ChatMessage.samples)
}
} catch {
print("Failed to bootstrap the default realm")
}
}
}
```
A complication is that `AuthorView` is embedded in `ChatBubbleView`. For the app to work, `ChatBubbleView` must pass the synced realm configuration to `AuthorView`:
```swift
AuthorView(userName: authorName)
.environment(\.realmConfiguration,
app.currentUser!.configuration(
partitionValue: "all-users=all-the-users"))
```
**But**, when previewing `ChatBubbleView`, we want `AuthorView` to use the preview's local, embedded realm (not to be dependent on a Realm back-end app). That means that `ChatBubbleView` must check whether or not it's running as part of a preview:
```swift
struct ChatBubbleView: View {
...
var isPreview = false
...
var body: some View {
...
if isPreview {
AuthorView(userName: authorName)
} else {
AuthorView(userName: authorName)
.environment(\.realmConfiguration,
app.currentUser!.configuration(
partitionValue: "all-users=all-the-users"))
}
...
}
}
```
The preview is then responsible for bootstrapping the local realm and flagging to `ChatBubbleView` that it's a preview:
```swift
struct ChatBubbleView_Previews: PreviewProvider {
static var previews: some View {
Realm.bootstrap()
return ChatBubbleView(
chatMessage: .sample,
authorName: "jane",
isPreview: true)
}
}
```
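As an aside, if you would rather not thread an `isPreview` flag through your views, an alternative I've seen (not used in RChat, so treat this as a sketch to verify in your own project) is to detect preview mode from the process environment:
```swift
extension ProcessInfo {
    // Assumption: Xcode sets this environment variable when it runs a preview.
    static var isRunningInXcodePreview: Bool {
        processInfo.environment["XCODE_RUNNING_FOR_PREVIEWS"] == "1"
    }
}
```
A view could then branch on `ProcessInfo.isRunningInXcodePreview` instead of an explicit flag.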
## Troubleshooting your previews
As mentioned at the beginning of this article, the error messages for failed previews are actually useful in Xcode 13.
That's the good news.
The bad news is that you still can't use breakpoints or print to the console.
One mitigation is that the `previews` static var in your `PreviewProvider` is itself a `View`. That means that you can temporarily replace the `body` of your `ContentView` with your `previews` code, run the app in a simulator, and then add breakpoints or print to the console. It feels odd to use this approach, but I haven't found a better option yet.
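For example, using `DefaultDecisionView_Previews` from earlier, that temporary swap might look something like this (a sketch only; remember to revert it before committing):
```swift
struct ContentView: View {
    var body: some View {
        // Temporarily render the preview content inside the running app so that
        // breakpoints and print statements work. Revert this before committing!
        DefaultDecisionView_Previews.previews
    }
}
```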
## Conclusion
I've had a mixed relationship with SwiftUI previews.
When they work, they're a great tool, making it quicker to write your views. Previews allow you to unit test your views. Previews help you avoid issues when your app is running in dark or landscape mode or on different devices.
But, they require effort to build. Prior to Xcode 13, it would be tough to justify that effort because of reliability issues.
I believe that Xcode 13 is the tipping point where the efficiency and quality gains far outweigh the effort of writing preview code. That's why I've written this article now.
In this article, you've seen a number of tips to make previews as useful as possible. I've provided four view builders that you can copy directly into your SwiftUI projects, letting you build the best previews with the minimum of code. Finally, you've seen how you can write previews for views that work with data held in a database such as Realm or Core Data.
Please provide feedback and ask any questions in the Realm Community Forum. | md | {
"tags": [
"Realm",
"Swift",
"Mobile",
"iOS"
],
"pageDescription": "Get the most out of iOS Canvas previews to improve your productivity and app quality",
"contentType": "Article"
} | Making SwiftUI Previews Work For You | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/real-time-data-architectures-with-mongodb-cloud-manager-and-verizon-5g-edge | created | # Real-time Data Architectures with MongoDB Cloud Manager and Verizon 5G Edge
The network edge has been one of the most explosive cloud computing opportunities in recent years. As mobile contactless experiences become the norm and as businesses move ever-faster to digital platforms and services, edge computing is positioned as a faster, cheaper, and more reliable alternative for data processing and compute at scale.
While mobile devices continue to increase their hardware capabilities with built-in GPUs, custom chipsets, and more storage, even the most cutting-edge devices suffer the same fundamental problem: each device is a single point of failure and, thus, cannot effectively serve as a persistent data storage layer. Said differently, wouldn't it be nice to have the high availability of the cloud, but with the short topological distance to your end users that the smartphone itself enjoys?
Mobile edge computing promises to address precisely this problem: bringing low-latency compute to the edge of networks with the high availability and scale of cloud computing. Through Verizon 5G Edge with AWS Wavelength, we saw the opportunity to take existing compute-intensive workflows and overlay a data persistence layer with MongoDB, managed through MongoDB Cloud Manager, to enable ultra-immersive, personalized experiences that rely on existing database structures in the parent region while extending seamlessly to the network edge.
In this article, you'll learn how Verizon and MongoDB teamed up to deliver on this vision, walk through a quick getting-started guide for building your first MongoDB deployment at the edge, and see advanced architectures for those already proficient with MongoDB.
Let’s get started!
## About Verizon 5G Edge and MongoDB
Through Verizon 5G Edge, AWS developers can now deploy parts of their application that require low latency at the edge of 4G and 5G networks using the same AWS APIs, tools, and functionality they use today, while seamlessly connecting back to the rest of their application and the full range of cloud services running in an AWS Region. By embedding AWS compute and storage services at the edge of the network, use cases such as ML inference, real-time video streaming, remote video production, and game streaming can be rapidly accelerated.
However, many of these use cases require a persistent storage layer that extends beyond the native storage capabilities of AWS Wavelength, namely Elastic Block Store (EBS) volumes. Using MongoDB Enterprise, developers can leverage the underlying compute (i.e., EC2 instances) at the edge to deploy MongoDB either a) as standalone clusters or b) as highly available replica sets that synchronize data seamlessly.
MongoDB is a general purpose, document-based, distributed database built for modern application developers. With MongoDB Atlas, developers can get up and running even faster with fully managed MongoDB databases deployed across all major cloud providers.
While MongoDB Atlas today does not support deployments within Wavelength Zones, MongoDB Cloud Manager can automate, monitor, and back up your MongoDB infrastructure. Cloud Manager Automation enables you to configure and maintain MongoDB nodes and clusters, whereby MongoDB Agents running on each MongoDB host can maintain your MongoDB deployments. In this example, we’ll start with a fairly simple architecture highlighting the relationship between Wavelength Zones (the edge) and the Parent Region (core cloud):
Just like any other architecture, we'll begin with a VPC consisting of two subnets. Instead of one public subnet and one private subnet, we'll have one public subnet and one carrier subnet: a new kind of subnet that is exposed within a Wavelength Zone to the mobile network only.
* **Public Subnet**: Within the us-west-2 Oregon region, we launched a subnet in us-west-2a availability zone consisting of a single EC2 instance with a public IP address. From a routing perspective, we attached an Internet Gateway to the VPC to provide outbound connectivity and attached the Internet Gateway as the default route (0.0.0.0/0) to the subnet’s associated route table.
* **Carrier Subnet**: Also within the us-west-2 Oregon region, our second subnet is in the San Francisco Wavelength Zone (us-west-2-wl1-sfo-wlz-1), an edge data center within the Verizon carrier network that is still part of the us-west-2 region. In this subnet, we also deploy a single EC2 instance, this time with a carrier IP address: a carrier network-facing IP address exposed to Verizon mobile devices. From a routing perspective, we attached a Carrier Gateway to the VPC to provide outbound connectivity and attached the Carrier Gateway as the default route (0.0.0.0/0) to the subnet's associated route table.
Next, let’s configure the EC2 instance in the parent region. Once you get the IP address (54.68.26.68) of the launched EC2 instance, SSH into the instance itself and begin to download the MongoDB agent.
```bash
ssh -i "mongovz.pem" [email protected]
```
Once you are in, download and install the packages required for the MongoDB MMS Automation Agent. Run the following command:
```bash
sudo yum install cyrus-sasl cyrus-sasl-gssapi \
cyrus-sasl-plain krb5-libs libcurl \
lm_sensors-libs net-snmp net-snmp-agent-libs \
openldap openssl tcp_wrappers-libs xz-libs
```
Next, download the MongoDB MMS Automation Agent itself and install it using the RPM package manager.
```bash
curl -OL https://cloud.mongodb.com/download/agent/automation/mongodb-mms-automation-agent-manager-10.30.1.6889-1.x86_64.rhel7.rpm
sudo rpm -U mongodb-mms-automation-agent-manager-10.30.1.6889-1.x86_64.rhel7.rpm
```
Next, navigate to the **/etc/mongodb-mms/** directory and edit the **automation-agent.config** file to include your MongoDB Cloud Manager API key. To create a key, head over to MongoDB Atlas at https://mongodb.com/atlas and either log in to an existing account or sign up for a new free account.
Once you are logged in, create a new organization, and for the cloud service, be sure to select Cloud Manager.
With your organization created, next we’ll create a new Project. When creating a new project, you may be asked to select a cloud service, and you’ll choose Cloud Manager again.
Next, you’ll name your project. You can select any name you like, we’ll go with Verizon for our project name. After you give your project a name, you will be given a prompt to invite others to the project. You can skip this step for now as you can always add additional users in the future.
Finally, you are ready to deploy MongoDB to your environment using Cloud Manager. With Cloud Manager, you can deploy both standalone instances as well as Replica Sets of MongoDB. Since we want high availability, we’ll deploy a replica set.
Clicking on the **New Replica Set** button will bring us to the user interface to configure our replica set. At this point, we’ll probably get a message saying that no servers were detected, and that’s fine since we haven’t started our MongoDB Agents yet.
Click on the “see instructions” link to get more details on how to install the MongoDB Agent. The modal that pops up will show familiar instructions that we're already following, but it also contains two pieces of information that we'll need: the **mmsApiKey** and the **mmsGroupId**. You'll likely have to click the Generate Key button to generate a new **mmsApiKey**, which will then be populated automatically. Make note of the **mmsGroupId** and **mmsApiKey** values, as we'll need them when configuring our MongoDB Agents next.
Head back to your terminal for the EC2 instance, navigate to the **/etc/mongodb-mms/** directory, and edit the **automation-agent.config** file to include your MongoDB Cloud Manager API key.
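For reference, the relevant lines of **automation-agent.config** look something like this (the values below are placeholders for your own project ID and API key):
```
mmsGroupId=<your-mmsGroupId>
mmsApiKey=<your-mmsApiKey>
mmsBaseUrl=https://cloud.mongodb.com
```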
In this example, we edited the **mmsApiKey** and **mmsGroupId** variables. From there, we’ll create the data directory and start our MongoDB agent!
```bash
sudo mkdir -p /data
sudo chown mongod:mongod /data
sudo systemctl start mongodb-mms-automation-agent.service
```
Once you’ve completed these configuration steps, go ahead and do the same for your Wavelength Zone instance. Note that you will not be able to SSH directly to the instance’s carrier IP (155.146.16.178). Instead, you must use the parent region instance as a bastion host to “jump” onto the edge instance itself. To do so, find the private IP address of the edge instance (10.0.0.54) and, from the parent region instance, SSH into the second instance using the same key pair you used before.
```bash
ssh -i "mongovz.pem" [email protected]
```
After completing the configuration of the second instance, which follows the same instructions from above, it's time for the fun part: launching the replica set from the Cloud Manager console! One thing to note: since the replica set will have three nodes, we'll create both a /data and a /data2 directory on the edge instance so that the two members hosted there each get their own data directory.
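Creating those directories mirrors the single-directory step we ran earlier:
```bash
sudo mkdir -p /data /data2
sudo chown mongod:mongod /data /data2
```
Once that's done, head back over to https://mongodb.com/atlas and the Cloud Manager to complete the setup.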
Refresh the Create New Replica Set page and now since the MongoDB Agents are running you should see a lot of information pre-populated for you. Make sure that it matches what you’d expect and when you’re satisfied hit the Create Replica Set button.
Click on the “Create Replica Set” button to finalize the process.
Within a few minutes the replica set cluster will be deployed to the servers and your MongoDB cluster will be up and running.
With the replica set deployed, you should now be able to connect to your MongoDB cluster from either the parent region or the Wavelength Zone. To do this, you'll need the public address and port for the cluster, as well as authentication enabled in Cloud Manager. To enable authentication, simply click on the Enabled/Disabled button underneath the Auth section of your replica set, and you'll be given a number of options for how clients authenticate. We'll select Username/Password.
Click Next, and the subsequent modal will have your username and password to connect to the cluster with.
You are all set. Next, let's see how MongoDB performs at the edge. We'll test this by reading data from both the standard us-west-2 node and the Wavelength Zone node, and compare the results.
## Racing MongoDB at the Edge
After laying out the architecture, we wanted to see the power of 5G Edge in action. To that end, we designed a very simple "race": over 1,000 trials, we read data from our MongoDB database and timed each operation, both from the client to the edge and from the client to the parent region.
```python
from pymongo import MongoClient
import time
# Connect to the replica set member in the Wavelength Zone (carrier IP)
client = MongoClient('155.146.144.134', 27017)
mydb = client["mydatabase"]
mycol = mydb["customers"]
# Load dataset (build a fresh document per insert so each one gets its own _id)
for i in range(1000):
    mycol.insert_one({"name": "John", "address": "Highway 37"})
# Measure reads from the Wavelength Zone node
edge_latency = []
for i in range(1000):
    t1 = time.time()
    y = mycol.find_one({"name": "John"})
    t2 = time.time()
    edge_latency.append(t2 - t1)
print(sum(edge_latency) / len(edge_latency))
# Connect to the replica set member in the parent region (public IP)
client = MongoClient('52.42.129.138', 27017)
mydb = client["mydatabase"]
mycol = mydb["customers"]
# Measure reads from the parent region node
region_latency = []
for i in range(1000):
    t1 = time.time()
    y = mycol.find_one({"name": "John"})
    t2 = time.time()
    region_latency.append(t2 - t1)
print(sum(region_latency) / len(region_latency))
```
After running this experiment, we found that our MongoDB node at the edge performed **over 40% faster** than the parent region! But why was that the case?
Given that the Wavelength Zone nodes were deployed within the mobile network, packets never had to leave the Verizon network and incur the latency penalty of traversing through the public internet—prone to incremental jitter, loss, and latency. In our example, our 5G Ultra Wideband connected device in San Francisco had two options: connect to a local endpoint within the San Francisco mobile network or travel 500+ miles to a data center in Oregon. Thus, we validated the significant performance savings of MongoDB on Verizon 5G Edge relative to the next best alternative: deploying the same architecture in the core cloud.
## Getting started on 5G Edge with MongoDB
While Verizon 5G Edge alone enables developers to build ultra-immersive applications, how can immersive applications become personalized and localized?
Enter MongoDB.
From real-time transaction processing, telemetry capture for your IoT application, or personalization using profile data for localized venue experiences, bringing MongoDB ReplicaSets to the edge allows you to maintain the low latency characteristics of your application without sacrificing access to user profile data, product catalogues, IoT telemetry, and more.
There’s no better time to start your edge enablement journey with Verizon 5G Edge and MongoDB. To learn more about Verizon 5G Edge, you can visit our developer resources page. If you have any questions about this blog post, find us in the MongoDB community.
In our next post, we will demonstrate how to build your first image classifier on 5G Edge using MongoDB to identify VIPs at your next sporting event, developer conference, or large-scale event.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace. | md | {
"tags": [
"MongoDB",
"AWS"
],
"pageDescription": "From real-time transaction processing, telemetry capture for your IoT application, or personalization using profile data for localized venue experiences, bringing MongoDB to the edge allows you to maintain the low latency characteristics of your application without sacrificing access to data.",
"contentType": "Tutorial"
} | Real-time Data Architectures with MongoDB Cloud Manager and Verizon 5G Edge | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/adl-sql-integration-test | created | # Atlas Query Federation SQL to Form Powerful Data Interactions
Modern platforms have a wide variety of data sources. As businesses grow, they have to constantly evolve their data management and need sophisticated, scalable, and convenient tools to analyze data from all of those sources and produce business insights.
MongoDB has developed a rich and powerful query language, including a very robust aggregation framework.
These were designed to optimize the way developers work with data and to provide great tools for manipulating and querying MongoDB documents.
Having said that, many developers, analysts, and tools still prefer the legacy SQL language to interact with data sources. SQL has a strong foundation around joining data, as joins were a core concept of the relational normalization model.
This gives SQL a convenient syntax when it comes to describing joins.
Providing MongoDB users the ability to leverage SQL to analyse multi-source documents while having a flexible schema and data store is a compelling solution for businesses.
## Data Sources and the Challenge
Consider a requirement to create a single view to analyze data from different operational systems. For example:
- Customer data is managed in the user administration systems (REST API).
- Financial data is managed in a financial cluster (Atlas cluster).
- End-to-end transactions are stored in files on cold storage gathered from various external providers (cloud object storage - Amazon S3 or Microsoft Azure Blob Storage store).
How can we combine and best join this data?
MongoDB Atlas Query Federation connects multiple data sources using different data store types. Once the data sources are mapped, we can create collections that consume this data. Those collections can have a SQL schema generated, allowing us to perform sophisticated joins and run JDBC queries from various BI tools.
In this article, we will showcase the extreme power hidden in Atlas SQL Query.
## Setting Up My Federated Database Instance
In the following view, I have created three main data stores:
- S3 Transaction Store (S3 sample data).
- Accounts from my Atlas clusters (Sample data sample_analytics.accounts).
- Customer data from a secure https source.
I mapped the stores into three collections under `FinTech` database:
- `Transactions`
- `Accounts`
- `CustomerDL`
Now, I can see them through a Query Federation connection as MongoDB collections.
Let's grab our Query Federation instance connection string from the Atlas UI.
This connection string can be used with our BI tools or client applications to run SQL queries.
## Connecting and Using $sql and db.sql
Once we connect to the Query Federation instance via a mongosh shell, we can generate a SQL schema for our collections. This step is optional for the JDBC driver or the $sql operator to recognize collections as SQL "tables," since it is done automatically for newly created collections. However, it's always good to be familiar with the available commands.
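For example, connecting with mongosh looks something like this (the host below is a placeholder; use the connection string you copied from the Atlas UI, along with your own database user):
```bash
mongosh "mongodb://federateddatabaseinstance0-xxxxx.a.query.mongodb.net/?ssl=true&authSource=admin" --username root
```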
#### Generate SQL schema for each collection:
```js
use admin;
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: "FinTech.customersDL"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.accounts"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.transactions"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
```
#### Running SQL queries and joins using $sql stage:
```js
use FinTech;
db.aggregate([{
$sql: {
statement: "SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2",
format: "jdbc",
formatVersion: 2,
dialect: "mysql",
}
}])
// Equivalent command
db.sql("SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2");
```
The above query will prompt account information and the transaction counts of each account.
## Connecting Via JDBC
Let’s connect a powerful BI tool like Tableau using the JDBC driver.
Download JDBC Driver.
#### Connect to Tableau
You have 2 main options to connect, via "MongoDB Atlas" connector or via a JDBC general connector. Please follow the relevant instructions and prerequisites on this documentation page.
##### Connector "MongoDB Atlas by MongoDB"
Search and click the “MongoDB Atlas by MongoDB” connector and provide the information pointing to our Query Federation URI. See the following example:
##### "JDBC" Connector
Setting `connection.properties` file.
```
user=root
password=*******
authSource=admin
database=FinTech
ssl=true
compressors=zlib
```
Click the “Other Databases (JDBC)” connector, copy JDBC connection format, and load the `connection.properties` file.
Once the data is read successfully, the collections will appear on the right side.
#### Setting and Joining Data
We can drag and drop collections from different sources and link them together.
In my case, I connected `Transactions` => `Accounts` based on the `Account Id` field, and `Accounts` => `CustomerDL` by matching `Account Id` to the customer's `Accounts` field.
In this view, we see a unified table of all accounts with their usernames and the starting quarter of their transactions.
## Summary
MongoDB has all the tools to read, transform, and analyse your documents for almost any use-case.
Whether your data is in an Atlas operational cluster, in a service, or on cold storage like cloud object storage, Atlas Query Federation will provide you with the ability to join the data in real time. With the option to use powerful join SQL syntax and SQL-based BI tools like Tableau, you can get value out of the data in no time.
Try Atlas Query Federation with your BI tools and SQL today. | md | {
"tags": [
"MongoDB",
"SQL"
],
"pageDescription": "Learn how new SQL-based queries can power your Query Federation insights in minutes. Integrate this capability with powerful BI tools like Tableau to get immediate value out of your data. ",
"contentType": "Article"
} | Atlas Query Federation SQL to Form Powerful Data Interactions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-serverless-quick-start | created | # MongoDB Atlas Serverless Instances: Quick Start
MongoDB Atlas serverless instances are now GA (generally available)!
What is a serverless instance you might ask? In short, *it’s an on-demand serverless database*. In this article, we'll deploy a MongoDB Atlas serverless instance and perform some basic CRUD operations. You’ll need a MongoDB Atlas account. If you already have one sign-in, or register now.
## Demand Planning
When you deploy a MongoDB Atlas cluster, you need to understand what compute and storage resources your application will require so that you pick the correct tier to accommodate its needs.
As your storage needs grow, you will need to adjust your cluster’s tier accordingly. You can also enable auto-scaling between a minimum and maximum tier.
## Ongoing Management
Once you’ve set your tiering scale, what happens when your app explodes and gets tons of traffic and exceeds your estimated maximum tier? It’s going to be slow and unresponsive because there aren’t enough resources.
Or, maybe you’ve over-anticipated how much traffic your application would get but you’re not getting any traffic. You still have to pay for the resources even if they aren’t being utilized.
As your application scales, you are limited to these tiered increments but nothing in between.
These tiers tightly couple compute and storage with each other. You may not need 3TB of storage but you do need a lot of compute. So you’re forced into a tier that isn’t balanced to the needs of your application.
## The Solve
MongoDB Atlas serverless instances solve all of these issues:
- Deployment friction
- Management overhead
- Performance consequences
- Paying for unused resources
- Rigid data models
With MongoDB Atlas serverless instances, you will get seamless deployment and scaling, a reliable backend infrastructure, and an intuitive pricing model.
It’s even easier to deploy a serverless instance than it is to deploy a free cluster on MongoDB Atlas. All you have to do is choose a cloud provider and region. Once created, your serverless instance will seamlessly scale up and down as your application demand fluctuates.
The best part is you only pay for the compute and storage resources you use, leaving the operations to MongoDB’s best-in-class automation, including end-to-end security, continuous uptime, and regular backups.
## Create Your First Serverless Instance
Let’s see how it works…
If you haven’t already signed up for a MongoDB Atlas account, go ahead and do that first, then select "Build a Database".
Next, choose the Serverless deployment option.
Now, select a cloud provider and region, and then optionally modify your instance name. Create your new deployment and you’re ready to start using your serverless instance!
Your serverless instance will be up and running in just a few minutes. Alternatively, you can also use the Atlas CLI to create and deploy a new serverless instance.
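If you prefer working from the terminal, the Atlas CLI can create the instance as well. The command below is only a sketch; the instance name and region are placeholders, and you should confirm the exact syntax with `atlas serverless create --help`:
```bash
atlas serverless create ServerlessInstance0 --provider AWS --region US_EAST_1
```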
While we wait for that, let’s set up a quick Node.js application to test out the CRUD operations.
## Node.js CRUD Example
Prerequisite: You will need Node.js installed on your computer.
Connecting to the serverless instance is just as easy as a tiered instance.
1. Click “Connect.”
2. Set your IP address and database user the same as you would for a tiered instance.
3. Choose a connection method.
- You can choose between mongo shell, Compass, or “Connect your application” using MongoDB drivers.
We are going to “Connect your application” and choose Node.js as our driver. This will give us a connection string we can use in our Node.js application. Check the “Include full driver code example” box and copy the example to your clipboard.
To set up our application, open VS Code (or your editor of choice) in a blank folder. From the terminal, let’s initiate a project:
`npm init -y`
Now we’ll install MongoDB in our project:
`npm i mongodb`
### Create
We’ll create a `server.js` file in the root and paste the code example we just copied.
```js
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://mongo:@serverlessinstance0.xsel4.mongodb.net/myFirstDatabase?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
client.connect(err => {
const collection = client.db("test").collection("devices");
// perform actions on the collection object
client.close();
});
```
We’ll need to replace `<password>` with our actual user password and `myFirstDatabase` with the database name we’ll be connecting to.
Let’s modify the `client.connect` method to create a database, collection, and insert a new document.
Now we’ll run this from our terminal using `node server`.
```js
client.connect((err) => {
const collection = client.db("store").collection("products");
collection
.insertOne(
{
name: "JavaScript T-Shirt",
category: "T-Shirts",
})
.then(() => {
client.close();
});
});
```
When we use the `.db` and `.collection` methods, if the database and/or collection does not exist, it will be created. We also have to move the `client.close` method into a `.then()` after the `.insertOne()` promise has been returned. Alternatively, we could wrap this in an async function.
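If you'd rather go the async function route, here is a minimal sketch of what that could look like, using the same `client` we created above:
```js
async function run() {
  try {
    await client.connect();
    const collection = client.db("store").collection("products");
    await collection.insertOne({
      name: "JavaScript T-Shirt",
      category: "T-Shirts",
    });
  } finally {
    // Ensure the client closes whether the insert succeeds or throws
    await client.close();
  }
}
run().catch(console.dir);
```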
We can also insert multiple documents at the same time using `.insertMany()`.
```js
collection
  .insertMany([
{
name: "React T-Shirt",
category: "T-Shirts",
},
{
name: "Vue T-Shirt",
category: "T-Shirts",
}
])
.then(() => {
client.close();
});
```
Make the changes and run `node server` again.
### Read
Let’s see what’s in the database now. There should be three documents. The `find()` method will return all documents in the collection.
```js
client.connect((err) => {
const collection = client.db("store").collection("products");
  collection.find().toArray()
    .then((result) => {
      console.log(result);
      client.close();
    });
});
```
When you run `node server` now, you should see all of the documents created in the console.
If we wanted to find a specific document, we could pass an object to the `find()` method, giving it something to look for.
```js
client.connect((err) => {
const collection = client.db("store").collection("products");
  collection.find({ name: "React T-Shirt" }).toArray()
    .then((result) => {
      console.log(result);
      client.close();
    });
});
```
### Update
To update a document, we can use the `updateOne()` method, passing it a filter object to match the document and an update object describing the changes to make.
```js
client.connect((err) => {
const collection = client.db("store").collection("products");
collection.updateOne(
{ name: "Awesome React T-Shirt" },
{ $set: { name: "React T-Shirt" } }
)
.then(() => {
client.close();
});
});
```
To see these changes, run a `find()` or `findOne()` again.
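For example, a quick `findOne()` check (a sketch using the same pattern as the examples above) could look like this:
```js
client.connect((err) => {
  const collection = client.db("store").collection("products");
  // findOne() resolves to the first matching document, or null if there is none
  collection.findOne({ name: "React T-Shirt" }).then((result) => {
    console.log(result);
    client.close();
  });
});
```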
### Delete
To delete something from the database, we can use the `deleteOne()` method. This is similar to `find()`. We just need to pass it an object for it to find and delete.
```js
client.connect((err) => {
const collection = client.db("store").collection("products");
collection.deleteOne({ name: "Vue T-Shirt" }).then(() => client.close());
});
```
## Conclusion
It’s super easy to use MongoDB Atlas serverless instances! You will get seamless deployment and scaling, a reliable backend infrastructure, and an intuitive pricing model. We think that serverless instances are a great deployment option for new users on Atlas.
I’d love to hear your feedback or questions. Let’s chat in the [MongoDB Community. | md | {
"tags": [
"Atlas",
"JavaScript",
"Serverless",
"Node.js"
],
"pageDescription": "MongoDB Atlas serverless instances are now generally available! What is a serverless instance you might ask? In short, it’s an on-demand serverless database. In this article, we'll deploy a MongoDB Atlas serverless instance and perform some basic CRUD operations.",
"contentType": "Quickstart"
} | MongoDB Atlas Serverless Instances: Quick Start | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-federation-out-aws-s3 | created | # MongoDB Atlas Data Federation Tutorial: Federated Queries and $out to AWS S3
Data Federation is a MongoDB Atlas feature that allows you to query data from disparate sources such as:
* Atlas databases.
* Atlas Data Lake.
* HTTP APIs.
* AWS S3 buckets.
In this tutorial, I will show you how to access your archived documents in S3 **and** your documents in your MongoDB Atlas cluster with a **single** MQL query.
This feature is really amazing because it allows you to have easy access to your archived data in S3 along with your "hot" data in your Atlas cluster. This could help you prevent your Atlas clusters from growing in size indefinitely and reduce your costs drastically. It also makes it easier to gain new insights by easily querying data residing in S3 and exposing it to your real-time app.
Finally, I will show you how to use the new version of the $out aggregation pipeline stage to write documents from a MongoDB Atlas cluster into an AWS S3 bucket.
## Prerequisites
In order to follow along this tutorial, you need to:
* Create a MongoDB Atlas cluster. ✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
* Create a user in the **Database Access** menu.
* Add your IP address in the Network Access List in the **Network Access** menu.
* Have Python 3 with `pymongo` and `dnspython` libs installed.
### Configure your S3 bucket and AWS account
Log into your AWS account and create an S3 bucket. Choose a region close to your Atlas deployment to minimize data latency. The scripts in this tutorial use a bucket called `cold-data-mongodb` in the region `eu-west-1`. If you use a different name or select another region, make sure to reflect that in the Python code you’ll see in the tutorial.
Then, install the AWS CLI and configure it to access your AWS account. If you need help setting it up, refer to the AWS documentation.
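If you'd rather create the bucket from the terminal, the AWS CLI can do it in one command. Bucket names are globally unique, so substitute your own name and region:
```bash
aws s3 mb s3://cold-data-mongodb --region eu-west-1
```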
### Prepare the dataset
To illustrate how `$out` and federated queries work, I will use an overly simple dataset to keep things as easy as possible to understand. Our database “test” will have a single collection, “orders,” representing orders placed in an online store. Each order document will have a “created” field of type “Date.” We’ll use that field to archive older orders, moving them from the Atlas cluster to S3.
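For illustration, each order document looks roughly like this (the values are just examples):
```json
{
  "_id": 1,
  "created": { "$date": "2020-05-30T00:00:00Z" },
  "items": [1, 3],
  "price": 20
}
```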
I’ve written a Python script that inserts the required data in the Atlas cluster. You can get the script, along with the rest of the code we’ll use in the tutorial, from GitHub:
```
git clone https://github.com/mongodb-developer/data-lake-tutorial.git
```
Then, go back to Atlas to locate the connection string for your cluster. Click on “Connect” and then “Connect your application.” Copy the connection string and paste it in the `insert_data.py` script you just downloaded from GitHub. Don’t forget to replace the `<username>` and `<password>` placeholders with the credentials of your database user:
**insert_data.py**
```python
from pymongo import MongoClient
from datetime import datetime
client = MongoClient('mongodb+srv://<username>:<password>@m0.lbtrerw.mongodb.net/')
…
```
Finally, install the required libraries and run the script:
```
pip3 install -r requirements.txt
python3 insert_data.py
```
Now that we have a “massive” collection of orders, we can consider archiving the oldest orders to an S3 bucket. Let's imagine that once a month is over, we can archive all the orders from the previous month. We’ll create one JSON file in S3 for all the orders created during the previous month.
We’ll transfer these orders to S3 using the aggregation pipeline stage $out.
But first, we need to configure Atlas Data Federation correctly.
## Configure Data Federation
Navigate to “Data Federation” from the side menu in Atlas and then click “set up manually” in the "create new federated database" dropdown in the top right corner of the UI.
On the left, we see a panel with the data sources (we don’t have any yet), and on the right are the “virtual” databases and collections of the federated instance.
### Configure the Atlas cluster as a data source
Let’s add the first data source — the orders from our Atlas cluster. Click “Add Data Sources,” select “Atlas Cluster,” and then select your cluster and database.
Click “Next” and you’ll see the “test.orders” collection as a data source. Click on the “test.orders” row, drag it underneath the “VirtualCollection0,” and drop it there as a data source.
### Configure the S3 bucket as a data source
Next, we’ll connect our S3 bucket. Click on “Add Data Sources” again and this time, select Amazon S3. Click “Next” and follow the instructions to create and authorize a new AWS IAM role. We need to execute a couple of commands with the AWS CLI. Make sure you’ve installed and linked the CLI to your AWS account before that. If you’re facing any issues, check out the AWS CLI troubleshooting page.
Once you’ve authorized the IAM role, you’ll be prompted for the name of your S3 bucket and the access policy. Since we'll be writing files to our bucket, we need to choose “Read and write.”
You can also configure a prefix. If you do, Data Federation will only search for files in directories starting with the specified prefix. In this tutorial, we want to access files in the root directory of the bucket, so we’ll leave this field empty.
After that, we need to execute a couple more AWS CLI commands to make sure the IAM role has permissions for the S3 bucket. When you’re finished, click “Next.”
Finally, we’ll be prompted to define a path to the data we want to access in the bucket. To keep things simple, we’ll use a wildcard configuration allowing us to access all files. Set `s3://cold-data-mongodb/*` as the path and `any value (*)` as the data type of the file.
Data Federation also allows you to create partitions and parse fields from the filenames in your bucket. This can optimize the performance of your queries by traversing only relevant files and directories. To find out more, check out the Data Federation docs.
Once we’ve added the S3 bucket data, we can drag it over to the virtual collection as a data source.
### Rename the virtual database and collection
The names “VirtualDatabase0” and “VirtualCollection0” don’t feel appropriate for our data. Let’s rename them to “test” and “orders” respectively to match the data in the Atlas cluster.
### Verify the JSON configuration
Finally, to make sure that our setup is correct, we can switch to the JSON view in the top right corner, right next to the “Save” button. Your configuration, except for the project ID and the cluster name, should be identical to this:
```json
{
"databases":
{
"name": "test",
"collections": [
{
"name": "orders",
"dataSources": [
{
"storeName": "M0",
"database": "test",
"collection": "orders"
},
{
"storeName": "cold-data-mongodb",
"path": "/*"
}
]
}
],
"views": []
}
],
"stores": [
{
"name": "M0",
"provider": "atlas",
"clusterName": "M0",
"projectId": ""
},
{
"name": "cold-data-mongodb",
"provider": "s3",
"bucket": "cold-data-mongodb",
"prefix": "",
"delimiter": "/"
}
]
}
```
Once you've verified everything looks good, click the “Save” button. If your AWS IAM role is configured correctly, you’ll see your newly configured federated instance. We’re now ready to connect to it!
## Archive cold data to S3 with $out
Let's now collect the URI we are going to use to connect to Atlas Data Federation.
Click on the “Connect” button, and then “Connect your application.” Copy the connection string as we’ll need it in just a minute.
Now let's use Python to execute our aggregation pipeline and archive the two orders from May 2020 in our S3 bucket.
``` python
from datetime import datetime
from pymongo import MongoClient
client = MongoClient('<federated-instance-connection-string>')  # paste the URI copied from the Atlas UI
db = client.get_database('test')
coll = db.get_collection('orders')
start_date = datetime(2020, 5, 1) # May 1st
end_date = datetime(2020, 6, 1) # June 1st
pipeline = [
{
'$match': {
'created': {
'$gte': start_date,
'$lt': end_date
}
}
},
{
'$out': {
's3': {
'bucket': 'cold-data-mongodb',
'region': 'eu-west-1',
'filename': start_date.isoformat('T', 'milliseconds') + 'Z-' + end_date.isoformat('T', 'milliseconds') + 'Z',
'format': {'name': 'json', 'maxFileSize': '200MiB'}
}
}
}
]
coll.aggregate(pipeline)
print('Archive created!')
```
Once you replace the connection string with your own, execute the script:
```
python3 archive.py
```
And now we can confirm that our archive was created correctly in our S3 bucket:
!["file in the S3 bucket"
### Delete the “cold” data from Atlas
Now that our orders are safe in S3, I can delete these two orders from my Atlas cluster. Let's use Python again. This time, we need to use the URI from our Atlas cluster because the Atlas Data Federation URI doesn't allow this kind of operation.
``` python
from datetime import datetime
from pymongo import MongoClient
client = MongoClient('<atlas-cluster-connection-string>')  # use the Atlas cluster URI, not the federated instance URI
db = client.get_database('test')
coll = db.get_collection('orders')
start_date = datetime(2020, 5, 1) # May 1st
end_date = datetime(2020, 6, 1) # June 1st
query = {
'created': {
'$gte': start_date,
'$lt': end_date
}
}
result = coll.delete_many(query)
print('Deleted', result.deleted_count, 'orders.')
```
Let's run this code:
``` none
python3 remove.py
```
Now let's double-check what we have in S3. Here is the content of the S3 file I downloaded:
``` json
{"_id":{"$numberDouble":"1.0"},"created":{"$date":{"$numberLong":"1590796800000"}},"items":{"$numberDouble":"1.0"},{"$numberDouble":"3.0"}],"price":{"$numberDouble":"20.0"}}
{"_id":{"$numberDouble":"2.0"},"created":{"$date":{"$numberLong":"1590883200000"}},"items":[{"$numberDouble":"2.0"},{"$numberDouble":"3.0"}],"price":{"$numberDouble":"25.0"}}
```
And here is what's left in my MongoDB Atlas cluster.
### Federated queries
As mentioned above already, with Data Federation, you can query data stored across Atlas and S3 simultaneously. This allows you to retain easy access to 100% of your data. We actually already did that when we ran the aggregation pipeline with the `$out` stage.
Let's verify this one last time with Python:
``` python
from pymongo import MongoClient
client = MongoClient('<federated-instance-connection-string>')  # paste the URI copied from the Atlas UI
db = client.get_database('test')
coll = db.get_collection('orders')
print('All the docs from S3 + Atlas:')
docs = coll.find()
for d in docs:
print(d)
pipeline = [
{
'$group': {
'_id': None,
'total_price': {
'$sum': '$price'
}
}
}, {
'$project': {
'_id': 0
}
}
]
print('\nI can also run an aggregation.')
print(coll.aggregate(pipeline).next())
```
Execute the script with:
```bash
python3 federated_queries.py
```
Here is the output:
``` none
All the docs from S3 + Atlas:
{'_id': 1.0, 'created': datetime.datetime(2020, 5, 30, 0, 0), 'items': [1.0, 3.0], 'price': 20.0}
{'_id': 2.0, 'created': datetime.datetime(2020, 5, 31, 0, 0), 'items': [2.0, 3.0], 'price': 25.0}
{'_id': 3.0, 'created': datetime.datetime(2020, 6, 1, 0, 0), 'items': [1.0, 3.0], 'price': 20.0}
{'_id': 4.0, 'created': datetime.datetime(2020, 6, 2, 0, 0), 'items': [1.0, 2.0], 'price': 15.0}
I can also run an aggregation:
{'total_price': 80.0}
```
## Wrap up
If you have a lot of infrequently accessed data in your Atlas cluster but you still need to be able to query it and access it easily once you've archived it to S3, creating a federated instance will help you save tons of money. If you're looking for an automated way to archive your data from Atlas clusters to fully-managed S3 storage, then check out our new Atlas Online Archive feature!
Storage on S3 is a lot cheaper than scaling up your MongoDB Atlas cluster when the cluster is full of cold data, forcing you into a tier with more RAM and storage than your hot data actually needs.
All the Python code is available in this Github repository.
Please let us know on Twitter if you liked this blog post: @MBeugnet and @StanimiraVlaeva.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will give you a hand. | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Learn how to use MongoDB Atlas Data Federation to query data from Atlas databases and AWS S3 and archive cold data to S3 with $out.",
"contentType": "Tutorial"
} | MongoDB Atlas Data Federation Tutorial: Federated Queries and $out to AWS S3 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/php/php-crud | created | # Creating, Reading, Updating, and Deleting MongoDB Documents with PHP
Welcome to Part 2 of this quick start guide for MongoDB and PHP. In the previous article, I walked through the process of installing, configuring, and setting up PHP, Apache, and the MongoDB Driver and Extension so that you can effectively begin building an application leveraging the PHP, MongoDB stack.
I highly recommend visiting the first article in this series to get set up properly if you have not previously installed PHP and Apache.
I've created each section with code samples. And I'm sure I'm much like you in that I love it when a tutorial includes examples that are standalone... They can be copy/pasted and tested out quickly. Therefore, I tried to make sure that each example is created in a ready-to-run fashion.
These samples are available in this repository, and each code sample is a standalone program that you can run by itself. In order to run the samples, you will need to have installed PHP, version 8, and you will need to be able to install additional PHP libraries using `Composer`. These steps are all covered in the first article in this series.
Additionally, while I cover it in this article, it bears mentioning upfront that you will need to create and use a `.env` file with your credentials and the server name from your MongoDB Atlas cluster.
This guide is organized into a few sections spread over a few articles. The first article addressed the installation and configuration of your development environment: PHP is an integrated web development language, and there are several components you typically use in conjunction with it. This article focuses on working with documents: creating, reading, updating, and deleting data.
>Video Introduction and Overview
>
>:youtube[]{vid=tW87xDCPspk}
Let's start with an overview of what we'll cover in this article.
1. [Connecting to a MongoDB Database Instance
1. Creating or Inserting a Single MongoDB Document with PHP
1. Creating or Inserting Multiple MongoDB Documents with PHP
1. Reading Documents with PHP
1. Updating Documents with PHP
1. Deleting Documents with PHP
## Connecting to a MongoDB Database Instance
To connect to a MongoDB Atlas cluster, use the Atlas connection string for your cluster:
``` php
<?php
$client = new MongoDB\Client(
    'mongodb+srv://<username>:<password>@<cluster-url>/test?w=majority'
);
$db = $client->test;
```
>Just a note about language. Throughout this article, we use the term `create` and `insert` interchangeably. These two terms are synonymous. Historically, the act of adding data to a database was referred to as `CREATING`. Hence, the acronym `CRUD` stands for Create, Read, Update, and Delete. Just know that when we use create or insert, we mean the same thing.
## Protecting Sensitive Authentication Information with DotEnv (.env)
When we connect to MongoDB, we need to specify our credentials as part of the connection string. You can hard-code these values into your programs, but when you commit your code to a source code repository, you're exposing your credentials to whomever you give access to that repository. If you're working on open source, that means the world has access to your credentials. This is not a good idea. Therefore, in order to protect your credentials, we store them in a file that **does** **not** get checked into your source code repository. Common practice dictates that we store this information only in the environment. A common method of providing these values to your program's running environment is to put credentials and other sensitive data into a `.env` file.
The following is an example environment file that I use for the examples in this tutorial.
``` bash
MDB_USER="yourusername"
MDB_PASS="yourpassword"
ATLAS_CLUSTER_SRV="mycluster.zbcul.mongodb.net"
```
To create your own environment file, create a file called `.env` in the root of your program directory. You can simply copy the example environment file I've provided and rename it to `.env`. Be sure to replace the values in the file `yourusername`, `yourpassword`, and `mycluster.zbcul.mongodb.net` with your own.
Once the environment file is in place, you can use `Composer` to install the DotEnv library, which will enable us to read these variables into our program's environment. See the first article in this series for additional setup instructions.
``` bash
$ composer require vlucas/phpdotenv
```
Once installed, you can incorporate this library into your code to pull in the values from your `.env` file.
``` php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
```
Next, you will be able to reference the values from the `.env` file using the `$_ENV[]` superglobal array like this:
``` php
echo $_ENV['MDB_USER'];
```
See the code examples below to see this in action.
## Creating or Inserting a Single MongoDB Document with PHP
The `MongoDB\Collection::insertOne()` method inserts a single document into MongoDB and returns an instance of `MongoDB\InsertOneResult`, which you can use to access the ID of the inserted document.
The following code sample inserts a document into the users collection in the test database:
``` php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/test'
);
$collection = $client->test->users;
$insertOneResult = $collection->insertOne([
'username' => 'admin',
'email' => '[email protected]',
'name' => 'Admin User',
]);
printf("Inserted %d document(s)\n", $insertOneResult->getInsertedCount());
var_dump($insertOneResult->getInsertedId());
```
You should see something similar to:
``` bash
Inserted 1 document(s)
object(MongoDB\BSON\ObjectId)#11 (1) {
["oid"]=>
string(24) "579a25921f417dd1e5518141"
}
```
The output includes the ID of the inserted document.
## Creating or Inserting Multiple MongoDB Documents with PHP
The `MongoDB\Collection::insertMany()` method allows you to insert multiple documents in one write operation and returns an instance of `MongoDB\InsertManyResult`, which you can use to access the IDs of the inserted documents.
The following sample code inserts two documents into the users collection in the test database:
``` php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/test'
);
$collection = $client->test->users;
$insertManyResult = $collection->insertMany([
[
'username' => 'admin',
'email' => '[email protected]',
'name' => 'Admin User',
],
[
'username' => 'test',
'email' => '[email protected]',
'name' => 'Test User',
],
]);
printf("Inserted %d document(s)\n", $insertManyResult->getInsertedCount());
var_dump($insertManyResult->getInsertedIds());
```
You should see something similar to the following:
``` bash
Inserted 2 document(s)
array(2) {
[0]=>
object(MongoDB\BSON\ObjectId)#18 (1) {
["oid"]=>
string(24) "6037b861301e1d502750e712"
}
[1]=>
object(MongoDB\BSON\ObjectId)#21 (1) {
["oid"]=>
string(24) "6037b861301e1d502750e713"
}
}
```
## Reading Documents with PHP
Reading documents from a MongoDB database can be accomplished in several ways, but the simplest way is to use the `$collection->find()` command.
``` php
function find($filter = [], array $options = []): MongoDB\Driver\Cursor
```
Read more about the `find()` method in the MongoDB PHP Library documentation.
The following sample code specifies search criteria for the documents we'd like to find in the `restaurants` collection of the `sample_restaurants` database. To use this example, please see the Available Sample Datasets for Atlas Clusters.
``` php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);
$collection = $client->sample_restaurants->restaurants;
$cursor = $collection->find(
[
'cuisine' => 'Italian',
'borough' => 'Manhattan',
],
[
'limit' => 5,
'projection' => [
'name' => 1,
'borough' => 1,
'cuisine' => 1,
],
]
);
foreach ($cursor as $restaurant) {
var_dump($restaurant);
};
```
You should see something similar to the following output:
``` bash
object(MongoDB\Model\BSONDocument)#20 (1) {
["storage":"ArrayObject":private]=>
array(4) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#26 (1) {
["oid"]=>
string(24) "5eb3d668b31de5d588f42965"
}
["borough"]=>
string(9) "Manhattan"
["cuisine"]=>
string(7) "Italian"
["name"]=>
string(23) "Isle Of Capri Resturant"
}
}
object(MongoDB\Model\BSONDocument)#19 (1) {
["storage":"ArrayObject":private]=>
array(4) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#24 (1) {
["oid"]=>
string(24) "5eb3d668b31de5d588f42974"
}
["borough"]=>
string(9) "Manhattan"
["cuisine"]=>
string(7) "Italian"
["name"]=>
string(18) "Marchis Restaurant"
}
}
object(MongoDB\Model\BSONDocument)#26 (1) {
["storage":"ArrayObject":private]=>
array(4) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#20 (1) {
["oid"]=>
string(24) "5eb3d668b31de5d588f42988"
}
["borough"]=>
string(9) "Manhattan"
["cuisine"]=>
string(7) "Italian"
["name"]=>
string(19) "Forlinis Restaurant"
}
}
object(MongoDB\Model\BSONDocument)#24 (1) {
["storage":"ArrayObject":private]=>
array(4) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#19 (1) {
["oid"]=>
string(24) "5eb3d668b31de5d588f4298c"
}
["borough"]=>
string(9) "Manhattan"
["cuisine"]=>
string(7) "Italian"
["name"]=>
string(22) "Angelo Of Mulberry St."
}
}
object(MongoDB\Model\BSONDocument)#20 (1) {
["storage":"ArrayObject":private]=>
array(4) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#26 (1) {
["oid"]=>
string(24) "5eb3d668b31de5d588f42995"
}
["borough"]=>
string(9) "Manhattan"
["cuisine"]=>
string(7) "Italian"
["name"]=>
string(8) "Arturo'S"
}
}
```
## Updating Documents with PHP
Updating documents builds on what we learned in the previous section about finding documents: we pass in a filter to match the documents we want, along with the parameters that specify the changes we'd like reflected in those matching documents.
There are two specific commands in the PHP Driver vocabulary that will enable us to `update` documents.
- `MongoDB\Collection::updateOne` - Update, at most, one document that matches the filter criteria. If multiple documents match the filter criteria, only the first matching document will be updated.
- `MongoDB\Collection::updateMany` - Update all documents that match the filter criteria.
These two work very similarly, with the obvious exception around the number of documents impacted.
Let's start with `MongoDB\Collection::updateOne`. The following code sample finds a single document based on a set of criteria we pass in and uses `$set` to update values in that document.
``` php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);
$collection = $client->sample_restaurants->restaurants;
$updateResult = $collection->updateOne(
[ 'restaurant_id' => '40356151' ],
[ '$set' => [ 'name' => 'Brunos on Astoria' ]]
);
printf("Matched %d document(s)\n", $updateResult->getMatchedCount());
printf("Modified %d document(s)\n", $updateResult->getModifiedCount());
```
You should see something similar to the following output:
``` bash
Matched 1 document(s)
Modified 1 document(s)
```
Now, let's explore updating multiple documents in a single command execution.
The following code sample updates all of the documents with the borough of "Queens" by setting the active field to true:
``` php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);
$collection = $client->sample_restaurants->restaurants;
$updateResult = $collection->updateMany(
[ 'borough' => 'Queens' ],
[ '$set' => [ 'active' => 'True' ]]
);
printf("Matched %d document(s)\n", $updateResult->getMatchedCount());
printf("Modified %d document(s)\n", $updateResult->getModifiedCount());
```
You should see something similar to the following:
``` bash
Matched 5656 document(s)
Modified 5656 document(s)
```
>When updating data in your MongoDB database, it's important to consider `write concern`. Write concern describes the level of acknowledgment requested from MongoDB for write operations to a standalone `mongod`, replica sets, or sharded clusters.
To understand the current value of write concern, try the following example code:
``` php
$collection = (new MongoDB\Client)->selectCollection('test', 'users', [
'writeConcern' => new MongoDB\Driver\WriteConcern(1, 0, true),
]);
var_dump($collection->getWriteConcern());
```
See the MongoDB write concern documentation for more information.
## Deleting Documents with PHP
Just as with updating and finding documents, you have the ability to delete a single document or multiple documents from your database.
- `MongoDB\Collection::deleteOne` - Deletes, at most, one document that matches the filter criteria. If multiple documents match the filter criteria, only the first matching document will be deleted.
- `MongoDB\Collection::deleteMany` - Deletes all documents that match the filter criteria.
Let's start with deleting a single document.
The following code sample deletes one document in the restaurants collection that has "Hamburgers" as the value for the cuisine field:
``` php
<?php

require_once __DIR__ . '/vendor/autoload.php';

// Load the Atlas credentials (MDB_USER, MDB_PASS, ATLAS_CLUSTER_SRV) from the .env file
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$client = new MongoDB\Client(
'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);
$collection = $client->sample_restaurants->restaurants;
$deleteResult = $collection->deleteOne(['cuisine' => 'Hamburgers']);
printf("Deleted %d document(s)\n", $deleteResult->getDeletedCount());
```
You should see something similar to the following output:
``` bash
Deleted 1 document(s)
```
You will notice, if you examine the `sample_restaurants` database, that there are many documents matching the criteria `{ "cuisine": "Hamburgers" }`. However, only one document was deleted.
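A quick way to see this for yourself is to count the matching documents before and after the call. The snippet below is a minimal sketch that assumes the same `$collection` handle as in the examples above:
``` php
// Count how many documents currently match the filter
printf("Matching document(s) before: %d\n", $collection->countDocuments(['cuisine' => 'Hamburgers']));

// Delete at most one matching document
$deleteResult = $collection->deleteOne(['cuisine' => 'Hamburgers']);
printf("Deleted %d document(s)\n", $deleteResult->getDeletedCount());

// The count only drops by one
printf("Matching document(s) after: %d\n", $collection->countDocuments(['cuisine' => 'Hamburgers']));
```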
Deleting multiple documents is possible using `MongoDB\Collection::deleteMany`. The following code sample shows how to use `deleteMany`.
``` php
<?php

require_once __DIR__ . '/vendor/autoload.php';

// Load the Atlas credentials (MDB_USER, MDB_PASS, ATLAS_CLUSTER_SRV) from the .env file
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
$client = new MongoDB\Client(
'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);
$collection = $client->sample_restaurants->restaurants;
$deleteResult = $collection->deleteMany(['cuisine' => 'Hamburgers']);
printf("Deleted %d document(s)\n", $deleteResult->getDeletedCount());
```
You should see something similar to the following output:
``` bash
Deleted 432 document(s)
```
>If you run this multiple times, your output will obviously differ. This is because you may have removed or deleted documents from prior executions. If, for some reason, you want to restore your sample data, see the MongoDB Atlas documentation on loading sample data for instructions on how to do this.
## Summary
The basics of any language are typically illuminated through the process of creating, reading, updating, and deleting data. In this article, we walked through the basics of CRUD with PHP and MongoDB. In the next article in the series, we will put these principles into practice with a real-world application.
Creating or inserting documents is accomplished through the use of:
- `MongoDB\Collection::insertOne`
- `MongoDB\Collection::insertMany`
Reading or finding documents is accomplished using:
- `MongoDB\Collection::find`
Updating documents is accomplished through the use of:
- `MongoDB\Collection::updateOne`
- `MongoDB\Collection::updateMany`
Deleting or removing documents is accomplished using:
- `MongoDB\Collection::deleteOne`
- `MongoDB\Collection::deleteMany`
Please be sure to visit, star, fork, and clone the companion repository for this article.
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
## References
- MongoDB PHP Quickstart Source Code Repository
- MongoDB PHP Driver CRUD Documentation
- MongoDB PHP Driver Documentation provides thorough documentation describing how to use PHP with our MongoDB cluster.
- MongoDB Query Document documentation details the full power available for querying MongoDB collections. | md | {
"tags": [
"PHP",
"MongoDB"
],
"pageDescription": "Getting Started with MongoDB and PHP - Part 2 - CRUD",
"contentType": "Quickstart"
} | Creating, Reading, Updating, and Deleting MongoDB Documents with PHP | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-suggestions-julia-oppenheim | created | # Schema Suggestions with Julia Oppenheim - Podcast Episode 59
Today, we are joined by Julia Oppenheim, Associate Product Manager at MongoDB. Julia chats with us and shares details of a set of features within MongoDB Atlas designed to help developers improve the design of their schemas to avoid common anti-patterns.
The notion that MongoDB is schema-less is a bit of a misnomer. Traditional relational databases use a separate entity in the database that defines the schema - the structure of the tables/rows/columns and acceptable values that get stored in the database. MongoDB takes a slightly different approach. The schema does exist in MongoDB, but to see what that schema is - you typically look at the documents previously written to the database. With this in mind, you, as a developer have the power to make decisions about the structure of the documents you store in your database... and as they say with great power, comes great responsibility.
MongoDB has created a set of features built into Atlas that enable you to see when your assumptions about the structure of your documents turn out to be less than optimal. These features come under the umbrella of Schema Suggestions and on today's podcast episode, Julia Oppenheim joins Nic Raboy and me to talk about how Schema Suggestions can help you maintain and improve the performance of your applications by exposing anti-patterns in your schema.
**Julia: [00:00:00]** My name is Julia Oppenheim and welcome to the Mongo DB podcast. Stay tuned to learn more about how to improve your schema and alleviate schema anti-patterns with schema suggestions and Mongo DB Atlas.
**Michael: [00:00:12]** And today we're talking with Julia Oppenheim. Welcome to the show, Julia, it's great to have you on the podcast. Thanks. It's great to be here. So why don't you introduce yourself to the audience? Let folks know who you are and what you do at Mongo DB.
**Julia: [00:00:26]** Yeah. Sure. So hi, I'm Julia. I actually joined Mongo DB about nine months ago as a product manager on Rez's team.
So yeah, I actually did know that you had spoken to him before. And if you listened to those episodes, Rez probably touched on what our team does, which is ensuring that the customer's journey or the user's journey with Mongo DB runs smoothly and that their deployments are performant. Making sure that, you know, developers can focus on what's truly exciting and interesting to them, like pushing out new features, and they don't have the stress of, is my deployment, is my database.
You know, going to have any problems. We try to make that process as smooth as possible.
**Michael: [00:01:10]** Fantastic. And today we're going to be focusing on schemas, right. Schema suggestions, and eliminating schema. Anti-patterns so hold the phone, Mike. Yeah, yeah, go ahead, Nick.
**Nic: [00:01:22]** I thought I thought I'm going to be people call this the schema-less database.
**Michael: [00:01:28]** Yeah, I guess that is, that is true. With the document database, it's not necessary to plan your schema ahead of time. So maybe Julia, do you want to shed some light on why we need schema suggestions in the Mongo DB
**Julia: [00:01:41]** Yeah, no, I think that's a really good point and definitely a common misconception.
So I think one of the draws of Mongo DB is that schema can be pretty flexible. And it's not rigid in the sense that other, more relational databases, you know, they have a strict set of rules on how you can access the data. MongoDB is definitely more lenient in that regard, but at the end of the day, you still need certain fields, value types, and things like that, dependent on the needs of your application. So one of the first things that any developer will do is kind of map out what their use cases for their applications are and figure out how they should store the data to make sure that those use cases can be carried out.
I think where you can kind of get a little stuck with schema in MongoDB is that the needs of your application change throughout the development cycle. So a schema that may work on day one, when your, you know, user base is relatively small and your feature set is pretty limited, may not work as your app grows. You may need to refactor a little bit, and it may not always be immediately obvious how to do that.
And, you know, we don't expect users to be experts in MongoDB and schema design with Mongo DB which is why I think. Highlighting schema anti-patterns is very useful.
**Michael: [00:03:03]** Fantastic. So do you want to talk a little bit about how the product works? How schema suggestions work in Mongo DB. Atlas?
**Julia: [00:03:12]** Yeah. So there are two places where you as a user can see schema anti-patterns they're in.
The performance advisor tab a, which Rez definitely touched on if he talked about autopilot and index suggestions, and you can also see schema anti-patterns in the in our data Explorer. So the collections tab, and we can talk about you know, in a little bit why we have them in two separate places, but in general what you, as the user will see is the same.
So we. Flag schema anti-patterns we give kind of like a brief explanation as to why we flagged them. We'll show, which collections are impacted by you know, this anti-pattern that we've identified and we'll also kind of give a call to action on how to address them. So we actually have custom docs on the six schema anti-patterns that we.
Look for at this stage of the products, you know, life cycle, and we give kind of steps on how to solve it, what our recommendation would be, and also kind of explain, you know, why it's a problem and how it can really you know, come back to hurt you later on.
**Nic: [00:04:29]** So you've thrown out the keyword schema.
Anti-patterns a few times now, do you want to go over what you said? There are six of them, right? We want to go what each of those six are.
**Julia: [00:04:39]** Yeah, sure. So there are, like you said, there are six. So I think that we look for use of Our dollar lookup operations. So this means that where it's very, very similar to joining in the relational world where you would be accessing data across different collections.
And this is not always ideal because you're reading and performing, you know, different logic on more than one collection. So in general, it just takes a lot of time a little more resource intensive and. You know, when we see this, we're kind of thinking, oh, this person might come from a more relational background.
That's not to say that this is always a problem. It could make sense to do this in certain cases. Which is where things get a little dicier, but that's the first one that we look for. The, another one is looking for unbounded arrays. So if you just keep. Embedding information and have no limit on that.
The size of your documents can get really, really big. We actually have a limit in place, and this is our third anti-pattern, where if you keep embedding you'll hit our 16 megabyte per document limit, which kind of means that your hottest documents, the working set, take up too much space in RAM.
So now we're going to disk to fulfill your request, which is, you know, generally again, we'll take some time it's more resource you know, consumptive, things like that.
**Nic: [00:06:15]** This might be out of scope, but how do you prevent an unbounded array in Mongo DB? Like. I get the concept, but I've never, I've never heard of it done in a database before, so this would be new to me.
**Julia: [00:06:27]** So this is going to be a little contradictory to the lookup anti-pattern that I just mentioned, and I think that we can talk about this more. Cause I know that when I was first learning about anti-patterns and they did seem very contradictory to me and I got of stressed. So we'll talk about that in a little bit, but the way you would avoid.
The unbounded array would probably be to reference other documents. So that's essentially doing the look of that. I just said was an anti-pattern, but one way to think of it is say you have, okay, so you have a developer collection and you have different information about the developer, like their team at Mongo DB.
You know how long they've been here and maybe you have all of their get commits and like they get commit. It could be an embedded document. It could have like the date of the commit and what project it was on and things like that. A developer can have, you know, infinitely many commits, like maybe they just commit a lot and there was no bound on that.
So you know, it's a one to many relationship and. If that were in an array, I think we all see that that would grow probably would hit that 16 megabyte limit. What we would instead maybe want to consider doing is creating like a commit collection where we would then tie it back to the developer who made the commit and reference it from the original developer document.
I don't know if that analogy was helpful, but that's, that's kind of how you would handle that.
**Michael: [00:08:04]** And I think the the key thing here is, you know, you get to make these decisions about how you design your schema. You're not forced to normalize data in one way across the entire database, as you are in the relational world.
And so you're going to make a decision about the number of elements in a potential array versus the cost of storing that data in separate collections and doing a lookup. And. Obviously, you know, you may start, you may embark on your journey to develop an application, thinking that your arrays are going to be within scope within a relative, relatively low number.
And maybe the use pattern changes or the number of users changes the number of developers using your application changes. And at some point you may need to change that. So let me ask the question about the. The user case when I'm interacting with Mongo DB Atlas, and my use case does change. My user pattern does change.
How will that appear? How will it surface in the product that now I've breached the limits of what is an acceptable pattern. And now it's, I'm in the scope of an anti-pattern.
**Julia: [00:09:16]** Right. So when that happens, the best place for it to be flagged is our performance advisor tab. So we'll have, we have a little card that says improve your schema.
And if we have anti-patterns that we flagged we'll show the number of suggestions there. You can click it to learn more about them. And what we do there is it's based on. A sample of your data. So we kind of try to catch these in a reactive sense. We'll see that something is going on and we'll give you a suggestion to improve it.
So to do that, we like analyze your data. We try to determine which collections matter, which collections you're really using. So based on the number of reads and writes to the collections, we'll kind of identify your top 20 collections and then. We'll see what's going on. We'll look for, you know, the edgy pattern, some of which I've mentioned and kind of just collect, this is all going on behind the scenes, by the way, we'll kind of collect you know, distributions of, you know, average data size, our look ups happening you know, just looking for some of those anti-patterns that I've mentioned, and then we'll determine which ones.
You can actually fix and which ones are most impactful, which ones are actually a problem. And then we surface that to the user.
**Nic: [00:10:35]** So is it monitoring what type of queries you're doing or is it just looking at, based on how your documents are structured when it's suggesting a schema?
**Julia: [00:10:46]** Yeah. It's mainly looking for how your documents are structured.
The dollar lookup is a little tricky because it is, you know, an operation that's kind of happening under the hood, but it's based on the fact that you're referencing things within the document.
**Michael: [00:11:00]** Okay. So we talked about the unbounded arrays. We talked about three anti-patterns so far. Do you want to continue on the journey of anti-patterns?
**Julia: [00:11:10]** Okay. Yeah. Yeah, no, definitely. So one that we also flag is at the index level, and this is something that is also available in our Performance Advisor in general.
So if you have unnecessary indexes on the collection, that's something that is problematic because an index just existing is you know, it consumes resources, it takes up space and. It can slow down, writes, even though it does slow down speed up reads. So that's like for indexes in general, but then there's the case where the index isn't actually doing anything and it may be kind of stale.
Maybe your query patterns have changed and things like that. So if you have excessive indexes on your collection, we'll flag that, but I will say in performance advisor we do now have index removal recommendations that. We'll say this is the actual index that you should remove. So a little more granular which is nice.
Then another one we have is reducing the number of collections you have in general. So at a certain point, collections again, consume a lot of resources. You have indexes on the collections. You have a lot of documents. Maybe you're referencing things that could be embedded. So that's just kind of another sign that you might want to refactor your data landscape within Mongo DB.
**Michael: [00:12:36]** Okay. So we've talked about a number of, into patterns so far, we've talked about a use of dollar lookup, storing unbounded arrays in your documents. We've talked about having too many indexes. We've talked about having a large document sizes in your collections. We've talked about too many collections.
And then I guess the last one we need to cover off is around case insensitive rejects squares. You want to talk a little bit about that?
**Julia: [00:13:03]** Yeah. So. Like with the other anti-patterns we'll kind of look to see when you have queries that are using case insensitive red jacks and recommend that you have the appropriate index.
So it could be case insensitive. Index, it could be a search index, things like that. That is, you know, the last anti-pattern we flag.
**Michael: [00:13:25]** Okay. Okay, great. And obviously, you know, any kind of operation against the database is going to require resource. And the whole idea here is there's a balancing act between leveraging the resource and and operating efficiently.
So, so these are, this is a product feature that's available in Mongo, DB, Atlas. All of these things are available today. Correct? Yeah. And you would get to, to see these suggestions in the performance advisor tab, right?
**Julia: [00:13:55]** Yes. Performance advisor. And also as I mentioned, our data Explorer, which is our collections.
Yeah. Right.
**Michael: [00:14:02]** Yeah. Fantastic. The whole entire goal of. Automating database management is to make it easier for the developer to interact with the database. What else do we want to tell the audience about a schema suggestions or anything in this product space? So
**Julia: [00:14:19]** I think I definitely want to highlight what you just mentioned, that, you know, as your schema changes, the anti-patterns that could be, you know, more damaging to your performance change over time, and it really does depend on your workload and how you're accessing the data. I know that, you know, some of these schema anti-patterns do conflict with each other. We do say that in some cases you should reduce references and in some cases you shouldn't. It really depends on, you know, is the data that you want to access together actually being stored together.
And does that. You know, it makes sense. So they won't all always apply. It will be kind of situational and that's, you know why we're here to help.
**Nic: [00:15:01]** So when people are using Mongo DB to create documents in their collections, I imagine that they have some pretty intense looking document schemas, like I'm talking objects that are nested eight levels deep.
Will the schema suggestions help in those scenarios to try to improve how people have created their data?
**Julia: [00:15:23]** Schema suggestions are still definitely in their early days. I think we released this product almost a year ago. We'll definitely capture any of the six anti-patterns that we just mentioned if they're happening on a high level.
So if you're nesting a lot of stuff within the document, that would probably increase. You know, document size and we would flag it. We might not be able to get that targeted to say, this is why your document sizes this large. But I think that that's a really good call-out and it's safe to say, we know that we are not capturing every scenario that a user could encounter with their schema.
You can truly do whatever you want you know, designing your Mongo DB documents. Were actively researching, which schema suggestions it makes sense to look for in our next iteration of this product. So if you have feedback, you know, always don't hesitate to reach out. We'd love to hear your thoughts.
So yeah, there are definitely some limitations we're working on it. We're looking into it.
**Michael: [00:16:27]** Okay. Let's say I'm a developer and I have a number of collections that maybe they're not accessed as frequently, but I am concerned about the patterns in them. How can I force the performance advisor to look at a specific collection?
**Julia: [00:16:43]** Yeah, that's a really good question. So as I mentioned before, we do surface the anti-patterns in two places. One is performance advisor and that's for the more reactive use case where doing a sweep, seeing what's going on and those 20 most active collections and kind of. Doing some logic to determine where the most impactful changes could be made.
And then there's also the collections tab in Atlas. And this is where you can go say you're actively developing or adding documents to collection. They aren't heavily used yet, but you want to make sure you're on the right track. If you view the schema, anti-patterns there, it basically runs our algorithm for you.
And we'll. Search a sample of collections for that, or sorry, a sample of documents for that collection and surface the suggestions there. So it's a little more targeted. And I would say very useful for when you're actively developing something or have a small workload.
**Michael: [00:17:39]** We've got a huge conference coming up in July.
It's Mongo, db.live. My first question is, are you going to be there? Are you perhaps presenting a talk on on this subject at.live?
**Julia: [00:17:50]** I am not presenting a talk on this subject at.live, but I will be there. I'm very, very excited for it.
**Michael: [00:17:56]** Fantastic. Well, maybe we can get you to come to community day, which is the week after where we've got talks and sessions and games and all sorts of fun stuff for the community.
Maybe we can get you to to talk a little bit about this at the at the event that would be. That would be fantastic. I'm going to be.live is our biggest user conference of the year. Joined us July 13th and 14th. It's free. It's all online. There's a huge lineup of cutting edge keynotes and breakout sessions.
All sorts of ask me anything panels and brain-breaking activities and so much more. You can get more info at mongodb.com/live. All right, Nick, anything else to add before we begin to wrap?
**Nic: [00:18:36]** Nothing for me. I mean, Julia, is there any other last minute words of wisdom or anything that you want to tell the audience about schemas suggestions with the Mongo DB or anything that'll help them?
Yeah,
**Julia: [00:18:47]** I don't think so. I think we covered a lot again. I would just emphasize you know, don't be overwhelmed. Scheme is very important for Mongo DB. And it is meant to be flexible. We're just here to help you.
**Nic: [00:19:00]** I think that's the key word there. It's not a, it's not schema less. It's just flexible schema, right?
**Julia: [00:19:05]** Yes, yes, yes.
**Michael: [00:19:05]** Yes. Well, Julia, thank you so much. This has been a great conversation.
**Julia: [00:19:09]** Awesome. Thanks for having me.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Today, we are joined by Julia Oppenheim, Associate Product Manager at MongoDB. Julia chats with us and shares details of a set of features within MongoDB Atlas designed to help developers improve the design of their schemas to avoid common anti-patterns. ",
"contentType": "Podcast"
} | Schema Suggestions with Julia Oppenheim - Podcast Episode 59 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-using-realm-sync-in-unity | created | # Turning Your Local Game into an Online Experience with MongoDB Realm Sync
Playing a game locally can be fun at times. But there is nothing more exciting than playing with or against the whole world. Using Realm Sync you can easily synchronize data between multiple instances and turn your local game into an online experience.
In a previous tutorial we showed how to use Realm locally to persist your game's data. We will build on the local Realm to show how to easily transition to Realm Sync.
If you have not used local Realms before we recommend working through the previous tutorial first so you can easily follow along here when we build on them.
You can find the local Realm example that this tutorial is based on in our example repository at Github and use it to follow along.
The final result of this tutorial can also be found in the examples repository.
## MongoDB Realm Sync and MongoDB Atlas
The local Realm database we have seen in the previous tutorial is one of three components we need to synchronize data between multiple instances of our game. The other two are MongoDB Atlas and MongoDB Realm Sync.
We will use Atlas as our backend and cloud-based database. Realm Sync on the other side enables sync between your local Realm database and Atlas, seamlessly stitching together the two components into an application layer for your game. To support these services, MongoDB Realm also provides components to fulfill several common application requirements from which we will be using the Realm Users and Authentication feature to register and login the user.
There are a couple of things we need to prepare in order to enable synchronisation in our app. You can find an overview on how to get started with MongoDB Realm Sync in the documentation. Here are the steps we need to take:
- Create an Atlas account
- Create a Realm App
- Enable Sync
- Enable Developer Mode
- Enable email registration and choose `Automatically confirm users` under `User Confirmation Method`
## Example
We will build on the local Realm example we created in the previous tutorial using the 3D chess game. To get you started easily you can find the final result in our examples repository (branch: `local-realm`).
The local Realm is based on four building blocks:
- `PieceEntity`
- `Vector3Entity`
- `PieceSpawner`
- `GameState`
The `PieceEntity` along with the `Vector3Entity` represents our model which include the two properties that make up a chess piece: type and position.
```cs
...
public class PieceEntity : RealmObject
{
public PieceType PieceType
{
...
}
public Vector3 Position
{
...
}
...
}
```
In the previous tutorial we have also added functionality to persist changes in position to the Realm and react to changes in the database that have to be reflected in the model. This was done by implementing `OnPropertyChanged` in the `Piece` and `PieceEntity` respectively.
The `PieceSpawner` is responsible for spawning new `Piece` objects when the game starts via `public void CreateNewBoard(Realm realm)`. Here we can see some of the important functions that we need when working with Realm:
- `Write`: Starts a new write transaction which is necessary to change the state of the database.
- `Add`: Adds a new `RealmObject` to the database that has not been there before.
- `RemoveAll`: Removes all objects of a specified type from the database.
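Putting those three calls together, `CreateNewBoard` might look roughly like the sketch below. The exact `PieceEntity` constructor and the starting positions shown here are assumptions for illustration; the full implementation is in the example repository.
```cs
public void CreateNewBoard(Realm realm)
{
    realm.Write(() =>
    {
        // Remove any pieces left over from a previous game.
        realm.RemoveAll<PieceEntity>();

        // Add the pieces for the new game (only two shown here for brevity).
        realm.Add(new PieceEntity(PieceType.WhiteKing, new Vector3(4, 0, 0)));
        realm.Add(new PieceEntity(PieceType.BlackKing, new Vector3(4, 0, 7)));
    });
}
```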
All of this comes together in the central part of the game that manages its flow: `GameState`. The `GameState` opens the Realm using `Realm.GetInstance()` in `Awake` and offers an option to move pieces via `public void MovePiece(Vector3 oldPosition, Vector3 newPosition)`, which also checks if a `Piece` already exists at the target location. Furthermore, we subscribe to notifications to set up the initial board. One of the things we will be doing in this tutorial is to expand on this subscription mechanic to also react to changes that come in through Realm Sync.
## Extending the model
The first thing we need to change to get the local Realm example ready for Sync is to add a primary key to the `PieceEntity`. This is a mandatory requirement for Sync to make sure objects can be distinguished from each other. We will be using the field `Id` here. Note that you can add a `MapTo` attribute in case the name of the field in the `RealmObject` differs from the name set in Atlas. By default, the primary key is named `_id` in Atlas, which would conflict with the .NET coding guidelines. By adding `[MapTo("_id")]` we can address this.
```cs
using MongoDB.Bson;
```
```cs
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
```
## Who am I playing with?
The local Realm tutorial showed you how to create a persisted game locally. While you could play with someone else using the same game client, there was only ever one game running at a time since every game is accessing the same table in the database and therefore the same objects.
This would still be the same when using Realm Sync if we do not separate those games. Everyone accessing the game from wherever they are would see the same state. We need a way to create multiple games and identify which one we are playing. Realm Sync offers a feature that lets us achieve exactly this: partitions.
> A partition represents a subset of the documents in a synced cluster that are related in some way and have the same read/write permissions for a given user. Realm directly maps partitions to individual synced .realm files so each object in a synced realm has a corresponding document in the partition.
What does this mean for our game? If we use one partition per match we can make sure that only players using the same partition will actually play the same game. Furthermore, we can start as many games as we want. Using the same partition simply means using the same `partition key` when opening a synced Realm. Partition keys are restricted to the following types: `String`, `ObjectID`, `Guid`, `Long`.
For our game we will use a string that we ask the user for when they start the game. We will do this by adding a new scene to the game which also acts as a welcome and loading scene.
Go to `Assets -> Create -> Scene` to create a new scene and name it `WelcomeScene`. Double click it to activate it.
Using `GameObject -> UI` we then add `Text`, `Input Field` and `Button` to the new scene. The input will be our partition key. To make it easier to understand for the player we will call its placeholder `game id`. The `Text` object can be set to `Your Game ID:` and the button's text to `Start Game`. Make sure to reposition them to your liking.
## Getting everything in Sync
Add a script to the button called `StartGameButton` by clicking `Add Component` in the Inspector with the start button selected. Then select `script` and type in its name.
```cs
using Realms;
using Realms.Sync;
using System;
using System.IO;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;
public class StartGameButton : MonoBehaviour
{
[SerializeField] private GameObject loadingIndicator = default; // 1
[SerializeField] private InputField gameIdInputField = default; // 2
public async void OnStartButtonClicked() // 3
{
loadingIndicator.SetActive(true); // 4
// 5
var gameId = gameIdInputField.text;
PlayerPrefs.SetString(Constants.PlayerPrefsKeys.GameId, gameId);
await CreateRealmAsync(gameId); // 5
SceneManager.LoadScene(Constants.SceneNames.Main); // 13
}
private async Task CreateRealmAsync(string gameId)
{
var app = App.Create(Constants.Realm.AppId); // 6
var user = app.CurrentUser; // 7
if (user == null) // 8
{
// This example focuses on an introduction to Sync.
// We will keep the registration simple for now by just creating a random email and password.
// We'll also not create a separate registration dialog here and instead just register a new user every time.
// In a different example we will focus on authentication methods, login / registration dialogs, etc.
var email = Guid.NewGuid().ToString();
var password = Guid.NewGuid().ToString();
await app.EmailPasswordAuth.RegisterUserAsync(email, password); // 9
user = await app.LogInAsync(Credentials.EmailPassword(email, password)); // 10
}
RealmConfiguration.DefaultConfiguration = new SyncConfiguration(gameId, user);
if (!File.Exists(RealmConfiguration.DefaultConfiguration.DatabasePath)) // 11
{
// If this is the first time we start the game, we need to create a new Realm and sync it.
// This is done by `GetInstanceAsync`. There is nothing further we need to do here.
// The Realm is then used by `GameState` in it's `Awake` method.
using var realm = await Realm.GetInstanceAsync(); // 12
}
}
}
```
The `StartGameButton` knows two other game objects: the `gameIdInputField` (1) that we created above and a `loadingIndicator` (2) that we will be creating in a moment. If offers one action that will be executed when the button is clicked: `OnStartButtonClicked` (3).
First, we want to show a loading indicator (4) in case loading the game takes a moment. Next we grab the `gameId` from the `InputField` and save it using `PlayerPrefs`. Saving data using the `PlayerPrefs` is acceptable if it is user input that does not need to be stored securely and only has a simple structure, since `PlayerPrefs` can only take a limited set of data types: `string`, `float`, `int`.
Next, we need to create a Realm (5). Note that this is done asynchrounously using `await`. There are a couple of components necessary for opening a synced Realm:
- `app`: An instance of `App` (6) represents your Realm App that you created in Atlas. Therefore we need to pass the `app id` in here.
- `user`: If a user has been logged in before, we can access them by using `app.CurrentUser` (7). In case there has not been a successful login before this variable will be null (8) and we need to register a new user.
The actual values for `email` and `password` are not really relevant for this example. In your game you would use more `Input Field` objects to ask the user for this data. Here we can just use `Guid` to generate random values. Using `EmailPasswordAuth.RegisterUserAsync` offered by the `App` class we can then register the user (9) and finally log them in (10) using these credentials. Note that we need to await this asynchrounous call again.
When we are done with the login, all we need to do is to create a new `SyncConfiguration` with the `gameId` (which acts as our partition key) and the `user` and save it as the `RealmConfiguration.DefaultConfiguration`. This will make sure whenever we open a new Realm, we will be using this `user` and `partitionKey`.
Finally we want to open the Realm and synchronize it to get it ready for the game. We can detect if this is the first start of the game simply by checking if a Realm file for the given configuration already exists or not (11). If there is no such file we open a Realm using `Realm.GetInstanceAsync()` (12) which automatically uses the `DefaultConfiguration` that we set before.
When this is done, we can load the main scene (13) using the `SceneManager`. Note that the name of the main scene was extracted into a file called `Constants` in which we also added the app id and the key we use to save the `game id` in the `PlayerPrefs`. You can either add another class in your IDE or in Unity (using `Assets -> Create -> C# Script`).
```cs
sealed class Constants
{
public sealed class Realm
{
public const string AppId = "insert your Realm App ID here";
}
public sealed class PlayerPrefsKeys
{
public const string GameId = "GAME_ID_KEY";
}
public sealed class SceneNames
{
public const string Main = "MainScene";
}
}
```
One more thing we need to do is adding the main scene in the build settings, otherwise the `SceneManager` will not be able to find it. Go to `File -> Build Settings ...` and click `Add Open Scenes` while the `MainScene` is open.
With these adjustments we are ready to synchronize data. Let's add the loading indicator to improve the user experience before we start and test our game.
## Loading Indicator
As mentioned before we want to add a loading indicator while the game is starting up. Don't worry, we will keep it simple since it is not the focus of this tutorial. We will just be using a simple `Text` and an `Image` which can both be found in the same `UI` sub menu we used above.
To make sure things are a bit more organised, embed both of them into another `GameObject` using `GameObject -> Create Empty`.
You can arrange and style the UI elements to your liking and when you're done just add a script to the `LoadingIndicatorImage`:
The script itself should look like this:
```cs
using UnityEngine;
public class LoadingIndicator : MonoBehaviour
{
// 1
[SerializeField] private float maxLeft = -150;
[SerializeField] private float maxRight = 150;
[SerializeField] private float speed = 100;
// 2
private enum MovementDirection { None, Left, Right }
private MovementDirection movementDirection = MovementDirection.Left;
private void Update()
{
switch (movementDirection) // 3
{
case MovementDirection.None:
break;
case MovementDirection.Left:
transform.Translate(speed * Time.deltaTime * Vector3.left);
if (transform.localPosition.x <= maxLeft) // 4
{
transform.localPosition = new Vector3(maxLeft, transform.localPosition.y, transform.localPosition.z); // 5
movementDirection = MovementDirection.Right; // 6
}
break;
case MovementDirection.Right:
transform.Translate(speed * Time.deltaTime * Vector3.right);
if (transform.localPosition.x >= maxRight) // 4
{
transform.localPosition = new Vector3(maxRight, transform.localPosition.y, transform.localPosition.z); // 5
movementDirection = MovementDirection.Left; // 6
}
break;
}
}
}
```
The loading indicator that we will be using for this example is just a simple square moving sideways to indicate progress. There are three fields (1) we are going to expose to the Unity Editor by using `SerializeField` so that you can adjust these values while seeing the indicator move. `maxLeft` and `maxRight` tell the indicator how far it may move to the left and right from its original position. `speed` - as the name indicates - determines how fast the indicator moves. The initial movement direction (2) is set to left, with `MovementDirection.Left` and `MovementDirection.Right` being the two options used here.
The movement itself will be calculated in `Update()` which is run every frame. We basically just want to do one of two things:
- Move the loading indicator to the left until it reaches the left boundary, then swap the movement direction.
- Move the loading indicator to the right until it reaches the right boundary, then swap the movement direction.
Using the `transform` component of the `GameObject` we can move it by calling `Translate`. The movement consists of the direction (`Vector3.left` or `Vector3.right`), the speed (set via the Unity Editor) and `Time.deltaTime`, which represents the time since the last frame. The latter makes sure we see a smooth movement no matter what the frame time is. After moving the square we check (4) if we have reached the boundary and, if so, set the position to exactly this boundary (5). This is just to make sure the indicator does not visibly slip out of bounds in case we see a low frame rate. Finally, the movement direction is swapped (6).
The loading indicator will only be shown when the start button is clicked. The script above takes care of showing it. We need to disable it so that it does not show up before. This can be done by clicking the checkbox next to the name of the `LoadingIndicator` parent object in the Inspector.
## Connecting UI and code
The scripts we have written above are finished but still need to be connected to the UI so that it can act on it.
First, let's assign the action to the button. With the `StartGameButton` selected in the `Hierarchy` open the `Inspector` and scroll down to the `On Click ()` area. Click the plus icon in the lower right to add a new on click action.
Next, drag and drop the `StartGameButton` from the `Hierarchy` onto the new action. This tells Unity which `GameObject` to use to look for actions that can be executed (which are functions that we implement like `OnStartButtonClicked()`).
Finally, we can choose the action that should be assigned to the `On Click ()` event by opening the drop down. Choose the `StartGameButton` and then `OnStartButtonClicked ()`.
We also need to connect the input field and the loading indicator to the `StartGameButton` script so that it can access those. This is done via drag&drop again as before.
## Let's play!
Now that the loading indicator is added the game is finished and we can start and run it. Go ahead and try it!
You will notice the experience when using one local Unity instance with Sync is the same as it was in the local Realm version. To actually test multiple game instances you can open the project on another computer. An easier way to test multiple Unity instances is ParallelSync. After following the installation instruction you will find a new menu item `ParallelSync` which offers a `Clones Manager`.
Within the `Clones Manager` you add and open a new clone by clicking `Add new clone` and `Open in New Editor`.
Using both instances you can then test the game and Realm Sync.
Remember that you need to use the same `game id` / `partition key` to join the same game with both instances.
Have fun!
## Recap and Conclusion
In this tutorial we have learned how to turn a game with a local Realm into a multiplayer experience using MongoDB Realm Sync. Let's summarise what needed to be done:
- Create an Atlas account and a Realm App therein
- Enable Sync, an authentication method and development mode
- Make sure every `RealmObject` has an `_id` field
- Choose a partition strategy (in this case: use the `partition key` to identify the match)
- Open a Realm using the `SyncConfiguration` (which incorporates the `App` and `User`)
The code for all of this can be found in our example repository.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB and Realm. | md | {
"tags": [
"Realm",
"Mobile"
],
"pageDescription": "This article shows how to migrate from using a local Realm to MongoDB Realm Sync. We will cover everything you need to know to transform your game into a multiplayer experience.",
"contentType": "Tutorial"
} | Turning Your Local Game into an Online Experience with MongoDB Realm Sync | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/kotlin/realm-startactivityforresult-registerforactivityresult-deprecated-android-kotlin | created | # StartActivityForResult is Deprecated!
## Introduction
Android has been on the edge of evolution for a while recently, with updates to `androidx.activity:activity-ktx` to `1.2.0`. It has deprecated `startActivityForResult` in favour of `registerForActivityResult`.
It was one of the first fundamentals that any Android developer has learned, and the backbone of Android's way of communicating between two components. API design was simple enough to get started quickly but had its cons, like how it’s hard to find the caller in real-world applications (except for cmd+F in the project 😂), getting results on the fragment, results missed if the component is recreated, conflicts with the same request code, etc.
Let’s try to understand how to use the new API with a few examples.
## Example 1: Activity A calls Activity B for the result
Old School:
```kotlin
// Caller
val intent = Intent(context, Activity1::class.java)
startActivityForResult(intent, REQUEST_CODE)
// Receiver
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (resultCode == Activity.RESULT_OK && requestCode == REQUEST_CODE) {
val value = data?.getStringExtra("input")
}
}
```
New Way:
```kotlin
// Caller
val intent = Intent(context, Activity1::class.java)
getResult.launch(intent)
// Receiver
private val getResult =
registerForActivityResult(
ActivityResultContracts.StartActivityForResult()
) {
if (it.resultCode == Activity.RESULT_OK) {
val value = it.data?.getStringExtra("input")
}
}
```
As you would have noticed, `registerForActivityResult` takes two parameters. The first defines the type of action/interaction needed (`ActivityResultContracts`) and the second is a callback function where we receive the result.
Nothing much has changed, right? Let’s check another example.
## Example 2: Start external component like the camera to get the image:
```kotlin
//Caller
getPreviewImage.launch(null)
//Receiver
private val getPreviewImage = registerForActivityResult(ActivityResultContracts.TakePicturePreview()) { bitmap ->
// we get bitmap as result directly
}
```
The above snippet is the complete code getting a preview image from the camera. No need for permission request code, as this is taken care of automatically for us!
Another benefit of using the new API is that it forces developers to use the right contract. For example, with `ActivityResultContracts.TakePicture()` — which returns the full image — you need to pass a `URI` as a parameter to `launch`, which reduces the development time and chance of errors.
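For comparison, here is a hedged sketch of the full-image variant. The `photoUri` below is an assumption: a content `Uri` you would prepare beforehand (for example, via a `FileProvider`).
```kotlin
// Receiver: TakePicture() writes the full-size image to the Uri you pass in
// and reports whether the capture succeeded
private val getFullImage =
    registerForActivityResult(ActivityResultContracts.TakePicture()) { saved ->
        if (saved) {
            // the full-size image is now available at photoUri
        }
    }

// Caller: photoUri is a content Uri prepared beforehand (e.g., via FileProvider)
getFullImage.launch(photoUri)
```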
Other default contracts available can be found here.
---
## Example 3: Fragment A calls Activity B for the result
This has been another issue with the old system, with no clean implementation available, but the new API works consistently across activities and fragments. Therefore, we can reuse the snippet from example 1 in our fragments, as sketched below.
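As a minimal sketch (the fragment, button id, and target activity names here are illustrative, and imports are omitted as in the examples above):
```kotlin
class ExampleFragment : Fragment() {

    // Register before the fragment is created, e.g. as a property,
    // exactly as we did in the activity in example 1
    private val getResult =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) {
            if (it.resultCode == Activity.RESULT_OK) {
                val value = it.data?.getStringExtra("input")
            }
        }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        view.findViewById<Button>(R.id.start_button).setOnClickListener {
            getResult.launch(Intent(requireContext(), Activity1::class.java))
        }
    }
}
```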
---
## Example 4: Receive the result in a non-Android class
Old Way: 😄
With the new API, this is possible using `ActivityResultRegistry` directly.
```kotlin
class MyLifecycleObserver(private val registry: ActivityResultRegistry) : DefaultLifecycleObserver {
lateinit var getContent: ActivityResultLauncher<String>
override fun onCreate(owner: LifecycleOwner) {
getContent = registry.register("key", owner, GetContent()) { uri ->
// Handle the returned Uri
}
}
fun selectImage() {
getContent.launch("image/*")
}
}
class MyFragment : Fragment() {
lateinit var observer: MyLifecycleObserver
override fun onCreate(savedInstanceState: Bundle?) {
// ...
observer = MyLifecycleObserver(requireActivity().activityResultRegistry)
lifecycle.addObserver(observer)
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
val selectButton = view.findViewById<Button>(R.id.select_button)
selectButton.setOnClickListener {
// Open the activity to select an image
observer.selectImage()
}
}
}
```
## Summary
I have found the registerForActivityResult useful and clean. Some of the pros, in my opinion, are:
1. Improve the code readability, no need to remember to jump to `onActivityResult()` after `startActivityForResult`.
2. The `ActivityResultLauncher` returned from `registerForActivityResult` is used to launch components, clearly defining the input parameter for the desired results.
3. Removed the boilerplate code for requesting permissions from the user (see the sketch below).
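For example, a single runtime permission can be requested with the built-in `RequestPermission` contract. This is a minimal sketch:
```kotlin
// Receiver: the callback simply tells us whether the permission was granted
private val requestCamera =
    registerForActivityResult(ActivityResultContracts.RequestPermission()) { isGranted ->
        if (isGranted) {
            // safe to open the camera
        } else {
            // permission denied, show a rationale or disable the feature
        }
    }

// Caller
requestCamera.launch(Manifest.permission.CAMERA)
```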
Hope this was informative and you enjoyed reading it.
| md | {
"tags": [
"Kotlin"
],
"pageDescription": "Learn the benefits and usage of registerForActivityResult for Android in Kotlin.",
"contentType": "Article"
} | StartActivityForResult is Deprecated! | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/enhancing-diabetes-data-visibility-with-tidepool-and-mongodb | created | # Making Diabetes Data More Accessible and Meaningful with Tidepool and MongoDB
The data behind diabetes management can be overwhelming — understanding it all is empowering. Tidepool turns diabetes data points into accessible, actionable, and meaningful insights using an open source tech stack that incorporates MongoDB. Tidepool is a nonprofit organization founded by people with diabetes, caregivers, and leading healthcare providers committed to helping all people with dependent diabetes safely achieve great outcomes through more accessible, actionable, and meaningful diabetes data.
They are committed to empowering the next generation of innovations in diabetes management. We harness the power of technology to provide intuitive software products that help people with diabetes.
In this episode of the MongoDB Podcast, Michael and Nic sit down with Tapani Otala, V.P. of Engineering at Tidepool, to talk about their platform, how it was built, and how it uses MongoDB to provide unparalleled flexibility and visibility into the critical data that patients use to manage their condition.
:youtube[]{vid=Ocf6ZJiq7ys}
### MongoDB Podcast - Tidepool with Tapani Otala and Christopher Snyder
**Tapani: [00:00:00]** Hi, my name is Tapani Otala. I'm the VP of engineering at Tidepool. We are a nonprofit organization whose mission is to make diabetes data more accessible, meaningful, and actionable. The software we develop is designed to integrate [00:01:00] data from various diabetes devices like insulin pumps, continuous glucose monitors, and blood glucose meters into a single intuitive interface that allows people with diabetes and their care team to make sense of that data.
And we're using Mongo DB to power all this. Stay tuned for more.
**Chris: [00:00:47]**
My name is Christopher Snyder. I've been living with type one diabetes since 2002. I'm also Tidepool's community and clinic success manager. Having this data available to me just gives me the opportunity to make sense of everything that's happening. Prior to using Tidepool, if I wanted to look at my data, I either had to write everything down and keep track of all those notes.
Or I do use proprietary software for each of my devices and then potentially print things out and hold them up to the light to align events and data points and things like that. Because Tidepool brings everything together in one place, I am biased. I think it looks real pretty. It makes it a lot easier for me to identify trends, make meaningful changes in my diabetes management habits, and hopefully lead a healthier life.
**Mike: [00:01:28]** So we're talking today about Tidepool and maybe you could give us a quick description of what Tidepool is and who it may appeal to
**Tapani: [00:01:38] **We're a nonprofit organization. And we're developing software that helps people with diabetes manage that condition. We enable people to upload data from their devices, different types of devices, like glucose monitors, meters, insulin pumps, and so on into a single place where you can view that data in one place.
And you can share it with your care team members like doctors, clinicians, or [00:02:00] your family members. They can view that data in real time as well.
**Mike: [00:02:03]** Are there many companies that are doing this type of thing today?
**Tapani: [00:02:06]** There are a
few companies, as far as I'm aware, the only non-profit in this space though. Everything else is for profit.
And there are a lot of companies that look at it from diabetes, from different perspective. They might work with type two diabetes or type one. We work with any kind. There's no difference.
**Nic: [00:02:24]** In regards to Tidepool, are you building hardware as well as software? Or are you just looking at data? Can you shed some more light into that?
**Tapani: [00:02:33]** Sure. We're a hundred percent software company. We don't make any other hardware. We do work with lots of great manufacturers of those devices in the space and medical space in general, but in particular diabetes that make those devices. And so we collaborate with them.
**Mike: [00:02:48]** So what stage is Tidepool in today? Are you live?
**Tapani: [00:02:50]** Yeah, we've been live since 2013 and we we've grown since a fair bit. And we're now at 33 or so people, but still, I guess you could consider as a [00:03:00] startup, substance. So
**Nic: [00:03:01]** I'd actually like to dig deeper into the software that Tidepool produces.
So you said that there are many great hardware manufacturers working in this space. How are you obtaining that data? Are you like a mobile application connecting to the hardware? Are you some kind of IoT or are they sending you that information and you're working with it at that point?
**Tapani: [00:03:22]** So it really depends on the device and the integration that we have. For most devices, we talk directly to the device. So these are devices that you would use at your home and you connect them to a PC over Bluetooth or USB or your phone for that matter. And we have software that can read the data directly from the device and upload it to our backend service that's using Mongo DB to store that data.
**Mike: [00:03:43]** Is there a common format that is required in order to send data to Tidepool?
**Tapani: [00:03:49]** We wish. That would make our life a whole lot simpler. No, actually a good chunk of the work that's involved in here is writing software that knows how to talk to each individual device.
And there's some [00:04:00] families of devices that, that use similar protocols and so on, but no, there's no really universal protocol that talk to the devices or for the format of the data that comes from the devices for that matter. So a lot of the work goes into normalizing that data so that when it is stored in in our backend, it's then visible and viewable by people.
**Nic: [00:04:21]** So we'll get to this in a second. It does sound like a perfect case for a kind of a document database, but in regards to supporting all of these other devices, so I imagine that any single device over its lifetime might experience different kind of data output through the versions.
What kind of compatibility is Tidepool having on these devices? Do you use, do say support like the latest version or? Maybe you can shed some light on that, how many devices in general you're supporting.
**Tapani: [00:04:50]** Right now, we support over 50 different devices. And then by extension anything that Apple Health supports.
So if you have a device that stores data in apple [00:05:00] health kit, we can read that as well. But 50 devices directly. You can actually go to type bullet org slash devices, and you can see the list full list there. You can filter it by different types of devices and manufacturers and so on. And that those devices are some of them are actually obsolete at this point.
They're end of life. You can't buy them anymore. So we support devices even long past the point when there've been sold. We try to keep up with the latest devices, but that's not always feasible.
**Mike: [00:05:26]** This is it's like a health oriented IOT application right?
**Tapani: [00:05:30]** Yeah. In a way that that's certainly true.
The only difference here maybe is that those devices don't directly usually connect to the net. So they need an intermediary. Like in our case, we have a mobile application. We have a desktop application that talks to the device that's in your possession, but you can't reach the device directly over internet.
**Mike:** And just so we can understand the scale, how many devices are reporting into Tidepool today?
**Tapani:** I don't actually know exactly how many devices there are. Those are discreet different types of devices. [00:06:00] What I can say is our main database production database, we're storing something it's approaching to 6 billion documents at this point
in terms of the amount of data across across and hundreds of thousands of users.
**Nic: [00:06:11]** Just for clarity, because I want to get to, because the diabetes space is not something I'm personally too familiar in. And the different hardware that exists. So say I'm a user of the hardware and it's reporting to Tidepool.
Is Tidepool gonna alert you if there's some kind of low blood sugar level or does it serve a different purpose?
**Tapani: [00:06:32]** Both. And this is actually a picture that's changing. So right now what we have out there in terms of the products, they're backward looking. So what happened in the past, but you might might be using these devices and you might upload data, a few times a day.
But if you're using some of the newer devices like continuous glucose monitors, those record data every five minutes. So the upload frequency could be much higher, and that's going to change going [00:07:00] forward as more and more people start using these continuous glucose monitors that are actually doing that. The older devices might be the classic fingerstick glucose meter, where you poke your finger, you draw a little bit of blood and you measure it, and you might do that five to 10 times a day.
Versus 288 times if you have a continuous glucose monitor that sends data every five minutes. So it varies from device to device.
**Mike: [00:07:24]** This is a fascinating space. I test myself on a regular basis as part of my diet, not necessarily for diabetes, but for ketosis, and that's an interesting concept to me. The continuous monitoring devices, though,
that's something that you attach to your body, right?
**Tapani: [00:07:39]** Yeah. These are little devices about the size of a stack of quarters that sits somewhere on your skin, on an arm or leg or somewhere on your body. There's a little filament that goes onto your skin, that does the actual measurements, but it's basically a little full.
**Mike: [00:07:54]** So thinking about the application itself and how you're leveraging MongoDB, do you want to talk a little bit about how the [00:08:00] application comes together and what the stack looks like?
**Tapani: [00:08:01]** Sure. So we're hosted in AWS, first of all. We have about 20 or so microservices in there. And as part of those microservices, they all communicate to all MongoDB Atlas.
That's implemented with the sort of best practices of security in mind, because security and privacy are critically important for us. So we're using VPC peering from our microservices to MongoDB Atlas. And we're using a three node replica set in MongoDB Atlas, so that there's no chance of losing any of that data.
**Mike: [00:08:32]** And in terms of the application itself, is it largely an API? I'm sure that there's a user interface or your application set, but what does the backend or the API look like in terms of the technology?
**Tapani: [00:08:43]** So, what people see in front of them is either a desktop application or a mobile application; that's the visible manifestation of it.
Both of those communicate to our backend through a set of REST APIs for authentication, authorization, data upload, data retrieval, and so on. Those APIs then take that data and they store it in our MongoDB production cluster. So the APIs vary from "give me our user profile" to "upload this pile of continuous glucose monitor samples."
**Mike: [00:09:13]** What is the API written in? What technologies are you using?
**Tapani: [00:09:16]** It's a mix of Node JS and Golang. I would say 80% Golang and 20% Node JS.
**Nic: [00:09:23]** I'm interested in why Golang for this type of application. I wouldn't have thought it as a typical use case. So are you able to shed any light on that?
**Tapani: [00:09:32]** The decision to switch to Golang, and actually the growing set of services, that happened before my time. I would say it's pretty well suited for this particular application. The backend service is fundamentally a set of APIs that have no real user-visible manifestation themselves.
We do have a web service, a web front end to all this as well, and that's written in React and so on, but Golang has proven to be a very good language for developing these services, specifically ones that respond to API requests, because really all they do is take a bunch of inputs from the caller, translate them, apply business policy and so on, and then store the data in Mongo.
So it's a good way to do it.
**Nic: [00:10:16]** Awesome. So we know that you're using Go and Node for your APIs, and we know that you're using MongoDB as your data layer. What features in particular are you using with MongoDB specifically?
**Tapani: [00:10:26]** So right now, and I mentioned we were running a three node replica set.
We don't yet use sharding, but that's actually the next big thing that we'll be tackling in the near future, because the set of data that we have is growing fairly fast, and it will be growing even faster in the future with a new product coming out. But sharding will be the next one.
We do a lot of aggregate queries across several different collections. So some fairly complicated queries. And as I mentioned, that largest collection is fairly large. So performance, that becomes critical. Having the right indices in place and being able to look for all the right data is critical.
**Nic: [00:11:07]** You mentioned aggregations across numerous collections at a high level. Are you able to talk us through what exactly you're aggregating to give us an idea of a use case.
**Tapani: [00:11:16]** Yeah. Sure. In fact, the one thing I should've mentioned earlier perhaps is besides being non-profit, we're also open source.
So everything we do is actually visible on GitHub in our open-source repo. So if anybody's interested in the details, they're welcome to take a look in there. But in the sort of broader sense, we have a user collection where all the user accounts profiles are stored. We have a data collection or device data collection, rather.
That's where all the data from diabetes devices goes. There's other collections for things like messages that we send to the users, emails, basically invitations to join an account and so on, and confirmations of those, so different collections for different use cases. Broadly speaking, there's one collection for each use case, like user profiles or messages, notifications, device data.
**Mike: [00:12:03]** And I'm thinking about the schema and the aggregations across multiple collections. Can you share what that schema looks like? And maybe even just the number of collections that you're storing.
**Tapani: [00:12:12]** Sure. Number of collections is actually relatively small. It's only a half a dozen or so, but the schema is pretty straightforward for most of them.
Like the user profiles, there's only so many things you store in a user profile, but that device data collection is perhaps the most complex because it stores data from all the devices, regardless of type. So the data that comes out of a continuous glucose monitor is different than the data that comes from an insulin pump,
for instance. So there are different fields, there are different units that we're dealing with, and so on.
**Mike: [00:12:44]** Okay, so Tapani, what other features within the Atlas platform are you leveraging today? And have you possibly looked at automated scalability as a solution moving forward?
**Tapani: [00:12:55]** So our use of MongoDB Atlas right now is pretty straightforward and intensive. So a lot of data in the different collections, indices and aggregate queries that are used to manage that data and so on. The things that we're looking forward to in the future are things like sharding, because of the scale of data that's growing.
Other things are a data lake, for instance, archiving some of the data. Currently our production database stores all the data from 2013 onwards. And really, the value of that data beyond the past few months to a few years is not that important. So we'd want to archive it. We can't lose it because it's important data, but we do want to archive it and move it someplace else.
So that, and bucketing the data in more effective ways, so it's faster to access by different stakeholders in the company.
**Mike: [00:13:43]** So there are some really compelling features available today around online archiving. I think we can definitely help out there. And coming down the pike, we've got some really exciting stuff happening in the time series space.
We'll be talking more about that at our .live conference in July, so stay tuned for that.
**Nic: [00:14:04]** Hey Mike, how about you give a plug for that conference right now?
**Mike: [00:14:06]** Yeah, sure. It's our biggest user conference of the year. And we get together, thousands of developers join us and we present all of the feature updates.
We're going to be talking about MongoDB 5.0, which is the latest upcoming release, and some really super exciting announcements there. There are a lot of breaks and brain-break activities, and it's just a great way to get plugged into the MongoDB community. You can get more information at mongodb.com/live.
So Tapani, thanks so much for sharing the details of how you're leveraging MongoDB. As we touched on earlier, this is an application where users are going to be sharing very sensitive details about their health. Do you want to talk a little bit about the security?
**Tapani: [00:14:49]** Sure. Yeah, it's actually, it's a critically important piece for us. So first of all, for those APIs that we talked about earlier, all the traffic is encrypted in transit. There's no unauthorized or unauthenticated access to any of the data or APIs. In MongoDB Atlas, what we're obviously leveraging is we use the encryption at rest.
So all the data that's stored by MongoDB is encrypted. We're using VPC peering between our services and MongoDB Atlas to make sure that traffic is even more secure. And yeah, privacy and security of the data is a key thing for us, because this is all what Health and Human Services calls protected health information, or PHI. That's the sort of highest level of private information you could possibly have.
**Nic: [00:15:30]** So in regards to the information being sent, we know that the information is being encrypted at rest. Are you collecting data that could be sensitive, like social security numbers and things like that that might need to be encrypted at a field level to prevent prying eyes of DBAs and similar?
**Tapani: [00:15:45]** We do not collect any social security information or anything like that. That's purely healthcare data. Um, diabetes device data, and so on. No credit cards. No SSNs.
**Nic: [00:15:56]** Got it. So nothing that could technically tie the information back to an individual or be used in a malicious way?
**Tapani: [00:16:02]** Not in that way now. I mean, I think it's fair to say that this is obviously people's healthcare information, so that is sensitive regardless of whether it could be used maliciously or not.
**Mike: [00:16:13]** Makes sense. Okay. So I'm wondering if you want to talk a little bit about what's next for Tidepool. You did make a brief mention of another application that you'll be launching.
Maybe talk a little bit about the roadmap.
**Tapani: [00:16:25]** Sure. We're working on, besides the existing products we're working on a new product that's called Tidepool Loop and that's an effort to build an automatic insulin dosing system. This takes a more proactive role in the treatment of diabetes.
Existing products show data that you already have. This is actually helping you administer insulin. And so it's a smartphone application that's currently under FDA review. We are working with a couple of great partners in the medical device space to launch that with them, with their products.
**Mike: [00:16:55]** Well, I love the open nature of Tidepool.
It seems like everything you're doing is kind of out in the open. From open source to full disclosure on the architecture stack. That's something that that I can really appreciate as a developer. I love the ability to kind of dig a little deeper and see how things work.
Is there anything else that you'd like to cover from an organizational perspective? Any other details you wanna share?
**Tapani: [00:17:16]** Sure. I mean, you mentioned the transparency and openness. We practice what some people might call radical transparency. Not only is our software open source. It's in GitHub.
Anybody can take a look at it. Our JIRA boards for bugs and so on, they're also open, visible to anybody. Our interactions with the FDA, our meeting minutes, filings, and so on, we also make those available. Our employee handbook is open. We actually forked another company's employee handbook and committed ours openly as well.
And in the hopes that people can benefit from that. Ultimately, why we do this is we hope that we can help improve public health by making as much as possible public. And as far as the open source projects go, we have several people out there who are making open source contributions, or pull requests and so on. Now, because we do operate in the healthcare space,
we have to review those submissions pretty carefully before we integrate them into the product. But yeah, we do take pull requests from people. We've gotten community submissions, for instance, translations of products to Spanish and German and French. But we have to verify those before we can roll them out.
**Mike: [00:18:25]** Well, this has been a great discussion. Is there anything else that you'd like to share with the audience before we begin to wrap up?
**Tapani: [00:18:29]** Oh, a couple of things in closing. One is, first of all, we're a hundred percent remote-first and globally distributed organization.
We have people in five countries and 14 states within the US right now. We're always hiring in some form or another. So if anybody's interested, they're welcome to take a look at our job postings at tidepool.org/jobs. The other thing is, as a nonprofit, we certainly gratefully accept donations as well.
So there's another link there where you can donate. And if anybody's interested in the technical details of how we actually built this all, there's a couple of links that I can throw out there. One is tidepool.org/pubsecc, that's our security white paper, basically, with a whole lot of information about the architecture and infrastructure and security and so on.
We also publish a series of blog postings at tidepool.org/blog, where the engineering team has put out a couple of things about our infrastructure. We went through some pretty significant upgrades over the past couple of years. And then finally, github.com/tidepool is where all our sources are.
**Nic: [00:19:30]** Awesome. And you mentioned that you're a remote company and that you were looking for candidates. Are these candidates global, or strictly in the US? Does it matter?
**Tapani: [00:19:39]** So we hire anywhere people are, and they work from wherever they are. We don't require relocation. We don't require a visa in the sense that you'd have to come to the US, for instance, to work. We have people in five countries, the US, Canada, the UK, Bulgaria, and Croatia right now.
**Mike: [00:19:55]** Well, Tapani I want to thank you so much for joining us today. I really enjoyed the conversation.
**Tapani: [00:19:58]** Thanks as well. Really enjoyed it. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Tapani Otala is the VP of Engineering at Tidepool, an open source, not-for-profit company focused on liberating data from diabetes devices, supporting researchers, and providing great, free software to people with diabetes and their care teams. He joins us today to share details of the Tidepool solution, how it enables enhanced visibility into Diabetes data and enables people living with this disease to better manage their condition. Visit https://tidepool.org for more information.",
"contentType": "Podcast"
} | Making Diabetes Data More Accessible and Meaningful with Tidepool and MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-movie-search-application | created | # Tutorial: Build a Movie Search Application Using Atlas Search
Let me guess. You want to give your application users the ability to find *EXACTLY* what they are looking for FAST! Who doesn't? Search is a requirement for most applications today. With MongoDB Atlas Search, we have made it easier than ever to integrate simple, fine-grained, and lightning-fast search capabilities into all of your MongoDB applications. To demonstrate just how easy it is, let's build a web application to find our favorite movies.
This tutorial is the first in a four-part series where we will learn over the next few months to build out the application featured in our Atlas Search Product Demo.
:youtube[]{vid=kZ77X67GUfk}
Armed with only a basic knowledge of HTML and Javascript, we will build out our application in the following four parts.
##### The Path to Our Movie Search Application
| | |
|---|---|
| **Part 1** | Get up and running with a basic search movie engine allowing us to look for movies based on a topic in our MongoDB Atlas movie data. |
| **Part 2** | Make it even easier for our users by building more advanced search queries with fuzzy matching and wildcard paths to forgive them for fat fingers and misspellings. We'll introduce custom score modifiers to allow us to influence our movie results. |
| **Part 3** | Add autocomplete capabilities to our movie application. We'll also discuss index mappings and analyzers and how to use them to optimize the performance of our application. |
| **Part 4** | Wrap up our application by creating filters to query across dates and numbers to even further fine-tune our movie search results. We'll even host the application on Realm, our serverless backend platform, so you can deliver your movie search website anywhere in the world. |
Now, without any further ado, let's get this show on the road!
This tutorial will guide you through building a very basic movie search engine on a free tier Atlas cluster. We will set it up in a way that will allow us to scale our search in a highly performant manner as we continue building out new features in our application over the coming weeks. By the end of Part 1, you will have something that looks like this:
To accomplish this, here are our tasks for today:
## STEP 1. SPIN UP ATLAS CLUSTER AND LOAD MOVIE DATA
To **Get Started**, we will need only an Atlas cluster, which you can get for free, loaded with the Atlas sample dataset. If you do not already have one, sign up to create an Atlas cluster on your preferred cloud provider and region.
Once you have your cluster, you can load the sample dataset by clicking the ellipse button and **Load Sample Dataset**.
>For more detailed information on how to spin up a cluster, configure your IP address, create a user, and load sample data, check out Getting Started with MongoDB Atlas from our documentation.
Now, let's have a closer look at our sample data within the Atlas Data Explorer. In your Atlas UI, click on **Collections** to examine the **movies** collection in the new **sample_mflix** database. This collection has over 23k movie documents with information such as title, plot, and cast. The **sample_mflix.movies** collection provides the dataset for our application.
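To get a feel for the shape of these documents, here is an abridged example of what a movie document in **sample_mflix.movies** might look like (field values are shortened for readability, and the real documents contain more fields):

``` json
{
  "title": "The Great Train Robbery",
  "year": 1903,
  "plot": "A group of bandits stage a brazen train hold-up...",
  "fullplot": "Among the earliest existing films in American cinema...",
  "cast": ["A.C. Abadie", "Gilbert M. 'Broncho Billy' Anderson"],
  "genres": ["Short", "Western"]
}
```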
## STEP 2. CREATE A SEARCH INDEX
Since our movie search engine is going to look for movies based on a topic, we will use Atlas Search to query for specific words and phrases in the `fullplot` field of the documents.
The first thing we need is an Atlas Search index. Click on the tab titled **Search Indexes** under **Collections**. Click on the green **Create Search Index** button. Let's accept the default settings and click **Create Index**. That's all you need to do to start taking advantage of Search in your MongoDB Atlas data!
By accepting the default settings when we created the Search index, we dynamically mapped all the fields in the collection as indicated in the default index configuration:
``` javascript
{
mappings: {
"dynamic":true
}
}
```
Mapping is simply how we define how the fields on our documents are indexed and stored. If a field's value looks like a string, we'll treat it as a full-text field, similarly for numbers and dates. This suits MongoDB's flexible data model perfectly. As you add new data to your collection and your schema evolves, dynamic mapping accommodates those changes in your schema and adds that new data to the Atlas Search index automatically.
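Dynamic mapping is the simplest way to get started. If you already knew that only certain fields, such as `fullplot`, needed to be searchable, you could instead define a static mapping along these lines (shown purely as an illustration; we will stick with the dynamic default in this tutorial):

``` javascript
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "fullplot": {
        "type": "string"
      }
    }
  }
}
```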
We'll talk more about mapping and indexes in Part 3 of our series. For right now, we can check off another item from our task list.
## STEP 3. WRITE A BASIC AGGREGATION WITH $SEARCH OPERATORS
Search queries take the form of an aggregation pipeline stage. The `$search` stage performs a search query on the specified field(s) covered by the Search index and must be used as the first stage in the aggregation pipeline.
Let's use the aggregation pipeline builder inside of the Atlas UI to make an aggregation pipeline that makes use of our Atlas Search index. Our basic aggregation will consist of only three stages: $search, $project, and $limit.
>You do not have to use the pipeline builder tool for this stage, but I really love the easy-to-use user interface. Plus, the ability to preview the results by stage makes troubleshooting a snap!
Navigate to the **Aggregation** tab in the **sample_mflix.movies** collection:
### Stage 1. $search
For the first stage, select the `$search` aggregation operator to search for the *text* "werewolves and vampires" in the `fullplot` field *path.*
You can also add the **highlight** option, which will return the highlights by adding fields to the result payload that display search terms in their original context, along with the adjacent text content. (More on this later.)
Your final `$search` aggregation stage should be:
``` javascript
{
text: {
query: "werewolves and vampires",
path: "fullplot",
},
highlight: {
path: "fullplot"
}
}
```
>Note the returned movie documents in the preview panel on the right. If no documents are in the panel, double-check the formatting in your aggregation code.
### Stage 2: $project
Add stage `$project` to your pipeline to get back only the fields we will use in our movie search application. We also use the `$meta` operator to surface each document's **searchScore** and **searchHighlights** in the result set.
``` javascript
{
title: 1,
year:1,
fullplot:1,
_id:0,
score: {
$meta:'searchScore'
},
highlight:{
$meta: 'searchHighlights'
}
}
```
Let's break down the individual pieces in this stage further:
**SCORE:** The `"$meta": "searchScore"` contains the assigned score for the document based on relevance. This signifies how well this movie's `fullplot` field matches the query terms "werewolves and vampires" above.
Note that by scrolling in the right preview panel, the movie documents are returned with the score in *descending* order. This means we get the best matched movies first.
**HIGHLIGHT:** The `"$meta": "searchHighlights"` contains the highlighted results.
*Because* **searchHighlights** *and* **searchScore** *are not part of the original document, it is necessary to use a $project pipeline stage to add them to the query output.*
Now, open a document's **highlight** array to show the data objects with text **values** and **types**.
``` bash
title:"The Mortal Instruments: City of Bones"
fullplot:"Set in contemporary New York City, a seemingly ordinary teenager, Clar..."
year:2013
score:6.849891185760498
highlight:Array
0:Object
path:"fullplot"
texts:Array
0:Object
value:"After the disappearance of her mother, Clary must join forces with a g..."
type:"text"
1:Object
value:"vampires"
type:"hit"
2:Object
3:Object
4:Object
5:Object
6:Object
score:3.556248188018799
```
**highlight.texts.value** - text from the `fullplot` field returning a match
**highlight.texts.type** - either a hit or a text
- **hit** is a match for the query
- **text** is the surrounding text context adjacent to the matching
string
We will use these later in our application code.
### Stage 3: $limit
Remember that the results are returned with the scores in descending order. `$limit: 10` will therefore bring the 10 most relevant movie documents to your search query. `$limit` is very important in Search because speed matters. Without `$limit:10`, we would get the scores for all 23k movies. We don't need that.
Finally, if you see results in the right preview panel, your aggregation pipeline is working properly! Let's grab that aggregation code with the Export Pipeline to Language feature by clicking the button in the top toolbar.
Your final aggregation code will be this:
``` bash
[{
  $search: {
text: {
query: "werewolves and vampires",
path: "fullplot"
},
highlight: {
path: "fullplot"
}
}},
{
$project: {
title: 1,
_id: 0,
year: 1,
fullplot: 1,
score: { $meta: 'searchScore' },
highlight: { $meta: 'searchHighlights' }
}},
{
$limit: 10
}
]
```
This small snippet of code powers our movie search engine!
## STEP 4. CREATE A REST API
Now that we have the heart of our movie search engine in the form of an aggregation pipeline, how will we use it in an application? There are lots of ways to do this, but I found the easiest was to simply create a RESTful API to expose this data - and for that, I leveraged MongoDB Realm's HTTP Service from right inside of Atlas.
Realm is MongoDB's serverless platform where functions written in Javascript automatically scale to meet current demand. To create a Realm application, return to your Atlas UI and click **Realm.** Then click the green **Start a New Realm App** button.
Name your Realm application **MovieSearchApp** and make sure to link to your cluster. All other default settings are fine.
Now click the **3rd Party Services** menu on the left and then **Add a Service**. Select the HTTP service and name it **movies**:
Click the green **Add a Service** button, and you'll be directed to **Add Incoming Webhook**.
Once in the **Settings** tab, name your webhook **getMoviesBasic**. Enable **Respond with Result**, and set the HTTP Method to **GET**. To make things simple, let's just run the webhook as the System and skip validation with **No Additional Authorization.** Make sure to click the **Review and Deploy** button at the top along the way.
In this service function editor, replace the example code with the following:
``` javascript
exports = function(payload) {
const movies = context.services.get("mongodb-atlas").db("sample_mflix").collection("movies");
let arg = payload.query.arg;
return movies.aggregate(<>).toArray();
};
```
Let's break down some of these components. MongoDB Realm interacts with your Atlas movies collection through the global **context** variable. In the service function, we use that context variable to access the **sample_mflix.movies** collection in your Atlas cluster. We'll reference this collection through the const variable **movies**:
``` javascript
const movies =
context.services.get("mongodb-atlas").db("sample_mflix").collection("movies");
```
We capture the query argument from the payload:
``` javascript
let arg = payload.query.arg;
```
Return the aggregation code executed on the collection by pasting your aggregation copied from the aggregation pipeline builder into the code below:
``` javascript
return movies.aggregate(<>).toArray();
```
Finally, after pasting the aggregation code, change the terms "werewolves and vampires" to the generic `arg` to match the function's payload query argument - otherwise our movie search engine capabilities will be *extremely* limited.
Your final code in the function editor will be:
``` javascript
exports = function(payload) {
const movies = context.services.get("mongodb-atlas").db("sample_mflix").collection("movies");
let arg = payload.query.arg;
return movies.aggregate([
{
$search: {
text: {
query: arg,
path:'fullplot'
},
highlight: {
path: 'fullplot'
}
}},
{
$project: {
title: 1,
_id: 0,
year: 1,
fullplot: 1,
score: { $meta: 'searchScore'},
highlight: {$meta: 'searchHighlights'}
}
},
{
$limit: 10
}
]).toArray();
};
```
Now you can test in the Console below the editor by changing the argument from **arg1: "hello"** to **arg: "werewolves and vampires"**.
>Please make sure to change BOTH the field name **arg1** to **arg**, as well as the string value **"hello"** to **"werewolves and vampires"** - or it won't work.
Click **Run** to verify the result:
If this is working, congrats! We are almost done! Make sure to **SAVE** and deploy the service by clicking **REVIEW & DEPLOY CHANGES** at the top of the screen.
### Use the API
The beauty of a REST API is that it can be called from just about anywhere. Let's execute it in our browser. However, if you have tools like Postman installed, feel free to try that as well.
Switch back to the **Settings** of your **getMoviesBasic** function, and you'll notice a Webhook URL has been generated.
Click the **COPY** button and paste the URL into your browser. Then append the following to the end of your URL: **?arg="werewolves and vampires"**
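If you prefer the command line, you can make the same request with a tool like curl. The URL below is only a placeholder, so substitute the webhook URL you copied (note that the spaces in the query string are URL-encoded):

``` bash
curl "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<your-app-id>/service/movies/incoming_webhook/getMoviesBasic?arg=werewolves%20and%20vampires"
```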
If you receive an output like what we have above, congratulations! You
have successfully created a movie search API! 🙌 💪
## STEP 5. FINALLY! THE FRONT-END
Now that we have this endpoint, it takes a single call from the front-end application using the Fetch API to retrieve this data. Download the following index.html file and open it in your browser. You will see a simple search bar:
Entering data in the search bar will bring you movie search results because the application is currently pointing to an existing API.
Now open the HTML file with your favorite text editor and familiarize yourself with the contents. You'll note this contains a very simple container and two javascript functions:
- Line 81 - **userAction()** will execute when the user enters a
search. If there is valid input in the search box and no errors, we
will call the **buildMovieList()** function.
- Line 125 - **buildMovieList()** is a helper function for
**userAction()**.
The **buildMovieList()** function will build out the list of movies along with their scores and highlights from the `fullplot` field. Notice in line 146 that if the **highlight.texts.type === "hit"** we highlight the **highlight.texts.value** with a style attribute tag.
``` javascript
if (movies[i].highlight[j].texts[k].type === "hit") {
txt += ` ${movies[i].highlight[j].texts[k].value} `;
} else {
txt += movies[i].highlight[j].texts[k].value;
}
```
### Modify the Front-End Code to Use Your API
In the **userAction()** function, notice on line 88 that the **webhook_url** is already set to a RESTful API I created in my own Movie Search application.
``` javascript
let webhook_url = "https://webhooks.mongodb-realm.com/api/client/v2.0/app/ftsdemo-zcyez/service/movies-basic-FTS/incoming_webhook/movies-basic-FTS";
```
We capture the input from the search form field in line 82 and set it equal to **searchString**. In this application, we append that **searchString** input to the **webhook_url**
``` javascript
let url = webhook_url + "?arg=" + searchString;
```
before calling it in the fetch API in line 92.
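Stripped of error handling and UI details, that fetch call looks roughly like this (a simplified sketch of what the file does, not a line-for-line copy):

``` javascript
fetch(url)
  .then((response) => response.json())
  .then((movies) => buildMovieList(movies))
  .catch((error) => console.error(error));
```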
To make this application fully your own, simply replace the existing **webhook_url** value on line 88 with your own API from the **getMoviesBasic** Realm HTTP Service webhook you just created. 🤞 Now save these changes, and open the **index.html** file once more in your browser, et voilà! You have just built your movie search engine using Atlas Search. 😎
Pass the popcorn! 🍿 What kind of movie do you want to watch?!
## That's a Wrap!
You have just seen how easy it is to build a simple, powerful search into an application with MongoDB Atlas Search. In our next tutorial, we continue by building more advanced search queries into our movie application with fuzzy matching and wildcard to forgive fat fingers and typos. We'll even introduce custom score modifiers to allow us to shape our search results. Check out our $search documentation for other possibilities.
Harnessing the power of Apache Lucene for efficient search algorithms, and static and dynamic field mapping for flexible, scalable indexing, all while using the same MongoDB Query Language (MQL) you already know and love. Spoken in our very best Liam Neeson impression: MongoDB now has a very particular set of skills. Skills we have acquired over a very long career. Skills that make MongoDB a DREAM for developers like you.
Looking forward to seeing you in Part 2. Until then, if you have any questions or want to connect with other MongoDB developers, check out our community forums. Come to learn. Stay to connect. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Check out this blog tutorial to learn how to build a movie search application using MongoDB Atlas Search.",
"contentType": "Tutorial"
} | Tutorial: Build a Movie Search Application Using Atlas Search | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-keypath-filtering | created | # Filter Realm Notifications in Your iOS App with KeyPaths
## Introduction
Realm Swift v10.12.0 introduced the ability to filter change notifications for desired key paths. This level of granularity has been something we've had our eye on, so it’s really satisfying to release this kind of control and performance benefit. Here’s a quick rundown on what’s changed and why it matters.
## Notifications Before
By default, notifications return changes for all insertions, modifications, and deletions. Suppose that I have a schema that looks like the one below.
If I observe a `Results` object and the name of one of the companies in the results changes, the notification block would fire and my UI would update:
```swift
let results = realm.objects(Company.self)
let notificationToken = results.observe() { changes in
// update UI
}
```
That’s quite straightforward for non-collection properties. But what about other types, like lists?
Naturally, the block I passed into `.observe` will execute each time an `Order` is added or removed. But the block also executes each time a property on the `Order` list is edited. The same goes for _those_ properties’ collections too (and so on!). Even though I’m observing “just” a collection of `Company` objects, I’ll receive change notifications for properties on a half-dozen other collections.
This isn’t necessarily an issue for most cases. Small object graphs, or “siloed” objects, that don’t feature many relationships might not experience unneeded notifications at all. But for complex webs of objects, where several layers of children objects exist, an app developer may benefit from a **major performance enhancement and added control from KeyPath filtering**.
## KeyPath Filtering
Now `.observe` comes with an optional `keyPaths` parameter:
```swift
public func observe(keyPaths: [String]? = nil,
on queue: DispatchQueue? = nil,
_ block: @escaping (ObjectChange) -> Void) -> NotificationToken
```
The `.observe` function will only notify on the field or fields specified in the `keyPaths` parameter. Other fields are ignored unless explicitly passed into the parameter.
This allows the app developer to tailor which relationship paths are observed. This reduces computing cost and grants finer control over when the notification fires.
Our modified code might look like this:
```swift
let results = realm.objects(Company.self)
let notificationToken = results.observe(keyPaths: ["orders.status"]) { changes in
// update UI
}
```
`.observe` can alternatively take a `PartialKeyPath`:
```swift
let results = realm.objects(Company.self)
let notificationToken = results.observe(keyPaths: [\Company.orders.status]) { changes in
// update UI
}
```
If we applied the above snippets to our previous example, we’d only receive notifications for this portion of the schema:
![Graph showing that just a single path through the Objects components is selected]
The notification process is no longer traversing an entire tree of relationships each time a modification is made. Within a complex tree of related objects, the change-notification checker will now traverse only the relevant paths. This saves huge amounts of work.
In a large database, this can be a serious performance boost! The end-user can spend less time with a spinner and more time using your application.
## Conclusion
- `.observe` has a new optional `keyPaths` parameter.
- The app developer has more granular control over when notifications are fired.
- This can greatly improve notification performance for large databases and complex object graphs.
Please provide feedback and ask any questions in the Realm Community Forum.
| md | {
"tags": [
"Realm",
"iOS"
],
"pageDescription": "How to customize your notifications when your iOS app is observing Realm",
"contentType": "Tutorial"
} | Filter Realm Notifications in Your iOS App with KeyPaths | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/pause-resume-atlas-clusters | created | # How to Easily Pause and Resume MongoDB Atlas Clusters
One of the most important things to think about in the cloud is what is burning dollars while you sleep. In the case of MongoDB Atlas, that is your live clusters. The minute you start a cluster (with the exception of our free tier), we start accumulating cost.
If you're using a dedicated cluster—not one of the cheaper, shared cluster types, such as M0, M2 or M5—then it's easy enough to pause a cluster using the Atlas UI, but logging in over 2FA can be a drag. Wouldn't it be great if we could just jump on a local command line to look at our live clusters?
This you can do with a command line tool like `curl`, some programming savvy, and knowledge of the MongoDB Atlas Admin API. But who has time for that? Not me, for sure.
That is why I wrote a simple script to automate those steps. It's now a Python package up on PyPi called mongodbatlas.
You will need Python 3.6 or better installed to run the script. (This is your chance to escape the clutches of 2.x.)
Just run:
``` bash
$ pip install mongodbatlas
Collecting mongodbatlas
Using cached mongodbatlas-0.2.6.tar.gz (17 kB)
...
...
Building wheels for collected packages: mongodbatlas
Building wheel for mongodbatlas (setup.py) ... done
Created wheel for mongodbatlas: filename=mongodbatlas-0.2.6-py3-none-any.whl size=23583 sha256=d178ab386a8104f4f5100a6ccbe61670f9a1dd3501edb5dcfb585fb759cb749c
Stored in directory: /Users/jdrumgoole/Library/Caches/pip/wheels/d1/84/74/3da8d3462b713bfa67edd02234c968cb4b1367d8bc0af16325
Successfully built mongodbatlas
Installing collected packages: certifi, chardet, idna, urllib3, requests, six, python-dateutil, mongodbatlas
Successfully installed certifi-2020.11.8 chardet-3.0.4 idna-2.10 mongodbatlas-0.2.6 python-dateutil-2.8.1 requests-2.25.0 six-1.15.0 urllib3-1.26.1
```
Now you will have a script installed called `atlascli`. To test the install worked, run `atlascli -h`.
``` bash
$ atlascli -h
usage: atlascli [-h] [--publickey PUBLICKEY] [--privatekey PRIVATEKEY]
[-p PAUSE_CLUSTER] [-r RESUME_CLUSTER] [-l] [-lp] [-lc]
[-pid PROJECT_ID_LIST] [-d]
A command line program to list organizations,projects and clusters on a
MongoDB Atlas organization.You need to enable programmatic keys for this
program to work. See https://docs.atlas.mongodb.com/reference/api/apiKeys/
optional arguments:
-h, --help show this help message and exit
--publickey PUBLICKEY
MongoDB Atlas public API key.Can be read from the
environment variable ATLAS_PUBLIC_KEY
--privatekey PRIVATEKEY
MongoDB Atlas private API key.Can be read from the
environment variable ATLAS_PRIVATE_KEY
-p PAUSE_CLUSTER, --pause PAUSE_CLUSTER
pause named cluster in project specified by project_id
Note that clusters that have been resumed cannot be
paused for the next 60 minutes
-r RESUME_CLUSTER, --resume RESUME_CLUSTER
resume named cluster in project specified by
project_id
-l, --list List everything in the organization
-lp, --listproj List all projects
-lc, --listcluster List all clusters
-pid PROJECT_ID_LIST, --project_id PROJECT_ID_LIST
specify the project ID for cluster that is to be
paused
-d, --debug Turn on logging at debug level
Version: 0.2.6
```
To make this script work, you will need to do a little one-time setup on your cluster. You will need a programmatic key for your cluster. You will also need to enable the IP address that the client is making requests from.
There are two ways to create an API key:
- If you have a single project, it's probably easiest to create a single project API key
- If you have multiple projects, you should probably create an organization API key and add it to each of your projects.
## Single Project API Key
Going to your "Project Settings" page by clicking on the "three dot" button next your project name at the top-left of the screen and selecting "Project Settings". Then click on "Access Manager" on the left side of the screen and click on "Create API Key". Take a note of the public *and* private parts of the key, and ensure that the key has the "Project Cluster Manager" permission. More detailed steps can be found in the documentation.
## Organization API Key
Click on the cog icon next to your organization name at the top-left of the screen. Click on "Access Manager" on the left-side of the screen and click on "Create API Key". Take a note of the public *and* private parts of the key. Don't worry about selecting any specific organization permissions.
Now you'll need to invite the API key to each of the projects containing clusters you wish to control. Click on "Projects" on the left-side of the screen. For each of the projects, click on the "three dots" icon on the same row in the project table and select "Visit Project Settings". Click on "Access Manager", and click on "Invite to Project" on the top-right. Paste your public key into the search box and select it in the menu that appears. Ensure that the key has the "Project Cluster Manager" permission that it will need to pause and resume clusters in that project.
More detailed steps can be found in the documentation.
## Configuring `atlascli`
The programmatic key has two parts: a public key and a private key. Both of these are used by the `atlascli` program to query the projects and clusters associated with the organization.
You can pass the keys in on the command line, but this is not recommended because they will be stored in the command line history. It's better to store them in environment variables, and the `atlascli` program will look for these two:
- `ATLAS_PUBLIC_KEY`: stores the public key part of the programmatic key
- `ATLAS_PRIVATE_KEY`: stores the private part of the programmatic key
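On macOS or Linux, for example, you might set them for the current shell session like this (substitute your own key values):

``` bash
export ATLAS_PUBLIC_KEY="<your-public-key>"
export ATLAS_PRIVATE_KEY="<your-private-key>"
```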
Once you have created these environment variables, you can run `atlascli -l` to list the organization and its associated projects and clusters. I've blocked out part of the actual IDs with `xxxx` characters for security purposes:
``` bash
$ atlascli -l
{'id': 'xxxxxxxxxxxxxxxx464d175c',
'isDeleted': False,
'links': [{'href': 'https://cloud.mongodb.com/api/atlas/v1.0/orgs/599eeced9f78f769464d175c',
'rel': 'self'}],
'name': 'Open Data at MongoDB'}
Organization ID:xxxxxxxxxxxxf769464d175c Name:'Open Data at MongoDB'
project ID:xxxxxxxxxxxxd6522bc457f1 Name:'DevHub'
Cluster ID:'xxxxxxxxxxxx769c2577a54' name:'DRA-Data' state=running
project ID:xxxxxxxxx2a0421d9bab Name:'MUGAlyser Project'
Cluster ID:'xxxxxxxxxxxb21250823bfba' name:'MUGAlyser' state=paused
project ID:xxxxxxxxxxxxxxxx736dfdcddf Name:'MongoDBLive'
project ID:xxxxxxxxxxxxxxxa9a5a04e7 Name:'Open Data Covid-19'
Cluster ID:'xxxxxxxxxxxxxx17cec56acf' name:'pre-prod' state=running
Cluster ID:'xxxxxxxxxxxxxx5fbfe04313' name:'dev' state=running
Cluster ID:'xxxxxxxxxxxxxx779f979879' name:'covid-19' state=running
project ID xxxxxxxxxxxxxxxxa132a8010 Name:'Open Data Project'
Cluster ID:'xxxxxxxxxxxxxx5ce1ef94dd' name:'MOT' state=paused
Cluster ID:'xxxxxxxxxxxxxx22bf6c226f' name:'GDELT' state=paused
Cluster ID:'xxxxxxxxxxxxxx5647797ac5' name:'UKPropertyPrices' state=paused
Cluster ID:'xxxxxxxxxxxxxx0f270da18a' name:'New-York-Taxi' state=paused
Cluster ID:'xxxxxxxxxxxxxx11eab32cf8' name:'demodata' state=running
Cluster ID:'xxxxxxxxxxxxxxxdcaef39c8' name:'stackoverflow' state=paused
project ID:xxxxxxxxxxc9503a77fcce0c Name:'Realm'
```
To pause a cluster, you will need to specify the `project ID` and the `cluster name`. Here is an example:
``` bash
$ atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --pause demodata
Pausing 'demodata'
Paused cluster 'demodata'
```
To resume the same cluster, do the converse:
``` bash
$ atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --resume demodata
Resuming cluster 'demodata'
Resumed cluster 'demodata'
```
Note that once a cluster has been resumed, it cannot be paused again for a while.
This delay allows the Atlas service to apply any pending changes or patches to the cluster that may have accumulated while it was paused.
Now go save yourself some money. This script can easily be run from a `crontab` entry or the Windows Task Scheduler.
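As an illustration, a `crontab` entry along these lines could pause a cluster every weekday evening. The project ID, cluster name, and script path are placeholders, and this assumes the API key environment variables are available to cron:

``` bash
# Pause the 'demodata' cluster at 7 PM, Monday to Friday
0 19 * * 1-5 /usr/local/bin/atlascli --project_id <your-project-id> --pause demodata
```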
Want to see the code? It's in this repo on GitHub.
For a much more full-featured Atlas Admin API in Python, please check out my colleague Matthew Monteleone's PyPI package AtlasAPI.
> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to easily pause and resume MongoDB Atlas clusters.",
"contentType": "Article"
} | How to Easily Pause and Resume MongoDB Atlas Clusters | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/time-series-candlestick | created | # Currency Analysis with Time Series Collections #1 — Generating Candlestick Charts Data
## Introduction
Technical analysis is a methodology used in finance to provide price forecasts for financial assets based on historical market data.
When it comes to analyzing market data, you need a better toolset. You will have a good amount of data, hence storing, accessing, and fast processing of this data becomes harder.
The financial assets price data is an example of time-series data. MongoDB 5.0 comes with a few important features to facilitate time-series data processing:
- Time Series Collections: This specialized MongoDB collection makes it incredibly simple to store and process time-series data with automatic bucketing capabilities.
- New Aggregation Framework Date Operators: `$dateTrunc`, `$dateAdd`, `$dateTrunc`, and `$dateDiff`.
- Window Functions: Performs operations on a specified span of documents in a collection, known as a window, and returns the results based on the chosen window operator.
This three-part series will explain how you can build a currency analysis platform where you can apply well-known financial analysis techniques such as SMA, EMA, MACD, and RSI. While you can read through this article series and grasp the main concepts, you can also get your hands dirty and run the entire demo-toolkit by yourself. All the code is available in the Github repository.
## Data Model
We want to save the last price of every currency in MongoDB, in close to real time. Depending on the currency data provider, it can be millisecond level to minute level. We insert the data as we get it from the provider with the following simple data model:
```json
{
"time": ISODate("20210701T13:00:01.343"),
"symbol": "BTC-USD",
"price": 33451.33
}
```
We only have three fields in MongoDB:
- `time` is the time information when the symbol information is received.
- `symbol` is the currency symbol such as "BTC-USD." There can be hundreds of different symbols.
- `price` field is the numeric value which indicates the value of currency at the time.
## Data Source
Coinbase, one of the biggest cryptocurrency exchange platforms, provides a WebSocket API to consume real-time cryptocurrency price updates. We will connect to Coinbase through a WebSocket, retrieve the data in real-time, and insert it into MongoDB. In order to increase the efficiency of insert operations, we can apply bulk insert.
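As a rough sketch of that idea, incoming ticks can be buffered briefly and then written in a single round trip with `insertMany()` rather than one `insertOne()` per tick. The values here are illustrative, and the `ticker` collection is the one we create in the next section:

```javascript
db.ticker.insertMany([
  { time: ISODate("2021-07-01T13:00:01.343"), symbol: "BTC-USD", price: 33451.33 },
  { time: ISODate("2021-07-01T13:00:01.898"), symbol: "ETH-USD", price: 2415.18 },
  { time: ISODate("2021-07-01T13:00:02.455"), symbol: "BTC-USD", price: 33452.01 }
]);
```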
Even though our data source in this post is a cryptocurrency exchange, this article and the demo toolkit are applicable to any exchange platform that has time, symbol, and price information.
## Bucketing Design Pattern
The MongoDB document model provides a lot of flexibility in how you model data. That flexibility is incredibly powerful, but that power needs to be harnessed in terms of your application’s data access patterns; schema design in MongoDB has a tremendous impact on the performance of your application.
The bucketing design pattern is one MongoDB design pattern that groups raw data from multiple documents into one document rather than keeping separate documents for each and every raw piece of data. Therefore, we see performance benefits in terms of index size savings and read/write speed. Additionally, by grouping the data together with bucketing, we make it easier to organize specific groups of data, thus increasing the ability to discover historical trends or provide future forecasting.
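As a purely illustrative example, a manually bucketed document might group one minute of ticks for a symbol into a single document:

```json
{
  "symbol": "BTC-USD",
  "bucket_start": "2021-07-01T13:00:00Z",
  "ticks": [
    { "time": "2021-07-01T13:00:01.343Z", "price": 33451.33 },
    { "time": "2021-07-01T13:00:02.455Z", "price": 33452.01 }
  ]
}
```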
However, prior to MongoDB 5.0, in order to take advantage of bucketing, it required application code to be aware of bucketing and engineers to make conscious upfront schema decisions, which added overhead to developing efficient time series solutions within MongoDB.
## Time Series Collections for Currency Analysis
Time Series collections are a new collection type introduced in MongoDB 5.0. It automatically optimizes for the storage of time series data and makes it easier, faster, and less expensive to work with time series data in MongoDB. There is a great blog post that covers MongoDB’s newly introduced Time Series collections in more detail that you may want to read first or for additional information.
For our use case, we will create a Time Series collection as follows:
```javascript
db.createCollection("ticker", {
timeseries: {
timeField: "time",
metaField: "symbol",
},
});
```
While defining the time series collection, we set the `timeField` of the time series collection as `time`, and the `metaField` of the time series collection as `symbol`. Therefore, a particular symbol’s data for a period will be stored together in the time series collection.
### How the Currency Data is Stored in the Time Series Collection
The application code will make a simple insert operation as it does in a regular collection:
```javascript
db.ticker.insertOne({
time: ISODate("20210101T01:00:00"),
symbol: "BTC-USD",
price: 34114.1145,
});
```
We read the data in the same way we would from any other MongoDB collection:
```javascript
db.ticker.findOne({"symbol" : "BTC-USD"})
{
"time": ISODate("20210101T01:00:00"),
"symbol": "BTC-USD",
"price": 34114.1145,
"_id": ObjectId("611ea97417712c55f8d31651")
}
```
However, the underlying storage optimization specific to time series data will be done by MongoDB. For example, "BTC-USD" is a digital currency and every second you make an insert operation, it looks and feels like it’s stored as a separate document when you query it. However, the underlying optimization mechanism keeps the same symbols’ data together for faster and efficient processing. This allows us to automatically provide the advantages of the bucket pattern in terms of index size savings and read/write performance without sacrificing the way you work with your data.
## Candlestick Charts
We have already inserted hours of data for different currencies. A particular currency’s data is stored together, thanks to the Time Series collection. Now it’s time to start analyzing the currency data.
Now, instead of individually analyzing second level data, we will group the data by five-minute intervals, and then display the data on candlestick charts. Candlestick charts in technical analysis represent the movement in prices over a period of time.
As an example, consider the following candlestick. It represents one time interval, e.g. five minutes between `20210101-17:30:00` and `20210101-17:35:00`, and it’s labeled with the start date, `20210101-17:30:00.` It has four metrics: high, low, open, and close. High is the highest price, low is the lowest price, open is the first price, and close is the last price of the currency in this duration.
In our currency dataset, we have to reach a stage where we need to have grouped the data by five-minute intervals like: `2021-01-01T01:00:00`, `2021-01-01T01:05:00`, etc. And every interval group needs to have four metrics: high, low, open, and close price. Examples of interval data are as follows:
```json
[{
"time": ISODate("20210101T01:00:00"),
"symbol": "BTC-USD",
"open": 34111.12,
"close": 34192.23,
"high": 34513.28,
"low": 33981.17
},
{
"time": ISODate("20210101T01:05:00"),
"symbol": "BTC-USD",
"open": 34192.23,
"close": 34244.16,
"high": 34717.90,
"low": 34001.13
}]
```
However, we only currently have second-level data for each ticker stored in our Time Series collection as we push the data for every second. We need to group the data, but how can we do this?
In addition to Time Series collections, MongoDB 5.0 has introduced a new aggregation operator, `$dateTrunc`. This powerful new aggregation operator can do many things, but essentially, its core functionality is to truncate the date information to the closest time or a specific datepart, by considering the given parameters. In our scenario, we want to group currency data for five-minute intervals. Therefore, we can set the `$dateTrunc` operator parameters accordingly:
```json
{
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5
}
}
```
In order to set the high, low, open, and close prices for each group (each candlestick), we can use other MongoDB operators, which were already available before MongoDB 5.0:
- high: `$max`
- low: `$min`
- open: `$first`
- close: `$last`
After grouping the data, we need to sort the data by time to analyze it properly. Therefore, recent data (represented by a candlestick) will be at the right-most of the chart.
Putting this together, our entire aggregation query will look like this:
```js
db.ticker.aggregate(
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
]);
```
After we grouped the data based on five-minute intervals, we can visualize it in a candlestick chart as follows:
![Candlestick chart]
We are currently using an open source visualization tool to display five-minute grouped data of BTC-USD currency. Every stick in the chart represents a five-minute interval and has four metrics: high, low, open, and close price.
## Conclusion
With the introduction of Time Series collections and advanced aggregation operators for date calculations, MongoDB 5.0 makes currency analysis much easier.
After you’ve grouped the data for the selected intervals, you can allow MongoDB to remove old data by setting the `expireAfterSeconds` parameter in the collection options. It will automatically remove data older than the specified time in seconds.
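For instance, if we only needed to keep the most recent 30 days of raw ticks, the collection could have been created with an expiration window like this (a sketch; choose a retention period that fits your own requirements):

```javascript
db.createCollection("ticker", {
  timeseries: {
    timeField: "time",
    metaField: "symbol",
  },
  expireAfterSeconds: 2592000, // 30 days
});
```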
Another option is to archive raw data to cold storage for further analysis. Fortunately, MongoDB Atlas has automatic archiving capability to offload the old data in a MongoDB Atlas cluster to cold object storage, such as Amazon S3 or Microsoft Azure Blob Storage. To do that, you can set your archiving rules on the time series collection and it will automatically offload the old data to the cold storage. Online Archive will be available for time-series collections very soon.
Is the currency data already placed in Kafka topics? That’s perfectly fine. You can easily transfer the data in Kafka topics to MongoDB through MongoDB Sink Connector for Kafka. Please check out this article for further details on the integration of Kafka topics and the MongoDB Time Series collection.
In the following posts, we’ll discuss how well-known financial technical indicators can be calculated via windowing functions on time series collections. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Time series collections part 1: generating data for a candlestick chart from time-series data",
"contentType": "Tutorial"
} | Currency Analysis with Time Series Collections #1 — Generating Candlestick Charts Data | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/go/golang-change-streams | created | # Reacting to Database Changes with MongoDB Change Streams and Go
If you've been keeping up with my getting started with Go and MongoDB tutorial series, you'll remember that we've accomplished quite a bit so far. We've had a look at everything from CRUD interaction with the database to data modeling, and more. To play catch up with everything we've done, you can have a look at the following tutorials in the series:
- How to Get Connected to Your MongoDB Cluster with Go
- Creating MongoDB Documents with Go
- Retrieving and Querying MongoDB Documents with Go
- Updating MongoDB Documents with Go
- Deleting MongoDB Documents with Go
- Modeling MongoDB Documents with Native Go Data Structures
- Performing Complex MongoDB Data Aggregation Queries with Go
In this tutorial we're going to explore change streams in MongoDB and how they might be useful, all with the Go programming language (Golang).
Before we take a look at the code, let's take a step back and understand what change streams are and why there's often a need for them.
Imagine this scenario, one of many possible:
You have an application that engages with internet of things (IoT) clients. Let's say that this is a geofencing application and the IoT clients are something that can trigger the geofence as they come in and out of range. Rather than having your application constantly run queries to see if the clients are in range, wouldn't it make more sense to watch in real-time and react when it happens?
With MongoDB change streams, you can create a pipeline to watch for changes on a collection level, database level, or deployment level, and write logic within your application to do something as data comes in based on your pipeline.
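For reference, the Go driver exposes a `Watch()` method at each of these scopes. The following is only a sketch, assuming a connected `*mongo.Client` named `client` like the one created in the examples below, with error handling omitted for brevity:
``` go
// Sketch: opening change streams at different scopes with the Go driver.
collStream, err := client.Database("quickstart").Collection("episodes").Watch(context.TODO(), mongo.Pipeline{}) // one collection
dbStream, err := client.Database("quickstart").Watch(context.TODO(), mongo.Pipeline{})                          // every collection in a database
deploymentStream, err := client.Watch(context.TODO(), mongo.Pipeline{})                                         // the whole deployment
```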
## Creating a Real-Time MongoDB Change Stream with Golang
While there are many possible use-cases for change streams, we're going to continue with the example that we've been using throughout the scope of this getting started series. We're going to continue working with podcast show and podcast episode data.
Let's assume we have the following code to start:
``` go
package main
import (
"context"
"fmt"
"os"
"sync"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func main() {
client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
panic(err)
}
defer client.Disconnect(context.TODO())
database := client.Database("quickstart")
episodesCollection := database.Collection("episodes")
}
```
The above code is a very basic connection to a MongoDB cluster, something that we explored in the How to Get Connected to Your MongoDB Cluster with Go tutorial.
To watch for changes, we can do something like the following:
``` go
episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})
if err != nil {
panic(err)
}
```
The above code will watch for any and all changes to documents within the `episodes` collection. The result is a cursor that we can iterate over indefinitely for data as it comes in.
We can iterate over the cursor and make sense of our data using the following code:
``` go
episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})
if err != nil {
panic(err)
}
defer episodesStream.Close(context.TODO())
for episodesStream.Next(context.TODO()) {
var data bson.M
if err := episodesStream.Decode(&data); err != nil {
panic(err)
}
fmt.Printf("%v\n", data)
}
```
If data were to come in, it might look something like the following:
``` none
map_id:map[_data:825E4EFCB9000000012B022C0100296E5A1004D960EAE47DBE4DC8AC61034AE145240146645F696400645E3B38511C9D4400004117E80004] clusterTime:{1582234809 1} documentKey:map[_id:ObjectID("5e3b38511c9d
4400004117e8")] fullDocument:map[_id:ObjectID("5e3b38511c9d4400004117e8") description:The second episode duration:30 podcast:ObjectID("5e3b37e51c9d4400004117e6") title:Episode #3] ns:map[coll:episodes
db:quickstart] operationType:replace]
```
In the above example, I've done a `Replace` on a particular document in the collection. In addition to information about the data, I also receive the full document that includes the change. The results will vary depending on the `operationType` that takes place.
While the code that we used would work fine, it is currently a blocking operation. If we wanted to watch for changes and continue to do other things, we'd want to use a goroutine for iterating over our change stream cursor.
We could make some changes like this:
``` go
package main
import (
"context"
"fmt"
"os"
"sync"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func iterateChangeStream(routineCtx context.Context, waitGroup *sync.WaitGroup, stream *mongo.ChangeStream) {
defer stream.Close(routineCtx)
defer waitGroup.Done()
for stream.Next(routineCtx) {
var data bson.M
if err := stream.Decode(&data); err != nil {
panic(err)
}
fmt.Printf("%v\n", data)
}
}
func main() {
client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
if err != nil {
panic(err)
}
defer client.Disconnect(context.TODO())
database := client.Database("quickstart")
episodesCollection := database.Collection("episodes")
var waitGroup sync.WaitGroup
episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})
if err != nil {
panic(err)
}
waitGroup.Add(1)
routineCtx, cancelFn := context.WithCancel(context.Background())
// cancelFn lets us stop the change stream; deferring it ensures the context is cleaned up when main exits.
defer cancelFn()
go iterateChangeStream(routineCtx, &waitGroup, episodesStream)
waitGroup.Wait()
}
```
A few things are happening in the above code. We've moved the stream iteration into a separate function to be used in a goroutine. However, running the application would result in it terminating quite quickly because the `main` function would terminate not long after creating the goroutine. To resolve this, we are making use of a `WaitGroup`. In our example, the `main` function will wait until the `WaitGroup` is empty, and the `WaitGroup` only becomes empty when the goroutine calls `Done()` and terminates. Note that we pass a pointer to the `WaitGroup` into the goroutine so that its call to `Done()` decrements the same counter that `main` is waiting on.
Making use of the `WaitGroup` isn't an absolute requirement as there are other ways to keep the application running while watching for changes. However, given the simplicity of this example, it made sense in order to see any changes in the stream.
To keep the `iterateChangeStream` function from running indefinitely, we are creating and passing a context that can be canceled. While we don't demonstrate canceling the function, at least we know it can be done.
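For example, purely as an illustrative sketch, we could cancel the stream from another goroutine after a fixed amount of time. This assumes the `time` package has been added to the imports:
``` go
// Illustrative only: stop watching after 60 seconds by canceling the context.
// Once routineCtx is canceled, stream.Next returns false, the loop in
// iterateChangeStream exits, and waitGroup.Wait() in main returns.
go func() {
    time.Sleep(60 * time.Second)
    cancelFn()
}()
```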
## Complicating the Change Stream with the Aggregation Pipeline
In the previous example, the aggregation pipeline that we used was as basic as you can get. In other words, we were looking for any and all changes that were happening to our particular collection. While this might be good in a lot of scenarios, you'll probably get more out of using a better defined aggregation pipeline.
Take the following for example:
``` go
matchPipeline := bson.D{
{
"$match", bson.D{
{"operationType", "insert"},
{"fullDocument.duration", bson.D{
{"$gt", 30},
}},
},
},
}
episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{matchPipeline})
```
In the above example, we're still watching for changes to the `episodes` collection. However, this time we're only watching for new documents that have a `duration` field greater than 30. Any other insert or other change stream operation won't be detected.
The results of the above code, when a match is found, might look like the following:
``` none
map_id:map[_data:825E4F03CF000000012B022C0100296E5A1004D960EAE47DBE4DC8AC61034AE145240146645F696400645E4F03A01C9D44000063CCBD0004] clusterTime:{1582236623 1} documentKey:map[_id:ObjectID("5e4f03a01c9d
44000063ccbd")] fullDocument:map[_id:ObjectID("5e4f03a01c9d44000063ccbd") description:a quick start into mongodb duration:35 podcast:1234 title:getting started with mongodb] ns:map[coll:episodes db:qui
ckstart] operationType:insert]
```
With change streams, you'll have access to a subset of the MongoDB aggregation pipeline and its operators. You can learn more about what's available in the official documentation.
## Conclusion
You just saw how to use MongoDB change streams in a Golang application using the MongoDB Go driver. As previously pointed out, change streams make it very easy to react to database, collection, and deployment changes without having to constantly query the cluster. This allows you to efficiently plan out aggregation pipelines that respond to changes as they happen in real time.
If you're looking to catch up on the other tutorials in the MongoDB with Go quick start series, you can find them below:
- How to Get Connected to Your MongoDB Cluster with Go
- Creating MongoDB Documents with Go
- Retrieving and Querying MongoDB Documents with Go
- Updating MongoDB Documents with Go
- Deleting MongoDB Documents with Go
- Modeling MongoDB Documents with Native Go Data Structures
- Performing Complex MongoDB Data Aggregation Queries with Go
To bring the series to a close, the next tutorial will focus on transactions with the MongoDB Go driver. | md | {
"tags": [
"Go",
"MongoDB"
],
"pageDescription": "Learn how to use change streams to react to changes to MongoDB documents, databases, and clusters in real-time using the Go programming language.",
"contentType": "Quickstart"
} | Reacting to Database Changes with MongoDB Change Streams and Go | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/building-service-based-atlas-management | created | # Building Service-Based Atlas Cluster Management
## Developer Productivity
MongoDB Atlas is changing the database industry standards when it comes to database provisioning, maintenance, and scaling, as it just works. However, even superheroes like Atlas know that with Great Power Comes Great Responsibility.
For this reason, Atlas provides Enterprise-grade security features for your clusters and a set of user management roles that can be assigned to login users or programmatic API keys.
However, since the management roles were built for a wide range of customer use cases, some customers need more fine-grained permissions for specific teams or user types. Although the management roles are predefined at the moment, with the help of a simple Realm service and the programmatic API we can allow user access to very specific management/provisioning features without exposing a wider sudo-all ability.
To better understand this scenario, I want to focus on the specific use case of database user creation for the application teams. In this scenario, each developer on a team may need their own user and specific database permissions. With the current Atlas user roles, you would need to grant the team a `Cluster Manager Role`, which allows them to change cluster properties as well as pause and resume a cluster. In some cases, this power is unnecessary for your users.
> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
## Proposed Solution
Your developers will submit their requests to a pre-built service which
will authenticate them and request an input for the user description.
Furthermore, the service will validate the input and post it to the
Atlas Admin API without exposing any additional information or API keys.
The user will receive a confirmation that the user was created and ready
to use.
## Work Flow
To make the service more accessible for users I am using a form-based
service called Typeform, you can choose many other available form builders (e.g Google Forms). This form will gather the information and password/secret for the service authentication from the user and pass it to the Realm webhook which will perform the action.
The input is an Atlas Admin API user object that we want to create, looking something like the following object:
``` javascript
{
"databaseName": ,
"password": ,
"roles": ...],
"username":
}
```
For more information please refer to our Atlas Role Based Authentication
documentation.
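For illustration, a filled-in payload could look like the following. The username, password, and role values here are placeholders I've made up, not values required by the service:
``` javascript
{
  "databaseName": "admin",
  "username": "app-team-dev01",
  "password": "aStrong-Generated-Passw0rd",
  "roles": [
    { "databaseName": "sample_mflix", "roleName": "readWrite" }
  ]
}
```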
## Webhook Back End
This section will require you to use an existing Realm Application or
build a new one.
MongoDB Realm is a serverless platform and mobile database. In our case
we will use the following features:
- Realm webhooks
- Realm context HTTP Module
- Realm Values/Secrets
You will also need to configure an Atlas Admin API key for the relevant project and obtain its Project ID. This can be done from your Atlas project URL (e.g., `https://cloud.mongodb.com/v2/#clusters`).
The main part of the Realm application is to hold the Atlas Admin API keys and information as private secure secrets.
This is the webhook configuration that will call our Realm Function each
time the form is sent:
The function below receives the request. Fetch the needed API
information and sends the Atlas Admin API command. The result of which is
returned to the Form.
``` javascript
// This function is the webhook's request handler.
exports = async function(payload, response) {
// Get payload
const body = JSON.parse(payload.body.text());
// Get secrets for the Atlas Admin API
const username = context.values.get("AtlasPublicKey");
const password = context.values.get("AtlasPrivateKey");
const projectID = context.values.get("AtlasGroupId");
//Extract the Atlas user object description
  const userObject = JSON.parse(body.form_response.answers[0].text);
// Database users post command
const postargs = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: 'api/atlas/v1.0/groups/' + projectID + '/databaseUsers',
username: username,
password: password,
headers: {'Content-Type': ['application/json'], 'Accept': ['application/json']},
digestAuth:true,
body: JSON.stringify(userObject)};
var res = await context.http.post(postargs);
console.log(JSON.stringify(res));
// Check status of the user creation and report back to the user.
if (res.statusCode == 201)
{
response.setStatusCode(200)
response.setBody(`Successfully created ${userObject.username}.`);
} else {
// Respond with a malformed request error
response.setStatusCode(400)
response.setBody(`Could not create user ${userObject.username}.`);
}
};
```
Once the webhook is set and ready we can use it as a webhook url input
in the Typeform configuration.
The Realm webhook URL can now be placed in the Typeform webhook section.
Now the submitted data on the form will be forwarded via Webhook
integration to our webhook:
To strengthen the security around our Realm app, we can restrict the allowed domain for the webhook request origin. Go to the Realm application "Manage" > "Settings" > "Allowed Request Origins":
We can test the form now by providing an Atlas Admin API user
object.
If you go to the Atlas UI under the Database Access tab you will see the
created user.
## Summary
Now our developers will be able to create users quickly without being
exposed to any unnecessary privileges or human errors.
The webhook code can be converted to a function that can be called from other webhooks or triggers, allowing us to build sophisticated, controlled, and secure provisioning methods. For example, we can configure a scheduled trigger that pulls any newly created clusters and continuously provisions any newly required users for our applications, or edits any existing users to add a needed new set of permissions.
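As a rough sketch only, assuming the same secret values as the webhook above and the Atlas Admin API v1.0 clusters endpoint, such a scheduled function could start by listing the project's clusters and then apply whatever provisioning logic you need:
``` javascript
// Illustrative sketch for a scheduled trigger: list the project's clusters.
// The actual user-provisioning logic is intentionally left out.
exports = async function() {
  const username = context.values.get("AtlasPublicKey");
  const password = context.values.get("AtlasPrivateKey");
  const projectID = context.values.get("AtlasGroupId");

  const res = await context.http.get({
    scheme: 'https',
    host: 'cloud.mongodb.com',
    path: 'api/atlas/v1.0/groups/' + projectID + '/clusters',
    username: username,
    password: password,
    headers: {'Accept': ['application/json']},
    digestAuth: true
  });

  const clusters = JSON.parse(res.body.text()).results;
  // ... inspect `clusters` here and create or update database users as needed
  return clusters.map(c => c.name);
};
```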
The MongoDB Atlas and Realm platforms can work in great synergy, allowing us to bring our DevOps and development cycles to the next level.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to build Service-Based Atlas Cluster Management webhooks/functionality with Atlas Admin API and MongoDB Realm.",
"contentType": "Article"
} | Building Service-Based Atlas Cluster Management | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/aggregation-expression-builders | created | # Java Aggregation Expression Builders in MongoDB
MongoDB aggregation pipelines allow developers to create rich document retrieval, manipulation, and update processes expressed as a sequence — or pipeline — of composable stages, where the output of one stage becomes the input to the next stage in the pipeline.
With aggregation operations, it is possible to:
* Group values from multiple documents together.
* Reshape documents.
* Perform aggregation operations on the grouped data to return a single result.
* Apply specialized operations to documents such as geographical functions, full text search, and time-window functions.
* Analyze data changes over time.
The aggregation framework has grown since its introduction in MongoDB version 2.2 to — as of version 6.1 — cover over 35 different stages and over 130 different operators.
Working with the MongoDB shell or in tools such as MongoDB Compass, aggregation pipelines are defined as an array of BSON[1] objects, with each object defining a stage in the pipeline. In an online-store system, a simple pipeline to find all orders placed between January 1st 2023 and March 31st 2023, and then provide a count of those orders grouped by product type, might look like:
```JSON
db.orders.aggregate(
[
{
$match:
{
          orderDate: {
            $gte: ISODate("2023-01-01"),
            $lte: ISODate("2023-03-31"),
          },
},
},
{
$group:
{
_id: "$productType",
count: {
$sum: 1
},
},
},
])
```
_Expressions_ give aggregation pipeline stages their ability to manipulate data. They come in four forms:
**Operators**: expressed as objects with a dollar-sign prefix followed by the name of the operator. In the example above, **{$sum : 1}** is an example of an operator incrementing the count of orders for each product type by 1 each time a new order for a product type is found.
**Field Paths**: expressed as strings with a dollar-sign prefix, followed by the field’s path. In the case of embedded objects or arrays, dot-notation can be used to provide the path to the embedded item. In the example above, "**$productType**" is a field path.
**Variables**: expressed with a double dollar-sign prefix, variables can be system or user defined. For example, "**$$NOW**" returns the current datetime value.
**Literal Values**: In the example above, the literal value ‘1’ in **{$sum : 1}** can be considered an expression and could be replaced with — for example — a field path expression.
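To make these four forms concrete, here is a small illustrative stage, reusing the field names from the order example above, that combines all of them: the `$and`, `$eq`, and `$lte` operators, the `$productType` and `$orderDate` field paths, the `$$NOW` system variable, and the literal value "book":
```JSON
{
  $match: {
    $expr: {
      $and: [
        { $eq: ["$productType", "book"] },
        { $lte: ["$orderDate", "$$NOW"] }
      ]
    }
  }
}
```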
In Java applications using the MongoDB native drivers, aggregation pipelines can be defined and executed by directly building equivalent BSON document objects. Our example pipeline above might look like the following when being built in Java using this approach:
```Java
…
MongoDatabase database = mongoClient.getDatabase("Mighty_Products");
MongoCollection<Document> collection = database.getCollection("orders");
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
Bson matchStage = new Document("$match",
        new Document("orderDate",
                new Document("$gte", formatter.parse("2023-01-01"))
                        .append("$lte", formatter.parse("2023-03-31"))));
Bson groupStage = new Document("$group",
new Document("_id", "$productType")
.append("count",
new Document("$sum", 1L)));
collection.aggregate(
Arrays.asList(
matchStage,
groupStage
)
).forEach(doc -> System.out.println(doc.toJson()));
```
The Java code above is perfectly functional and will execute as intended, but it does highlight a couple of issues:
* When creating the code, we had to understand the format of the corresponding BSON documents. We were not able to utilize IDE features such as code completion and discovery.
* Any mistakes in the formatting of the documents being created, or the parameters and data types being passed to its various operators, would not be identified until we actually try to run the code.
* Although our example above is relatively simple, in more complex pipelines, the level of indentation and nesting required in the corresponding document building code can lead to readability issues.
As an alternative to building BSON document objects, the MongoDB Java driver also defines a set of “builder” classes with static utility methods to simplify the execution of many operations in MongoDB, including the creation and execution of aggregation pipeline stages. Using the builder classes allows developers to discover more errors at compile rather than run time and to use code discovery and completion features in IDEs. Recent versions of the Java driver have additionally added extended support for expression operators when using the aggregation builder classes, allowing pipelines to be written with typesafe methods and using fluent coding patterns.
Using this approach, the above code could be written as:
```Java
MongoDatabase database = mongoClient.getDatabase("Mighty_Products");
MongoCollection<Document> collection = database.getCollection("orders");
var orderDate = current().getDate("orderDate");
Bson matchStage = match(expr(orderDate.gte(of(Instant.parse("2023-01-01T00:00:00.000Z")))
        .and(orderDate.lte(of(Instant.parse("2023-03-31T00:00:00.000Z"))))));
Bson groupStage = group(current().getString("productType"), sum("count", 1L));
collection.aggregate(
Arrays.asList(
matchStage,
groupStage
)
).forEach(doc -> System.out.println(doc.toJson()));
```
In the rest of this article, we’ll walk through an example aggregation pipeline using the aggregation builder classes and methods and highlight some of the new aggregation expression operator support.
## The ADSB air-traffic control application
Our aggregation pipeline example is based on a database collecting and analyzing Air Traffic Control data transmitted by aircraft flying in and out of Denver International Airport. The data is collected using a receiver built using a Raspberry Pi and USB Software Defined Radios (SDRs) using software from the rather excellent Stratux open-source project.
These cheap-to-build receivers have become popular with pilots of light aircraft in recent years as it allows them to project the location of nearby aircraft within the map display of tablet and smartphone-based navigation applications such as Foreflight, helping to avoid mid-air collisions.
In our application, the data received from the Stratux receiver is combined with aircraft reference data from the Opensky Network to give us documents that look like this:
```JSON
{
"_id": {
"$numberLong": "11262117"
},
"model": "B737",
"tailNum": "N8620H",
"positionReports":
{
"callsign": "SWA962",
"alt": {
"$numberLong": "12625"
},
"lat": {
"$numberDecimal": "39.782833"
},
"lng": {
"$numberDecimal": "-104.49988"
},
"speed": {
"$numberLong": "283"
},
"track": {
"$numberLong": "345"
},
"vvel": {
"$numberLong": "-1344"
},
"timestamp": {
"$date": "2023-01-31T23:28:26.294Z"
}
},
{
"callsign": "SWA962",
"alt": {
"$numberLong": "12600"
},
"lat": {
"$numberDecimal": "39.784744"
},
"lng": {
"$numberDecimal": "-104.50058"
},
"speed": {
"$numberLong": "283"
},
"track": {
"$numberLong": "345"
},
"vvel": {
"$numberLong": "-1344"
},
"timestamp": {
"$date": "2023-01-31T23:28:26.419Z"
}
},
{
"callsign": "SWA962",
"alt": {
"$numberLong": "12600"
},
"lat": {
"$numberDecimal": "39.78511"
},
"lng": {
"$numberDecimal": "-104.50071"
},
"speed": {
"$numberLong": "283"
},
"track": {
"$numberLong": "345"
},
"vvel": {
"$numberLong": "-1344"
},
"timestamp": {
"$date": "2023-01-31T23:28:26.955Z"
}
}
]
}
```
The “tailNum” field provides the unique registration number of the aircraft and doesn’t change between position reports. The position reports are in an array[2], with each entry giving the geographical coordinates of the aircraft, its altitude, speed (horizontal and vertical), heading, and a timestamp. The position reports also give the callsign of the flight the aircraft was operating at the time it broadcast the position report. This can vary if the aircraft’s position reports were picked up as it flew into Denver, and then again later as it flew out of Denver operating a different flight. In the sample above, aircraft N8620H, a Boeing 737, was operating flight SWA962 — a Southwest Airlines flight. It was flying at a speed of 283 knots, on a heading of 345 degrees, descending through 12,600 feet at 1344 ft/minute.
Using data collected over a 36-hour period, our collection contains information on over 500 different aircraft and over half a million position reports. We want to build an aggregation pipeline that will show the number of different aircraft operated by United Airlines grouped by aircraft type.
## Defining the aggregation pipeline
The aggregation pipeline that we will run on our data will consist of three stages:
The first — a **match** stage — will find all aircraft that transmitted a United Airlines callsign between two dates.
Next, we will carry out a **group** stage that takes the aircraft documents found by the match stage and creates a new set of documents — one for each model of aircraft found during the match stage, with each document containing a list of all the tail numbers of aircraft of that type found during the match stage.
Finally, we carry out a **project** stage which is used to reshape the data in each document into our final desired format.
### Stage 1: $match
A match stage carries out a query to filter the documents being passed to the next stage in the pipeline. A match stage is typically used as one of the first stages in the pipeline in order to keep the number of documents the pipeline has to work with — and therefore its memory footprint — to a reasonable size.
In our pipeline, the match stage will select all aircraft documents containing at least one position report with a United Airlines callsign (United callsigns all start with the three-letter prefix “UAL”) and with a timestamp falling within a selected date range. The BSON representation of the resulting pipeline stage looks like:
```JSON
{
$match: {
positionReports: {
$elemMatch: {
callsign: /^UAL/,
      $and: [
{
timestamp: {
$gte: ISODate(
"2023-01-31T12:00:00.000-07:00"
)
}
},
{
timestamp: {
$lt: ISODate(
"2023-02-01T00:00:00.000-07:00"
)
}
}
]
}
}
}
}
```
The **$elemMatch** operator specifies that the query criteria we provide must all occur within a single entry in an array to generate a match, so an aircraft document will only match if it contains at least one position report where the callsign starts with “UAL” and the timestamp is between 12:00 on January 31st and 00:00 on February 1st in the Mountain time zone.
In Java, after using either Maven or Gradle to add the MongoDB Java drivers as a dependency within our project, we could define this stage by building an equivalent BSON document object:
```Java
//Create the from and to dates for the match stage
String sFromDate = "2023-01-31T12:00:00.000-07:00";
TemporalAccessor ta = DateTimeFormatter.ISO_INSTANT.parse(sFromDate);
Instant fromInstant = Instant.from(ta);
Date fromDate = Date.from(fromInstant);
String sToDate = "2023-02-01T00:00:00.000-07:00";
ta = DateTimeFormatter.ISO_INSTANT.parse(sToDate);
Instant toInstant = Instant.from(ta);
Date toDate = Date.from(toInstant);
Document matchStage = new Document("$match",
new Document("positionReports",
new Document("$elemMatch",
new Document("callsign", Pattern.compile("^UAL"))
.append("$and", Arrays.asList(
new Document("timestamp", new Document("$gte", fromDate)),
new Document("timestamp", new Document("$lt", toDate))
))
)
)
);
```
As we saw with the earlier online store example, whilst this code is perfectly functional, we did need to understand the structure of the corresponding BSON document, and any mistakes we made in constructing it would only be discovered at run-time.
As an alternative, after adding the necessary import statements to give our code access to the aggregation builder and expression operator static methods, we can build an equivalent pipeline stage with the following code:
```Java
import static com.mongodb.client.model.Aggregates.*;
import static com.mongodb.client.model.Filters.*;
import static com.mongodb.client.model.Projections.*;
import static com.mongodb.client.model.Accumulators.*;
import static com.mongodb.client.model.mql.MqlValues.*;
//...
//Create the from and to dates for the match stage
String sFromDate = "2023-01-31T12:00:00.000-07:00";
TemporalAccessor ta = DateTimeFormatter.ISO_INSTANT.parse(sFromDate);
Instant fromInstant = Instant.from(ta);
String sToDate = "2023-02-01T00:00:00.000-07:00";
ta = DateTimeFormatter.ISO_INSTANT.parse(sToDate);
Instant toInstant = Instant.from(ta);
var positionReports = current().getArray("positionReports");
Bson matchStage = match(expr(
positionReports.any(positionReport -> {
var callsign = positionReport.getString("callsign");
var ts = positionReport.getDate("timestamp");
return callsign
.substr(0,3)
.eq(of("UAL"))
.and(ts.gte(of(fromInstant)))
.and(ts.lt(of(toInstant)));
})
));
```
There’s a couple of things worth noting in this code:
Firstly, the expressions operators framework gives us access to a method **current()** which returns the document currently being processed by the aggregation pipeline. We use it initially to get the array of position reports from the current document.
Next, although we’re using the **match()** aggregation builder method to create our match stage, to better demonstrate the use of the expression operators framework and its associated coding style, we’ve used the **expr()**[3] filter builder method to build an expression that uses the **any()** array expression operator to iterate through each entry in the positionReports array, looking for any that matches our predicate — i.e., that has a callsign field starting with the letters “UAL” and a timestamp falling within our specified date/time range. This is equivalent to what the **$elemMatch** operator in our original BSON document-based pipeline stage was doing.
Also, when using the expression operators to retrieve fields, we’ve used type-specific methods to indicate the type of the expected return value. **callsign** was retrieved using **getString()**, while the timestamp variable **ts** was retrieved using **getDate()**. This allows IDEs such as IntelliJ and Visual Studio Code to perform type checking, and for subsequent code completion to be tailored to only show methods and documentation relevant to the returned type. This can lead to faster and less error-prone coding.
![faster and less error-prone coding
Finally, note that in building the predicate for the **any()** expression operator, we’ve used a fluent coding style and idiomatic coding elements, such as lambdas, that many Java developers will be familiar with and more comfortable using than the MongoDB-specific approach needed to directly build BSON documents.
### Stage 2: $group
Having filtered our document list to only include aircraft operated by United Airlines in our match stage, in the second stage of the pipeline, we carry out a group operation to begin the task of counting the number of aircraft of each model. The BSON document for this stage looks like:
```JSON
{
$group:
{
_id: "$model",
aircraftSet: {
$addToSet: "$tailNum",
},
},
}
```
In this stage, we are specifying that we want to group the document data by the “model” field and that in each resulting document, we want an array called “aircraftSet” containing each unique tail number of observed aircraft of that model type. The documents output from this stage look like:
```JSON
{
"_id": "B757",
"aircraftSet":
"N74856",
"N77865",
"N17104",
"N19117",
"N14120",
"N57855",
"N77871"
]
}
```
The corresponding Java code for the stage looks like:
```java
Bson groupStage = group(current().getString("model"),
addToSet("aircraftSet", current().getString("tailNum")));
```
As before, we’ve used the expressions framework **current()** method to access the document currently being processed by the pipeline. The aggregation builders **addToSet()** accumulator method is used to ensure only unique tail numbers are added to the “aircraftSet” array.
### Stage 3: $project
In the third and final stage of our pipeline, we use a project stage to:
* Rename the “_id” field introduced by the group stage back to “model.”
* Swap the array of tail numbers for the number of entries in the array.
* Add a new field, “airline,” populating it with the literal value “United.”
* Add a field named “manufacturer” and use a $switch conditional operator to populate it with:
* “AIRBUS” if the aircraft model starts with “A.”
* “BOEING” if it starts with a “B.”
* “CANADAIR” if it starts with a “C.”
* “EMBRAER” if it starts with an “E.”
* “MCDONNELL DOUGLAS” if it starts with an “M.”
* “UNKNOWN” in all other cases.
The BSON document for this stage looks like:
```JSON
{
$project: {
airline: "United",
model: "$_id",
count: {
$size: "$aircraftSet",
},
manufacturer: {
$let: {
vars: {
manufacturerPrefix: {
            $substrBytes: ["$_id", 0, 1],
},
},
in: {
$switch: {
branches: [
{
case: {
$eq: [
"$$manufacturerPrefix",
"A",
],
},
then: "AIRBUS",
},
{
case: {
$eq: [
"$$manufacturerPrefix",
"B",
],
},
then: "BOEING",
},
{
case: {
$eq: [
"$$manufacturerPrefix",
"C",
],
},
then: "CANADAIR",
},
{
case: {
$eq: [
"$$manufacturerPrefix",
"E",
],
},
then: "EMBRAER",
},
{
case: {
$eq: [
"$$manufacturerPrefix",
"M",
],
},
then: "MCDONNELL DOUGLAS",
},
],
default: "UNKNOWN",
},
},
},
},
_id: "$$REMOVE",
},
}
```
The resulting output documents look like:
```JSON
{
"airline": "United",
"model": "B777",
"count": 5,
"Manufacturer": "BOEING"
}
```
The Java code for this stage looks like:
```java
Bson projectStage = project(fields(
computed("airline", "United"),
computed("model", current().getString("_id")),
computed("count", current().getArray("aircraftSet").size()),
computed("manufacturer", current()
.getString("_id")
.substr(0, 1)
.switchStringOn(s -> s
.eq(of("A"), (m -> of("AIRBUS")))
.eq(of("B"), (m -> of("BOEING")))
.eq(of("C"), (m -> of("CANADAIR")))
.eq(of("E"), (m -> of("EMBRAER")))
.eq(of("M"), (m -> of("MCDONNELL DOUGLAS")))
.defaults(m -> of("UNKNOWN"))
)),
excludeId()
));
```
Note again the use of type-specific field accessor methods to get the aircraft model type (string) and aircraftSet (array of type MqlDocument). In determining the aircraft manufacturer, we’ve again used a fluent coding style to conditionally map the first letter of the model to the manufacturer’s name.
With our three pipeline stages now defined, we can now run the pipeline against our collection:
```java
aircraftCollection.aggregate(
Arrays.asList(
matchStage,
groupStage,
projectStage
)
).forEach(doc -> System.out.println(doc.toJson()));
```
If all goes to plan, this should produce output to the console that looks like:
```JSON
{"airline": "United", "model": "B757", "count": 7, "manufacturer": "BOEING"}
{"airline": "United", "model": "B777", "count": 5, "manufacturer": "BOEING"}
{"airline": "United", "model": "A320", "count": 21, "manufacturer": "AIRBUS"}
{"airline": "United", "model": "B737", "count": 45, "manufacturer": "BOEING"}
```
In this article, we've shown examples of how expression operators and aggregation builder methods in the latest versions of the MongoDB Java drivers can be used to construct aggregation pipelines using a fluent, idiomatic style of Java programming that can utilize autocomplete functionality in IDEs and type-safety compiler features. This can result in code that is more robust and more familiar in style to many Java developers. The use of the builder classes also places less dependence on developers having an extensive understanding of the BSON document format for aggregation pipeline stages.
More information on the use of aggregation builder and expression operator classes can be found in the official MongoDB Java Driver documentation.
The example Java code, aggregation pipeline BSON, and a JSON export of the data used in this article can be found in Github.
*More information*
[1] MongoDB uses Binary JSON (BSON) to store data and define operations. BSON is a superset of JSON, stored in binary format and allowing data types over and above those defined in the JSON standard. Get more information on BSON.
[2] It should be noted that storing the position reports in an array for each aircraft like this works well for the purposes of our example, but it’s probably not the best design for a production-grade system as — over time — the arrays for some aircraft could become excessively large. A really good discussion of massive arrays and other anti-patterns, and how to handle them, is available over at Developer Center.
[3] The use of expressions in Aggregation Pipeline Match stages can sometimes cause some confusion. For a discussion of this, and aggregations in general, Paul Done’s excellent eBook, “Practical MongoDB Aggregations,” is highly recommended. | md | {
"tags": [
"Java"
],
"pageDescription": "Learn how expression builders can make coding aggregation pipelines in Java applications faster and more reliable.",
"contentType": "Tutorial"
} | Java Aggregation Expression Builders in MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/free-atlas-cluster | created | # Getting Your Free MongoDB Atlas Cluster
**You probably already know that MongoDB Atlas is MongoDB as a service in the public cloud of your choice but did you know we also offer a free forever cluster? In this Quick Start, we'll show you why you should get one and how to create one.**
MongoDB Atlas's Free Tier clusters - which are also known as M0 Sandboxes - are limited to only 512MB of storage but it's more than enough for a pet project or to learn about MongoDB with our free MongoDB University courses.
The only restriction on them is that they are available in just a few regions for each of our three cloud providers: currently, there are six on AWS, five on Azure, and four on Google Cloud Platform.
In this tutorial video, I will show you how to create an account. Then I'll show you how to create your first 3-node cluster and populate it with sample data.
:youtube[]{vid=rPqRyYJmx2g}
Now that you understand the basics of MongoDB Atlas, you may want to explore some of our advanced features that are not available in the Free Tier clusters:
- Peering your MongoDB Clusters with your AWS, GCP or Azure machines is only available for dedicated instances (M10 at least),
- LDAP Authentication and Authorization,
- AWS PrivateLink.
Our new Lucene-based Full-Text Search engine is now available for free tier clusters directly.
| md | {
"tags": [
"Atlas",
"Azure",
"Google Cloud"
],
"pageDescription": "Want to know the quickest way to start with MongoDB? It begins with getting yourself a free MongoDB Atlas Cluster so you can leverage your learning",
"contentType": "Quickstart"
} | Getting Your Free MongoDB Atlas Cluster | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/building-autocomplete-form-element-atlas-search-javascript | created |
Recipe:
| md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to create an autocomplete form element that leverages the natural language processing of MongoDB Atlas Search.",
"contentType": "Tutorial"
} | Building an Autocomplete Form Element with Atlas Search and JavaScript | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-aggregation-pipeline | created | # Java - Aggregation Pipeline
## Updates
The MongoDB Java quickstart repository is available on GitHub.
### February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
### November 14th, 2023
- Update to Java 17
- Update Java Driver to 4.11.1
- Update mongodb-crypt to 1.8.0
### March 25th, 2021
- Update Java Driver to 4.2.2.
- Added Client Side Field Level Encryption example.
### October 21st, 2020
- Update Java Driver to 4.1.1.
- The Java Driver logging is now enabled via the popular SLF4J API so, I added logback in the `pom.xml` and a configuration file `logback.xml`.
## What's the Aggregation Pipeline?
The aggregation pipeline is a framework for data aggregation modeled on the concept of data processing pipelines, just like the "pipe" in the Linux Shell. Documents enter a multi-stage pipeline that transforms the documents into aggregated results.
It's the most powerful way to work with your data in MongoDB. It will allow us to make advanced queries like grouping documents, manipulate arrays, reshape document models, etc.
Let's see how we can harvest this power using Java.
## Getting Set Up
I will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just update it if you already have it:
``` sh
git clone https://github.com/mongodb-developer/java-quick-start
```
>If you didn't set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
## First Example with Zips
In the MongoDB Sample Dataset in MongoDB Atlas, let's explore a bit the `zips` collection in the `sample_training` database.
``` javascript
MongoDB Enterprise Cluster0-shard-0:PRIMARY> db.zips.find({city:"NEW YORK"}).limit(2).pretty()
{
"_id" : ObjectId("5c8eccc1caa187d17ca72f8a"),
"city" : "NEW YORK",
"zip" : "10001",
"loc" : {
"y" : 40.74838,
"x" : 73.996705
},
"pop" : 18913,
"state" : "NY"
}
{
"_id" : ObjectId("5c8eccc1caa187d17ca72f8b"),
"city" : "NEW YORK",
"zip" : "10003",
"loc" : {
"y" : 40.731253,
"x" : 73.989223
},
"pop" : 51224,
"state" : "NY"
}
```
As you can see, we have one document for each zip code in the USA and for each, we have the associated population.
To calculate the population of New York, I would have to sum the population of each zip code to get the population of the entire city.
Let's try to find the 3 biggest cities in the state of Texas. Let's design this on paper first.
- I don't need to work with the entire collection. I need to filter only the cities in Texas.
- Once this is done, I can group all the zip codes from the same city together to get the total population.
- Then I can sort my cities by population in descending order.
- Finally, I can keep the first 3 cities of my list.
The easiest way to build this pipeline in MongoDB is to use the aggregation pipeline builder that is available in MongoDB Compass or in MongoDB Atlas in the `Collections` tab.
Once this is done, you can export your pipeline to Java using the export button.
After a little code refactoring, here is what I have:
``` java
/**
* find the 3 most densely populated cities in Texas.
* @param zips sample_training.zips collection from the MongoDB Sample Dataset in MongoDB Atlas.
*/
private static void threeMostPopulatedCitiesInTexas(MongoCollection<Document> zips) {
Bson match = match(eq("state", "TX"));
Bson group = group("$city", sum("totalPop", "$pop"));
Bson project = project(fields(excludeId(), include("totalPop"), computed("city", "$_id")));
Bson sort = sort(descending("totalPop"));
Bson limit = limit(3);
List<Document> results = zips.aggregate(List.of(match, group, project, sort, limit)).into(new ArrayList<>());
System.out.println("==> 3 most densely populated cities in Texas");
results.forEach(printDocuments());
}
```
The MongoDB driver provides a lot of helpers to make the code easy to write and to read.
As you can see, I solved this problem with:
- A $match stage to filter my documents and keep only the zip code in Texas,
- A $group stage to regroup my zip codes in cities,
- A $project stage to rename the field `_id` in `city` for a clean output (not mandatory but I'm classy),
- A $sort stage to sort by population descending,
- A $limit stage to keep only the 3 most populated cities.
Here is the output we get:
``` json
==> 3 most densely populated cities in Texas
{
"totalPop": 2095918,
"city": "HOUSTON"
}
{
"totalPop": 940191,
"city": "DALLAS"
}
{
"totalPop": 811792,
"city": "SAN ANTONIO"
}
```
In MongoDB 4.2, there are 30 different aggregation pipeline stages that you can use to manipulate your documents. If you want to know more, I encourage you to follow this course on MongoDB University: M121: The MongoDB Aggregation Framework.
## Second Example with Posts
This time, I'm using the collection `posts` in the same database.
``` json
MongoDB Enterprise Cluster0-shard-0:PRIMARY> db.posts.findOne()
{
"_id" : ObjectId("50ab0f8bbcf1bfe2536dc3f9"),
"body" : "Amendment I\n
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.\n
\nAmendment II\n
\nA well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.\n
\nAmendment III\n
\nNo Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.\n
\nAmendment IV\n
\nThe right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.\n
\nAmendment V\n
\nNo person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offence to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.\n
\n\nAmendment VI\n
\nIn all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.\n
\nAmendment VII\n
\nIn Suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved, and no fact tried by a jury, shall be otherwise re-examined in any Court of the United States, than according to the rules of the common law.\n
\nAmendment VIII\n
\nExcessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.\n
\nAmendment IX\n
\nThe enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.\n
\nAmendment X\n
\nThe powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.\"\n
\n",
"permalink" : "aRjNnLZkJkTyspAIoRGe",
"author" : "machine",
"title" : "Bill of Rights",
"tags" :
"watchmaker",
"santa",
"xylophone",
"math",
"handsaw",
"dream",
"undershirt",
"dolphin",
"tanker",
"action"
],
"comments" : [
{
"body" : "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum",
"email" : "[email protected]",
"author" : "Santiago Dollins"
},
{
"body" : "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum",
"email" : "[email protected]",
"author" : "Omar Bowdoin"
}
],
"date" : ISODate("2012-11-20T05:05:15.231Z")
}
```
This collection of 500 posts has been generated artificially, but it contains arrays and I want to show you how we can manipulate arrays in a pipeline.
Let's try to find the three most popular tags and for each tag, I also want the list of post titles they are tagging.
Here is my solution in Java.
``` java
/**
* find the 3 most popular tags and their post titles
* @param posts sample_training.posts collection from the MongoDB Sample Dataset in MongoDB Atlas.
*/
private static void threeMostPopularTags(MongoCollection<Document> posts) {
Bson unwind = unwind("$tags");
Bson group = group("$tags", sum("count", 1L), push("titles", "$title"));
Bson sort = sort(descending("count"));
Bson limit = limit(3);
Bson project = project(fields(excludeId(), computed("tag", "$_id"), include("count", "titles")));
List<Document> results = posts.aggregate(List.of(unwind, group, sort, limit, project)).into(new ArrayList<>());
System.out.println("==> 3 most popular tags and their posts titles");
results.forEach(printDocuments());
}
```
Here I'm using the very useful $unwind stage to break down my array of tags.
It allows me in the following $group stage to group my tags, count the posts and collect the titles in a new array `titles`.
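To picture what `$unwind` does, a single post with `tags: ["toad", "forest"]` would conceptually leave that stage as two documents, one per tag (other fields omitted here for brevity):
``` json
{ "title": "Bill of Rights", "tags": "toad" }
{ "title": "Bill of Rights", "tags": "forest" }
```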
Here is the final output I get.
``` json
==> 3 most popular tags and their posts titles
{
"count": 8,
"titles":
"Gettysburg Address",
"US Constitution",
"Bill of Rights",
"Gettysburg Address",
"Gettysburg Address",
"Declaration of Independence",
"Bill of Rights",
"Declaration of Independence"
],
"tag": "toad"
}
{
"count": 8,
"titles": [
"Bill of Rights",
"Gettysburg Address",
"Bill of Rights",
"Bill of Rights",
"Declaration of Independence",
"Declaration of Independence",
"Bill of Rights",
"US Constitution"
],
"tag": "forest"
}
{
"count": 8,
"titles": [
"Bill of Rights",
"Declaration of Independence",
"Declaration of Independence",
"Gettysburg Address",
"US Constitution",
"Bill of Rights",
"US Constitution",
"US Constitution"
],
"tag": "hair"
}
```
As you can see, some titles are repeated. As I said earlier, the collection was generated, so the post titles are not unique. I could solve this "problem" by using the $addToSet operator instead of the $push one if this was really an issue.
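Concretely, that would only mean swapping the accumulator in the `$group` stage, something like this:
``` java
// $addToSet keeps each title at most once per tag, unlike $push.
// (Requires a static import of com.mongodb.client.model.Accumulators.addToSet.)
Bson group = group("$tags", sum("count", 1L), addToSet("titles", "$title"));
```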
## Final Code
``` java
package com.mongodb.quickstart;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import org.bson.conversions.Bson;
import org.bson.json.JsonWriterSettings;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import static com.mongodb.client.model.Accumulators.push;
import static com.mongodb.client.model.Accumulators.sum;
import static com.mongodb.client.model.Aggregates.*;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.*;
import static com.mongodb.client.model.Sorts.descending;
public class AggregationFramework {
public static void main(String[] args) {
String connectionString = System.getProperty("mongodb.uri");
try (MongoClient mongoClient = MongoClients.create(connectionString)) {
MongoDatabase db = mongoClient.getDatabase("sample_training");
MongoCollection<Document> zips = db.getCollection("zips");
MongoCollection<Document> posts = db.getCollection("posts");
threeMostPopulatedCitiesInTexas(zips);
threeMostPopularTags(posts);
}
}
/**
* find the 3 most densely populated cities in Texas.
*
* @param zips sample_training.zips collection from the MongoDB Sample Dataset in MongoDB Atlas.
*/
private static void threeMostPopulatedCitiesInTexas(MongoCollection<Document> zips) {
Bson match = match(eq("state", "TX"));
Bson group = group("$city", sum("totalPop", "$pop"));
Bson project = project(fields(excludeId(), include("totalPop"), computed("city", "$_id")));
Bson sort = sort(descending("totalPop"));
Bson limit = limit(3);
List<Document> results = zips.aggregate(List.of(match, group, project, sort, limit)).into(new ArrayList<>());
System.out.println("==> 3 most densely populated cities in Texas");
results.forEach(printDocuments());
}
/**
* find the 3 most popular tags and their post titles
*
* @param posts sample_training.posts collection from the MongoDB Sample Dataset in MongoDB Atlas.
*/
private static void threeMostPopularTags(MongoCollection<Document> posts) {
Bson unwind = unwind("$tags");
Bson group = group("$tags", sum("count", 1L), push("titles", "$title"));
Bson sort = sort(descending("count"));
Bson limit = limit(3);
Bson project = project(fields(excludeId(), computed("tag", "$_id"), include("count", "titles")));
List<Document> results = posts.aggregate(List.of(unwind, group, sort, limit, project)).into(new ArrayList<>());
System.out.println("==> 3 most popular tags and their posts titles");
results.forEach(printDocuments());
}
private static Consumer<Document> printDocuments() {
return doc -> System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));
}
}
```
## Wrapping Up
The aggregation pipeline is very powerful. We have just scratched the surface with these two examples but trust me if I tell you that it's your best ally if you can master it.
>I encourage you to follow the M121 course on MongoDB University to become an aggregation pipeline jedi.
>
>If you want to learn more and deepen your knowledge faster, I recommend you check out the M220J: MongoDB for Java Developers training available for free on MongoDB University.
In the next blog post, I will explain Change Streams in Java.
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to use the Aggregation Pipeline using the MongoDB Java Driver.",
"contentType": "Quickstart"
} | Java - Aggregation Pipeline | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/serverless-with-cloud-run-mongodb-atlas | created | # Serverless MEAN Stack Applications with Cloud Run and MongoDB Atlas
## Plea and the Pledge: Truly Serverless
As modern application developers, we’re juggling many priorities: performance, flexibility, usability, security, reliability, and maintainability. On top of that, we’re handling dependencies, configuration, and deployment of multiple components in multiple environments and sometimes multiple repositories as well. And then we have to keep things secure and simple. Ah, the nightmare!
This is the reason we love serverless computing. Serverless allows developers to focus on the thing they like to do the most—development—and leave the rest of the attributes, including infrastructure and maintenance, to the platform offerings.
In this read, we’re going to see how Cloud Run and MongoDB come together to enable a completely serverless MEAN stack application development experience. We'll learn how to build a serverless MEAN application with Cloud Run and MongoDB Atlas, the multi-cloud developer data platform by MongoDB.
### Containerized deployments with Cloud Run
All serverless platforms offer exciting capabilities:
* Event-driven function (not a hard requirement though)
* No-infrastructure maintenance
* Usage-based pricing
* Auto-scaling capabilities
Cloud Run stands out from the pack by enabling us to:
* Package code in multiple stateless containers that are request-aware and invoke it via HTTP requests
* Only be charged for the exact resources you use
* Support any programming language or any operating system library of your choice, or any binary
Check this link for more features in full context.
However, many serverless models overlook the fact that traditional databases are not managed. You need to manually provision infrastructure (vertical scaling) or add more servers (horizontal scaling) to scale the database. This introduces a bottleneck in your serverless architecture and can lead to performance issues.
### Deploy a serverless database with MongoDB Atlas
MongoDB launched serverless instances, a new fully managed, serverless database deployment in Atlas to solve this problem. With serverless instances you never have to think about infrastructure — simply deploy your database and it will scale up and down seamlessly based on demand — requiring no hands-on management. And the best part, you will only be charged for the operations you run. To make our architecture truly serverless, we'll combine Cloud Run and MongoDB Atlas capabilities.
## What's the MEAN stack?
The MEAN stack is a technology stack for building full-stack web applications entirely with JavaScript and JSON. The MEAN stack is composed of four main components—MongoDB, Express, Angular, and Node.js.
* **MongoDB** is responsible for data storage.
* **Express.js** is a Node.js web application framework for building APIs.
* **Angular** is a client-side JavaScript platform.
* **Node.js** is a server-side JavaScript runtime environment. The server uses the MongoDB Node.js driver to connect to the database and retrieve and store data.
## Steps for deploying truly serverless MEAN stack apps with Cloud Run and MongoDB
In the following sections, we’ll provision a new MongoDB serverless instance, connect a MEAN stack web application to it, and finally, deploy the application to Cloud Run.
### 1. Create the database
Before you begin, get started with MongoDB Atlas on Google Cloud.
Once you sign up, click the “Build a Database” button to create a new serverless instance. Select the following configuration:
Once your serverless instance is provisioned, you should see it up and running.
Click on the “Connect” button to add a connection IP address and a database user.
For this blog post, we’ll use the “Allow Access from Anywhere” setting. MongoDB Atlas comes with a set of security and access features. You can learn more about them in the security features documentation article.
Use credentials of your choice for the database username and password. Once these steps are complete, you should see the following:
Proceed by clicking on the “Choose a connection method” button and then selecting “Connect your application”.
Copy the connection string you see and replace the password with your own. We’ll use that string to connect to our database in the following sections.
### 2. Set up a Cloud Run project
First, sign in to Cloud Console, create a new project, or reuse an existing one.
Remember the Project ID for the project you created. The Cloud Run codelab (https://codelabs.developers.google.com/codelabs/cloud-run-hello#1) shows how to create a new project in Google Cloud.
Then, enable Cloud Run API from Cloud Shell:
* Activate Cloud Shell from the Cloud Console. Simply click Activate Cloud Shell.
* Use the below command:
`gcloud services enable run.googleapis.com`
We will be using Cloud Shell and Cloud Shell Editor for code references. To access Cloud Shell Editor, click Open Editor from the Cloud Shell Terminal:
Finally, we need to clone the MEAN stack project we’ll be deploying.
We’ll deploy an employee management web application. The REST API is built with Express and Node.js; the web interface, with Angular; and the data will be stored in the MongoDB Atlas instance we created earlier.
Clone the project repository by executing the following command in the Cloud Shell Terminal:
`git clone https://github.com/mongodb-developer/mean-stack-example.git`
In the following sections, we will deploy a couple of services—one for the Express REST API and one for the Angular web application.
### 3. Deploy the Express and Node.js REST API
First, we’ll deploy a Cloud Run service for the Express REST API.
The most important file for our deployment is the Docker configuration file. Let’s take a look at it:
**mean-stack-example/server/Dockerfile**
```
FROM node:17-slim
WORKDIR /usr/app
COPY ./ /usr/app
# Install dependencies and build the project.
RUN npm install
RUN npm run build
# Run the web service on container startup.
CMD ["node", "dist/server.js"]
```
The configuration sets up Node.js, and copies and builds the project. When the container starts, the command “node dist/server.js” starts the service.
To start a new Cloud Run deployment, click on the Cloud Run icon on the left sidebar:
Then, click on the Deploy to Cloud Run icon:
Fill in the service configuration as follows:
* Service name: node-express-api
* Deployment platform: Cloud Run (fully managed)
* Region: Select a region close to your database region to reduce latency
* Authentication: Allow unauthenticated invocations
Under Revision Settings, click on Show Advanced Settings to expand them:
* Container port: 5200
* Environment variables. Add the following key-value pair and make sure you add the connection string for your own MongoDB Atlas deployment:
`ATLAS_URI: mongodb+srv://<username>:<password>@sandbox.pv0l7.mongodb.net/meanStackExample?retryWrites=true&w=majority`
For the Build environment, select Cloud Build.
Finally, in the Build Settings section, select:
* Builder: Docker
* Docker: mean-stack-example/server/Dockerfile
Click the Deploy button and then Show Detailed Logs to follow the deployment of your first Cloud Run service!
After the build has completed, you should see the URL of the deployed service:
Open the URL and append ‘/employees’ to the end. You should see an empty array because currently, there are no documents in the database. Let’s deploy the user interface so we can add some!
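If you prefer to check the API from a script instead of the browser, a quick sketch like this also works (the service URL is a placeholder for your own, and the built-in `fetch` assumes Node.js 18+):
```
// Hypothetical smoke test for the deployed REST API.
const serviceUrl = "https://node-express-api-xxxxxxxx-uc.a.run.app";

fetch(`${serviceUrl}/employees`)
  .then((response) => response.json())
  .then((employees) => console.log(employees)) // [] until we add documents
  .catch(console.error);
```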
### 4. Deploy the Angular web application
Our Angular application is in the client directory. To deploy it, we’ll use the Nginx server and Docker.
> As an aside, there is also an option to use Firebase Hosting for your Angular application deployment, since it can serve your content directly from a CDN (content delivery network).
Let’s take a look at the configuration files:
**mean-stack-example/client/nginx.conf**
```
events{}
http {
include /etc/nginx/mime.types;
server {
listen 8080;
server_name 0.0.0.0;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}
}
```
In the Nginx configuration, we specify the port the server listens on (8080) and the starting file (`index.html`).
**mean-stack-example/client/Dockerfile**
```
FROM node:17-slim AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
# Install dependencies and copy them to the container
RUN npm install
COPY . .
# Build the Angular application for production
RUN npm run build --prod
# Configure the nginx web server
FROM nginx:1.17.1-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/client /usr/share/nginx/html
# Run the web service on container startup.
CMD ["nginx", "-g", "daemon off;"]
```
In the Docker configuration, we install Node.js dependencies and build the project. Then, we copy the built files to the container, configure, and start the Nginx service.
Finally, we need to configure the URL to the REST API so that our client application can send requests to it. Since we’re only using the URL in a single file in the project, we’ll hardcode the URL. Alternatively, you can attach the environment variable to the window object and access it from there.
**mean-stack-example/client/src/app/employee.service.ts**
```
@Injectable({
providedIn: 'root'
})
export class EmployeeService {
// Replace with the URL of your REST API
private url = 'https://node-express-api-vsktparjta-uc.a.run.app';
…
```
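The window-object alternative mentioned above might look roughly like the sketch below; the `API_URL` property name is made up for illustration.
```
// Hypothetical: a small script (for example, generated at deploy time and
// included in index.html) exposes the REST API URL globally...
window.API_URL = "https://node-express-api-vsktparjta-uc.a.run.app";

// ...and the client reads it instead of a hardcoded literal, with a fallback
// for local development.
const url = window.API_URL || "http://localhost:5200";
console.log(`Sending requests to ${url}`);
```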
We’re ready to deploy to Cloud Run! Start a new deployment with the following configuration settings:
* Service Settings: Create a service
* Service name: angular-web-app
* Deployment platform: Cloud Run (fully managed)
* Authentication: Allow unauthenticated invocations
For the Build environment, select Cloud Build.
Finally, in the Build Settings section, select:
* Builder: Docker
* Docker: mean-stack-example/client/Dockerfile
Click that Deploy button again and watch the logs as your app is shipped to the cloud! When the deployment is complete, you should see the URL for the client app:
Open the URL, and play with your application!
### Command shell alternative for build and deploy
The steps covered above can alternatively be implemented from Command Shell as below:
Step 1: Create the new project directory named “mean-stack-demo” either from the Code Editor or Cloud Shell Command (Terminal):
```
mkdir mean-stack-demo
cd mean-stack-demo
```
Step 2: Clone project repo and make necessary changes in the configuration and variables, same as mentioned in the previous section.
Step 3: Build your container image using Cloud build by running the command in Cloud Shell:
```
gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/mean-stack-demo
```
$GOOGLE_CLOUD_PROJECT is an environment variable containing your Google Cloud project ID when running in Cloud Shell.
Step 4: Test it locally by running:
```
docker run -d -p 8080:8080 gcr.io/$GOOGLE_CLOUD_PROJECT/mean-stack-demo
```
and by clicking Web Preview, Preview on port 8080.
Step 5: Run the following command to deploy your containerized app to Cloud Run:
```
gcloud run deploy mean-stack-demo --image gcr.io/$GOOGLE_CLOUD_PROJECT/mean-stack-demo --platform managed --region us-central1 --allow-unauthenticated --update-env-vars DBHOST=$DB_HOST
```
a. `--allow-unauthenticated` lets the service be reached without authentication.
b. `--platform managed` means you are requesting the fully managed environment, not the Kubernetes one via Anthos.
c. `--update-env-vars` expects the MongoDB connection string to be passed to the environment variable DBHOST.
See the upcoming section on environment variables and Docker for continuous deployment for secrets and connection URI management.
d. When the deployment is done, you should see the deployed service URL in the command line.
e. When you hit the service URL, you should see your web page on the browser and the logs in the Cloud Logging Logs Explorer page.
### 5. Environment variables and Docker for continuous deployment
If you’re looking to automate the process of building and deploying across multiple containers, services, or components, storing these configurations in the repo is not only cumbersome but also a security threat.
1. For ease of cross-environment continuous deployment and to avoid security vulnerabilities caused by leaking credential information, we can choose to pass variables at build/deploy/up time.
`--update-env-vars` allows you to set an environment variable to a value that is passed only at run time. In our example, the variable DBHOST is assigned the value of $DB_HOST, which should be set to your MongoDB Atlas connection string.
Please note that unencoded symbols in the connection URI (in the username or password) will result in connection issues with MongoDB. For example, if you have a $ in the password or username, replace it with %24 in the encoded connection URI, as shown in the sketch after this list.
2. Alternatively, you can also pass configuration variables as env variables at build time into docker-compose (*docker-compose.yml*). By passing configuration variables and credentials, we avoid credential leakage and automate deployment securely and continuously across multiple environments, users, and applications.
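To illustrate the encoding note above, here is a small sketch with made-up credentials and hostname showing how special characters can be escaped before building the connection string:
```
// Made-up credentials containing characters that must be percent-encoded.
const user = encodeURIComponent("app$user");
const pass = encodeURIComponent("p@ssw0rd$");

// $ becomes %24 and @ becomes %40, so the URI parses correctly.
const uri = `mongodb+srv://${user}:${pass}@sandbox.example.mongodb.net/meanStackExample?retryWrites=true&w=majority`;
console.log(uri);
```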
## Conclusion
MongoDB Atlas with Cloud Run makes for a truly serverless MEAN stack solution, and for those looking to build an application with a serverless option to run in a stateless container, Cloud Run is your best bet.
## Before you go…
Now that you have learnt how to deploy a simple MEAN stack application on Cloud Run and MongoDB Atlas, why don’t you take it one step further with your favorite client-server use case? Reference the below resources for more inspiration:
* Cloud Run HelloWorld: https://codelabs.developers.google.com/codelabs/cloud-run-hello#4
* MongoDB - MEAN Stack: https://www.mongodb.com/languages/mean-stack-tutorial
If you have any comments or questions, feel free to reach out to us online: Abirami Sukumaran and Stanimira Vlaeva. | md | {
"tags": [
"Atlas",
"JavaScript",
"Docker",
"Google Cloud"
],
"pageDescription": "In this blog, we'll see how Cloud Run and MongoDB come together to enable a completely serverless MEAN stack application development experience.",
"contentType": "Tutorial"
} | Serverless MEAN Stack Applications with Cloud Run and MongoDB Atlas | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-api-google-apps-script | created | # Using the Atlas Data API with Google Apps Script
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
The MongoDB Atlas Data API is an HTTPS-based API which allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable. In this article, we will see how a business analyst or other back office user, who often may not be a professional developer, can access data from and record data in Atlas. The Atlas Data API can easily be used by users unable to create or configure back-end services, who simply want to work with data in tools they know, like Google Sheets or Excel.
Learn about enabling the Atlas Data API and obtaining API keys.
Google Office accesses external data using Google Apps Script, a cloud-based JavaScript platform that lets us integrate with and automate tasks across Google products. We will use Google Apps Script to call the Data API.
## Prerequisites
Before we begin, we will need a Google account and the ability to create Google Sheets. We will also need an Atlas cluster for which we have enabled the Data API, and our **endpoint URL** and **API Key**. You can learn how to get these in this article or this video, if you do not have them already.
A common use of Atlas with Google Sheets might be to look up some business data manually, or produce an export for a third party. To test this, we first need to have some business data in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing "Load Sample Dataset", or following the instructions here.
## Creating a Google Apps Script from a Google Sheet
Our next step is to create a new Google sheet. We can do this by going to https://docs.google.com/spreadsheets/ and selecting a new blank sheet, or, if using Chrome, by going to the URL https://sheets.new . We end up viewing a sheet like this. Replace the name "Untitled spreadsheet" with "Atlas Data API Demo".
We are going to create a simple front end to allow us to verify the business inspection certificate and history for a property. We will get this from the collection **inspections** in the **sample\_training** database. The first step is to add some labels in our sheet as shown below. Don't worry if your formatting isn't exactly the same. Cell B1 is where we will enter the name we are searching for. For now, enter "American".
Now we need to add code that queries Atlas and retrieves the data. To do this, select **Extensions -> Apps Script** from the menu bar. (If you are using Google for Business, it might be under **Tools->Script Editor** instead.)
A new tab will open with the Apps Script Development environment, and an empty function named myFunction(). In this tab, we can write JavaScript code to interact with our sheet, MongoDB Atlas, and most other parts of the Google infrastructure.
Click on the name "Untitled project", type "My Data API Script" into the popup, and click Rename.
Before we connect to Atlas, we will first write and test some very basic code that gets a handle to our open spreadsheet and retrieves the contents of cell B1 where we enter what we want to search for. Replace all the code with the code below.
```
function lookupInspection() {
const activeSheetsApp = SpreadsheetApp.getActiveSpreadsheet();
const sheet = activeSheetsApp.getSheets()[0];
const partialName = sheet.getRange("B1").getValue();
SpreadsheetApp.getUi().alert(partialName)
}
```
## Granting Permissions to Google Apps Scripts
We need now to grant permission to the script to access our spreadsheet. Although we just created this script, Google requires explicit permission to trust scripts accessing documents or services.
Make sure the script is saved by typing Control/Command + S, then click "Run" on the toolbar, and then "Review Permissions" on the "Authorization required" popup. Select the name of the Google account you intend to run this as. You will then get a warning that "Google hasn't verified this app".
This warning is intended for someone who runs a sheet they got from someone else, rather than us as the author. To continue, click on Advanced, then "Go to My Data API Script (unsafe)". *This is not unsafe for you as the author, but anyone else accessing this sheet should be aware it can access any of their Google sheets.*
Finally, click "Allow" when asked if the app can "See, edit, create, and delete all your Google Sheets spreadsheets."
As we change our script and require additional permissions, we will need to go through this approval process again.
## Adding a Launch Button for a Google Apps Script in a Spreadsheet
We now need to add a button on the sheet to call this function when we want to use it. Google Sheets does not have native buttons to launch scripts, but there is a trick to emulate one.
Return to the tab that shows the sheet, dismiss the popup if there is one, and use **Insert->Drawing**. Add a textbox by clicking the square with the letter T in the middle and dragging to make a small box. Double click it to set the text to "Search" and change the background colour to a nice MongoDB green. Then click "Save and Close."
Once back in the spreadsheet, drag this underneath the name "Search For:" at the top left. You can move and resize it to fit nicely.
Finally, click on the green button, then the three dots in the top right corner. Choose "Assign a Script" and, in the popup, type **lookupInspection**. Whilst this feels quite a clumsy way to bind a script to a button, it's the only thing Google Sheets gives us.
Now click the green button you created; it should pop up a dialog that says 'American'. We have now bound our script to the button successfully. Change the value in cell B1 to "Pizza" and run the script again, checking that it says "Pizza" this time. *Note that the value of B1 does not change until you click in another cell.*
If, after you have bound a button to a script, you need to select the button for moving, sizing, or formatting, you can do so with Command/Control + Click.
## Retrieving data from MongoDB Atlas using Google Apps Scripts
Now we have a button to launch our script, we can fill in the rest of the code to call the Data API and find any matching results.
From the menu bar on the sheet, once again select **Extensions->Apps Script** (or **Tools->Script Editor**). Now change the code to match the code shown below. Make sure you set the endpoint in the first line to your URL endpoint from the Atlas GUI. The part that says "**amzuu**" will be different for you.
```
const findEndpoint = 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find';
const clusterName = "Cluster0"
function getAPIKey() {
const userProperties = PropertiesService.getUserProperties();
let apikey = userProperties.getProperty('APIKEY');
let resetKey = false; //Make true if you have to change key
if (apikey == null || resetKey ) {
var result = SpreadsheetApp.getUi().prompt(
'Enter API Key',
'Key:', SpreadsheetApp.getUi().ButtonSet);
apikey = result.getResponseText()
userProperties.setProperty('APIKEY', apikey);
}
return apikey;
}
function lookupInspection() {
const activeSheetsApp = SpreadsheetApp.getActiveSpreadsheet();
const sheet = activeSheetsApp.getSheets()[0];
const partname = sheet.getRange("B1").getValue();
sheet.getRange(`C3:K103`).clear()
const apikey = getAPIKey()
//We can do operators like regular expression with the Data API
const query = { business_name: { $regex: `${partname}`, $options: 'i' } }
const order = { business_name: 1, date: -1 }
const limit = 100
//We can Specify sort, limit and a projection here if we want
const payload = {
filter: query, sort: order, limit: limit,
collection: "inspections", database: "sample_training", dataSource: clusterName
}
const options = {
method: 'post',
contentType: 'application/json',
payload: JSON.stringify(payload),
headers: { "api-key": apikey }
};
const response = UrlFetchApp.fetch(findEndpoint, options);
const documents = JSON.parse(response.getContentText()).documents
for (d = 1; d <= documents.length; d++) {
let doc = documents[d - 1]
fields = [[doc.business_name, doc.date, doc.result, doc.sector,
doc.certificate_number, doc.address.number,
doc.address.street, doc.address.city, doc.address.zip]]
let row = d + 2
sheet.getRange(`C${row}:K${row}`).setValues(fields)
}
}
```
We can now test this by clicking “Run” on the toolbar. As we have now requested an additional permission (the ability to connect to an external web service), we will once again have to approve permissions for our account by following the process above.
Once we have granted permission, the script will run. It will log a successful start but not appear to be continuing. This is because it is waiting for input. Returning to the tab with the sheet, we can see it is now requesting we enter our Atlas Data API key. If we paste our Atlas Data API key into the box, we will see it complete the search.
We can now search the company names by typing part of the name in B1 and clicking the Search button. This search uses an unindexed regular expression. For production use, you should use either indexed MongoDB searches or, for free text searching, Atlas Search, but that is outside the scope of this article.
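For reference, a production-style search could go through the Data API's aggregate action instead. The hedged sketch below assumes an Atlas Search index named "default" exists on the `business_name` field and that your Data API app allows the aggregate action; it reuses `clusterName` and `getAPIKey()` from the script above, and the "amzuu" part of the endpoint will differ for your own project.
```
const aggregateEndpoint = 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/aggregate';

function searchInspections(partname) {
  // Atlas Search must be the first stage of the aggregation pipeline.
  const payload = {
    dataSource: clusterName,
    database: "sample_training",
    collection: "inspections",
    pipeline: [
      { $search: { index: "default", text: { query: partname, path: "business_name" } } },
      { $limit: 100 }
    ]
  };
  const options = {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload),
    headers: { "api-key": getAPIKey() }
  };
  const response = UrlFetchApp.fetch(aggregateEndpoint, options);
  return JSON.parse(response.getContentText()).documents;
}
```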
## Securing Secret API Keys in Google Apps Scripts
Atlas API keys give the holder read and write access to all databases in the cluster, so it's important to manage the API key with care.
Rather than simply hard coding the API key in the script, where it might be seen by someone else with access to the spreadsheet, we check if it is in the user's personal property store (a server-side key-value only accessible by that Google user). If not, we prompt for it and store it. This is all encapsulated in the getAPIKey() function.
```
function getAPIKey() {
const userProperties = PropertiesService.getUserProperties();
let apikey = userProperties.getProperty('APIKEY');
let resetKey = false; //Make true if you have to change key
if (apikey == null || resetKey ) {
var result = SpreadsheetApp.getUi().prompt(
'Enter API Key',
'Key:', SpreadsheetApp.getUi().ButtonSet);
apikey = result.getResponseText()
userProperties.setProperty('APIKEY', apikey);
}
return apikey;
}
```
*Should you enter the key incorrectly, or need to change the stored one, change resetKey to true, run the script, and enter the new key, then change it back to false.*
## Writing to MongoDB Atlas from Google Apps Scripts
We have created this simple, sheets-based user interface and we could adapt it to perform any queries or aggregations when reading by changing the payload. We can also write to the database using the Data API. To keep the spreadsheet simple, we will add a usage log for our new search interface showing what was queried for, and when. Remember to change "**amzuu**" in the endpoint value at the top to the endpoint for your own project. Add this to the end of the code, keeping the existing functions.
```
const insertOneEndpoint = 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/insertOne'
function logUsage(query, nresults, apikey) {
const document = { date: { $date: { $numberLong: `${(new Date()).getTime()}` } }, query, nresults, by: Session.getActiveUser().getEmail() }
console.log(document)
const payload = {
document: document, collection: "log",
database: "sheets_usage", dataSource: "Cluster0"
}
const options = {
method: 'post',
contentType: 'application/json',
payload: JSON.stringify(payload),
headers: { "api-key": apikey }
};
const response = UrlFetchApp.fetch(insertOneEndpoint, options);
}
```
## Using Explicit Data Types in JSON with MongoDB EJSON
When we add the data with this, we set the date field to be a date type in Atlas rather than a string type with an ISO string of the date. We do this using EJSON syntax.
EJSON, or Extended JSON, is used to get around the limitation of plain JSON not being able to differentiate data types. JSON is unable to differentiate a date from a string, or specify if a number is a Double, 64 Bit Integer, or 128 Bit BigDecimal value. MongoDB data is typed, and when working with other languages and code, in addition to the Data API, it is important to be aware of this, especially when adding or updating data.
In this example, rather than using `{ date : (new Date()).toISOString() }`, which would store the date as a string value, we use the much more efficient and flexible native date type in the database by specifying the value using EJSON. The EJSON form is `{ date : { $date : { $numberLong: "<epoch milliseconds as a string>" } } }`.
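To make the difference concrete, here is a small sketch contrasting the two forms as plain JavaScript objects (runnable in Apps Script or Node.js):
```
// Plain JSON: the date ends up stored as a string in MongoDB.
const asString = { date: (new Date()).toISOString() };

// EJSON: the date is stored as a native BSON date.
// $numberLong expects the epoch milliseconds as a string.
const asDate = { date: { $date: { $numberLong: `${(new Date()).getTime()}` } } };

console.log(JSON.stringify(asString));
console.log(JSON.stringify(asDate));
```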
## Connecting up our Query Logging Function
We must now modify our code to log each query that is performed by adding the following line in the correct place inside the `lookupInspection` function.
```
const response = UrlFetchApp.fetch(findEndpoint, options);
const documents = JSON.parse(response.getContentText()).documents
logUsage(partname, documents.length, apikey); // <---- Add This line
for (d = 1; d <= documents.length; d++) {
...
```
If we click the Search button now, not only do we get our search results but checking Atlas data explorer shows us a log of what we searched for, at what time, and what user performed it.
## Conclusion
You can access the completed sheet here. This is read-only, so you will need to create a copy using the file menu to run the script.
Calling the Data API from Google Apps Script is simple. The HTTPS call is just a few lines of code. Securing the API key and specifying the correct data type when inserting or updating data are just a little more complex, but hopefully, this post will give you a good indication of how to go about it.
If you have questions, please head to our developer community website, where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "This article teaches you how to call the Atlas Data API from a Google Sheets spreadsheet using Google Apps Script.",
"contentType": "Quickstart"
} | Using the Atlas Data API with Google Apps Script | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-kotlin-041-announcement | created | # Realm Kotlin 0.4.1 Announcement
In this blogpost we are announcing v0.4.1 of the Realm Kotlin Multiplatform SDK. This release contains a significant architectural departure from previous releases of Realm Kotlin as well as other Realm SDK’s, making it much more compatible with modern reactive frameworks like Kotlin Flows. We believe this change will hugely benefit users in the Kotlin ecosystem.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## **Some background**
The Realm Java and Kotlin SDK’s have historically exposed a model of interacting with data we call Live Objects. Its primary design revolves around database objects acting as Live Views into the underlying database.
This was a pretty novel approach when Realm Java was first released 7 years ago. It had excellent performance characteristics and made it possible to avoid a wide range of nasty bugs normally found in concurrent systems.
However, it came with one noticeable drawback: Thread Confinement.
Thread-confinement was not just an annoying restriction. It was what guaranteed that users of the API would always see a consistent view of the data, even across decoupled queries, which was also the reason that Kotlin Native adopted a similar memory model.
But it also meant that you manually had to open and close realms on each thread where you needed data, and it was impossible to pass objects between threads without additional boilerplate.
Both of which put a huge burden on developers.
More importantly, this approach conflicts with another model for working with concurrent systems, namely Functional Reactive Programming (FRP). In the Android ecosystem this was popularized by the RxJava framework and also underpins Kotlin Flows.
In this mode, you see changes to data as immutable events in a stream, allowing complex mapping and transformations. Consistency is then guaranteed by the semantics of the stream; each operation is carried out in sequence so no two threads operate on the same object at the same time.
In this model, however, it isn’t uncommon for different operations to happen on different threads, breaking the thread-confinement restrictions of Realm.
Looking at the plethora of frameworks that support this model (React JS, RxJava, Java Streams, Apple Combine Framework, and Kotlin Flows), it becomes clear that this way of reasoning about concurrency is here to stay.
For that reason, we decided to change our API to work much better in this context.
## The new API
So today we are introducing a new architecture, which we internally have called the Frozen Architecture. It looks similar to the old API, but works in a fundamentally different way.
Realm instances are now thread-safe, meaning that you can use the same instance across the entire application, making it easier to pass around with e.g. dependency injection.
All query results and objects from the database are frozen or immutable by default. They can now be passed freely between threads. This also means that they are no longer automatically kept up to date. Instead, you must register change listeners in order to be notified about any change.
All modifications to data must happen by using a special instance of a `MutableRealm`, which is only available inside write transactions. Objects inside a write transaction are still live.
## Opening a Realm
Opening a realm now only needs to happen once. It can either be stored in a global variable or made available via dependency injection.
```
// Global App variable
class MyApp: Application() {
companion object {
private val config = RealmConfiguration(schema = setOf(Person::class))
public val REALM = Realm(config)
}
}
// Using dependency injection
val koinModule = module {
single { RealmConfiguration(schema = setOf(Person::class)) }
single { Realm(get()) }
}
// Realms are now thread safe
val realm = Realm(config)
val t1 = Thread {
realm.writeBlocking { /* ... */ }
}
val t2 = Thread {
val queryResult = realm.objects(Person::class)
}
```
You can now safely keep your realm instance open for the lifetime of the application. You only need to close your realm when interacting with the realm file itself, such as when deleting the file or compacting it.
```
// Close Realm to free native resources
realm.close()
```
## Creating Data
You can only write within write closures, called `write` and `writeBlocking`. Writes happen through a MutableRealm which is a receiver of the `writeBlocking` and `write` lambdas.
Blocking:
```
val jane = realm.writeBlocking {
val unmanaged = Person("Jane")
copyToRealm(unmanaged)
}
```
Or run as a suspend function. Realm automatically dispatches writes to a write dispatcher backed by a background thread, so launching this from a scope on the UI thread like `viewModelScope` is safe:
```
CoroutineScope(Dispatchers.Main).launch {
// Write automatically happens on a background dispatcher
val jane = realm.write {
val unmanaged = Person("Jane")
// Add unmanaged objects
copyToRealm(unmanaged)
}
// Objects returned from writes are automatically frozen
jane.isFrozen() // == true
// Access any property.
// All properties are still lazy-loaded.
jane.name // == "Jane"
}
```
## **Updating data**
Since everything is frozen by default, you need to retrieve a live version of the object that you want to update, then write to that live object to update the underlying data in the realm.
```
CoroutineScope(Dispatchers.Main).launch {
// Create initial object
val jane = realm.write {
copyToRealm(Person("Jane"))
}
realm.write {
// Find latest version and update it
// Note, this always involves a null-check
// as another thread might have deleted the
// object.
// This also works on objects without
// primary keys.
findLatest(jane)?.apply {
name = "Jane Doe"
}
}
}
```
## Observing Changes
Changes to all Realm classes are supported through Flows. Standard change listener API support is coming in a future release.
```
val jane = getJane()
CoroutineScope(Dispatchers.Main).launch {
// Updates are observed using Kotlin Flow
val flow: Flow<Person> = jane.observe()
flow.collect {
// Listen to changes to the object
println(it.name)
}
}
```
As all Realm objects are now frozen by default, it is now possible to pass objects between different dispatcher threads without any additional boilerplate:
```
val jane = getJane()
CoroutineScope(Dispatchers.Main).launch {
// Run mapping/transform logic in the background
val flow: Flow<Person> = jane.observe()
.filter { it.name.startsWith("Jane") }
.flowOn(Dispatchers.Unconfined)
// Before collecting on the UI thread
flow.collect {
println(it.name)
}
}
```
## Pitfalls
With the change to frozen architecture, there are some new pitfalls to be aware of:
Unrelated queries are no longer guaranteed to run on the same version.
```
// A write can now happen between two queries
val results1: RealmResults<Person> = realm.objects(Person::class)
val results2: RealmResults<Person> = realm.objects(Person::class)
// Resulting in subsequent queries not returning the same result
results1.version() != results2.version()
results1.size != results2.size
```
We will introduce APIs in the future that guarantee that all operations within a certain scope run on the same version, making it easier to combine the results of multiple queries.
Depending on the schema, it is also possible to navigate the entire object graph for a single object. It is only unrelated queries that risk this behaviour.
Storing objects for extended periods of time can lead to Version Pinning. This results in an increased realm file size. It is thus not advisable to store Realm Objects in global variables unless they are unmanaged.
```
// BAD: Store a global managed object
MyApp.GLOBAL_OBJECT = realm.objects(Person::class).first()
// BETTER: Copy data out into an unmanaged object
val person = realm.objects(Person::class).first()
MyApp.GLOBAL_OBJECT = Person(person.name)
```
We will monitor how big an issue this is in practice and will introduce future APIs that can work around this if needed. It is currently possible to detect this happening by setting `RealmConfiguration.Builder.maxNumberOfActiveVersions()`.
Ultimately we believe that these drawbacks are acceptable given the advantages we otherwise get from this architecture, but we’ll keep a close eye on these as the API develops further.
## Conclusion
We are really excited about this change as we believe it will fundamentally make it a lot easier to use Realm Kotlin on Android and will also enable you to use Realm in Kotlin Multiplatform projects.
You can read more about how to get started at https://docs.mongodb.com/realm/sdk/kotlin-multiplatform/. We encourage you to try out this new version and leave any feedback at https://github.com/realm/realm-kotlin/issues/new. Sample projects can be found here.
The SDK is still in alpha and as such none of the API’s are considered stable, but it is possible to follow our progress at https://github.com/realm/realm-kotlin.
If you are interested in learning more about how this works under the hood, you can also read more here.
Happy hacking! | md | {
"tags": [
"Realm",
"Kotlin",
"Mobile"
],
"pageDescription": "In this blogpost we are announcing v0.4.1 of the Realm Kotlin Multiplatform SDK. This release contains a significant architectural departure from previous releases of Realm Kotlin as well as other Realm SDK’s, making it much more compatible with modern reactive frameworks like Kotlin Flows. We believe this change will hugely benefit users in the Kotlin ecosystem.\n",
"contentType": "News & Announcements"
} | Realm Kotlin 0.4.1 Announcement | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/node-crud-tutorial-3-3-2 | created | # MongoDB and Node.js 3.3.2 Tutorial - CRUD Operations
In the first post in this series, I walked you through how to connect to a MongoDB database from a Node.js script, retrieve a list of databases, and print the results to your console. If you haven't read that post yet, I recommend you do so and then return here.
>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
>
>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
Now that we have connected to a database, let's kick things off with the CRUD (create, read, update, and delete) operations.
If you prefer video over text, I've got you covered. Check out the video in the section below. :-)
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
Here is a summary of what we'll cover in this post:
- Learn by Video
- How MongoDB Stores Data
- Create
- Read
- Update
- Delete
- Wrapping Up
## Learn by Video
I created the video below for those who prefer to learn by video instead
of text. You might also find this video helpful if you get stuck while
trying the steps in the text-based instructions below.
Here is a summary of what the video covers:
- How to connect to a MongoDB database hosted on MongoDB Atlas from inside of a Node.js script (00:40)
- How MongoDB stores data in documents and collections (instead of rows and tables) (08:51)
- How to create documents using `insertOne()` and `insertMany()` (11:01)
- How to read documents using `findOne()` and `find()` (20:04)
- How to update documents using `updateOne()` with and without `upsert` as well as `updateMany()` (31:13)
- How to delete documents using `deleteOne()` and `deleteMany()` (46:07)
:youtube[]{vid=ayNI9Q84v8g}
Note: In the video, I type `main().catch(console.err);`, which is incorrect. Instead, I should have typed `main().catch(console.error);`.
Below are the links I mentioned in the video.
- MongoDB Atlas
- How to create a free cluster on Atlas
- MongoDB University's Data Modeling Course
- MongoDB University's JavaScript Course
## How MongoDB Stores Data
Before we go any further, let's take a moment to understand how data is stored in MongoDB.
MongoDB stores data in BSON documents. BSON is a binary representation of JSON (JavaScript Object Notation) documents. When you read MongoDB documentation, you'll frequently see the term "document," but you can think of a document as simply a JavaScript object. For those coming from the SQL world, you can think of a document as being roughly equivalent to a row.
MongoDB stores groups of documents in collections. For those with a SQL background, you can think of a collection as being roughly equivalent to a table.
Every document is required to have a field named `_id`. The value of `_id` must be unique for each document in a collection, is immutable, and can be of any type other than an array. MongoDB will automatically create an index on `_id`. You can choose to make the value of `_id` meaningful (rather than a somewhat random ObjectId) if you have a unique value for each document that you'd like to be able to quickly search.
In this blog series, we'll use the sample Airbnb listings dataset. The `sample_airbnb` database contains one collection: `listingsAndReviews`. This collection contains documents about Airbnb listings and their reviews.
Let's take a look at a document in the `listingsAndReviews` collection. Below is part of an Extended JSON representation of a BSON document:
``` json
{
"_id":"10057447",
"listing_url":"https://www.airbnb.com/rooms/10057447",
"name":"Modern Spacious 1 Bedroom Loft",
"summary":"Prime location, amazing lighting and no annoying neighbours. Good place to rent if you want a relaxing time in Montreal.",
"property_type":"Apartment",
"bedrooms":{"$numberInt":"1"},
"bathrooms":{"$numberDecimal":"1.0"},
"amenities":"Internet","Wifi","Kitchen","Heating","Family/kid friendly","Washer","Dryer","Smoke detector","First aid kit","Safety card","Fire extinguisher","Essentials","Shampoo","24-hour check-in","Hangers","Iron","Laptop friendly workspace"],
}
```
For more information on how MongoDB stores data, see the MongoDB Back to Basics Webinar that I co-hosted with Ken Alger.
## Create
Now that we know how to connect to a MongoDB database and we understand how data is stored in a MongoDB database, let's create some data!
### Create One Document
Let's begin by creating a new Airbnb listing. We can do so by calling Collection's insertOne(). `insertOne()` will insert a single document into the collection. The only required parameter is the new document (of type object) that will be inserted. If our new document does not contain the `_id` field, the MongoDB driver will automatically create an id for the document.
Our function to create a new listing will look something like the following:
``` javascript
async function createListing(client, newListing){
const result = await client.db("sample_airbnb").collection("listingsAndReviews").insertOne(newListing);
console.log(`New listing created with the following id: ${result.insertedId}`);
}
```
We can call this function by passing a connected MongoClient as well as an object that contains information about a listing.
``` javascript
await createListing(client,
{
name: "Lovely Loft",
summary: "A charming loft in Paris",
bedrooms: 1,
bathrooms: 1
}
);
```
The output would be something like the following:
``` none
New listing created with the following id: 5d9ddadee415264e135ccec8
```
Note that since we did not include a field named `_id` in the document, the MongoDB driver automatically created an `_id` for us. The `_id` of the document you create will be different from the one shown above. For more information on how MongoDB generates `_id`, see Quick Start: BSON Data Types - ObjectId.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Create Multiple Documents
Sometimes, you will want to insert more than one document at a time. You could choose to repeatedly call `insertOne()`. The problem is that, depending on how you've structured your code, you may end up waiting for each insert operation to return before beginning the next, resulting in slow code.
Instead, you can choose to call Collection's insertMany(). `insertMany()` will insert an array of documents into your collection.
One important option to note for `insertMany()` is `ordered`. If `ordered` is set to `true`, the documents will be inserted in the order given in the array. If any of the inserts fail (for example, if you attempt to insert a document with an `_id` that is already being used by another document in the collection), the remaining documents will not be inserted. If ordered is set to `false`, the documents may not be inserted in the order given in the array. MongoDB will attempt to insert all of the documents in the given array—regardless of whether any of the other inserts fail. By default, `ordered` is set to `true`.
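For instance, if we wanted MongoDB to attempt every insert even when some fail (perhaps because of duplicate `_id` values), we could pass the option explicitly. The following is a minimal sketch, separate from the function we'll build next.
``` javascript
async function createListingsUnordered(client, newListings){
    const result = await client.db("sample_airbnb").collection("listingsAndReviews")
        .insertMany(newListings, { ordered: false });
    console.log(`${result.insertedCount} new listing(s) created.`);
}
```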
Let's write a function to create multiple Airbnb listings.
``` javascript
async function createMultipleListings(client, newListings){
const result = await client.db("sample_airbnb").collection("listingsAndReviews").insertMany(newListings);
console.log(`${result.insertedCount} new listing(s) created with the following id(s):`);
console.log(result.insertedIds);
}
```
We can call this function by passing a connected MongoClient and an array of objects that contain information about listings.
``` javascript
await createMultipleListings(client, [
{
name: "Infinite Views",
summary: "Modern home with infinite views from the infinity pool",
property_type: "House",
bedrooms: 5,
bathrooms: 4.5,
beds: 5
},
{
name: "Private room in London",
property_type: "Apartment",
bedrooms: 1,
bathroom: 1
},
{
name: "Beautiful Beach House",
summary: "Enjoy relaxed beach living in this house with a private beach",
bedrooms: 4,
bathrooms: 2.5,
beds: 7,
last_review: new Date()
}
]);
```
Note that not every document has the same fields, which is perfectly OK. (I'm guessing that those who come from the SQL world will find this incredibly uncomfortable, but it really will be OK 😊.) When you use MongoDB, you get a lot of flexibility in how to structure your documents. If you later decide you want to add schema validation rules so you can guarantee your documents have a particular structure, you can.
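As a brief illustration of that option, a collection-level validator might be defined like the hedged sketch below (the collection name and rules are made up and are not part of this tutorial's code):
``` javascript
async function createValidatedListingsCollection(client) {
    // Every document inserted into this collection must have a string `name`.
    await client.db("sample_airbnb").createCollection("myValidatedListings", {
        validator: {
            $jsonSchema: {
                bsonType: "object",
                required: ["name"],
                properties: {
                    name: { bsonType: "string", description: "must be a string and is required" }
                }
            }
        }
    });
}
```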
The output of calling `createMultipleListings()` would be something like the following:
``` none
3 new listing(s) created with the following id(s):
{
'0': 5d9ddadee415264e135ccec9,
'1': 5d9ddadee415264e135cceca,
'2': 5d9ddadee415264e135ccecb
}
```
Just like the MongoDB Driver automatically created the `_id` field for us when we called `insertOne()`, the Driver has once again created the `_id` field for us when we called `insertMany()`.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Read
Now that we know how to **create** documents, let's **read** one!
### Read One Document
Let's begin by querying for an Airbnb listing in the listingsAndReviews collection by name.
We can query for a document by calling Collection's findOne(). `findOne()` will return the first document that matches the given query. Even if more than one document matches the query, only one document will be returned.
`findOne()` has only one required parameter: a query of type object. The query object can contain zero or more properties that MongoDB will use to find a document in the collection. If you want to query all documents in a collection without narrowing your results in any way, you can simply send an empty object.
Since we want to search for an Airbnb listing with a particular name, we will include the name field in the query object we pass to `findOne()`:
``` javascript
findOne({ name: nameOfListing })
```
Our function to find a listing by querying the name field could look something like the following:
``` javascript
async function findOneListingByName(client, nameOfListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews").findOne({ name: nameOfListing });
if (result) {
console.log(`Found a listing in the collection with the name '${nameOfListing}':`);
console.log(result);
} else {
console.log(`No listings found with the name '${nameOfListing}'`);
}
}
```
We can call this function by passing a connected MongoClient as well as the name of a listing we want to find. Let's search for a listing named "Infinite Views" that we created in an earlier section.
``` javascript
await findOneListingByName(client, "Infinite Views");
```
The output should be something like the following.
``` none
Found a listing in the collection with the name 'Infinite Views':
{
_id: 5da9b5983e104518671ae128,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 5,
bathrooms: 4.5,
beds: 5
}
```
Note that the `_id` of the document in your database will not match the `_id` in the sample output above.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Read Multiple Documents
Now that you know how to query for one document, let's discuss how to query for multiple documents at a time. We can do so by calling Collection's find().
Similar to `findOne()`, the first parameter for `find()` is the query object. You can include zero to many properties in the query object.
Let's say we want to search for all Airbnb listings that have minimum numbers of bedrooms and bathrooms. We could do so by making a call like the following:
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
);
```
As you can see above, we have two properties in our query object: one for bedrooms and one for bathrooms. We can leverage the $gte comparison query operator to search for documents that have bedrooms greater than or equal to a given number. We can do the same to satisfy our minimum number of bathrooms requirement. MongoDB provides a variety of other comparison query operators that you can utilize in your queries. See the official documentation for more details.
The query above will return a Cursor. A Cursor allows traversal over the result set of a query.
You can also use Cursor's functions to modify what documents are included in the results. For example, let's say we want to sort our results so that those with the most recent reviews are returned first. We could use Cursor's sort() function to sort the results using the `last_review` field. We could sort the results in descending order (indicated by passing -1 to `sort()`) so that listings with the most recent reviews will be returned first. We can now update our existing query to look like the following.
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 });
```
The above query matches 192 documents in our collection. Let's say we don't want to process that many results inside of our script. Instead, we want to limit our results to a smaller number of documents. We can chain another of Cursor's functions to our existing query: limit(). As the name implies, `limit()` will set the limit for the cursor. We can now update our query to only return a certain number of results.
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 })
.limit(maximumNumberOfResults);
```
We could choose to iterate over the cursor to get the results one by one. Instead, if we want to retrieve all of our results in an array, we can call Cursor's toArray() function. Now our code looks like the following:
``` javascript
const cursor = client.db("sample_airbnb").collection("listingsAndReviews").find(
{
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
).sort({ last_review: -1 })
.limit(maximumNumberOfResults);
const results = await cursor.toArray();
```
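As an aside, if we'd rather process documents one at a time instead of materializing everything with `toArray()`, we could step through the cursor directly. Here is a minimal sketch using `hasNext()` and `next()`:
``` javascript
async function printListingNames(client) {
    const cursor = client.db("sample_airbnb").collection("listingsAndReviews")
        .find({ bedrooms: { $gte: 4 } });
    // Pull documents from the cursor one at a time instead of loading them all.
    while (await cursor.hasNext()) {
        const listing = await cursor.next();
        console.log(listing.name);
    }
}
```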
Now that we have our query ready to go, let's put it inside an asynchronous function and add functionality to print the results.
``` javascript
async function findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {
minimumNumberOfBedrooms = 0,
minimumNumberOfBathrooms = 0,
maximumNumberOfResults = Number.MAX_SAFE_INTEGER
} = {}) {
const cursor = client.db("sample_airbnb").collection("listingsAndReviews")
.find({
bedrooms: { $gte: minimumNumberOfBedrooms },
bathrooms: { $gte: minimumNumberOfBathrooms }
}
)
.sort({ last_review: -1 })
.limit(maximumNumberOfResults);
const results = await cursor.toArray();
if (results.length > 0) {
console.log(`Found listing(s) with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms:`);
results.forEach((result, i) => {
date = new Date(result.last_review).toDateString();
console.log();
console.log(`${i + 1}. name: ${result.name}`);
console.log(` _id: ${result._id}`);
console.log(` bedrooms: ${result.bedrooms}`);
console.log(` bathrooms: ${result.bathrooms}`);
console.log(` most recent review date: ${new Date(result.last_review).toDateString()}`);
});
} else {
console.log(`No listings found with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms`);
}
}
```
We can call this function by passing a connected MongoClient as well as an object with properties indicating the minimum number of bedrooms, the minimum number of bathrooms, and the maximum number of results.
``` javascript
await findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {
minimumNumberOfBedrooms: 4,
minimumNumberOfBathrooms: 2,
maximumNumberOfResults: 5
});
```
If you've created the documents as described in the earlier section, the output would be something like the following:
``` none
Found listing(s) with at least 4 bedrooms and 2 bathrooms:
1. name: Beautiful Beach House
_id: 5db6ed14f2e0a60683d8fe44
bedrooms: 4
bathrooms: 2.5
most recent review date: Mon Oct 28 2019
2. name: Spectacular Modern Uptown Duplex
_id: 582364
bedrooms: 4
bathrooms: 2.5
most recent review date: Wed Mar 06 2019
3. name: Grace 1 - Habitat Apartments
_id: 29407312
bedrooms: 4
bathrooms: 2.0
most recent review date: Tue Mar 05 2019
4. name: 6 bd country living near beach
_id: 2741869
bedrooms: 6
bathrooms: 3.0
most recent review date: Mon Mar 04 2019
5. name: Awesome 2-storey home Bronte Beach next to Bondi!
_id: 20206764
bedrooms: 4
bathrooms: 2.0
most recent review date: Sun Mar 03 2019
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Update
We're halfway through the CRUD operations. Now that we know how to **create** and **read** documents, let's discover how to **update** them.
### Update One Document
Let's begin by updating a single Airbnb listing in the listingsAndReviews collection.
We can update a single document by calling Collection's updateOne(). `updateOne()` has two required parameters:
1. `filter` (object): the Filter used to select the document to update. You can think of the filter as essentially the same as the query param we used in findOne() to search for a particular document. You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.
2. `update` (object): the update operations to be applied to the document. MongoDB has a variety of update operators you can use such as `$inc`, `$currentDate`, `$set`, and `$unset`, among others. See the official documentation for a complete list of update operators and their descriptions.
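For instance, a single update document can combine several of these operators. Here's a hypothetical sketch of what that might look like for one of our listings:
``` javascript
// Increment the number of beds, set a new summary, and remove a field.
const update = {
    $inc: { beds: 1 },
    $set: { summary: "Freshly renovated loft" },
    $unset: { last_review: "" }
};
```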
`updateOne()` also has an optional `options` param. See the updateOne() docs for more information on these options.
`updateOne()` will update the first document that matches the given query. Even if more than one document matches the query, only one document will be updated.
Let's say we want to update an Airbnb listing with a particular name. We can use `updateOne()` to achieve this. We'll include the name of the listing in the filter param. We'll use the $set update operator to set new values for new or existing fields in the document we are updating. When we use `$set`, we pass a document that contains fields and values that should be updated or created. The document that we pass to `$set` will not replace the existing document; any fields that are part of the original document but not part of the document we pass to `$set` will remain as they are.
Our function to update a listing with a particular name would look like the following:
``` javascript
async function updateListingByName(client, nameOfListing, updatedListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateOne({ name: nameOfListing }, { $set: updatedListing });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
```
Let's say we want to update our Airbnb listing that has the name "Infinite Views." We created this listing in an earlier section.
``` javascript
{
_id: 5db6ed14f2e0a60683d8fe42,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 5,
bathrooms: 4.5,
beds: 5
}
```
We can call `updateListingByName()` by passing a connected MongoClient, the name of the listing, and an object containing the fields we want to update and/or create.
``` javascript
await updateListingByName(client, "Infinite Views", { bedrooms: 6, beds: 8 });
```
Executing this command results in the following output.
``` none
1 document(s) matched the query criteria.
1 document(s) was/were updated.
```
Now our listing has an updated number of bedrooms and beds.
``` json
{
_id: 5db6ed14f2e0a60683d8fe42,
name: 'Infinite Views',
summary: 'Modern home with infinite views from the infinity pool',
property_type: 'House',
bedrooms: 6,
bathrooms: 4.5,
beds: 8
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Upsert One Document
One of the options you can choose to pass to `updateOne()` is upsert. Upsert is a handy feature that allows you to update a document if it exists or insert a document if it does not.
For example, let's say you wanted to ensure that an Airbnb listing with a particular name had a certain number of bedrooms and bathrooms. Without upsert, you'd first use `findOne()` to check if the document existed. If the document existed, you'd use `updateOne()` to update the document. If the document did not exist, you'd use `insertOne()` to create the document. When you use upsert, you can combine all of that functionality into a single command.
Our function to upsert a listing with a particular name can be basically identical to the function we wrote above with one key difference: We'll pass `{upsert: true}` in the `options` param for `updateOne()`.
``` javascript
async function upsertListingByName(client, nameOfListing, updatedListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateOne({ name: nameOfListing },
{ $set: updatedListing },
{ upsert: true });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
if (result.upsertedCount > 0) {
console.log(`One document was inserted with the id ${result.upsertedId._id}`);
} else {
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
}
```
Let's say we aren't sure if a listing named "Cozy Cottage" is in our collection or, if it does exist, if it holds old data. Either way, we want to ensure the listing that exists in our collection has the most up-to-date data. We can call `upsertListingByName()` with a connected MongoClient, the name of the listing, and an object containing the up-to-date data that should be in the listing.
``` javascript
await upsertListingByName(client, "Cozy Cottage", { name: "Cozy Cottage", bedrooms: 2, bathrooms: 1 });
```
If the document did not previously exist, the output of the function would be something like the following:
``` none
0 document(s) matched the query criteria.
One document was inserted with the id 5db9d9286c503eb624d036a1
```
We have a new document in the listingsAndReviews collection:
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2
}
```
If we discover more information about the "Cozy Cottage" listing, we can use `upsertListingByName()` again.
``` javascript
await upsertListingByName(client, "Cozy Cottage", { beds: 2 });
```
And we would see the following output.
``` none
1 document(s) matched the query criteria.
1 document(s) was/were updated.
```
Now our document has a new field named "beds."
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2,
beds: 2
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Update Multiple Documents
Sometimes, you'll want to update more than one document at a time. In this case, you can use Collection's updateMany(). Like `updateOne()`, `updateMany()` requires that you pass a filter of type object and an update of type object. You can choose to include options of type object as well.
Let's say we want to ensure that every document has a field named `property_type`. We can use the $exists query operator to search for documents where the `property_type` field does not exist. Then we can use the $set update operator to set the `property_type` to "Unknown" for those documents. Our function will look like the following.
``` javascript
async function updateAllListingsToHavePropertyType(client) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.updateMany({ property_type: { $exists: false } },
{ $set: { property_type: "Unknown" } });
console.log(`${result.matchedCount} document(s) matched the query criteria.`);
console.log(`${result.modifiedCount} document(s) was/were updated.`);
}
```
We can call this function with a connected MongoClient.
``` javascript
await updateAllListingsToHavePropertyType(client);
```
Below is the output from executing the previous command.
``` none
3 document(s) matched the query criteria.
3 document(s) was/were updated.
```
Now our "Cozy Cottage" document and all of the other documents in the Airbnb collection have the `property_type` field.
``` json
{
_id: 5db9d9286c503eb624d036a1,
name: 'Cozy Cottage',
bathrooms: 1,
bedrooms: 2,
beds: 2,
property_type: 'Unknown'
}
```
Listings that contained a `property_type` before we called `updateMany()` remain as they were. For example, the "Spectacular Modern Uptown Duplex" listing still has `property_type` set to `Apartment`.
``` json
{
_id: '582364',
listing_url: 'https://www.airbnb.com/rooms/582364',
name: 'Spectacular Modern Uptown Duplex',
property_type: 'Apartment',
room_type: 'Entire home/apt',
bedrooms: 4,
beds: 7
...
}
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Delete
Now that we know how to **create**, **read**, and **update** documents, let's tackle the final CRUD operation: **delete**.
### Delete One Document
Let's begin by deleting a single Airbnb listing in the listingsAndReviews collection.
We can delete a single document by calling Collection's deleteOne(). `deleteOne()` has one required parameter: a filter of type object. The filter is used to select the document to delete. You can think of the filter as essentially the same as the query param we used in findOne() and the filter param we used in updateOne(). You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.
`deleteOne()` also has an optional `options` param. See the deleteOne() docs for more information on these options.
`deleteOne()` will delete the first document that matches the given query. Even if more than one document matches the query, only one document will be deleted. If you do not specify a filter, the first document found in natural order will be deleted.
Let's say we want to delete an Airbnb listing with a particular name. We can use `deleteOne()` to achieve this. We'll include the name of the listing in the filter param. We can create a function to delete a listing with a particular name.
``` javascript
async function deleteListingByName(client, nameOfListing) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.deleteOne({ name: nameOfListing });
console.log(`${result.deletedCount} document(s) was/were deleted.`);
}
```
Let's say we want to delete the Airbnb listing we created in an earlier section that has the name "Cozy Cottage." We can call `deleteListingByName()` by passing a connected MongoClient and the name "Cozy Cottage."
``` javascript
await deleteListingByName(client, "Cozy Cottage");
```
Executing the command above results in the following output.
``` none
1 document(s) was/were deleted.
```
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
### Deleting Multiple Documents
Sometimes, you'll want to delete more than one document at a time. In this case, you can use Collection's deleteMany(). Like `deleteOne()`, `deleteMany()` requires that you pass a filter of type object. You can choose to include options of type object as well.
Let's say we want to remove documents that have not been updated recently. We can call `deleteMany()` with a filter that searches for documents that were scraped prior to a particular date. Our function will look like the following.
``` javascript
async function deleteListingsScrapedBeforeDate(client, date) {
const result = await client.db("sample_airbnb").collection("listingsAndReviews")
.deleteMany({ "last_scraped": { $lt: date } });
console.log(`${result.deletedCount} document(s) was/were deleted.`);
}
```
To delete listings that were scraped prior to February 15, 2019, we can call `deleteListingsScrapedBeforeDate()` with a connected MongoClient and a Date instance that represents February 15.
``` javascript
await deleteListingsScrapedBeforeDate(client, new Date("2019-02-15"));
```
Executing the command above will result in the following output.
``` none
606 document(s) was/were deleted.
```
Now, only recently scraped documents are in our collection.
If you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.
## Wrapping Up
We covered a lot today! Let's recap.
We began by exploring how MongoDB stores data in documents and collections. Then we learned the basics of creating, reading, updating, and deleting data.
Continue on to the next post in this series, where we'll discuss how you can analyze and manipulate data using the aggregation pipeline.
Comments? Questions? We'd love to chat with you in the MongoDB Community.
| md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Learn how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.",
"contentType": "Quickstart"
} | MongoDB and Node.js 3.3.2 Tutorial - CRUD Operations | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/build-command-line-swift-mongodb | created | # Build a Command Line Tool with Swift and MongoDBBuild a Command Line Tool with Swift and MongoDB
## Table of Contents
- Introduction
- TL;DR:
- Goals
- Prerequisites
- Overview of Steps
- Requirements for Solution
- Launching Your Database Cluster in Atlas
- Setting Up The Project
- Looking at our Data
- Integrating the MongoDB Swift Driver
- Conclusion
- Resources
- Troubleshooting
## Introduction
Building something with your bare hands gives a sense of satisfaction like few other tasks. But there's really no comparison to the feeling you get when you create something that not only accomplishes the immediate task at hand but also enables you to more efficiently accomplish that same task in the future. Or, even better, when someone else can use what you have built to more easily accomplish their tasks. That is what we are going to do today. We are going to build something that will automate the process of importing data into MongoDB.
An executable program is powerful because it's self contained and transportable. There's no requirement to compile it or ensure that other elements are present in the environment. It just runs. You can share it with others and assuming they have a relatively similar system, it'll just run for them too. We're going to focus on accomplishing our goal using Swift, Apple's easy-to-learn programming language. We'll also feature use of our brand new MongoDB Swift Driver that enables you to create, read, update and delete data in a MongoDB database.
## TL;DR:
Rather have a video run-through of this content? Check out the Youtube Video where my colleague Nic Raboy, and I talk through this very same content.
:youtube[]{vid=cHB8hzUSCpE}
## Goals
Here are the goals for this article.
1. Increase your familiarity with MongoDB Atlas
2. Introduce you to the Swift Language, and the Xcode Development Environment
3. Introduce you to the MongoDB Swift Driver
4. Introduce you to the Swift Package Manager
By the end of this article, if we've met our goals, you will be able to do the following:
1. Use Xcode to begin experimenting with Swift
2. Use Swift Package Manager to:
- Create a basic project.
- Integrate the MongoDB Swift Driver into your project
 - Create an executable on your Mac.
## Prerequisites
Before we begin, let's clarify some of the things you'll have to have in place to get started.
- A Mac & MacOS (not an iOS device). You may be reading this on your Windows PC or an iPad. Sorry folks, this tutorial was written for you to follow along on your Mac machine: MacBook, MacBook Pro, iMac, etc. You may want to check out macincloud if you're interested in a virtual Mac experience.
- Xcode. You should have Xcode Installed - Visit Apple's App Store to install on your Mac.
- Swift Installed - Visit Apple's Developer Site to learn more.
- Access to a MongoDB Database - Visit MongoDB Atlas to start for free. Read more about MongoDB Atlas.
>If you haven't had much experience with Xcode or MacOS Application Development, check out the guides on Apple's Developer Hub. Getting started is very easy and it's free!
## What will we build?
The task I'm trying to automate involves importing data into a MongoDB database. Before we get too far down the path of creating a solution, let's document our set of requirements for what we'll create.
## Overview of Steps
Here's a quick run-down of the steps we'll work on to complete our task.
1. Launch an Atlas Cluster.
2. Add a Database user/password, and a network exception entry so you can access your database from your IP Address.
3. Create a Swift project using Swift Package Manager (`swift package init --type=executable`)
4. Generate an Xcode project using Swift Package Manager (`swift package generate-xcodeproj`)
5. Create a `for` loop that uses Swift's `String` class to access and print out the data in your `example.csv` file. (See csvread.swift)
6. Modify your package to pull in the MongoDB Swift Driver. (See Package.swift)
7. Test. (`swift build; swift run`) Errors? See the Troubleshooting section below.
8. Modify your code to incorporate the MongoDB Swift Driver, and write documents. (See Sources/command-line-swift-mongodb/main.swift)
9. Test. (`swift build; swift run`) Errors? See the Troubleshooting section below.
10. Create the release executable. (`swift build -c release`)
## Requirements for Solution
1. The solution must **import a set of data** that starts in CSV (or tabular/excel) format into an existing MongoDB database.
2. Each row of the data in the CSV file **should become a separate document in the MongoDB Database**. Further, each new document should include a new field with the import date/time.
3. It **must be done with minimal knowledge of MongoDB** - i.e. Someone with relatively little experience and knowledge of MongoDB should be able to perform the task within several minutes.
We could simply use mongoimport with the following command line:
``` bash
mongoimport --host localhost:27017 --type csv --db school --collection students --file example.csv --headerline
```
If you're familiar with MongoDB, the above command line won't seem tricky at all. However, this will not satisfy our requirements for the following reasons:
- **Requirement 1**: Pass - It will result in data being imported into MongoDB.
- **Requirement 2**: Fail - While each row WILL become a separate document, we'll not get our additional date field in those documents.
- **Requirement 3**: Fail - While the syntax here may seem rather straight-forward if you've used MongoDB before, to a newcomer, it can be a bit confusing. For example, I'm using localhost here... when we run this executable on another host, we'll need to replace that with the actual hostname for our MongoDB Database. The command syntax will get quite a bit more complex once this happens.
So then, how will we build something that meets all of our requirements?
We can build a command-line executable that uses the MongoDB Swift Driver to accomplish the task. Building a program to accomplish our task enables us to abstract much of the complexity associated with our task. Fortunately, there's a driver for Swift and using it to read CSV data, manipulate it and write it to a MongoDB database is really straight forward.
## Launching Your Database Cluster in Atlas
You'll need to create a new cluster and load it with sample data. My colleague Maxime Beugnet has created a video tutorial to help you out, but I also explain the steps below:
- Click "Start free" on the MongoDB homepage.
- Enter your details, or just sign up with your Google account, if you have one.
- Accept the Terms of Service
- Create a *Starter* cluster.
- Select the cloud provider where you'd like to store your MongoDB Database
- Pick a region that makes sense for you.
- You can change the name of the cluster if you like. I've called mine "MyFirstCluster".
Once your cluster launches, be sure that you add a Network Exception entry for your current IP and then add a database username and password. Take note of the username and password - you'll need these shortly.
## Setting Up The Project
We'll start on our journey by creating a Swift Package using Swift Package Manager. This tool will give us a template project and establish the directory structure and some scaffolding we'll need to get started. We're going to use the swift command line tool with the `package` subcommand.
There are several variations that we can use. Before jumping in, let's examine the differences between some of the flags.
``` bash
swift package init
```
This most basic variation will give us a general purpose project. But, since we're building a MacOS executable, let's add the `--type` flag to indicate the type of project we're working on.
``` bash
swift package init --type=executable
```
This will create a project that defines the "product" of a build -- which is in essence our executable. Just remember that if you're creating an executable, typically for server-side Swift, you'll want to incorporate the `--type=executable` flag.
Xcode is where most iOS, and Apple developers in general, write and maintain code so let's prepare a project so we can use Xcode too. Now that we've got our basic project scaffolding in place, let's create an Xcode project where we can modify our code.
To create an Xcode project simply execute the following command:
``` bash
swift package generate-xcodeproj
```
Then, we can open the `.xcodeproj` file. Your Mac should automatically open Xcode as a result of trying to open an Xcode project file.
``` bash
open <your-project-name>.xcodeproj # change this to the name that was created by the previous command.
```
## Looking at our Data
With our project scaffolding in place, let's turn our focus to the data we'll be manipulating with our executable. Let's look at the raw data first. Let's say there's a list of students that comes out every month that I need to get into my database. It might look something like this:
``` bash
firstname,lastname,assigned
Michael,Basic,FALSE
Dan,Acquilone,FALSE
Eli,Zimmerman,FALSE
Liam,Tyler,FALSE
Jane,Alberts,FALSE
Ted,Williams,FALSE
Suzy,Langford,FALSE
Paulina,Stern,FALSE
Jared,Lentz,FALSE
June,Gifford,FALSE
Wilma,Atkinson,FALSE
```
In this example data, we have 3 basic fields of information: First Name, Last Name, and a Boolean value indicating whether or not the student has been assigned to a specific class.
We want to get this data from its current form (CSV) into documents inside the database and along the way, add a field to record the date that the document was imported. This is going to require us to read the CSV file inside our Swift application. Before proceeding, make sure you either have similar data in a file to which you know the path. We'll be creating some code next to access that file with Swift.
Once we're finished, the data will look like the following, represented in a JSON document:
``` json
{
"_id": {
"$oid": "5f491a3bf983e96173253352" // this will come from our driver.
},
"firstname": "Michael",
"lastname": "Basic",
"date": {
"$date": "2020-08-28T14:52:43.398Z" // this will be set by our Struct default value
},
"assigned": false
}
```
In order to get the rows and fields of names into MongoDB, we'll use Swift's built-in String class. This is a powerhouse utility that can do everything from reading the contents of a file to interpolating embedded variables and comparing two or more sets of strings. The `String(contentsOfFile:encoding:)` initializer will access the file based on a filepath we provide, open the file, and enable us to access its contents. Here's what our code might look like if we were just going to loop through the CSV file and print out the rows it contains.
>You may be tempted to just copy/paste the code below. I would suggest that you type it in by hand... reading it from the screen. This will enable you to experience the power of auto-correct, and code-suggest inside Xcode. Also, be sure to modify the value of the `path` variable to point to the location where you put your `example.csv` file.
``` swift
import Foundation
let path = "/Users/mlynn/Desktop/example.csv" // change this to the path of your csv file
do {
let contents = try String(contentsOfFile: path, encoding: .utf8)
let rows = contents.components(separatedBy: NSCharacterSet.newlines)
for row in rows {
if row != "" {
print("Got Row: \(row)")
}
}
}
```
Let's take a look at what's happening here.
- Line 1: We'll use the Foundation core library. This gives us access to some basic string, character, and comparison methods. The `import` declaration gives us access to native as well as third-party libraries and modules.
- Line 2: Hard-code a `path` variable pointing to the CSV file.
- Lines 4-5: Use the `String` class to read the contents of the CSV file and split them into rows.
- Lines 6-8: Loop through each non-empty row in our file and display its contents.
To run this simple example, let's open the `main.swift` file that the `swift package init` command created for us. To edit this file in Xcode, traverse the folder tree under Project->Sources->Project Name and open `main.swift`. Replace the simple `hello world` with the code above.
To run this against our `example.csv` file, we'll use the commands `swift build` and `swift run`. You should see something like the following output.
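Assuming the `example.csv` contents shown above, the output should look something like this (one `Got Row` line per non-empty row, including the header row):

``` none
Got Row: firstname,lastname,assigned
Got Row: Michael,Basic,FALSE
Got Row: Dan,Acquilone,FALSE
Got Row: Eli,Zimmerman,FALSE
...
Got Row: Wilma,Atkinson,FALSE
```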
## Integrating the MongoDB Swift Driver
With this basic construct in place, we can now begin to incorporate the code necessary to insert a document into our database for each row of data in the csv file. Let's start by configuring Swift Package Manager to integrate the MongoDB Swift Driver.
Navigate in the project explorer to find the Package.swift file. Replace the contents with the Package.swift file from the repo:
``` swift
// swift-tools-version:5.2
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "csvimport-swift",
    platforms: [
.macOS(.v10_15),
],
dependencies: [
.package(url: "https://github.com/mongodb/mongo-swift-driver.git", from: "1.0.1"),
],
targets: [
.target(
name: "csvimport-swift",
dependencies: [.product(name: "MongoSwiftSync", package: "mongo-swift-driver")]),
.testTarget(
name: "csvimport-swiftTests",
dependencies: ["csvimport-swift"]),
]
)
```
>If you're unfamiliar with Swift Package Manager, take a detour and read up over here.
We're including a statement that tells Swift Package Manager that we're building this executable for a specific set of MacOS versions.
``` swift
platforms: [
.macOS(.v10_15)
],
```
>Tip: If you leave this statement out, you'll get a message stating that the package was designed to be built for MacOS 10.10 or similar.
Next, we've included references to the packages we'll need in our software to insert and manipulate MongoDB data. In this example, we'll concentrate on a synchronous implementation, using the `MongoSwiftSync` product from the mongo-swift-driver.
Now that we've included our dependencies, let's build the project. Build the project often so you catch any errors you may have inadvertently introduced early on.
``` none
swift build
```
You should get a response similar to the following:
``` none
3/3] Linking cmd
```
Now let's modify our basic program project to make use of our MongoDB driver.
``` swift
import Foundation
import MongoSwiftSync
var murl: String = "mongodb+srv://:\(ProcessInfo.processInfo.environment["PASS"]!)@myfirstcluster.zbcul.mongodb.net/?retryWrites=true&w=majority"
let client = try MongoClient(murl)
let db = client.db("students")
let session = client.startSession(options: ClientSessionOptions(causalConsistency: true))
struct Person: Codable {
let firstname: String
let lastname: String
let date: Date = Date()
let assigned: Bool
let _id: BSONObjectID
}
let path = "/Users/mlynn/Desktop/example.csv"
var tempAssigned: Bool
var count: Int = 0
var header: Bool = true
let personCollection = db.collection("people", withType: Person.self)
do {
let contents = try String(contentsOfFile: path, encoding: .utf8)
let rows = contents.components(separatedBy: NSCharacterSet.newlines)
for row in rows {
if row != "" {
var values: [String] = []
values = row.components(separatedBy: ",")
if header == true {
header = false
} else {
if String(values[2]).lowercased() == "false" || Bool(values[2]) == false {
tempAssigned = false
} else {
tempAssigned = true
}
try personCollection.insertOne(Person(firstname: values[0], lastname: values[1], assigned: tempAssigned, _id: BSONObjectID()), session: session)
count.self += 1
print("Inserted: \(count) \(row)")
}
}
}
}
```
Line 2 imports the driver we'll need (mongo-swift).
Next, we configure the driver.
``` swift
var murl: String = "mongodb+srv://<username>:\(ProcessInfo.processInfo.environment["PASS"]!)@myfirstcluster.zbcul.mongodb.net/?retryWrites=true&w=majority"
let client = try MongoClient(murl)
let db = client.db("students")
let session = client.startSession(options: ClientSessionOptions(causalConsistency: true))
```
Remember to replace `<username>` with the database user you created in Atlas.
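Also note that the connection string reads the database password from a `PASS` environment variable, so you'll need to export that variable in your shell before running the program. For example (the value shown is just a placeholder):

``` bash
export PASS='your-atlas-password'  # placeholder: use the password you created in Atlas
swift build
swift run
```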
To read and write data from and to MongoDB in Swift, we'll need to leverage a Codable structure. Codables are an amazing feature of Swift and definitely helpful for writing code that will write data to MongoDB. Codable is actually an alias for two protocols: Encodable and Decodable. When we make our `Struct` conform to the Codable protocol, we're able to encode our string data into JSON and then decode it back into a simple `Struct` using JSONEncoder and JSONDecoder, respectively. We'll need this structure because the format used to store data in MongoDB is slightly different from the representation you see of that data structure in Swift. We'll create a structure to describe what our document schema should look like inside MongoDB. Here's what our schema `Struct` should look like:
``` swift
struct Person: Codable {
    let firstname: String
    let lastname: String
    let date: Date = Date()
    let assigned: Bool
    let _id: BSONObjectID
}
```
Notice we've got all the elements from our CSV file plus a date field.
We'll also need a few temporary variables to use as we process the data: `count`, and a special temporary variable, `tempAssigned`, that I'll use to determine whether or not a student is assigned to a class. Lastly, in this code block, I'll create a variable to store the state of our position in the file. **header** will be set to true initially because we'll want to skip the first row of data. That's where the column headers live.
``` swift
let path = "/Users/mlynn/Desktop/example.csv"
var tempAssigned: Bool
var count: Int = 0
var header: Bool = true
```
Now we can create a reference to the collection in our MongoDB Database that we'll use to store our student data. For lack of a better name, I'm calling mine `personCollection`. Also, notice that we're providing a link back to our `Struct` using the `withType` argument to the collection method. This ensures that the driver knows what type of data we're dealing with.
``` swift
let personCollection = db.collection("people", withType: Person.self)
```
The next bit of code is at the heart of our task. We're going to loop through each row and create a document. I've commented and explained each row inline.
``` swift
let contents = try String(contentsOfFile: path, encoding: .utf8) // get the contents of our csv file with the String built-in
let rows = contents.components(separatedBy: NSCharacterSet.newlines) // get the individual rows separated by newline characters
for row in rows { // Loop through all rows in the file.
if row != "" { // in case we have an empty row... skip it.
      var values: [String] = [] // create / reset the values array of type string
values = row.components(separatedBy: ",") // assign the values array to the fields in the row of data
if header == true { // if it's the first row... skip it and.
header = false // Set the header to false so we do this only once.
} else {
if String(values[2]).lowercased() == "false" || Bool(values[2]) == false {
tempAssigned = false // Above: if its the string or boolean value false, so be it
} else {
tempAssigned = true // otherwise, explicitly set it to true
}
try personCollection.insertOne(Person(firstname: values[0], lastname: values[1], assigned: tempAssigned, _id: BSONObjectID()), session: session)
count.self += 1 // Above: use the insertOne method of the collection class form
print("Inserted: \(count) \(row)") // the mongo-swift-driver and create a document with the Person ``Struct``.
}
}
}
```
## Conclusion
Importing data is a common challenge. Even more common is when we want to automate the task of inserting or manipulating data with MongoDB. In this **how-to**, I've explained how you can get started with Swift and accomplish the task of simplifying data import by creating an executable, command-line tool that you can share with a colleague to enable them to import data for you. While this example is quite simple in terms of how it solves the problem at hand, you can certainly take the next step and begin to build on this to support command-line arguments and even use it to not only insert data but also to remove, merge, or update data.
I've prepared a section below titled **Troubleshooting** in case you come across some common errors. I've tried my best to think of all of the usual issues you may find. However, if you do find another issue, please let me know. The best way to do this is to sign up for the MongoDB Community and be sure to visit the section for Drivers and ODMs.
## Resources
- GitHub
- MongoDB Swift Driver Repository
- Announcing the MongoDB Swift Driver
- MongoDB Swift Driver Examples
- Mike's Twitter
## Troubleshooting
Use this section to help solve some common problems. If you still have issues after reading these common solutions, please visit me in the MongoDB Community.
### No Such Module
This occurs when Swift was unable to build the `mongo-swift-driver` module. It most typically happens when a developer is attempting to use Xcode and has not specified a minimum target OS version. In Xcode, open your target's Build Settings, locate the macOS Deployment Target setting, and change it to 10.15 or greater.
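If you're building from the command line with Swift Package Manager rather than through Xcode's settings, the equivalent fix is to make sure your `Package.swift` includes the `platforms` entry we added earlier:

``` swift
platforms: [
    .macOS(.v10_15),
],
```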
| md | {
"tags": [
"Swift",
"MongoDB"
],
"pageDescription": "Build a Command Line Tool with Swift and MongoDB",
"contentType": "Code Example"
} | Build a Command Line Tool with Swift and MongoDB | 2024-05-20T17:32:23.501Z
devcenter | https://www.mongodb.com/developer/products/mongodb/capture-iot-data-stitch | created | # Capture IoT Data With MongoDB in 5 Minutes
> Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.
Capturing IoT (Internet of Things) data is a complex task for 2 main reasons:
- We have to deal with a huge amount of data so we need a rock solid
architecture.
- While keeping a bulletproof security level.
First, let's have a look at a standard IoT capture architecture:
On the left, we have our sensors. Let's assume they can push data every second over TCP using an HTTP POST request, and let's suppose we have a million of them. We need an architecture capable of handling a million queries per second and able to resist any kind of network or hardware failure. The incoming requests need to be distributed evenly to the application servers using load balancers and, finally, the application servers push the data to the multiple mongos routers of our MongoDB Sharded Cluster.
As you can see, this architecture is relatively complex to install. We need to:
- buy and maintain a lot of servers,
- apply security updates to the operating systems and applications on a regular basis,
- have an auto-scaling capability (to reduce maintenance cost and enable automatic failover).
This kind of architecture is expensive, and the maintenance cost can be quite high as well.
Now let's solve this same problem with MongoDB Stitch!
Once you have created a MongoDB Atlas
cluster, you can attach a
MongoDB Stitch application to it
and then create an HTTP
Service
containing the following code:
``` javascript
exports = function(payload, response) {
const mongodb = context.services.get("mongodb-atlas");
const sensors = mongodb.db("stitch").collection("sensors");
var body = EJSON.parse(payload.body.text());
body.createdAt = new Date();
sensors.insertOne(body)
.then(result => {
response.setStatusCode(201);
});
};
```
And that's it! That's all we need! Our HTTP POST service can be reached
directly by the sensors from the webhook provided by MongoDB Stitch like
so:
``` bash
curl -H "Content-Type: application/json" -d '{"temp":22.4}' https://webhooks.mongodb-stitch.com/api/client/v2.0/app/stitchtapp-abcde/service/sensors/incoming_webhook/post_sensor?secret=test
```
Because MongoDB Stitch is capable of scaling automatically according to demand, you no longer have to take care of infrastructure or handle failovers.
## Next Step
Thanks for taking the time to read my post. I hope you found it useful
and interesting.
If you are looking for a very simple way to get started with MongoDB,
you can do that in just 5 clicks on our MongoDB
Atlas database service in the
cloud.
You can also try MongoDB Stitch for
free and discover how the
billing works.
If you want to query your data sitting in MongoDB Atlas using MongoDB
Stitch, I recommend this article from Michael
Lynn.
| md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "Learn how to use MongoDB for Internet of Things data in as little as 5 minutes.",
"contentType": "Article"
} | Capture IoT Data With MongoDB in 5 Minutes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/llm-accuracy-vector-search-unstructured-metadata | created | # Enhancing LLM Accuracy Using MongoDB Vector Search and Unstructured.io Metadata
Despite the remarkable strides in artificial intelligence, particularly in generative AI (GenAI), precision remains an elusive goal for large language model (LLM) outputs. According to the latest annual McKinsey Global Survey, “The state of AI in 2023,” GenAI has had a breakout year. Nearly one-quarter of C-suite executives personally use gen AI tools for work, and over 25% of companies with AI implementations have gen AI on their boards' agendas. Additionally, 40% of respondents plan to increase their organization's investment in AI due to advances in gen AI. The survey reflects the immense potential and rapid adoption of AI technologies. However, the survey also points to a significant concern: **inaccuracy**.
Inaccuracy in LLMs often results in "hallucinations" or incorrect information due to limitations like shallow semantic understanding and varying data quality. Incorporating semantic vector search using MongoDB can help by enabling real-time querying of training data, ensuring that generated responses align closely with what the model has learned. Furthermore, adding metadata filtering extracted by Unstructured tools can refine accuracy by allowing the model to weigh the reliability of its data sources. Together, these methods can significantly minimize the risk of hallucinations and make LLMs more reliable.
This article addresses this challenge by providing a comprehensive guide on enhancing the precision of your LLM outputs using MongoDB's Vector Search and Unstructured Metadata extraction techniques. The main purpose of this tutorial is to equip you with the knowledge and tools needed to incorporate external source documents in your LLM, thereby enriching the model's responses with well-sourced and contextually accurate information. At the end of this tutorial, you can generate precise output from the OpenAI GPT-4 model to cite the source document, including the filename and page number. The entire notebook for this tutorial is available on Google Colab, but we will be going over sections of the tutorial together.
## Why use MongoDB Vector Search?
MongoDB is a NoSQL database, which stands for "Not Only SQL," highlighting its flexibility in handling data that doesn't fit well in tabular structures like those in SQL databases. NoSQL databases are particularly well-suited for storing unstructured and semi-structured data, offering a more flexible schema, easier horizontal scaling, and the ability to handle large volumes of data. This makes them ideal for applications requiring quick development and the capacity to manage vast metadata arrays.
MongoDB's robust vector search capabilities and ability to seamlessly handle vector data and metadata make it an ideal platform for improving the precision of LLM outputs. It allows for multifaceted searches based on semantic similarity and various metadata attributes. This unique feature set distinguishes MongoDB from traditional developer data platforms and significantly enhances the accuracy and reliability of the results in language modeling tasks.
## Why use Unstructured metadata?
The Unstructured open-source library provides components for ingesting and preprocessing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of unstructured revolve around streamlining and optimizing the data processing workflow for LLMs. The Unstructured modular bricks and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficiently transforming unstructured data into structured outputs.
Metadata is often referred to as "data about data." It provides contextual or descriptive information about the primary data, such as its source, format, and relevant characteristics. The metadata from the Unstructured tools tracks various details about elements extracted from documents, enabling users to filter and analyze these elements based on particular metadata of interest. The metadata fields include information about the source document and data connectors.
The concept of metadata is familiar, but its application in the context of unstructured data brings many opportunities. The Unstructured package tracks a variety of metadata at the element level. This metadata can be accessed with `element.metadata` and converted to a Python dictionary representation using `element.metadata.to_dict()`.
In this article, we particularly focus on `filename` and `page_number` metadata to enhance the traceability and reliability of the LLM outputs. By doing so, we can cite the exact location of the PDF file that provides the answer to a user query. This becomes especially crucial when the LLM answers queries related to sensitive topics such as financial, legal, or medical questions.
## Code walkthrough
### Requirements
1. Sign up for a MongoDB Atlas account and install the PyMongo library in the IDE of your choice or Colab.
2. Install the Unstructured library in the IDE of your choice or Colab.
3. Install the Sentence Transformer library for embedding in the IDE of your choice or Colab.
4. Get the OpenAI API key. To do this, please ensure you have an OpenAI account.
### Step-by-step process
1. Extract the texts and metadata from source documents using Unstructured's partition_pdf.
2. Prepare the data for storage and retrieval in MongoDB.
- Vectorize the texts using the SentenceTransformer library.
- Connect and upload records into MongoDB Atlas.
- Query the index based on embedding similarity.
3. Generate the LLM output using the OpenAI Model.
#### **Step 1: Text and metadata extraction**
Please make sure you have installed the required libraries to run the necessary code.
```
# Install Unstructured partition for PDF and dependencies
pip install "unstructured[pdf]"
!apt-get -qq install poppler-utils tesseract-ocr
!pip install -q --user --upgrade pillow
pip install pymongo
pip install sentence-transformers
```
We'll delve into extracting data from a PDF document, specifically the seminal "Attention is All You Need" paper, using the `partition_pdf` function from the `Unstructured` library in Python. First, you'll need to import the function with `from unstructured.partition.pdf import partition_pdf`. Then, you can call `partition_pdf` and pass in the necessary parameters:
- `filename` specifies the PDF file to process, which is "example-docs/Attention is All You Need.pdf."
- `strategy` sets the extraction type, and for a more comprehensive scan, we use "hi_res."
- Finally, `infer_table_structured=True` tells the function to also extract table metadata.
Properly set up, as you can see in our Colab file, the code looks like this:
```
from unstructured.partition.pdf import partition_pdf
elements = partition_pdf("example-docs/Attention is All You Need.pdf",
strategy="hi_res",
infer_table_structured=True)
```
By running this code, you'll populate the `elements` variable with all the extracted information from the PDF, ready for further analysis or manipulation. In the Colab’s code snippets, you can inspect the extracted texts and element metadata. To observe the sample outputs — i.e., the element type and text — please run the line below. Use a print statement, and please make sure the output you receive matches the one below.
```
display(*[(type(element), element.text) for element in elements[14:18]])
```
Output:
```
(unstructured.documents.elements.NarrativeText,
'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English- to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.')
(unstructured.documents.elements.NarrativeText,
'∗Equal contribution. Listing order is random....
```
You can also use Counter from Python Collection to count the number of element types identified in the document.
```
from collections import Counter
display(Counter(type(element) for element in elements))
# outputs
Counter({unstructured.documents.elements.NarrativeText: 86,
unstructured.documents.elements.Title: 56,
unstructured.documents.elements.Text: 45,
unstructured.documents.elements.Header: 3,
unstructured.documents.elements.Footer: 9,
unstructured.documents.elements.Image: 5,
unstructured.documents.elements.FigureCaption: 5,
unstructured.documents.elements.Formula: 5,
unstructured.documents.elements.ListItem: 43,
unstructured.documents.elements.Table: 4})
```
Finally, you can convert the element objects into Python dictionaries using `convert_to_dict` built-in function to selectively extract and modify the element metadata.
```
from unstructured.staging.base import convert_to_dict
# built-in function to convert elements into Python dictionary
records = convert_to_dict(elements)
# display the first record
records[0]
# output
{'type': 'NarrativeText',
'element_id': '6b82d499d67190c0ceffe3a99958e296',
'metadata': {'coordinates': {'points': ((327.6542053222656,
199.8135528564453),
(327.6542053222656, 315.7165832519531),
(1376.0062255859375, 315.7165832519531),
(1376.0062255859375, 199.8135528564453)),
'system': 'PixelSpace',
'layout_width': 1700,
'layout_height': 2200},
'filename': 'Attention is All You Need.pdf',
'last_modified': '2023-10-09T20:15:36',
'filetype': 'application/pdf',
'page_number': 1,
'detection_class_prob': 0.5751863718032837},
'text': 'Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.'}
```
#### **Step 2: Data preparation, storage, and retrieval**
**Step 2a:** Vectorize the texts using the SentenceTransformer library.
We must include the extracted element metadata when storing and retrieving the texts from MongoDB Atlas to enable data retrieval with metadata and vector search.
First, we vectorize the texts to perform a similarity-based vector search. In this example, we use `microsoft/mpnet-base` from the Sentence Transformer library. This model has a 768 embedding size.
```
from sentence_transformers import SentenceTransformer
from pprint import pprint
model = SentenceTransformer('microsoft/mpnet-base')
# Let's test and check the number of embedding size using this model
emb = model.encode("this is a test").tolist()
print(len(emb))
print(emb[:10])
print("\n")
# output
768
[-0.15820945799350739, 0.008249259553849697, -0.033347081393003464, …]
```
It is important to use a model with the same embedding size defined in MongoDB Atlas Index. Be sure to use the embedding size compatible with MongoDB Atlas indexes. You can define the index using the JSON syntax below:
```json
{
"type": "vectorSearch,
"fields": [{
"path": "embedding",
"dimensions": 768, # the dimension of `mpnet-base` model
"similarity": "euclidean",
"type": "vector"
}]
}
```
Copy and paste the JSON index into your MongoDB collection so it can index the `embedding` field in the records. Please view this documentation on [how to index vector embeddings for Vector Search.
Next, create the text embedding for each record before uploading them to MongoDB Atlas:
```
for record in records:
  txt = record['text']
# use the embedding model to vectorize the text into the record
record['embedding'] = model.encode(txt).tolist()
# print the first record with embedding
records[0]
# output
{'type': 'NarrativeText',
'element_id': '6b82d499d67190c0ceffe3a99958e296',
'metadata': {'coordinates': {'points': ((327.6542053222656,
199.8135528564453),
(327.6542053222656, 315.7165832519531),
(1376.0062255859375, 315.7165832519531),
(1376.0062255859375, 199.8135528564453)),
'system': 'PixelSpace',
'layout_width': 1700,
'layout_height': 2200},
'filename': 'Attention is All You Need.pdf',
'last_modified': '2023-10-09T20:15:36',
'filetype': 'application/pdf',
'page_number': 1,
'detection_class_prob': 0.5751863718032837},
'text': 'Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.',
'embedding': [-0.018366225063800812,
-0.10861606895923615,
0.00344603369012475,
0.04939081519842148,
-0.012352174147963524,
-0.04383034259080887,...],
'_id': ObjectId('6524626a6d1d8783bb807943')}
}
```
**Step 2b**: Connect and upload records into MongoDB Atlas
Before we can store our records on MongoDB, we will use the PyMongo library to establish a connection to the target MongoDB database and collection. Use this code snippet to connect and test the connection (see the MongoDB documentation on [connecting to your cluster).
```
from pymongo.mongo_client import MongoClient
from pymongo.server_api import ServerApi
uri = "<>"
# Create a new client and connect to the server
client = MongoClient(uri, server_api=ServerApi('1'))
# Send a ping to confirm a successful connection
try:
client.admin.command('ping')
print("Pinged your deployment. You successfully connected to MongoDB!")
except Exception as e:
print(e)
```
Once run, the output: “Pinged your deployment. You successfully connected to MongoDB!” will appear.
Next, we can upload the records using PyMongo's `insert_many` function.
To do this, we must first grab our MongoDB database connection string. Please make sure the database and collection names match with the ones in MongoDB Atlas.
```
db_name = "unstructured_db"
collection_name = "unstructured_col"
# delete all first
client[db_name][collection_name].delete_many({})
# insert
client[db_name][collection_name].insert_many(records)
```
Let’s preview the records in MongoDB Atlas:
*Fig 2. Preview of the records in the MongoDB Atlas collection.*
**Step 2c**: Query the index based on embedding similarity
Now that we have the records stored in MongoDB Atlas, we can retrieve the most relevant ones using vector search. When a user sends a query, we vectorize it with the same embedding model we used to store the data, and then pass a `pipeline` to the `aggregate` function that performs the search against the index.
In the pipeline, we will specify the following:
- **index**: the name of the vector search index in the collection
- **queryVector**: the vectorized query from the user
- **path**: the field in the documents that holds the embeddings
- **limit**: the number of the most similar records we want to extract from the collection
- **numCandidates**: the number of nearest-neighbor candidates considered during the search
The `$project` stage then strips the embeddings and exposes the similarity **score** generated by MongoDB Atlas via `$meta: "searchScore"`.
```
query = "Does the encoder contain self-attention layers?"
vector_query = model.encode(query).tolist()
pipeline = [
{
"$vectorSearch": {
"index":"default",
"queryVector": vector_query,
"path": "embedding",
"limit": 5,
"numCandidates": 50
}
},
{
"$project": {
"embedding": 0,
"_id": 0,
"score": {
"$meta": "searchScore"
},
}
}
]
results = list(client[db_name][collection_name].aggregate(pipeline))
```
The above pipeline will return the top five records closest to the user’s query embedding. We can adjust `limit` to retrieve the top-k records in MongoDB Atlas. Please note that the results contain the `metadata`, `text`, and `score`. We can use this information to generate the LLM output in the following step.
Here’s one example of the top five nearest neighbors from the query above:
```
{'element_id': '7128012294b85295c89efee3bc5e72d2',
'metadata': {'coordinates': {'layout_height': 2200,
'layout_width': 1700,
'points': [[290.50477600097656,
1642.1170677777777],
[290.50477600097656,
1854.9523748867755],
[1403.820083618164,
1854.9523748867755],
[1403.820083618164,
1642.1170677777777]],
'system': 'PixelSpace'},
'detection_class_prob': 0.9979791045188904,
'file_directory': 'example-docs',
'filename': 'Attention is All You Need.pdf',
'filetype': 'application/pdf',
'last_modified': '2023-09-20T17:08:35',
'page_number': 3,
'parent_id': 'd1375b5e585821dff2d1907168985bfe'},
'score': 0.2526094913482666,
'text': 'Decoder: The decoder is also composed of a stack of N = 6 identical '
'layers. In addition to the two sub-layers in each encoder layer, '
'the decoder inserts a third sub-layer, which performs multi-head '
'attention over the output of the encoder stack. Similar to the '
'encoder, we employ residual connections around each of the '
'sub-layers, followed by layer normalization. We also modify the '
'self-attention sub-layer in the decoder stack to prevent positions '
'from attending to subsequent positions. This masking, combined with '
'fact that the output embeddings are offset by one position, ensures '
'that the predictions for position i can depend only on the known '
'outputs at positions less than i.',
'type': 'NarrativeText'}
```
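The ChatCompletion call in the next step passes a `context` string assembled from these search results. Here's a minimal sketch of how you might build it (the field names match the records shown above; the exact formatting of the string is up to you):

```
# Build a single context string from the retrieved records, keeping the
# filename and page_number so the model can cite its sources.
context = "\n\n".join(
    f"Text: {doc['text']}\n"
    f"filename: {doc['metadata']['filename']}\n"
    f"page_number: {doc['metadata']['page_number']}"
    for doc in results
)
```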
#### **Step 3: Generate the LLM output with source document citation**
We can generate the output using the OpenAI GPT-4 model. We will use the `ChatCompletion` function from the OpenAI API for this final step. The ChatCompletion API processes a list of messages to generate a model-driven response. Designed for multi-turn conversations, it's equally adept at single-turn tasks. The primary input is the 'messages' parameter, comprising an array of message objects with designated roles ("system", "user", or "assistant") and content. Usually initiated with a system message to guide the assistant's behavior, conversations can vary in length with alternating user and assistant messages. While the system message is optional, its absence may default the model to a generic helpful assistant behavior.
You’ll need an OpenAI API key to run the inferences. Before attempting this step, please ensure you have an OpenAI account. Assuming you store your OpenAI API key in your environment variable, you can import it using the `os.getenv` function:
```
import os
import openai
# Get the API key from the env
openai.api_key = os.getenv("OPENAI_API_KEY")
```
Next, having a compelling prompt is crucial for generating a satisfactory result. Here’s the prompt to generate the output with specific reference where the information comes from — i.e., filename and page number.
```
response = openai.ChatCompletion.create(
model="gpt-4",
    messages=[
{"role": "system", "content": "You are a useful assistant. Use the assistant's content to answer the user's query \
Summarize your answer using the 'texts' and cite the 'page_number' and 'filename' metadata in your reply."},
{"role": "assistant", "content": context},
{"role": "user", "content": query},
],
temperature = 0.2
)
```
In this Python script, a request is made to the OpenAI GPT-4 model through the `ChatCompletion.create` method to process a conversation. The conversation is structured with predefined roles and messages. It is instructed to generate a response based on the provided context and user query, summarizing the answer while citing the page number and file name. The `temperature` parameter set to 0.2 influences the randomness of the output, favoring more deterministic responses.
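To pull just the generated answer out of the response object returned by the pre-1.0 `openai` SDK used above, you can index into the first choice:

```
answer = response["choices"][0]["message"]["content"]
print(answer)
```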
## Evaluating the LLM output quality with source document
One of the key features of leveraging unstructured metadata in conjunction with MongoDB's Vector Search is the ability to provide highly accurate and traceable outputs.
```
User query: "Does the encoder contain self-attention layers?"
```
You can insert this query into the ChatCompletion API as the “user” role and the context from MongoDB retrieval results as the “assistant” role. To enforce the model responds with the filename and page number, you can provide the instruction in the “system” role.
```
response = openai.ChatCompletion.create(
model="gpt-4",
messages=[
{"role": "system", "content": "You are a useful assistant. Use the assistant's content to answer the user's query \
Summarize your answer using the 'texts' and cite the 'page_number' and 'filename' metadata in your reply."},
{"role": "assistant", "content": context},
{"role": "user", "content": query},
],
temperature = 0.2
)
print(response)
# output
{
"id": "chatcmpl-87rNcLaEYREimtuWa0bpymWiQbZze",
"object": "chat.completion",
"created": 1696884180,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Yes, the encoder does contain self-attention layers. This is evident from the text on page 5 of the document \"Attention is All You Need.pdf\"."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1628,
"completion_tokens": 32,
"total_tokens": 1660
}
}
```
Source document:
*Fig 3. The relevant texts in the source document that answer the user query.*
LLM Output:
The highly specific output cites information from the source document, "Attention is All You Need.pdf," stored in the 'example-docs' directory. The answers are referenced with exact page numbers, making it easy for anyone to verify the information. This level of detail is crucial when answering queries related to research, legal, or medical questions, and it significantly enhances the trustworthiness and reliability of the LLM outputs.
## Conclusion
This article presents a method to enhance LLM precision using MongoDB's Vector Search and Unstructured Metadata extraction techniques. These approaches, facilitating real-time querying and metadata filtering, substantially mitigate the risk of incorrect information generation. MongoDB's capabilities, especially in handling vector data and facilitating multifaceted searches, alongside the Unstructured library's data processing efficiency, emerge as robust solutions. These techniques not only improve accuracy but also enhance the traceability and reliability of LLM outputs, especially when dealing with sensitive topics, equipping users with the necessary tools to generate more precise and contextually accurate outputs from LLMs.
Ready to get started? Request your Unstructured API key today and unlock the power of Unstructured API and Connectors. Join the Unstructured community group to connect with other users, ask questions, share your experiences, and get the latest updates. We can’t wait to see what you’ll build.
| md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "This article provides a comprehensive guide on improving the precision of large language models using MongoDB's Vector Search and Unstructured.io's metadata extraction techniques, aiming to equip readers with the tools to produce well-sourced and contextually accurate AI outputs.",
"contentType": "Tutorial"
} | Enhancing LLM Accuracy Using MongoDB Vector Search and Unstructured.io Metadata | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-to-use-custom-archival-rules-and-partitioning-on-mongodb-atlas-online-archive | created | # How to Use Custom Archival Rules and Partitioning on MongoDB Atlas Online Archive
>As of June 2022, the functionality previously known as Atlas Data Lake is now named Atlas Data Federation. Atlas Data Federation’s functionality is unchanged and you can learn more about it here. Atlas Data Lake will remain in the Atlas Platform, with newly introduced functionality that you can learn about here.
Okay, so you've set up a simple MongoDB Atlas Online Archive, and now you might be wondering, "What's next?" In this post, we will cover some more advanced Online Archive use cases, including setting up custom archival rules and how to improve query performance through partitioning.
## Prerequisites
- The Online Archive feature is available on M10 and greater Atlas clusters that run MongoDB 3.6 or later. So for this demo, you will need to create a M10 cluster in MongoDB Atlas. Click here for information on setting up a new MongoDB Atlas cluster or check out How to Manage Data at Scale With MongoDB Atlas Online Archive.
- Ensure that each database has been seeded by loading sample data into our Atlas cluster. I will be using the `sample_analytics.customers` dataset for this demo.
## Creating a Custom Archival Rule
Creating an Online Archive rule based on the date makes sense for a lot of archiving situations, such as automatically archiving documents that are over X years old, or that were last updated Y months ago. But what if you want to have more control over what gets archived? Some examples of data that might be eligible to be archived are:
- Data that has been flagged for archival by an administrator.
- Discontinued products on your eCommerce site.
- User data from users that have closed their accounts on your platform (unless they are European citizens).
- Employee data from employees that no longer work at your company.
There are lots of reasons why you might want to set up custom rules for archiving your cold data. Let's dig into how you can achieve this using custom archive rules with MongoDB Atlas Online Archive. For this demo, we will be setting up an automatic archive of all users in the `sample_analytics.customers` collection that have the 'active' field set to `false`.
In order to configure our Online Archive, first navigate to the Cluster page for your project, click on the name of the cluster you want to configure Online Archive for, and click on the **Online Archive** tab.
Next, click the Configure Online Archive button the first time and the **Add Archive** button subsequently to start configuring Online Archive for your collection. Then, you will need to create an Archiving Rule by specifying the collection namespace, which will be `sample_analytics.customers`.
You will also need to specify your custom criteria for archiving documents. You can specify the documents you would like to filter for archival with a MongoDB query, in JSON, the same way as you would write filters in MongoDB Atlas.
> Note: You can use any valid MongoDB Query Language (MQL) query, however, you cannot use the empty document argument ({}) to return all documents.
To retrieve the documents staged for archival, we will use the following query filter. It matches all documents that have the `active` field set to `false` or that do not have an `active` key at all.
```
{ $or: [
  { active: false },
  { active: null }
] }
```
Continue setting up your archive, and then you should be done!
> Note: It's always a good idea to run your custom queries in the mongo shell first to ensure that you are archiving the correct documents.
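For example, assuming the sample_analytics sample data is loaded, you could preview the matching documents from the shell like this:

```javascript
// Run in mongosh (or the legacy mongo shell) connected to your cluster
const analytics = db.getSiblingDB("sample_analytics");

// How many documents would the rule archive?
analytics.customers.countDocuments({ $or: [{ active: false }, { active: null }] });

// Spot-check a few of them
analytics.customers.find({ $or: [{ active: false }, { active: null }] }).limit(5);
```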
> Note: Once you initiate an archive and a MongoDB document is queued for archiving, you can no longer edit the document.
## Improving Query Performance Through Partitioning
One of the reasons we archive data is to access and query it in the future, if for some reason we still need to use it. In fact, you might be accessing this data quite frequently! That's why it's useful to be able to partition your archived data and speed up query times. With Atlas Online Archive, you can specify the two most frequently queried fields in your collection to create partitions in your online archive.
Fields with a moderate to high cardinality (cardinality being the number of distinct values in a field) are good choices to be used as a partition. Queries that don't contain these fields will require a full collection scan of all archived documents, which will take longer and increase your costs. However, it's a bit of a balancing act.
For example, fields with low cardinality won't partition the data well and therefore won't improve query performance greatly. This may be acceptable for range queries or collection scans, though, and it will result in fast archival performance.
Fields with mid to high cardinality will partition the data better, leading to better general query performance but slightly slower archival performance.
Fields with extremely high cardinality like `_id` will lead to poor query performance for everything but "point queries" that query on _id, and will lead to terrible archival performance due to writing many partitions.
> Note: Online Archive is powered by MongoDB Atlas Data Lake. To learn more about how partitions improve your query performance in Data Lake, see Data Structure in cloud object storage - Amazon S3 or Microsoft Azure Blob Storage.
The specified fields are used to partition your archived data for optimal query performance. Partitions are similar to folders. You can move whichever field to the first position of the partition if you frequently query by that field.
The order of fields listed in the path is important in the same way as it is in Compound Indexes. Data in the specified path is partitioned first by the value of the first field, and then by the value of the next field, and so on. Atlas supports queries on the specified fields using the partitions.
You can specify the two most frequently queried fields in your collection and order them from the most frequently queried in the first position to the least queried field in the second position. For example, suppose you are configuring the online archive for your `customers` collection in the `sample_analytics` database. If your archived field is set to the custom archival rule in our example above, your first queried field is `username`, and your second queried field is `email`, your partition will look like the following:
```
/username/email
```
Atlas creates partitions first for the `username` field, followed by the `email`. Atlas uses the partitions for queries on the following fields:
- the `username` field
- the `username` field and the `email` field
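For example, with the `/username/email` partition above, the first two queries below can take advantage of the partitions, while the last one cannot. The values are made up, and the queries assume you are connected to the archive through its federated database connection:

```javascript
// Filters on the first partition field: can use the partitions
db.customers.find({ username: "fmiller" });

// Filters on both partition fields, in partition order: can also use the partitions
db.customers.find({ username: "fmiller", email: "fmiller@example.com" });

// Skips the first partition field: requires scanning all partitions
db.customers.find({ email: "fmiller@example.com" });
```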
> Note: The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived.
For more information on how to partition data in your Online Archive, please refer to the documentation.
## Summary
In this post, we covered some advanced use cases for Online Archive to help you take advantage of this MongoDB Atlas feature. We initialized a demo project to show you how to set up custom archival rules with Atlas Online Archive, as well as improve query performance through partitioning your archived data.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "So you've set up a simple MongoDB Atlas Online Archive, and now you might be wondering, \"What's next?\" In this post, we will cover some more advanced Online Archive use cases, including setting up custom archival rules and how to improve query performance through partitioning.",
"contentType": "Tutorial"
} | How to Use Custom Archival Rules and Partitioning on MongoDB Atlas Online Archive | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/create-data-api-10-min-realm | created | # Create a Custom Data Enabled API in MongoDB Atlas in 10 Minutes or Less
## Objectives
- Deploy a Free Tier Cluster
- Load Sample Data into your MongoDB Atlas Cluster
- Create a MongoDB Realm application
- Create a 3rd Party Service, an API with an HTTP service listener
- Test the API using Postman
## Prerequisites
- MongoDB Atlas Account with a Cluster Running
- Postman Installed - See
## Getting Started
Creating an Application Programming Interface (API) that exposes data and responds to HTTP requests is very straightforward. With MongoDB Realm, you can create a data enabled endpoint in about 10 minutes or less. In this article, I'll explain the steps to follow to quickly create an API that exposes data from a sample database in MongoDB Atlas. We'll deploy the sample dataset, create a Realm App with an HTTP listener, and then we'll test it using Postman.
> I know that some folks prefer to watch and learn, so I've created this video overview. Be sure to pause the video at the various points where you need to install the required components and complete some of the required steps.
>
> :youtube[]{vid=bM3fcw4M-yk}
## Step 1: Deploy a Free Tier Cluster
If you haven't done so already, visit [this link and follow along to deploy a free tier cluster. This cluster will be where we store and manage the data associated with our data API.
## Step 2: Load Sample Datasets into Your Atlas Cluster
MongoDB Atlas offers several sample datasets that you can easily deploy once you launch a cluster. Load the sample datasets by clicking on the three dots button to see additional options, and then select "Load Sample Dataset." This process will take approximately five minutes and will add a number of really helpful databases and collections to your cluster. Be aware that these will consume approximately 350mb of storage. If you intend to use your free tier cluster for an application, you may want to remove some of the datasets that you no longer need. You can always re-deploy these should you need them.
Navigate to the **Collections** tab to see them all. All of the datasets will be created as separate databases prefixed with `sample_` and then the name of the dataset. The one we care about for our API is called `sample_analytics`. Open this database up and you'll see one collection called `customers`. Click on it to see the data we will be working with.
This collection will have 500 documents, with each containing sample Analytics Customer documents. Don't worry about all the fields or the structure of these documents just now—we'll just be using this as a simple data source.
## Step 3: Create a New App
To begin creating a new Application Service, navigate from Atlas to App Services.
At the heart of the entire process are Application Services. There are several from which to choose and to create a data enabled endpoint, you'll choose the HTTP Service with HTTPS Endpoints. HTTPS Endpoints, like they sound, are simply hooks into the web interface of the back end. Coming up, I'll show you the code (a function) that gets executed when the hook receives data from your web client.
To access and create 3rd Party Services, click the link in the left-hand navigation labeled "3rd Party Services."
Next, let's add a service. Find, and click the button labeled "Add a Service."
Next, we'll specify that we're creating an HTTP service and we'll provide a name for the service. The name is not incredibly significant. I'm using `api` in this example.
When you create an HTTP Service, you're enabling access to this service from Realm's serverless functions in the form of an object called `context.services`. More on that later when we create a serverless function attached to this service. Name and add the service and you'll then get to create an Incoming HTTPS Endpoint. This is the process that will be contacted when your clients request data of your API.
Call the HTTPS Endpoint whatever you like, and set the parameters as you see below:
##### HTTPS Endpoint Properties
| Property | Description |
|------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Name | Choose a name for your HTTPS Endpoint... any value will do. |
| Authentication | This is how your HTTPS Endpoint will authenticate users of your API. For this simple exercise, let's choose `System`. |
| Log Function Arguments | Enabling this allows you to get additional log content with the arguments sent from your web clients. Turn this on. |
| HTTPS Endpoint URL | This is the URL created by Realm. Take note of this - we'll be using this URL to test our API. |
| HTTP Method | Our API can listen for the various HTTP methods (GET, POST, PATCH, etc.). Set this to POST for our example. |
| Respond with Result | Our API can respond to web client requests with a dataset result. You'll want this on for our example. |
| AUTHORIZATION - Can evaluate | This is a JSON expression that must evaluate to TRUE before the function may run. If this field is blank, it will evaluate to TRUE. This expression is evaluated before service-specific rules. |
| Request Validation | Realm can validate incoming requests to protect against DDOS attacks and users that you don't want accessing your API. Set this to `Require Secret` for our example. |
| Secret | This is the secret passphrase we'll create and use from our web client. We'll send this using a `PARAM` in the POST request. More on this below. |
As mentioned above, our example API will respond to `POST` requests. Next up, you'll get to create the logic in a function that will be executed whenever your API is contacted with a POST request.
### Defining the Function
Let's define the function that will be executed when the HTTPS Endpoint receives a POST request.
> As you modify the function, and save settings, you will notice a blue bar appear at the top of the console.
>
>
>
> This appears to let you know you have modified your Realm Application but have not yet deployed those changes. It's good practice to batch your changes. However, make sure you remember to review and deploy prior to testing.
Realm gives you the ability to specify what logic gets executed as a result of receiving a request on the HTTPS Endpoint URL. What you see above is the default function that's created for you when you create the service. It's meant to be an example and show you some of the things you can do in a Realm Backend function. Pay close attention to the `payload` variable. This is what's sent to you by the calling process. In our case, that's going to be from a form, or from an external JavaScript script. We'll come back to this function shortly and modify it accordingly.
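For instance, a minimal sketch that just echoes back what a client sent might look like this. The `payload.query` field carries the query-string parameters (we use it the same way later in this article), and `payload.body`, when present, is a binary you can decode with `.text()`; the parsing below assumes the client sent a JSON body:

```javascript
exports = function(payload) {
  // Query-string parameters sent by the client
  console.log("query params:", JSON.stringify(payload.query));

  // For POST requests, decode and parse the raw body (if any)
  const body = payload.body ? JSON.parse(payload.body.text()) : {};

  // Echo the parsed body back to the caller
  return { received: body };
};
```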
Using our sample database `sample_analytics` and our `customers`, let's write a basic function to return 10 customer documents.
And here's the source:
``` JavaScript
exports = function(payload) {
const mongodb = context.services.get("mongodb-atlas");
const mycollection = mongodb.db("sample_analytics").collection("customers");
return mycollection.find({}).limit(10).toArray();
};
```
This is JavaScript. To be specific, it's ECMAScript 6 (also known as ES6 and ECMAScript 2015), which was the second major revision to JavaScript.
Let's call out an important element of this script: `context`.
Realm functions can interact with connected services, user information, predefined values, and other functions through modules attached to the global `context` variable.
The `context` variable contains the following modules:
| Property | Description |
|---------------------|------------------------------------------------------------------------------|
| `context.services` | Access service clients for the services you've configured. |
| `context.values` | Access values that you've defined. |
| `context.user` | Access information about the user that initiated the request. |
| `context.request` | Access information about the HTTP request that triggered this function call. |
| `context.functions` | Execute other functions in your Realm app. |
| `context.http` | Access the HTTP service for get, post, put, patch, delete, and head actions. |
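As a small illustration, a single function could combine several of these modules. The Value and function names below are hypothetical and would need to exist in your own app:

```javascript
exports = async function() {
  // Who called this function
  const caller = context.user;

  // A Value defined under "Values" in your app (hypothetical name)
  const greeting = context.values.get("greetingPrefix");

  // Call another serverless function defined in this app (hypothetical name)
  const footer = await context.functions.execute("buildFooter");

  return { message: `${greeting} ${caller.id}`, footer };
};
```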
Once you've set your configuration for the Realm HTTPS Endpoint, copy the HTTPS Endpoint URL, and take note of the Secret you created. You'll need these to begin sending data and testing.
Speaking of testing... Postman is a great tool that enables you to test an API like the one we've just created. Postman acts like a web client - either a web application or a browser.
> If you don't have Postman installed, visit this link (it's free!):
Let's test our API with Postman:
1. Launch Postman and click the plus (+ New) to add a new request. You may also use the Launch screen - whichever you're more comfortable with.
2. Give your request a name and description, and choose/create a collection to save it in.
3. Paste the HTTPS Endpoint URL you created above into the URL bar in Postman labeled `Enter request URL`.
4. Change the `METHOD` from `GET` to `POST` - this will match the `HTTP Method` we configured in our HTTPS Endpoint above.
5. We need to append our `secret` parameter to our request so that our HTTPS Endpoint validates and authorizes the request. Remember, we set the secret parameter above. There are two ways you can send the secret parameter. The first is by appending it to the HTTPS Endpoint URL by adding `?secret=YOURSECRET`. The other is by creating a `Parameter` in Postman. Either way will work.
Once you've added the secret, you can click `SEND` to send the request to your newly created HTTPS Endpoint.
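If you'd rather test from code than from Postman, a quick Node.js sketch works too (Node 18+ for the built-in fetch; the URL and secret below are placeholders for the values you created above):

```javascript
const ENDPOINT_URL = "https://example.com/your-https-endpoint"; // placeholder: your HTTPS Endpoint URL
const SECRET = "YOURSECRET";                                    // placeholder: the secret you created

async function testEndpoint() {
  // Send the secret as a query parameter, just like in Postman
  const res = await fetch(`${ENDPOINT_URL}?secret=${SECRET}`, { method: "POST" });
  console.log(await res.json());
}

testEndpoint().catch(console.error);
```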
If all goes well, Postman will send a POST request to your API and Realm will execute the Function you created, returning 10 records from the `sample_analytics` database and the `customers` collection...
``` javascript
{
"_id": {
"$oid": "5ca4bbcea2dd94ee58162a68"
},
"username": "fmiller",
"name": "Elizabeth Ray",
"address": "9286 Bethany Glens\nVasqueztown, CO 22939",
"birthdate": {
"$date": {
"$numberLong": "226117231000"
}
},
"email": "[email protected]",
"active": true,
"accounts": [
{
"$numberInt": "371138"
},
...
],
"tier_and_details": {
"0df078f33aa74a2e9696e0520c1a828a": {
"tier": "Bronze",
"id": "0df078f33aa74a2e9696e0520c1a828a",
"active": true,
"benefits": [
"sports tickets"
]
},
"699456451cc24f028d2aa99d7534c219": {
"tier": "Bronze",
"benefits": [
"24 hour dedicated line",
"concierge services"
],
"active": true,
"id": "699456451cc24f028d2aa99d7534c219"
}
}
},
// remaining documents clipped for brevity
...
]
```
## Taking This Further
In just a few minutes, we've managed to create an API that exposes (READs) data stored in a MongoDB Database. This is just the beginning, however. From here, you can now expand on the API and create additional methods that handle all aspects of data management, including inserts, updates, and deletes.
To do this, you'll create additional HTTPS Endpoints, or modify this HTTPS Endpoint to take arguments that will control the flow and behavior of your API.
Consider the following example, showing how you might evaluate parameters sent by the client to manage data.
``` JavaScript
exports = async function(payload) {
const mongodb = context.services.get("mongodb-atlas");
const db = mongodb.db("sample_analytics");
const customers = db.collection("customers");
const cmd=payload.query.command;
const doc=payload.query.doc;
switch(cmd) {
case "create":
const result= await customers.insertOne(doc);
if(result) {
return { text: `Created customer` };
}
return { text: `Error stashing` };
case "read":
const findresult = await customers.find({'username': doc.username}).toArray();
return { findresult };
case "delete":
const delresult = await customers.deleteOne( { username: { $eq: payload.query.username }});
return { text: `Deleted ${delresult.deletedCount} stashed items` };
default:
return { text: "Unrecognized command." };
}
}
```
## Conclusion
MongoDB Realm enables developers to quickly create fully functional application components without having to implement a lot of the boilerplate code typically required for APIs. Note that the above example, while basic, should provide you with a good starting point. Please join me in the Community Forums if you have questions.
You may also be interested in learning more from an episode of the MongoDB Podcast where we covered Mobile Application Development with Realm.
#### Other Resources
Data API Documentation: https://docs.atlas.mongodb.com/api/data-api/
| md | {
"tags": [
"Realm"
],
"pageDescription": "Learn how to create a data API with Atlas Data API in 10 minutes or less",
"contentType": "Tutorial"
} | Create a Custom Data Enabled API in MongoDB Atlas in 10 Minutes or Less | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-build-healthcare-interoperability-microservice-using-fhir-mongodb | created | # How to Build a Healthcare Interoperability Microservice Using FHIR and MongoDB
Interoperability refers to a system's or software's capability to exchange and utilize information. Modern interoperability standards, like Fast Healthcare Interoperability Resources (FHIR), precisely define how data should be communicated. Like most current standards, FHIR uses REST APIs that exchange data in JSON format. However, these standards do not dictate how data should be stored, giving software vendors flexibility in managing information according to their preferences.
This is where MongoDB's approach comes into play — data that is accessed together should be stored together. The compatibility between the FHIR’s resource format and MongoDB's document model allows the data to be stored exactly as it should be communicated. This brings several benefits, such as removing the need for any middleware/data processing tool which decreases development complexity and accelerates read/write operations.
Additionally, MongoDB can also allow you to create a FHIR-compliant Atlas data API. This benefits healthcare providers using software vendors by giving them control over their data without complex integrations. It reduces integration complexity by handling data processing at a platform level. MongoDB's app services also offer security features like authentication. This, however, is not a full clinical data repository nor is it meant to replace one. Rather, this is yet another integration capability that MongoDB has.
In this article, we will walk you through how you can expose the data of FHIR resources through Atlas Data API to two different users with different permissions.
## Scenario
- Dataset: We have a simple dataset where we have modeled the data using FHIR-compliant schemas. These resources are varied: patients, locations, practitioners, and appointments.
- We have two users groups that have different responsibilities:
- The first is a group of healthcare providers. These individuals work in a specific location and should only have access to the appointments in said location.
- The second is a group that works at a healthcare agency. These individuals analyze the appointments from several centers. They should not be able to look at personal identifiable information (or PII).
## Prerequisites
- Deploy an M0+ Atlas cluster.
- Install Python3 along with PyMongo and Mimesis modules to generate and insert documents.
## Step 1: Insert FHIR documents into the database
Clone this GitHub repository on your computer.
- Add your connection string on the config.py file. You can find it by following the instructions in our docs.
- Execute the files locGen.py, pracGen.py, patientGen.py, and ProposedAppointmentGeneration.py, in that order.
> Note: The last script will take a couple of minutes as it creates the appointments with the relevant information from the other collections.
Before continuing, you should check that you have a new “FHIR” database along with four collections inside it:
- Locations with 22 locations
- Practitioners with 70 documents
- Patients with 20,000 documents
- Appointments with close to 7,000 documents
## Step 2: Create an App Services application
After you’ve created a cluster and loaded the sample dataset, you can create an application in Atlas App Services.
Follow the steps to create a new App Services application if you haven’t done so already.
I used the name “FHIR-search” and chose the cluster “FHIR” that I’ve already loaded the sample dataset into.
Next, create an HTTPS endpoint and add the following function to it. You can copy the code from the GitHub repository or from below.
```javascript
exports = async function(request, response) {
const queryParams = request.query;
const collection = context.services.get("mongodb-atlas").db("FHIR").collection("appointments");
const query = {};
const sort = {};
const project = {};
const codeParams = {};
  const aggreg = [];
const pageSize = 20;
const limit={};
let tot = true;
let dynamicPageSize = null;
  const URL = 'https://fakeurl.com/endpoint/appointment'; // put your HTTPS endpoint URL here
const FieldMap = {
'actor': 'participant.actor.reference',
'date': 'start',
'identifier':'_id',
'location': 'location.reference',
'part-status': 'participant.0.actor.status',
'patient':'participant.0.actor.reference',
'practitioner': 'participant.1.actor.reference',
'status': 'status',
};
for (const key in queryParams) {
switch (key) {
case "actor":
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
break;
case "date":
const dateParams = queryParams[key].split(",");
const dateFilters = dateParams.map((dateParam) => {
const firstTwoChars = dateParam.substr(0, 2);
const dateValue = dateParam.slice(2);
if (firstTwoChars === "ge" || firstTwoChars === "le") {
const operator = firstTwoChars === "ge" ? "$gte" : "$lte";
return { ["start"]: { [operator] : new Date(dateValue) } };
}
return null;
});
query["$and"] = dateFilters.filter((filter) => filter !== null);
break;
case "identifier":
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
break;
case "location":
try {
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
} catch (error) {
const locValues = queryParams[key].split(",");
query[FieldMap[key]] = { $in: locValues };
}
break;
case "location:contains" :
try {
query[FieldMap[key]] = {"$regex": new BSON.ObjectId(queryParams[key]), "$options": "i"};
} catch (error) {
query[FieldMap[key]] = {"$regex": queryParams[key], "$options": "i"};
}
break;
case "part-status":
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
break;
case "patient":
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
break;
case "practitioner":
query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);
break;
case "status":
const statusValues = queryParams[key].split(",");
query[FieldMap[key]] = { $in: statusValues };
break;
case "_count":
dynamicPageSize = parseInt(queryParams[key]);
break;
case "_elements":
const Params = queryParams[key].split(",");
for (const param of Params) {
if (FieldMap[param]) {
project[FieldMap[param]] = 1;
}
}
break;
case "_sort":
// sort logic
const sortDirection = queryParams[key].startsWith("-") ? -1 : 1;
const sortField = queryParams[key].replace(/^-/, '');
sort[FieldMap[sortField]] = sortDirection;
break;
case "_maxresults":
// sort logic
limit["_maxresults"]=parseInt(queryParams[key])
break;
case "_total":
tot = false;
break;
default:
// Default case for other keys
codeParams[key] = queryParams[key];
break;
}
}
let findResult;
const page = parseInt(codeParams.page) || 1;
if (tot) {
aggreg.push({'$match':query});
if(Object.keys(sort).length > 0){
aggreg.push({'$sort':sort});
} else {
aggreg.push({'$sort':{"start":1}});
}
if(Object.keys(project).length > 0){
aggreg.push({'$project':project});
}
if(Object.keys(limit).length > 0){
aggreg.push({'$limit':limit["_maxresults"]});
}else{
aggreg.push({'$limit':(dynamicPageSize||pageSize)*page});
}
try {
//findResult = await collection.find(query).sort(sort).limit((dynamicPageSize||pageSize)*pageSize).toArray();
findResult = await collection.aggregate(aggreg).toArray();
} catch (err) {
console.log("Error occurred while executing find:", err.message);
response.setStatusCode(500);
response.setHeader("Content-Type", "application/json");
return { error: err.message };
}
} else {
findResult = [];
}
let total
if(Object.keys(limit).length > 0){
total=limit["_maxresults"];
}else{
total = await collection.count(query);
}
const totalPages = Math.ceil(total / (dynamicPageSize || pageSize));
const startIdx = (page - 1) * (dynamicPageSize || pageSize);
const endIdx = startIdx + (dynamicPageSize || pageSize);
const resultsInBundle = findResult.slice(startIdx, endIdx);
const bundle = {
resourceType: "Bundle",
type: "searchset",
total:total,
link:[],
entry: resultsInBundle.map((resource) => ({
fullUrl: `${URL}?id=${resource._id}`,
resource,
search: {
mode: 'match'
},
})),
};
if (page <= totalPages) {
if (page > 1 && page!==totalPages) {
bundle.link = [
{ relation: "previous", url: `${URL}${getQueryString(queryParams,sort,page-1,dynamicPageSize || pageSize)}` },
{ relation: "self", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` },
{ relation: "next", url: `${URL}${getQueryString(queryParams,sort,page+1,dynamicPageSize || pageSize)}` },
];
} else if(page==totalPages && totalPages!==1) {
bundle.link = [
{ relation: "previous", url: `${URL}${getQueryString(queryParams,sort,page-1,dynamicPageSize || pageSize)}` },
{ relation: "self", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` }
];
} else if(totalPages==1 || dynamicPageSize==0) {
bundle.link = [
{ relation: "self", url: `${URL}${getQueryString(queryParams,null,0,0)}` },
];
} else {
bundle.link = [
{ relation: "self", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` },
{ relation: "next", url: `${URL}${getQueryString(queryParams,sort,page+1,dynamicPageSize || pageSize)}` },
];
}
}
response.setStatusCode(200);
response.setHeader("Content-Type", "application/json");
response.setBody(JSON.stringify(bundle, null, 2));
};
// Helper function to generate query string from query parameters
function getQueryString(params,sort, p, pageSize) {
let paramString = "";
let queryString = "";
if (params && Object.keys(params).length > 0) {
paramString = Object.keys(params)
.filter((key) => key !== "page" && key !== "_count")
.map((key) => `${(key)}=${params[key]}`)
.join("&");
}
if (paramString!==""){
if (p > 1) {
queryString = `?`+ paramString.replace(/ /g, "%20") + `&page=${(p)}&_count=${pageSize}`;
} else {
queryString += `?`+ paramString.replace(/ /g, "%20") +`&_count=${pageSize}`
}
} else if (p > 1) {
queryString = `?page=${(p)}&_count=${pageSize}`;
}
return queryString;
}
```
- Make sure to replace the fake URL in the function above with the one that was just created for your HTTPS endpoint.
![Add Authentication to the Endpoint Function][3]
- Enable both “Fetch Custom User Data” and “Create User Upon Authentication.”
- Lastly, save the draft and deploy it.
![Publish the Endpoint Function][4]
Now, your API endpoint is ready and accessible! But if you test it, you will get the following authentication error since no authentication provider has been enabled.
```bash
curl --location --request GET 'https://.com/app//endpoint/appointment' \
--header 'Content-Type: application/json'
{"error":"no authentication methods were specified","error_code":"InvalidParameter","link":"https://realm.mongodb.com/groups/64e34f487860ee7a5c8fc990/apps/64e35fe30e434ffceaca4c89/logs?co_id=64e369ca7b46f09497deb46d"}
```
> Side note: To view the result without any security, you can go into your function, then go to the settings tab and set the authentication to system. However, this will treat any request as if it came from the system, so proceed with caution.
## Step 3.1: Enable JWT-based authentication
FHIR emphasizes the importance of secure data exchange in healthcare. While FHIR itself doesn't define a specific authentication protocol, it recommends using OAuth for web-centric applications and highlights the HL7 SMART App Launch guide for added context. This focus on secure authentication aligns with MongoDB Atlas's provision for JWT (JSON Web Tokens) as an authentication method, making it an advantageous choice when building FHIR-based microservices.
Then, to add authentication, navigate to the homepage of the App Services application. Click “Authentication” on the left-hand side menu and click the EDIT button of the row where the provider is Custom JWT Authentication.
![Enable JWT Authentication for the Endpoint][5]
JWT (JSON Web Token) provides a token-based authentication where a token is generated by the client based on an agreed secret and cryptography algorithm. After the client transmits the token, the server validates the token with the agreed secret and cryptography algorithm and then processes client requests if the token is valid.
In the configuration options of the Custom JWT Authentication, fill out the options with the following:
- Enable the Authentication Provider (Provider Enabled must be turned on).
- Keep the verification method as is (manually specify signing keys).
- Keep the signing algorithm as is (HS256).
- Add a new signing key.
- Provide the signing key name.
- For example, APITestJWTSigningKEY
- Provide the secure key content (between 32 and 512 characters) and note it somewhere secure.
- For example, FipTEgYJ6WfUEhCJq3e@pm8-TkE9*UZN
- Add two fields in the metadata fields.
- The path should be metadata.group and the corresponding field should be group.
- The path should be metadata.name and the corresponding field should be name.
- Keep the audience field as is (empty).
Below, you can find how the JWT Authentication Provider form has been filled accordingly.
![JWT Authentication Provider Example][6]
Save it and then deploy it.
After it’s deployed, you can see the secret that has been created in the App Services Values. It’s accessible on the left side menu by clicking “Values.”
These are the steps to generate an encoded JWT:
- Visit jwt.io.
- On the right-hand side in the section Decoded, we can fill out the values. On the left-hand side, the corresponding Encoded JWT will be generated.
- In the Decoded section:
- Keep the header section the same.
- In the Payload section, set the following fields:
- Sub
- Represents the owner of the token
- Provide value unique to the user
- Metadata
- Represents metadata information regarding this token and can be used for further processing in App Services
- We have two sub fields here
- Name
- Represents the username of the client that will initiate the API request
- Will be used as the username in App Services
- Group
- Represents the group information of the client that we’ll use later for rule-based access
- Exp
- Represents when the token is going to expire
- Provides a future time to keep expiration impossible during our tests
- Aud
- Represents the name of the App Services application that you can get from the homepage of your application in App Services
- In the Verify Signature section:
- Provide the same secret that you’ve already provided while enabling Custom JWT Authentication in Step 3.1.
Below, you can find how the values have been filled out in the Decoded section and the corresponding Encoded JWT that has been generated.
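If you'd rather generate the token programmatically instead of through jwt.io, a minimal Node.js sketch using the jsonwebtoken package (npm install jsonwebtoken) could look like the following. The group value and audience are placeholders; use the values your app and rules expect:

```javascript
const jwt = require("jsonwebtoken");

const token = jwt.sign(
  {
    sub: "user01",
    metadata: { name: "user01", group: "healthcare-provider" }, // matches the metadata paths configured above; group value is a placeholder
    exp: Math.floor(Date.now() / 1000) + 60 * 60 * 24 * 365,    // far-future expiry, for testing only
    aud: "your-app-services-app-id",                            // placeholder: your App Services app name/ID
  },
  "FipTEgYJ6WfUEhCJq3e@pm8-TkE9*UZN", // the signing key from Step 3.1 (example value)
  { algorithm: "HS256" }
);

console.log(token);
```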
However, since no rules have been defined yet, we were not able to access any data.
Even though the request is not successful due to the missing rule definition, you can check out the App Users page to list authenticated users, as shown below. `user01` was the name of the user that was provided in the metadata.name field of the JWT.
Otherwise, let’s create a role that will have access to all of the fields.
- Navigate to the Rules section on the left-hand side of the menu in App Services.
- Choose the collection appointments on the left side of the menu.
- Click **readAll** on the right side of the menu, as shown below.
As a demo, this won't present all of the FHIR search capabilities. Instead, we will focus on the basic ones.
In our server, we will be able to respond to two types of inputs. First, there are the regular search parameters that we can see at the bottom of the resources’ page. And second, we will implement the Search Result Parameters that can modify the results of a performed search. Because of our data schema, not all will apply. Hence, not all were coded into the function.
More precisely, we will be able to call the search parameters: actor, date, identifier, location, part-status, patient, practitioner, and status. We can also call the search result parameters: _count, _elements, _sort, _maxresults, and _total, along with the page parameter. Please refer to the FHIR documentation to see how they work.
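To make this concrete, here is a hypothetical request from Node.js (18+) that combines a few of those parameters; ENDPOINT_URL and JWT stand in for your HTTPS endpoint URL and encoded token:

```javascript
async function searchAppointments(ENDPOINT_URL, JWT) {
  // Proposed appointments in August 2023, sorted by start date, five per page
  const url = `${ENDPOINT_URL}?status=proposed&date=ge2023-08-01,le2023-08-31&_sort=date&_count=5&page=1`;
  const res = await fetch(url, {
    headers: { jwtTokenString: JWT, "Content-Type": "application/json" },
  });
  return res.json();
}
```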
Make sure to test both users as the response for each of them will be different. Here, you have a couple of examples. To keep it short, I’ll set the page to a single appointment by adding ?_count=1 to the URL.
Healthcare provider:
```
curl --request GET '{{URL}}?_count=1' \
--header 'jwtTokenString: {{hcproviderJWT}}' \
--header 'Content-Type: application/json'
HTTP/1.1 200 OK
content-encoding: gzip
content-type: application/json
strict-transport-security: max-age=31536000; includeSubdomains;
vary: Origin
x-appservices-request-id: 64e5e47e6dbb75dc6700e42c
x-frame-options: DENY
date: Wed, 23 Aug 2023 10:50:38 GMT
content-length: 671
x-envoy-upstream-service-time: 104
server: mdbws
x-envoy-decorator-operation: baas-main.baas-prod.svc.cluster.local:8086/*
connection: close
{
"resourceType": "Bundle",
"type": "searchset",
"total": 384,
"link":
{
"relation": "self",
"url": "https://fakeurl.com/endpoint/appointment"
},
{
"relation": "next",
"url": "https://fakeurl.com/endpoint/appointment?page=2\u0026_count=1"
}
],
"entry": [
{
"fullUrl": "https://fakeurl.com/endpoint/appointment?id=64e35896eaf6edfdbe5f22be",
"resource": {
"_id": "64e35896eaf6edfdbe5f22be",
"resourceType": "Appointment",
"status": "proposed",
"created": "2023-08-21T14:29:10.312Z",
"start": "2023-08-21T14:29:09.535Z",
"description": "Breast Mammography Screening",
"serviceType": [
{
"coding": [
{
"system": "http://snomed.info/sct",
"code": "278110001",
"display": "radiographic imaging"
}
],
"text": "Mammography"
}
],
"participant": [
{
"actor": {
"reference": "64e354874f5c09af1a8fc2b6",
"display": [
{
"given": [
"Marta"
],
"family": "Donovan"
}
]
},
"required": true,
"status": "needs-action"
},
{
"actor": {
"reference": "64e353d80727df4ed8d00839",
"display": [
{
"use": "official",
"family": "Harrell",
"given": [
"Juan Carlos"
]
}
]
},
"required": true,
"status": "accepted"
}
],
"location": {
"reference": "64e35380f2f2059b24dafa60",
"display": "St. Barney clinic"
}
},
"search": {
"mode": "match"
}
}
]
}
```
Healthcare agency:
```
curl --request GET '{{URL}}?_count=1' \
--header 'jwtTokenString: {{hcagencyJWT}}' \
--header 'Content-Type: application/json'
HTTP/1.1 200 OK
content-encoding: gzip
content-type: application/json
strict-transport-security: max-age=31536000; includeSubdomains;
vary: Origin
x-appservices-request-id: 64e5e4eee069ab6f307d792e
x-frame-options: DENY
date: Wed, 23 Aug 2023 10:52:30 GMT
content-length: 671
x-envoy-upstream-service-time: 162
server: mdbws
x-envoy-decorator-operation: baas-main.baas-prod.svc.cluster.local:8086/*
connection: close
{
"resourceType": "Bundle",
"type": "searchset",
"total": 6720,
"link": [
{
"relation": "self",
"url": "https://fakeurl.com/endpoint/appointment"
},
{
"relation": "next",
"url": "https://fakeurl.com/endpoint/appointment?page=2\u0026_count=1"
}
],
"entry": [
{
"fullUrl": "https://fakeurl.com/endpoint/appointment?id=64e35896eaf6edfdbe5f22be",
"resource": {
"_id": "64e35896eaf6edfdbe5f22be",
"resourceType": "Appointment",
"status": "proposed",
"created": "2023-08-21T14:29:10.312Z",
"start": "2023-08-21T14:29:09.535Z",
"description": "Breast Mammography Screening",
"serviceType": [
{
"coding": [
{
"system": "http://snomed.info/sct",
"code": "278110001",
"display": "radiographic imaging"
}
],
"text": "Mammography"
}
],
"participant": [
{
"actor": {
"reference": "64e354874f5c09af1a8fc2b6",
"display": [
{
"given": [
"Marta"
],
"family": "Donovan"
}
]
},
"required": true,
"status": "needs-action"
},
{
"actor": {
"reference": "64e353d80727df4ed8d00839",
"display": [
{
"use": "official",
"family": "Harrell",
"given": [
"Juan Carlos"
]
}
]
},
"required": true,
"status": "accepted"
}
],
"location": {
"reference": "64e35380f2f2059b24dafa60",
"display": "St. Barney clinic"
}
},
"search": {
"mode": "match"
}
}
]
}
```
Please note the difference in the total number of documents fetched, as well as the participant.actor.display fields that are missing for the agency user.
## Step 6: How to call the microservice from an application
The calls shown up to this point were made from API platforms such as Postman or Visual Studio's REST client. However, when you make these calls from an application, such as a React.js app, they might be blocked by the CORS policy. To avoid this, we need to authenticate our Data API request. You can read more on how to manage your user sessions in our docs. For us, it should be as simple as sending the following request:
```bash
curl -X POST 'https://..realm.mongodb.com/api/client/v2.0/app//auth/providers/custom-token/login' \
--header 'Content-Type: application/json' \
--data-raw '{
"token": ""
}'
```
This will return something like:
```json
{
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RldmljZV9pZCI6IjAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMCIsImJhYXNfZG9tYWluX2lkIjoiNWVlYTg2NjdiY2I0YzgxMGI2NTFmYjU5IiwiZXhwIjoxNjY3OTQwNjE4LCJpYXQiOjE2Njc5Mzg4MTgsImlzcyI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjgzZSIsInN0aXRjaF9kZXZJZCI6IjAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMCIsInN0aXRjaF9kb21haW5JZCI6IjVlZWE4NjY3YmNiNGM4MTBiNjUxZmI1OSIsInN1YiI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOSIsInR5cCI6ImFjY2VzcyJ9.pyq3nfzFUT-6r-umqGrEVIP8XHOw0WGnTZ3-EbvgbF0",
"refresh_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RhdGEiOm51bGwsImJhYXNfZGV2aWNlX2lkIjoiMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwIiwiYmFhc19kb21haW5faWQiOiI1ZWVhODY2N2JjYjRjODEwYjY1MWZiNTkiLCJiYWFzX2lkIjoiNjM2YWJhMDIxNzI4YjZjMWMwM2RiODNlIiwiYmFhc19pZGVudGl0eSI6eyJpZCI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOC1ud2hzd2F6ZHljbXZycGVuZHdkZHRjZHQiLCJwcm92aWRlcl90eXBlIjoiYW5vbi11c2VyIiwicHJvdmlkZXJfaWQiOiI2MjRkZTdiYjhlYzZjOTM5NjI2ZjU0MjUifSwiZXhwIjozMjQ0NzM4ODE4LCJpYXQiOjE2Njc5Mzg4MTgsInN0aXRjaF9kYXRhIjpudWxsLCJzdGl0Y2hfZGV2SWQiOiIwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAiLCJzdGl0Y2hfZG9tYWluSWQiOiI1ZWVhODY2N2JjYjRjODEwYjY1MWZiNTkiLCJzdGl0Y2hfaWQiOiI2MzZhYmEwMjE3MjhiNmMxYzAzZGI4M2UiLCJzdGl0Y2hfaWRlbnQiOnsiaWQiOiI2MzZhYmEwMjE3MjhiNmMxYzAzZGI3Zjgtbndoc3dhemR5Y212cnBlbmR3ZGR0Y2R0IiwicHJvdmlkZXJfdHlwZSI6ImFub24tdXNlciIsInByb3ZpZGVyX2lkIjoiNjI0ZGU3YmI4ZWM2YzkzOTYyNmY1NDI1In0sInN1YiI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOSIsInR5cCI6InJlZnJlc2gifQ.h9YskmSpSLK8DMwBpPGuk7g1s4OWZDifZ1fmOJgSygw",
"user_id": "636aba021728b6c1c03db7f9"
}
```
These tokens will allow your application to request data from your FHIR microservice. You will just need to replace the header 'jwtTokenString: {{JWT}}' with 'Authorization: Bearer {{token above}}', like so:
```
curl --request GET '{{URL}}' \
--header 'Authorization: Bearer {{token above}}' \
--header 'Content-Type: application/json'
{"error": "no matching rule found" }
```
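The same call from Node.js (18+) would look something like this, where accessToken is the access_token returned by the login request above:

```javascript
async function callWithBearer(endpointUrl, accessToken) {
  const res = await fetch(endpointUrl, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
  });
  return res.json();
}
```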
You can find additional information in our docs for authenticating Data API requests.
## Summary
In conclusion, interoperability plays a crucial role in enabling the exchange and utilization of information within systems and software. Modern standards like Fast Healthcare Interoperability Resources (FHIR) define data communication methods, while MongoDB's approach aligns data storage with FHIR's resource format, simplifying integration and improving performance.
MongoDB's capabilities, including Atlas Data API, offer healthcare providers and software vendors greater control over their data, reducing complexity and enhancing security. However, it's important to note that this integration capability complements rather than replaces clinical data repositories. In the previous sections, we explored how to:
- Generate your own FHIR data.
- Configure serverless functions along with Custom JWT Authentication to seamlessly integrate user-specific information.
- Implement precise data access control through roles and filters.
- Call the configured APIs directly from the code.
Are you ready to dive in and leverage these capabilities for your projects? Don't miss out on the chance to explore the full potential of MongoDB Atlas App Services. Get started for free by provisioning an M0 Atlas instance and creating your own App Services application.
Should you encounter any roadblocks or have questions, our vibrant developer forums are here to support you every step of the way.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1f8674230e64660c/652eb2153b618bf623f212fa/image12.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte32e786ba5092f3c/652eb2573fc0c855d1c9446c/image13.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdbf6696269e8c0fa/652eb2a18fc81358f36c2dd2/image6.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb9458dc116962c7b/652eb2fe74aa53528e325ffc/image4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt22ed4f126322aaf9/652eb3460418d27708f75d8b/image5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27df3f81feb75a2a/652eb36e8fc81306dc6c2dda/image10.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt46ecccd80736dca1/652eb39a701ffe37d839cfd2/image2.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1b44f38890b2ea44/652eb3d88dd295fac0efc510/image7.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01966229cb4a8966/652eb40148aba383898b1f9a/image11.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf52ce0e04106ac69/652eb46b8d3ed4341e55286b/image3.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2b60e8592927d6fa/652eb48e3feebb0b40291c9a/image9.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0e97db616428a8ad/652eb4ce8d3ed41c7e55286f/image8.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc35e52e0efa26c03/652eb4fff92b9e5644aa21a4/image1.png | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to build a healthcare interoperability microservice, secured with JWT using FHIR and MongoDB.",
"contentType": "Tutorial"
} | How to Build a Healthcare Interoperability Microservice Using FHIR and MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-sdk-schema-migration-android | created | # How to Update Realm SDK Database Schema for Android
> This is a follow-up article in the **Getting Started Series**.
> In this article, we learn how to modify/migrate Realm **local** database schema.
## Introduction
As you add and change application features, you need to modify database schema, and the need for migrations arises, which is very important for a seamless user experience.
By the end of this article, you will learn:
1. How to update the database schema after a production release on the Play Store.
2. How to migrate user data from one schema to another.
Before we get down to business, let's quickly recap how we set up `Realm` in our application.
```kotlin
const val REALM_SCHEMA_VERSION: Long = 1
const val REALM_DB_NAME = "rMigrationSample.db"
fun setupRealm(context: Context) {
Realm.init(context)
val config = RealmConfiguration.Builder()
.name(REALM_DB_NAME)
.schemaVersion(REALM_SCHEMA_VERSION)
.build()
Realm.setDefaultConfiguration(config)
}
```
Doing migration in Realm is very straightforward and simple. The high-level steps for the successful migration of any database are:
1. Update the database version.
2. Make changes to the database schema.
3. Migrate user data from old schema to new.
## Update the Database Version
This is the simplest step, which can be done by incrementing the version of
`REALM_SCHEMA_VERSION`, which notifies `Realm` about database changes. This, in turn, triggers the migration, if one is provided.
To add a migration, we use the `migration` function available in `RealmConfiguration.Builder`, which takes a `RealmMigration` argument that we will review in the next step.
```kotlin
val config = RealmConfiguration.Builder()
.name(REALM_DB_NAME)
.schemaVersion(REALM_SCHEMA_VERSION)
.migration(DBMigrationHelper())
.build()
```
## Make Changes to the Database Schema
In `Realm`, all migration-related operations have to be performed within the scope
of `RealmMigration`.
```kotlin
class DBMigrationHelper : RealmMigration {
override fun migrate(realm: DynamicRealm, oldVersion: Long, newVersion: Long) {
migration1to2(realm.schema)
migration2to3(realm.schema)
migration3to4(realm.schema)
}
private fun migration3to4(schema: RealmSchema?) {
TODO("Not yet implemented")
}
private fun migration2to3(schema: RealmSchema?) {
TODO("Not yet implemented")
}
private fun migration1to2(schema: RealmSchema) {
TODO("Not yet implemented")
}
}
```
To add/update/rename any field:
```kotlin
private fun migration1to2(schema: RealmSchema) {
val userSchema = schema.get(UserInfo::class.java.simpleName)
userSchema?.run {
addField("phoneNumber", String::class.java, FieldAttribute.REQUIRED)
renameField("phoneNumber", "phoneNo")
removeField("phoneNo")
}
}
```
## Migrate User Data from Old Schema to New
All the data transformation during migration can be done with `transform` function with the help of `set` and `get` methods.
```kotlin
private fun migration2to3(schema: RealmSchema) {
val userSchema = schema.get(UserInfo::class.java.simpleName)
userSchema?.run {
addField("fullName", String::class.java, FieldAttribute.REQUIRED)
transform {
it.set("fullName", it.get("firstName") + it.get("lastName"))
}
}
}
```
In the above snippet, we are setting the default value of **fullName** by extracting the value from old data, like **firstName** and **lastName**.
We can also use `transform` to update the data type.
```kotlin
val personSchema = schema!!.get(UserInfo::class.java.simpleName)
// apply a transform here to update the field type as needed
```

If you have any queries or feedback, feel free to reach out to me in the MongoDB Community Forums or tweet
me @codeWithMohit.
In the next article, we will discuss how to migrate the Realm database with Atlas Device Sync.
If you have an iOS app, do check out the iOS tutorial
on Realm iOS Migration. | md | {
"tags": [
"Realm",
"Kotlin",
"Java",
"Android"
],
"pageDescription": "In this article, we explore and learn how to make Realm SDK database schema changes. ",
"contentType": "Tutorial"
} | How to Update Realm SDK Database Schema for Android | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/javascript/node-connect-mongodb-3-3-2 | created | # Connect to a MongoDB Database Using Node.js 3.3.2
Use Node.js? Want to learn MongoDB? This is the blog series for you!
In this Quick Start series, I'll walk you through the basics of how to get started using MongoDB with Node.js. In today's post, we'll work through connecting to a MongoDB database from a Node.js script, retrieving a list of databases, and printing the results to your console.
>
>
>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
>
>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
>
>
>
>
>Prefer to learn by video? I've got ya covered. Check out the video below that covers how to get connected as well as how to perform the CRUD operations.
>
>:youtube[]{vid=fbYExfeFsI0}
>
>
## Set Up
Before we begin, we need to ensure you've completed a few prerequisite steps.
### Install Node.js
First, make sure you have a supported version of Node.js installed (the MongoDB Node.js Driver requires Node 4.x or greater, and, for these examples, I've used Node.js 10.16.3).
### Install the MongoDB Node.js Driver
The MongoDB Node.js Driver allows you to easily interact with MongoDB databases from within Node.js applications. You'll need the driver in order to connect to your database and execute the queries described in this Quick Start series.
If you don't have the MongoDB Node.js Driver installed, you can install it with the following command.
``` bash
npm install mongodb
```
At the time of writing, this installed version 3.3.2 of the driver. Running `npm list mongodb` will display the currently installed driver version number. For more details on the driver and installation, see the [official documentation.
### Create a Free MongoDB Atlas Cluster and Load the Sample Data
Next, you'll need a MongoDB database. The easiest way to get started with MongoDB is to use Atlas, MongoDB's fully-managed database-as-a-service.
Head over to Atlas and create a new cluster in the free tier. At a high level, a cluster is a set of nodes where copies of your database will be stored. Once your tier is created, load the sample data. If you're not familiar with how to create a new cluster and load the sample data, check out this video tutorial from MongoDB Developer Advocate Maxime Beugnet.
>
>
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
>
>
### Get Your Cluster's Connection Info
The final step is to prep your cluster for connection.
In Atlas, navigate to your cluster and click **CONNECT**. The Cluster Connection Wizard will appear.
The Wizard will prompt you to add your current IP address to the IP Access List and create a MongoDB user if you haven't already done so. Be sure to note the username and password you use for the new MongoDB user as you'll need them in a later step.
Next, the Wizard will prompt you to choose a connection method. Select **Connect Your Application**. When the Wizard prompts you to select your driver version, select **Node.js** and **3.0 or later**. Copy the provided connection string.
For more details on how to access the Connection Wizard and complete the steps described above, see the official documentation.
## Connect to Your Database From a Node.js Application
Now that everything is set up, it's time to code! Let's write a Node.js script that connects to your database and lists the databases in your cluster.
### Import MongoClient
The MongoDB module exports `MongoClient`, and that's what we'll use to connect to a MongoDB database. We can use an instance of MongoClient to connect to a cluster, access the database in that cluster, and close the connection to that cluster.
``` js
const { MongoClient } = require('mongodb');
```
### Create Our Main Function
Let's create an asynchronous function named `main()` where we will connect to our MongoDB cluster, call functions that query our database, and disconnect from our cluster.
The first thing we need to do inside of `main()` is create a constant for our connection URI. The connection URI is the connection string you copied in Atlas in the previous section. When you paste the connection string, don't forget to update `<username>` and `<password>` to be the credentials for the user you created in the previous section. The connection string includes a `<dbname>` placeholder. For these examples, we'll be using the `sample_airbnb` database, so replace `<dbname>` with `sample_airbnb`.
**Note**: The username and password you provide in the connection string are NOT the same as your Atlas credentials.
``` js
/**
 * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
*/
const uri = "mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority";
```
Now that we have our URI, we can create an instance of MongoClient.
``` js
const client = new MongoClient(uri);
```
**Note**: When you run this code, you may see DeprecationWarnings around the URL string `parser` and the Server Discover and Monitoring engine. If you see these warnings, you can remove them by passing options to the MongoClient. For example, you could instantiate MongoClient by calling `new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true })`. See the Node.js MongoDB Driver API documentation for more information on these options.
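For example, the client construction with those options would look like this:

```javascript
// Silences the URL parser and topology deprecation warnings on the 3.x driver
const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
```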
Now we're ready to use MongoClient to connect to our cluster. `client.connect()` will return a promise. We will use the await keyword when we call `client.connect()` to indicate that we should block further execution until that operation has completed.
``` js
await client.connect();
```
We can now interact with our database. Let's build a function that prints the names of the databases in this cluster. It's often useful to contain this logic in well-named functions in order to improve the readability of your codebase. Throughout this series, we'll create new functions similar to the function we're creating here as we learn how to write different types of queries. For now, let's call a function named `listDatabases()`.
``` js
await listDatabases(client);
```
Let's wrap our calls to functions that interact with the database in a `try/catch` statement so that we handle any unexpected errors.
``` js
try {
await client.connect();
await listDatabases(client);
} catch (e) {
console.error(e);
}
```
We want to be sure we close the connection to our cluster, so we'll end our `try/catch` with a finally statement.
``` js
finally {
await client.close();
}
```
Once we have our `main()` function written, we need to call it. Let's send the errors to the console.
``` js
main().catch(console.error);
```
Putting it all together, our `main()` function and our call to it will look something like the following.
``` js
async function main(){
/**
 * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
*/
const uri = "mongodb+srv://:@/test?retryWrites=true&w=majority";
const client = new MongoClient(uri);
try {
// Connect to the MongoDB cluster
await client.connect();
// Make the appropriate DB calls
await listDatabases(client);
} catch (e) {
console.error(e);
} finally {
await client.close();
}
}
main().catch(console.error);
```
### List the Databases in Our Cluster
In the previous section, we referenced the `listDatabases()` function. Let's implement it!
This function will retrieve a list of databases in our cluster and print the results in the console.
``` js
async function listDatabases(client){
const databasesList = await client.db().admin().listDatabases();
console.log("Databases:");
databasesList.databases.forEach(db => console.log(` - ${db.name}`));
};
```
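Throughout the series, more helpers will follow this same pattern. As an illustration (this helper is not part of the original script, and the database name is only an example), a function that lists the collections in a single database might look like this:

``` js
// Sketch of a similar helper: list the collections in one database.
// `client` is a connected MongoClient; "sample_airbnb" is just an example name.
async function listCollections(client, dbName) {
    const collections = await client.db(dbName).listCollections().toArray();
    console.log(`Collections in ${dbName}:`);
    collections.forEach(collection => console.log(` - ${collection.name}`));
}
```

You would call it from `main()` the same way you call `listDatabases(client)`, for example `await listCollections(client, "sample_airbnb");`.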
### Save Your File
You've been implementing a lot of code. Save your changes, and name your file something like `connection.js`. To see a copy of the complete file, visit the nodejs-quickstart GitHub repo.
### Execute Your Node.js Script
Now you're ready to test your code! Execute your script by running a command like the following in your terminal: `node connection.js`.
You will see output like the following:
``` js
Databases:
- sample_airbnb
- sample_geospatial
- sample_mflix
- sample_supplies
- sample_training
- sample_weatherdata
- admin
- local
```
## What's Next?
Today, you were able to connect to a MongoDB database from a Node.js script, retrieve a list of databases in your cluster, and view the results in your console. Nice!
Now that you're connected to your database, continue on to the next post in this series, where you'll learn to execute each of the CRUD (create, read, update, and delete) operations.
In the meantime, check out the following resources:
- MongoDB Node.js Driver
- Official MongoDB Documentation on the MongoDB Node.js Driver
- MongoDB University Free Course: M220JS: MongoDB for Javascript Developers
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
| md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Node.js and MongoDB is a powerful pairing and in this code example project we show you how.",
"contentType": "Code Example"
} | Connect to a MongoDB Database Using Node.js 3.3.2 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/nodejs-change-streams-triggers | created | # Change Streams & Triggers with Node.js Tutorial
Sometimes you need to react immediately to changes in your database. Perhaps you want to place an order with a distributor whenever an item's inventory drops below a given threshold. Or perhaps you want to send an email notification whenever the status of an order changes. Regardless of your particular use case, whenever you want to react immediately to changes in your MongoDB database, change streams and triggers are fantastic options.
If you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! We began by walking through how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. Then we jumped into more advanced topics like the aggregation framework and transactions. The code we write today will use the same structure as the code we built in the first post in the series, so, if you have any questions about how to get started or how the code is structured, head back to that post.
And, with that, let's dive into change streams and triggers! Here is a summary of what we'll cover today:
- What are Change Streams?
- Setup
- Create a Change Stream
- Resume a Change Stream
- What are MongoDB Atlas Triggers?
- Create a MongoDB Atlas Trigger
- Wrapping Up
- Additional Resources
>
>
>Prefer a video over an article? Check out the video below that covers the exact same topics that I discuss in this article.
>
>:youtube[]{vid=9LA7_CSyZb8}
>
>
>
>
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
>
>
## What are Change Streams?
Change streams allow you to receive notifications about changes made to your MongoDB databases and collections. When you use change streams, you can choose to program actions that will be automatically taken whenever a change event occurs.
Change streams utilize the aggregation framework, so you can choose to filter for specific change events or transform the change event documents.
For example, let's say I want to be notified whenever a new listing in the Sydney, Australia market is added to the **listingsAndReviews** collection. I could create a change stream that monitors the **listingsAndReviews** collection and use an aggregation pipeline to match on the listings I'm interested in.
Let's take a look at three different ways to implement this change stream.
## Set Up
As with all posts in this MongoDB and Node.js Quick Start series, you'll need to ensure you've completed the prerequisite steps outlined in the **Set up** section of the first post in this series.
I find it helpful to have a script that will generate sample data when I'm testing change streams. To help you quickly generate sample data, I wrote changeStreamsTestData.js. Download a copy of the file, update the `uri` constant to reflect your Atlas connection info, and run it by executing `node changeStreamsTestData.js`. The script will do the following:
1. Create 3 new listings (Opera House Views, Private room in London, and Beautiful Beach House)
2. Update 2 of those listings (Opera House Views and Beautiful Beach House)
3. Create 2 more listings (Italian Villa and Sydney Harbour Home)
4. Delete a listing (Sydney Harbour Home).
## Create a Change Stream
Now that we're set up, let's explore three different ways to work with a change stream in Node.js.
### Get a Copy of the Node.js Template
To make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.
1. Download a copy of template.js.
2. Open `template.js` in your favorite code editor.
3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.
4. Save the file as `changeStreams.js`.
You can run this file by executing `node changeStreams.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.
### Create a Helper Function to Close the Change Stream
Regardless of how we monitor changes in our change stream, we will want to close the change stream after a certain amount of time. Let's create a helper function to do just that.
1. Paste the following function in `changeStreams.js`.
``` javascript
function closeChangeStream(timeInMs = 60000, changeStream) {
return new Promise((resolve) => {
setTimeout(() => {
console.log("Closing the change stream");
resolve(changeStream.close());
}, timeInMs)
})
};
```
### Monitor Change Stream using EventEmitter's on()
The MongoDB Node.js Driver's ChangeStream class inherits from the Node Built-in class EventEmitter. As a result, we can use EventEmitter's on() function to add a listener function that will be called whenever a change occurs in the change stream.
#### Create the Function
Let's create a function that will monitor changes in the change stream using EventEmitter's `on()`.
1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingEventEmitter`. The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.
``` javascript
async function monitorListingsUsingEventEmitter(client, timeInMs = 60000, pipeline = []){
}
```
2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingEventEmitter()`.
``` javascript
const collection = client.db("sample_airbnb").collection("listingsAndReviews");
```
3. Now we are ready to create our change stream. We can do so by using Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.
``` javascript
const changeStream = collection.watch(pipeline);
```
4. Once we have our change stream, we can add a listener to it. Let's log each change event in the console. Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.
``` javascript
changeStream.on('change', (next) => {
console.log(next);
});
```
5. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function to set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.
``` javascript
await closeChangeStream(timeInMs, changeStream);
```
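Putting the steps above together, the complete function should look something like this sketch:

``` javascript
async function monitorListingsUsingEventEmitter(client, timeInMs = 60000, pipeline = []){
    // Access the collection we want to monitor
    const collection = client.db("sample_airbnb").collection("listingsAndReviews");

    // Open the change stream, optionally filtered by the aggregation pipeline
    const changeStream = collection.watch(pipeline);

    // Log every change event that arrives
    changeStream.on('change', (next) => {
        console.log(next);
    });

    // Close the change stream after timeInMs
    await closeChangeStream(timeInMs, changeStream);
}
```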
#### Call the Function
Now that we've implemented our function, let's call it!
1. Inside of `main()` beneath the comment that says
`Make the appropriate DB calls`, call your
`monitorListingsUsingEventEmitter()` function:
``` javascript
await monitorListingsUsingEventEmitter(client);
```
2. Save your file.
3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.
4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Output similar to the following will be displayed in your first shell where you are running `changeStreams.js`.
``` javascript
{
_id: { _data: '825DE67A42000000012B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7640004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1575385666 },
fullDocument: {
_id: 5de67a42113ea7de6472e764,
name: 'Opera House Views',
summary: 'Beautiful apartment with views of the iconic Sydney Opera House',
property_type: 'Apartment',
bedrooms: 1,
bathrooms: 1,
beds: 1,
address: { market: 'Sydney', country: 'Australia' }
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e764 }
}
{
_id: { _data: '825DE67A42000000022B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7650004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1575385666 },
fullDocument: {
_id: 5de67a42113ea7de6472e765,
name: 'Private room in London',
property_type: 'Apartment',
bedrooms: 1,
bathroom: 1
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e765 }
}
{
_id: { _data: '825DE67A42000000032B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7660004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1575385666 },
fullDocument: {
_id: 5de67a42113ea7de6472e766,
name: 'Beautiful Beach House',
summary: 'Enjoy relaxed beach living in this house with a private beach',
bedrooms: 4,
bathrooms: 2.5,
beds: 7,
last_review: 2019-12-03T15:07:46.730Z
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e766 }
}
{
_id: { _data: '825DE67A42000000042B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7640004' },
operationType: 'update',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 4, high_: 1575385666 },
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e764 },
updateDescription: {
updatedFields: { beds: 2 },
removedFields: []
}
}
{
_id: { _data: '825DE67A42000000052B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7660004' },
operationType: 'update',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 5, high_: 1575385666 },
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e766 },
updateDescription: {
updatedFields: { address: [Object] },
removedFields: []
}
}
{
_id: { _data: '825DE67A42000000062B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7670004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 6, high_: 1575385666 },
fullDocument: {
_id: 5de67a42113ea7de6472e767,
name: 'Italian Villa',
property_type: 'Entire home/apt',
bedrooms: 6,
bathrooms: 4,
address: { market: 'Cinque Terre', country: 'Italy' }
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e767 }
}
{
_id: { _data: '825DE67A42000000072B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 7, high_: 1575385666 },
fullDocument: {
_id: 5de67a42113ea7de6472e768,
name: 'Sydney Harbour Home',
bedrooms: 4,
bathrooms: 2.5,
address: { market: 'Sydney', country: 'Australia' } },
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e768 }
}
{
_id: { _data: '825DE67A42000000082B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },
operationType: 'delete',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 8, high_: 1575385666 },
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67a42113ea7de6472e768 }
}
```
If you run `node changeStreamsTestData.js` again before the 60
second timer has completed, you will see similar output.
After 60 seconds, the following will be displayed:
``` sh
Closing the change stream
```
#### Call the Function with an Aggregation Pipeline
In some cases, you will not care about all change events that occur in a collection. Instead, you will want to limit what changes you are monitoring. You can use an aggregation pipeline to filter the changes or transform the change stream event documents.
In our case, we only care about new listings in the Sydney, Australia market. Let's create an aggregation pipeline to filter for only those changes in the `listingsAndReviews` collection.
To learn more about what aggregation pipeline stages can be used with change streams, see the official change streams documentation.
1. Inside of `main()` and above your existing call to `monitorListingsUsingEventEmitter()`, create an aggregation pipeline:
``` javascript
const pipeline = [
{
'$match': {
'operationType': 'insert',
'fullDocument.address.country': 'Australia',
'fullDocument.address.market': 'Sydney'
},
}
];
```
2. Let's use this pipeline to filter the changes in our change stream. Update your existing call to `monitorListingsUsingEventEmitter()` to only leave the change stream open for 30 seconds and use the pipeline.
``` javascript
await monitorListingsUsingEventEmitter(client, 30000, pipeline);
```
3. Save your file.
4. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.
5. Create and update sample data by executing `node changeStreamsTestData.js` in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. Output similar to the following will be displayed in your first shell where you are running `changeStreams.js`.
``` javascript
{
_id: { _data: '825DE67CED000000012B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67CED150EA2DF172344370004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1575386349 },
fullDocument: {
_id: 5de67ced150ea2df17234437,
name: 'Opera House Views',
summary: 'Beautiful apartment with views of the iconic Sydney Opera House',
property_type: 'Apartment',
bedrooms: 1,
bathrooms: 1,
beds: 1,
address: { market: 'Sydney', country: 'Australia' }
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67ced150ea2df17234437 }
}
{
_id: { _data: '825DE67CEE000000032B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67CEE150EA2DF1723443B0004' },
operationType: 'insert',
clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1575386350 },
fullDocument: {
_id: 5de67cee150ea2df1723443b,
name: 'Sydney Harbour Home',
bedrooms: 4,
bathrooms: 2.5,
address: { market: 'Sydney', country: 'Australia' }
},
ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },
documentKey: { _id: 5de67cee150ea2df1723443b }
}
```
After 30 seconds, the following will be displayed:
``` sh
Closing the change stream
```
### Monitor Change Stream using ChangeStream's hasNext()
In the section above, we used EventEmitter's `on()` to monitor the change stream. Alternatively, we can create a `while` loop that waits for the next element in the change stream by using hasNext() from MongoDB Node.js Driver's ChangeStream class.
#### Create the Function
Let's create a function that will monitor changes in the change stream using ChangeStream's `hasNext()`.
1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingHasNext`. The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.
``` javascript
async function monitorListingsUsingHasNext(client, timeInMs = 60000, pipeline = []) {
}
```
2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingHasNext()`.
``` javascript
const collection = client.db("sample_airbnb").collection("listingsAndReviews");
```
3. Now we are ready to create our change stream. We can do so by using Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingHasNext()`.
``` javascript
const changeStream = collection.watch(pipeline);
```
4. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function that will set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingHasNext()`.
``` javascript
closeChangeStream(timeInMs, changeStream);
```
5. Now let's create a `while` loop that will wait for new changes in the change stream. We can use ChangeStream's hasNext() inside of the `while` loop. `hasNext()` will wait to return true until a new change arrives in the change stream. `hasNext()` will throw an error as soon as the change stream is closed, so we will surround our `while` loop with a `try { }` block. If an error is thrown, we'll check to see if the change stream is closed. If the change stream is closed, we'll log that information. Otherwise, something unexpected happened, so we'll throw the error. Add the following code beneath the existing code in `monitorListingsUsingHasNext()`.
``` javascript
try {
while (await changeStream.hasNext()) {
console.log(await changeStream.next());
}
} catch (error) {
if (changeStream.isClosed()) {
console.log("The change stream is closed. Will not wait on any more changes.")
} else {
throw error;
}
}
```
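For reference, here is the assembled function, combining the snippets above:

``` javascript
async function monitorListingsUsingHasNext(client, timeInMs = 60000, pipeline = []) {
    const collection = client.db("sample_airbnb").collection("listingsAndReviews");
    const changeStream = collection.watch(pipeline);

    // Set a timer that will close the change stream (intentionally not awaited)
    closeChangeStream(timeInMs, changeStream);

    try {
        // hasNext() waits until the next change arrives or the stream is closed
        while (await changeStream.hasNext()) {
            console.log(await changeStream.next());
        }
    } catch (error) {
        if (changeStream.isClosed()) {
            console.log("The change stream is closed. Will not wait on any more changes.")
        } else {
            throw error;
        }
    }
}
```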
#### Call the Function
Now that we've implemented our function, let's call it!
1. Inside of `main()`, replace your existing call to `monitorListingsUsingEventEmitter()` with a call to your new `monitorListingsUsingHasNext()`:
``` javascript
await monitorListingsUsingHasNext(client);
```
2. Save your file.
3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.
4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Output similar to what we saw earlier will be displayed in your first shell where you are running `changeStreams.js`. If you run `node changeStreamsTestData.js` again before the 60 second timer has completed, you will see similar output again. After 60 seconds, the following will be displayed:
``` sh
Closing the change stream
```
#### Call the Function with an Aggregation Pipeline
As we discussed earlier, sometimes you will want to use an aggregation pipeline to filter the changes in your change stream or transform the change stream event documents. Let's pass the aggregation pipeline we created in an earlier section to our new function.
1. Update your existing call to `monitorListingsUsingHasNext()` to only leave the change stream open for 30 seconds and use the aggregation pipeline.
``` javascript
await monitorListingsUsingHasNext(client, 30000, pipeline);
```
2. Save your file.
3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.
4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. Output similar to what we saw earlier while using a change stream with an aggregation pipeline will be displayed in your first shell where you are running `changeStreams.js`. After 30 seconds, the following will be displayed:
``` sh
Closing the change stream
```
### Monitor Change Stream using the Stream API
In the previous two sections, we used EventEmitter's `on()` and ChangeStreams's `hasNext()` to monitor changes. Let's examine a third way to monitor a change stream: using Node's Stream API.
#### Load the Stream Module
In order to use the Stream module, we will need to load it.
1. Continuing to work in `changeStreams.js`, load the Stream module at the top of the file.
``` javascript
const stream = require('stream');
```
#### Create the Function
Let's create a function that will monitor changes in the change stream using the Stream API.
1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingStreamAPI`. The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.
``` javascript
async function monitorListingsUsingStreamAPI(client, timeInMs = 60000, pipeline = []) {
}
```
2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingStreamAPI()`.
``` javascript
const collection = client.db("sample_airbnb").collection("listingsAndReviews");
```
3. Now we are ready to create our change stream. We can do so by using Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingStreamAPI()`.
``` javascript
const changeStream = collection.watch(pipeline);
```
4. Now we're ready to monitor our change stream. ChangeStream's stream() will return a Node Readable stream. We will call Readable's pipe() to pull the data out of the stream and write it to the console.
``` javascript
changeStream.stream().pipe(
new stream.Writable({
objectMode: true,
write: function (doc, _, cb) {
console.log(doc);
cb();
}
})
);
```
5. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function that will set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingStreamAPI()`.
``` javascript
await closeChangeStream(timeInMs, changeStream);
```
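Assembled, the function looks roughly like this:

``` javascript
async function monitorListingsUsingStreamAPI(client, timeInMs = 60000, pipeline = []) {
    const collection = client.db("sample_airbnb").collection("listingsAndReviews");
    const changeStream = collection.watch(pipeline);

    // Pipe the change stream into a Writable stream that logs each change event
    changeStream.stream().pipe(
        new stream.Writable({
            objectMode: true,
            write: function (doc, _, cb) {
                console.log(doc);
                cb();
            }
        })
    );

    // Close the change stream after timeInMs
    await closeChangeStream(timeInMs, changeStream);
}
```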
#### Call the Function
Now that we've implemented our function, let's call it!
1. Inside of `main()`, replace your existing call to `monitorListingsUsingHasNext()` with a call to your new `monitorListingsUsingStreamAPI()`:
``` javascript
await monitorListingsUsingStreamAPI(client);
```
2. Save your file.
3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.
4. Output similar to what we saw earlier will be displayed in your first shell where you are running `changeStreams.js`. If you run `node changeStreamsTestData.js` again before the 60 second timer has completed, you will see similar output again. After 60 seconds, the following will be displayed:
``` sh
Closing the change stream
```
#### Call the Function with an Aggregation Pipeline
As we discussed earlier, sometimes you will want to use an aggregation pipeline to filter the changes in your change stream or transform the change stream event documents. Let's pass the aggregation pipeline we created in an earlier section to our new function.
1. Update your existing call to `monitorListingsUsingStreamAPI()` to only leave the change stream open for 30 seconds and use the aggregation pipeline.
``` javascript
await monitorListingsUsingStreamAPI(client, 30000, pipeline);
```
2. Save your file.
3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.
4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. Output similar to what we saw earlier while using a change stream with an aggregation pipeline will be displayed in your first shell where you are running `changeStreams.js`. After 30 seconds, the following will be displayed:
``` sh
Closing the change stream
```
## Resume a Change Stream
At some point, your application will likely lose the connection to the change stream. Perhaps a network error will occur and a connection between the application and the database will be dropped. Or perhaps your application will crash and need to be restarted (but you're a 10x developer and that would never happen to you, right?).
In those cases, you may want to resume the change stream where you previously left off so you don't lose any of the change events.
Each change stream event document contains a resume token. The Node.js driver automatically stores the resume token in the `_id` of the change event document.
The application can pass the resume token when creating a new change stream. The change stream will include all events that happened after the event associated with the given resume token.
The MongoDB Node.js driver will automatically attempt to reestablish connections in the event of transient network errors or elections. In those cases, the driver will use its cached copy of the most recent resume token so that no change stream events are lost.
In the event of an application failure or restart, the application will need to pass the resume token when creating the change stream in order to ensure no change stream events are lost. Keep in mind that the driver will lose its cached copy of the most recent resume token when the application restarts, so your application should store the resume token.
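As a rough sketch, resuming could look like the following. The `resumeToken` variable here is an assumption: it stands in for a token your application persisted earlier (the `_id` of a previously processed change event).

``` javascript
// Sketch only: `resumeToken` is assumed to be a resume token your application
// stored earlier, e.g. the `_id` of the last change event it processed.
const collection = client.db("sample_airbnb").collection("listingsAndReviews");
const changeStream = collection.watch(pipeline, { resumeAfter: resumeToken });

changeStream.on('change', (next) => {
    console.log(next);
});
```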
For more information and sample code for resuming change streams, see the official documentation.
## What are MongoDB Atlas Triggers?
Change streams allow you to react immediately to changes in your database. If you want to constantly be monitoring changes to your database, ensuring that your application that is monitoring the change stream is always up and not missing any events is possible... but can be challenging. This is where MongoDB Atlas triggers come in.
MongoDB supports triggers in Atlas. Atlas triggers allow you to execute functions in real time based on database events (just like change streams) or on scheduled intervals (like a cron job). Atlas triggers have a few big advantages:
- You don't have to worry about programming the change stream. You simply program the function that will be executed when the database event is fired.
- You don't have to worry about managing the server where your change stream code is running. Atlas takes care of the server management for you.
- You get a handy UI to configure your trigger, which means you have less code to write.
Atlas triggers do have a few constraints. The biggest constraint I hit in the past was that functions did not support module imports (i.e. **import** and **require**). That has changed, and you can now upload external dependencies that you can use in your functions. See Upload External Dependencies for more information. To learn more about functions and their constraints, see the official Realm Functions documentation.
## Create a MongoDB Atlas Trigger
Just as we did in earlier sections, let's look for new listings in the Sydney, Australia market. Instead of working locally in a code editor to create and monitor a change stream, we'll create a trigger in the Atlas web UI.
### Create a Trigger
Let's create an Atlas trigger that will monitor the `listingsAndReviews` collection and call a function whenever a new listing is added in the Sydney, Australia market.
1. Navigate to your project in Atlas.
2. In the Data Storage section of the left navigation pane, click **Triggers**.
3. Click **Add Trigger**. The **Add Trigger** wizard will appear.
4. In the **Link Data Source(s)** selection box, select your cluster that contains the `sample_airbnb` database and click **Link**. The changes will be deployed. The deployment may take a minute or two. Scroll to the top of the page to see the status.
5. In the **Select a cluster...** selection box, select your cluster that contains the `sample_airbnb` database.
6. In the **Select a database name...** selection box, select **sample_airbnb**.
7. In the **Select a collection name...** selection box, select **listingsAndReviews**.
8. In the Operation Type section, check the box beside **Insert**.
9. In the Function code box, replace the commented code with a call to log the change event. The code should now look like the following:
``` javascript
exports = function(changeEvent) {
console.log(JSON.stringify(changeEvent.fullDocument));
};
```
10. We can create a $match statement to filter our change events just as we did earlier with the aggregation pipeline we passed to the change stream in our Node.js script. Expand the **ADVANCED (OPTIONAL)** section at the bottom of the page and paste the following in the **Match Expression** code box.
``` javascript
{
"fullDocument.address.country": "Australia",
"fullDocument.address.market": "Sydney"
}
```
11. Click **Save**. The trigger will be enabled. From that point on, the function to log the change event will be called whenever a new document in the Sydney, Australia market is inserted in the `listingsAndReviews` collection.
### Fire the Trigger
Now that we have the trigger configured, let's create sample data that will fire the trigger.
1. Return to the shell on your local machine.
2. Create and update sample data by executing node changeStreamsTestData.js in a new shell.
### View the Trigger Results
When you created the trigger, MongoDB Atlas automatically created a Realm application for you named **Triggers_RealmApp**.
The function associated with your trigger doesn't currently do much. It simply prints the change event document. Let's view the results in the logs of the Realm app associated with your trigger.
1. Return to your browser where you are viewing your trigger in Atlas.
2. In the navigation bar toward the top of the page, click **Realm**.
3. In the Applications pane, click **Triggers_RealmApp**. The **Triggers_RealmApp** Realm application will open.
4. In the MANAGE section of the left navigation pane, click **Logs**. Two entries will be displayed in the Logs pane—one for each of the listings in the Sydney, Australia market that was inserted into the collection.
5. Click the arrow at the beginning of each row in the Logs pane to expand the log entry. Here you can see the full document that was inserted.
If you insert more listings in the Sydney, Australia market, you can refresh the Logs page to see the change events.
## Wrapping Up
Today we explored four different ways to accomplish the same task of reacting immediately to changes in the database. We began by writing a Node.js script that monitored a change stream using Node.js's Built-in EventEmitter class. Next we updated the Node.js script to monitor a change stream using the MongoDB Node.js Driver's ChangeStream class. Then we updated the Node.js script to monitor a change stream using the Stream API. Finally, we created an Atlas trigger to monitor changes. In all four cases, we were able to use $match to filter the change stream events.
This post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.
The examples we explored today all did relatively simple things whenever an event was fired: they logged the change events. Change streams and triggers become really powerful when you start doing more in response to change events. For example, you might want to fire alarms, send emails, place orders, update other systems, or do other amazing things.
This is the final post in the Node.js and MongoDB Quick Start Series (at least for now!). I hope you've enjoyed it! If you have ideas for other topics you'd like to see covered, let me know in the MongoDB Community.
## Additional Resources
- MongoDB Official Documentation: Change Streams
- MongoDB Official Documentation: Triggers
- Blog Post: An Introduction to Change Streams
- Video: Using Change Streams to Keep Up with Your Data
| md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Discover how to react to changes in your MongoDB database using change streams implemented in Node.js and Atlas triggers.",
"contentType": "Quickstart"
} | Change Streams & Triggers with Node.js Tutorial | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/visually-showing-atlas-search-highlights-javascript-html | created |
Search
| md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to use JavaScript and HTML to show MongoDB Atlas Search highlights on the screen.",
"contentType": "Tutorial"
} | Visually Showing Atlas Search Highlights with JavaScript and HTML | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-5-0-schema-validation | created | # Improved Error Messages for Schema Validation in MongoDB 5.0
## Intro
Many MongoDB users rely on schema
validation to
enforce rules governing the structure and integrity of documents in
their collections. But one of the challenges they faced was quickly
understanding why a document that did not match the schema couldn't be
inserted or updated. This is changing in the upcoming MongoDB 5.0
release.
Schema validation ease-of-use will be significantly improved by
generating descriptive error messages whenever an operation fails
validation. This additional information provides valuable insight into
which parts of a document in an insert/update operation failed to
validate against which parts of a collection's validator, and how. From
this information, you can quickly identify and remediate code errors
that are causing documents to not comply with your validation rules. No
more tedious debugging by slicing your document into pieces to isolate
the problem!
>
>
>If you would like to evaluate this feature and provide us early
>feedback, fill in this
>form to
>participate in the preview program.
>
>
The most popular way to express the validation rules is JSON
Schema.
It is a widely adopted standard that is also used within the REST API
specification and validation. And in MongoDB, you can combine JSON
Schema with the MongoDB Query Language (MQL) to do even more.
In this post, I would like to go over a few examples to reiterate the
capabilities of schema validation and showcase the addition of new
detailed error messages.
## What Do the New Error Messages Look Like?
First, let's look at the new error message. It is a structured message
in the BSON format, explaining which part of the document didn't match
the rules and which validation rule caused this.
Consider this basic validator that ensures that the price field does not
accept negative values. In JSON Schema, the property is the equivalent
of what we call "field" in MongoDB.
``` json
{
"$jsonSchema": {
"properties": {
"price": {
"minimum": 0
}
}
}
}
```
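If you want to try this yourself, one way to attach the validator is when you create the collection. The sketch below uses the MongoDB shell, and the collection name `properties` is only an example; for an existing collection, the `collMod` command accepts the same `validator` document.

``` javascript
// Sketch: create a collection with the validator above attached.
// The collection name "properties" is just an example.
db.createCollection("properties", {
  validator: {
    $jsonSchema: {
      properties: {
        price: { minimum: 0 }
      }
    }
  }
});

// This insert violates the rule and will be rejected.
db.properties.insertOne({ price: -2 });
```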
When trying to insert a document with `{price: -2}`, the following error
message will be returned.
``` json
{
"code": 121,
"errmsg": "Document failed validation",
"errInfo": {
"failingDocumentId": ObjectId("5fe0eb9642c10f01eeca66a9"),
"details": {
"operatorName": "$jsonSchema",
"schemaRulesNotSatisfied":
{
"operatorName": "properties",
"propertiesNotSatisfied": [
{
"propertyName": "price",
"details": [
{
"operatorName": "minimum",
"specifiedAs": {
"minimum": 0
},
"reason": "comparison failed",
"consideredValue": -2
}
]
}
]
}
]
}
}
}
```
Some of the key fields in the response are:
- `failingDocumentId` - the \_id of the document that was evaluated
- `operatorName` - the operator used in the validation rule
- `propertiesNotSatisfied` - the list of fields (properties) that
failed validation checks
- `propertyName` - the field of the document that was evaluated
- `specifiedAs` - the rule as it was expressed in the validator
- `reason` - explanation of how the rule was not satisfied
- `consideredValue` - value of the field in the document that was
evaluated
The error may include more fields depending on the specific validation
rule, but these are the most common. You will likely find the
`propertyName` and `reason` to be the most useful fields in the
response.
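If your application needs to react to these errors programmatically rather than reading them in the shell, the structured format helps there too. Here is a minimal Node.js sketch, assuming `collection` points to a collection with a validator attached and that the thrown server error carries the `errInfo` document shown above:

``` javascript
// Minimal sketch (inside an async function with a connected `collection` handle).
try {
  await collection.insertOne({ price: -2 });
} catch (e) {
  if (e.code === 121) {
    // 121 = DocumentValidationFailure; print the detailed explanation
    console.log(JSON.stringify(e.errInfo, null, 2));
  } else {
    throw e;
  }
}
```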
Now we can look at the examples of the different validation rules and
see how the new detailed message helps us identify the reason for the
validation failure.
## Exploring a Sample Collection
As an example, we'll use a collection of real estate properties in NYC
managed by a team of real estate agents.
Here is a sample document:
``` json
{
"PID": "EV10010A1",
"agents": [ { "name": "Ana Blake", "email": "[email protected]" } ],
"description": "Spacious 2BR apartment",
"localization": { "description_es": "Espacioso apartamento de 2 dormitorios" },
"type": "Residential",
"address": {
"street1": "235 E 22nd St",
"street2": "Apt 42",
"city": "New York",
"state": "NY",
"zip": "10010"
},
"originalPrice": 990000,
"discountedPrice": 980000,
"geoLocation": [ -73.9826509, 40.737499 ],
"listedDate": "Wed Dec 11 2020 10:05:10 GMT-0500 (EST)",
"saleDate": "Wed Dec 21 2020 12:00:04 GMT-0500 (EST)",
"saleDetails": {
"price": 970000,
"buyer": { "id": "24434" },
"bids": [
{
"price": 950000,
"winner": false,
"bidder": {
"id": "24432",
"name": "Sam James",
"contact": { "email": "[email protected]" }
}
},
{
"price": 970000,
"winner": true,
"bidder": {
"id": "24434",
"name": "Joana Miles",
"contact": { "email": "[email protected]" }
}
}
]
}
}
```
## Using the Value Pattern
Our real estate properties are identified with property id (PID) that
has to follow a specific naming format: It should start with two letters
followed by five digits, and some letters and digits after, like this:
WS10011FG4 or EV10010A1.
We can use JSON Schema `pattern` operator to create a rule for this as a
regular expression.
Validator:
``` json
{
"$jsonSchema": {
"properties": {
"PID": {
"bsonType": "string",
"pattern": "^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$"
}
}
}
}
```
If we try to insert a document with a PID field that doesn't match the
pattern, for example `{ PID: "apt1" }`, we will receive an error.
The error states that the field `PID` had the value of `"apt1"` and it
did not match the regular expression, which was specified as
`"^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$"`.
``` json
{ ...
"schemaRulesNotSatisfied": [
{
"operatorName": "properties",
"propertiesNotSatisfied": [
{
"propertyName": "PID",
"details": [
{
"operatorName": "pattern",
"specifiedAs": {
"pattern": "^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$"
},
"reason": "regular expression did not match",
"consideredValue": "apt1"
}
]
}
]
...
}
```
## Additional Properties and Property Pattern
The description may be localized into several languages. Currently, our
application only supports Spanish, German, and French, so the
localization object can only contain fields `description_es`,
`description_de`, or `description_fr`. Other fields will not be allowed.
We can use operator `patternProperties` to describe this requirement as
regular expression and indicate that no other fields are expected here
with `"additionalProperties": false`.
Validator:
``` json
{
"$jsonSchema": {
"properties": {
"PID": {...},
"localization": {
"additionalProperties": false,
"patternProperties": {
"^description_(es|de|fr)+$": {
"bsonType": "string"
}
}
}
}
}
}
```
Document like this can be inserted successfully:
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"localization": {
"description_es": "Amplio apartamento de 2 dormitorios",
"description_de": "Geräumige 2-Zimmer-Wohnung",
}
}
```
Document like this will fail the validation check:
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"localization": {
"description_cz": "Prostorný byt 2 + kk"
}
}
```
The error below indicates that field `localization` contains additional
property `description_cz`. `description_cz` does not match the expected
pattern, so it is considered an additional property.
``` json
{ ...
"propertiesNotSatisfied": [
{
"propertyName": "localization",
"details": [
{
"operatorName": "additionalProperties",
"specifiedAs": {
"additionalProperties": false
},
"additionalProperties": [
"description_cz"
]
}
]
}
]
...
}
```
## Enumeration of Allowed Options
Each real estate property in our collection has a type, and we want to
use one of the four types: "Residential," "Commercial," "Industrial," or
"Land." This can be achieved with the operator `enum`.
Validator:
``` json
{
"$jsonSchema": {
"properties": {
"type": {
"enum": [ "Residential", "Commercial", "Industrial", "Land" ]
}
}
}
}
```
The following document will be considered invalid:
``` json
{
"PID": "TS10018A1", "type": "House"
}
```
The error states that field `type` failed validation because "value was
not found in enum."
``` json
{...
"propertiesNotSatisfied": [
{
"propertyName": "type",
"details": [
{
"operatorName": "enum",
"specifiedAs": {
"enum": [
"Residential",
"Commercial",
"Industrial",
"Land"
]
},
"reason": "value was not found in enum",
"consideredValue": "House"
}
]
}
]
...
}
```
## Arrays: Enforcing Number of Elements and Uniqueness
Agents who manage each real estate property are stored in the `agents`
array. Let's make sure there are no duplicate elements in the array, and
no more than three agents are working with the same property. We can use
`uniqueItems` and `maxItems` for this.
``` json
{
"$jsonSchema": {
"properties": {
"agents": {
"bsonType": "array",
"uniqueItems": true,
"maxItems": 3
}
}
}
}
```
The following document violates both of the validation rules.
``` json
{
"PID": "TS10018A1",
"agents": [
{ "name": "Ana Blake" },
{ "name": "Felix Morin" },
{ "name": "Dilan Adams" },
{ "name": "Ana Blake" }
]
}
```
The error returns information about failure for two rules: "array did
not match specified length" and "found a duplicate item," and it also
points to what value was a duplicate.
``` json
{
...
"propertiesNotSatisfied": [
{
"propertyName": "agents",
"details": [
{
"operatorName": "maxItems",
"specifiedAs": { "maxItems": 3 },
"reason": "array did not match specified length",
"consideredValue": [
{ "name": "Ana Blake" },
{ "name": "Felix Morin" },
{ "name": "Dilan Adams" },
{ "name": "Ana Blake" }
]
},
{
"operatorName": "uniqueItems",
"specifiedAs": { "uniqueItems": true },
"reason": "found a duplicate item",
"consideredValue": [
{ "name": "Ana Blake" },
{ "name": "Felix Morin" },
{ "name": "Dilan Adams" },
{ "name": "Ana Blake" }
],
"duplicatedValue": { "name": "Ana Blake" }
}
]
...
}
```
## Enforcing Required Fields
Now, we want to make sure that there's contact information available for
the agents. We need each agent's name and at least one way to contact
them: phone or email. We will use `required` and `anyOf` to create this
rule.
Validator:
``` json
{
"$jsonSchema": {
"properties": {
"agents": {
"bsonType": "array",
"uniqueItems": true,
"maxItems": 3,
"items": {
"bsonType": "object",
"required": [ "name" ],
"anyOf": [ { "required": [ "phone" ] }, { "required": [ "email" ] } ]
}
}
}
}
}
```
The following document will fail validation:
``` json
{
"PID": "TS10018A1",
"agents": [
{ "name": "Ana Blake", "email": "[email protected]" },
{ "name": "Felix Morin", "phone": "+12019878749" },
{ "name": "Dilan Adams" }
]
}
```
Here the error indicates that the third element of the array
(`"itemIndex": 2`) did not match the rule.
``` json
{
...
"propertiesNotSatisfied": [
{
"propertyName": "agents",
"details": [
{
"operatorName": "items",
"reason": "At least one item did not match the sub-schema",
"itemIndex": 2,
"details": [
{
"operatorName": "anyOf",
"schemasNotSatisfied": [
{
"index": 0,
"details": [
{
"operatorName": "required",
"specifiedAs": { "required": [ "phone" ] },
"missingProperties": [ "phone" ]
}
]
},
{
"index": 1,
"details": [
{
"operatorName": "required",
"specifiedAs": { "required": [ "email" ] },
"missingProperties": [ "email" ]
}
]
}
]
}
]
}
]
}
]
...
}
```
## Creating Dependencies
Let's create another rule to ensure that if the document contains the
`saleDate` field, `saleDetails` is also present, and vice versa: If
there is `saleDetails`, then `saleDate` also has to exist.
``` json
{
"$jsonSchema": {
"dependencies": {
"saleDate": [ "saleDetails"],
"saleDetails": [ "saleDate"]
}
}
}
```
Now, let's try to insert the document with `saleDate` but with no
`saleDetails`:
``` json
{
"PID": "TS10018A1",
"saleDate": Date("2020-05-01T04:00:00.000Z")
}
```
The error now includes the property with dependency `saleDate` and a
property missing from the dependencies: `saleDetails`.
``` json
{
...
"details": {
"operatorName": "$jsonSchema",
"schemaRulesNotSatisfied": [
{
"operatorName": "dependencies",
"failingDependencies": [
{
"conditionalProperty": "saleDate",
"missingProperties": [ "saleDetails" ]
}
]
}
]
}
...
}
```
Notice that in JSON Schema, the field `dependencies` is in the root
object, and not inside of the specific property. Therefore in the error
message, the `details` object will have a different structure:
``` json
{ "operatorName": "dependencies", "failingDependencies": [...]}
```
In the previous examples, when the JSON Schema rule was inside of the
"properties" object, like this:
``` json
"$jsonSchema": { "properties": { "price": { "minimum": 0 } } }
```
the details of the error message contained
`"operatorName": "properties"` and a `"propertyName"`:
``` json
{ "operatorName": "properties",
"propertiesNotSatisfied": [ { "propertyName": "...", "details": [] } ]
}
```
## Adding Business Logic to Your Validation Rules
You can use MongoDB Query Language (MQL) in your validator right next to
JSON Schema to add richer business logic to your rules.
As one example, you can use
[$expr
to add a check for a `discountPrice` to be less than `originalPrice`
just like this:
``` json
{
"$expr": {
"$lt": "$discountedPrice", "$originalPrice" ]
},
"$jsonSchema": {...}
}
```
[$expr
resolves to `true` or `false`, and allows you to use aggregation
expressions to create sophisticated business rules.
For a little more complex example, let's say we keep an array of bids in
the document of each real estate property, and the boolean field
`isWinner` indicates if a particular bid is a winning one.
Sample document:
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"saleDetails": {
"bids":
{
"price": 500000,
"isWinner": false,
"bidder": {...}
},
{
"price": 530000,
"isWinner": true,
"bidder": {...}
}
]
}
}
```
Let's make sure that only one of the `bids` array elements can be marked
as the winner. The validator will have an expression where we apply a
filter to the array of bids to only keep the elements with `"isWinner": true`, and check the size of the resulting array to be less than or equal to 1.
Validator:
``` json
{
"$and": [
{
"$expr": {
"$lte": [
{
"$size": {
"$filter": {
"input": "$saleDetails.bids.isWinner",
"cond": "$$this"
}
}
},
1
]
}
},
{
"$expr": {...}
},
{
"$jsonSchema": {...}
}
]
}
```
Let's try to insert the document with few bids having
`"isWinner": true`.
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"originalPrice": 600000,
"discountedPrice": 550000,
"saleDetails": {
"bids": [
{ "price": 500000, "isWinner": true },
{ "price": 530000, "isWinner": true }
]
}
}
```
The produced error message will indicate which expression evaluated to
false.
``` json
{
...
"details": {
"operatorName": "$expr",
"specifiedAs": {
"$expr": {
"$lte": [
{
"$size": {
"$filter": {
"input": "$saleDetails.bids.isWinner",
"cond": "$$this"
}
}
},
1
]
}
},
"reason": "expression did not match",
"expressionResult": false
}
...
}
```
## Geospatial Validation
As the last example, let's see how we can use the geospatial features of
MQL to ensure that all the real estate properties in the collection are
located within the New York City boundaries. Our documents include a
`geoLocation` field with coordinates. We can use `$geoWithin` to check
that these coordinates are inside the geoJSON polygon (the polygon for
New York City in this example is approximate).
Validator:
``` json
{
"geoLocation": {
"$geoWithin": {
"$geometry": {
"type": "Polygon",
"coordinates": [
[ [ -73.91326904296874, 40.91091803848203 ],
[ -74.01626586914062, 40.75297891717686 ],
[ -74.05677795410156, 40.65563874006115 ],
[ -74.08561706542969, 40.65199222800328 ],
[ -74.14329528808594, 40.64417760251725 ],
[ -74.18724060058594, 40.643656594948524 ],
[ -74.234619140625, 40.556591288249905 ],
[ -74.26345825195312, 40.513277131087484 ],
[ -74.2510986328125, 40.49500373230525 ],
[ -73.94691467285156, 40.543026009954986 ],
[ -73.740234375, 40.589449604232975 ],
[ -73.71826171874999, 40.820045086716505 ],
[ -73.78829956054686, 40.8870435151357 ],
[ -73.91326904296874, 40.91091803848203 ] ]
]
}
}
},
"$jsonSchema": {...}
}
```
A document like this will be inserted successfully.
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"geoLocation": [ -73.9826509, 40.737499 ],
"originalPrice": 600000,
"discountedPrice": 550000,
"saleDetails": {...}
}
```
The following document will fail.
``` json
{
"PID": "TS10018A1",
"type": "Residential",
"geoLocation": [ -73.9826509, 80.737499 ],
"originalPrice": 600000,
"discountedPrice": 550000,
"saleDetails": {...}
}
```
The error will indicate that validation failed the `$geoWithin`
operator, and the reason is "none of the considered geometries were
contained within the expression's geometry."
``` json
{
...
"details": {
"operatorName": "$geoWithin",
"specifiedAs": {
"geoLocation": {
"$geoWithin": {...}
}
},
"reason": "none of the considered geometries were contained within the
expression's geometry",
"consideredValues": [ -73.9826509, 80.737499 ]
}
...
}
```
## Conclusion and Next Steps
Schema validation is a great tool to enforce governance over your data
sets. You have the choice to express the validation rules using JSON
Schema, MongoDB Query Language, or both. And now, with the detailed
error messages, it gets even easier to use, and you can have the rules
be as sophisticated as you need, without the risk of costly maintenance.
You can find the full validator code and sample documents from this post
here.
>
>
>If you would like to evaluate this feature and provide us early
>feedback, fill in this
>form to
>participate in the preview program.
>
>
More posts on schema validation:
- JSON Schema Validation - Locking down your model the smart
way
- JSON Schema Validation - Dependencies you can depend
on
- JSON Schema Validation - Checking Your
Arrays
Questions? Comments? We'd love to connect with you. Join the
conversation on the MongoDB Community
Forums.
**Safe Harbor**
The development, release, and timing of any features or functionality
described for our products remains at our sole discretion. This
information is merely intended to outline our general product direction
and it should not be relied on in making a purchasing decision nor is
this a commitment, promise or legal obligation to deliver any material,
code, or functionality.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn about improved error messages for schema validation in MongoDB 5.0.",
"contentType": "News & Announcements"
} | Improved Error Messages for Schema Validation in MongoDB 5.0 | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-mapping-pojos | created | # Java - Mapping POJOs
## Updates
The MongoDB Java quickstart repository is available on GitHub.
### February 28th, 2024
- Update to Java 21
- Update Java Driver to 5.0.0
- Update `logback-classic` to 1.2.13
### November 14th, 2023
- Update to Java 17
- Update Java Driver to 4.11.1
- Update mongodb-crypt to 1.8.0
### March 25th, 2021
- Update Java Driver to 4.2.2.
- Added Client Side Field Level Encryption example.
### October 21st, 2020
- Update Java Driver to 4.1.1.
- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in the `pom.xml` and a configuration file `logback.xml`.
## Introduction
Java is an object-oriented programming language and MongoDB stores documents, which look a lot like objects. Indeed, this is not a coincidence because that's the core idea behind the MongoDB database.
In this blog post, as promised in the first blog post of this series, I will show you how to automatically map MongoDB documents to Plain Old Java Objects (POJOs) using only the MongoDB driver.
## Getting Set Up
I will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just update it if you already have it:
``` sh
git clone https://github.com/mongodb-developer/java-quick-start
```
If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
## The Grades Collection
If you followed this series, you know that we have been working with the `grades` collection in the `sample_training` database. You can import it easily by loading the sample dataset in MongoDB Atlas.
Here is what a MongoDB document looks like in extended JSON format. I'm using the extended JSON because it's easier to identify the field types and we will need them to build the POJOs.
``` json
{
"_id": {
"$oid": "56d5f7eb604eb380b0d8d8ce"
},
"student_id": {
"$numberDouble": "0"
},
"scores": {
"type": "exam",
"score": {
"$numberDouble": "78.40446309504266"
}
}, {
"type": "quiz",
"score": {
"$numberDouble": "73.36224783231339"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "46.980982486720535"
}
}, {
"type": "homework",
"score": {
"$numberDouble": "76.67556138656222"
}
}],
"class_id": {
"$numberDouble": "339"
}
}
```
## POJOs
The first thing we need is a representation of this document in Java. For each document or subdocument, I need a corresponding POJO class.
As you can see in the document above, I have the main document itself and I have an array of subdocuments in the `scores` field. Thus, we will need 2 POJOs to represent this document in Java:
- One for the grade,
- One for the scores.
In the package `com.mongodb.quickstart.models`, I created two new POJOs: `Grade.java` and `Score.java`.
[Grade.java:
``` java
package com.mongodb.quickstart.models;
// imports
public class Grade {
private ObjectId id;
@BsonProperty(value = "student_id")
private Double studentId;
@BsonProperty(value = "class_id")
private Double classId;
private List<Score> scores;
// getters and setters with builder pattern
// toString()
// equals()
// hashCode()
}
```
>In the Grade class above, I'm using `@BsonProperty` to avoid violating Java naming conventions for variables, getters, and setters. This allows me to indicate to the mapper that I want the `"student_id"` field in JSON to be mapped to the `"studentId"` field in Java.
Score.java:
``` java
package com.mongodb.quickstart.models;
import java.util.Objects;
public class Score {
private String type;
private Double score;
// getters and setters with builder pattern
// toString()
// equals()
// hashCode()
}
```
As you can see, we took care of matching the Java types with the JSON value types to follow the same data model. You can read more about types and documents in the documentation.
## Mapping POJOs
Now that we have everything we need, we can start the MongoDB driver code.
I created a new class `MappingPOJO` in the `com.mongodb.quickstart` package and here are the key lines of code:
- I need a `ConnectionString` instance instead of the usual `String` I have used so far in this series. I'm still retrieving my MongoDB Atlas URI from the system properties. See my starting and setup blog post if you need a reminder.
``` java
ConnectionString connectionString = new ConnectionString(System.getProperty("mongodb.uri"));
```
- I need to configure the CodecRegistry to include a codec to handle the translation to and from BSON for our POJOs.
``` java
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
```
- And I need to add the default codec registry, which contains all the default codecs. They can handle all the major types in Java, like `Boolean`, `Double`, `String`, `BigDecimal`, etc.
``` java
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),
pojoCodecRegistry);
```
- I can now wrap all my settings together using `MongoClientSettings`.
``` java
MongoClientSettings clientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.codecRegistry(codecRegistry)
.build();
```
- I can finally initialise my connection with MongoDB.
``` java
try (MongoClient mongoClient = MongoClients.create(clientSettings)) {
MongoDatabase db = mongoClient.getDatabase("sample_training");
MongoCollection grades = db.getCollection("grades", Grade.class);
...]
}
```
As you can see in this last line of Java, all the magic is happening here. The `MongoCollection` I'm retrieving is typed by `Grade` and not by `Document` as usual.
In the previous blog posts in this series, I showed you how to use CRUD operations by manipulating `MongoCollection`. Let's review all the CRUD operations using POJOs now.
- Here is an insert (create).
``` java
Grade newGrade = new Grade().setStudentId(10003d)
                            .setClassId(10d)
.setScores(List.of(new Score().setType("homework").setScore(50d)));
grades.insertOne(newGrade);
```
- Here is a find (read).
``` java
Grade grade = grades.find(eq("student_id", 10003d)).first();
System.out.println("Grade found:\t" + grade);
```
- Here is an update with a `findOneAndReplace` returning the newest version of the document.
``` java
List<Score> newScores = new ArrayList<>(grade.getScores());
newScores.add(new Score().setType("exam").setScore(42d));
grade.setScores(newScores);
Document filterByGradeId = new Document("_id", grade.getId());
FindOneAndReplaceOptions returnDocAfterReplace = new FindOneAndReplaceOptions()
.returnDocument(ReturnDocument.AFTER);
Grade updatedGrade = grades.findOneAndReplace(filterByGradeId, grade, returnDocAfterReplace);
System.out.println("Grade replaced:\t" + updatedGrade);
```
- And finally here is a `deleteOne`.
``` java
System.out.println(grades.deleteOne(filterByGradeId));
```
## Final Code
`MappingPOJO.java`:
``` java
package com.mongodb.quickstart;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.FindOneAndReplaceOptions;
import com.mongodb.client.model.ReturnDocument;
import com.mongodb.quickstart.models.Grade;
import com.mongodb.quickstart.models.Score;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.bson.conversions.Bson;
import java.util.ArrayList;
import java.util.List;
import static com.mongodb.client.model.Filters.eq;
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
public class MappingPOJO {
public static void main(String[] args) {
ConnectionString connectionString = new ConnectionString(System.getProperty("mongodb.uri"));
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
MongoClientSettings clientSettings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.codecRegistry(codecRegistry)
.build();
try (MongoClient mongoClient = MongoClients.create(clientSettings)) {
MongoDatabase db = mongoClient.getDatabase("sample_training");
MongoCollection grades = db.getCollection("grades", Grade.class);
// create a new grade.
Grade newGrade = new Grade().setStudentId(10003d)
.setClassId(10d)
.setScores(List.of(new Score().setType("homework").setScore(50d)));
grades.insertOne(newGrade);
System.out.println("Grade inserted.");
// find this grade.
Grade grade = grades.find(eq("student_id", 10003d)).first();
System.out.println("Grade found:\t" + grade);
// update this grade: adding an exam grade
List<Score> newScores = new ArrayList<>(grade.getScores());
newScores.add(new Score().setType("exam").setScore(42d));
grade.setScores(newScores);
Bson filterByGradeId = eq("_id", grade.getId());
FindOneAndReplaceOptions returnDocAfterReplace = new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER);
Grade updatedGrade = grades.findOneAndReplace(filterByGradeId, grade, returnDocAfterReplace);
System.out.println("Grade replaced:\t" + updatedGrade);
// delete this grade
System.out.println("Grade deleted:\t" + grades.deleteOne(filterByGradeId));
}
}
}
```
To start this program, you can use this maven command line in your root project (where the `src` folder is) or your favorite IDE.
``` bash
mvn compile exec:java -Dexec.mainClass="com.mongodb.quickstart.MappingPOJO" -Dmongodb.uri="mongodb+srv://USERNAME:PASSWORD@YOUR_CLUSTER.mongodb.net/test?w=majority"
```
## Wrapping Up
Mapping POJOs and your MongoDB documents simplifies your life a lot when you are solving real-world problems with Java, but you can certainly be successful without using POJOs.
MongoDB is a dynamic schema database which means your documents can have different schemas within a single collection. Mapping all the documents from such a collection can be a challenge. So, sometimes, using the "old school" method and the `Document` class will be easier.
>If you want to learn more and deepen your knowledge faster, I recommend you check out the MongoDB Java Developer Path training available for free on MongoDB University.
In the next blog post, I will show you the aggregation framework in Java.
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to use the native mapping of POJOs using the MongoDB Java Driver.",
"contentType": "Quickstart"
} | Java - Mapping POJOs | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/time-series-macd-rsi | created | # Currency Analysis with Time Series Collections #3 — MACD and RSI Calculation
In the first post of this series, we learned how to group currency data based on given time intervals to generate candlestick charts. In the second article, we learned how to calculate simple moving average and exponential moving average on the currencies based on a given time window. Now, in this post we’ll learn how to calculate more complex technical indicators.
## MACD Indicator
MACD (Moving Average Convergence Divergence) is another trading indicator and provides visibility of the trend and momentum of the currency/stock. MACD calculation fundamentally leverages multiple EMA calculations with different parameters.
As shown in the below diagram, MACD indicator has three main components: MACD Line, MACD Signal, and Histogram. (The blue line represents MACD Line, the red line represents MACD Signal, and green and red bars represent histogram):
- MACD Line is calculated by subtracting the 26-period (mostly, days are used for the period) exponential moving average from the 12-period exponential moving average.
- After we get the MACD Line, we can calculate the MACD Signal. MACD Signal is calculated by getting the nine-period exponential moving average of MACD Line.
- MACD Histogram is calculated by subtracting the MACD Signal from the MACD Line.
We can use the MongoDB Aggregation Framework to calculate this complex indicator.
In the previous blog posts, we learned how to group the second-level raw data into five-minute intervals through the `$group` stage and `$dateTrunc` operator:
```js
db.ticker.aggregate([
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5,
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
{
$project: {
_id: 1,
price: "$close",
},
}
]);
```
After that, we need to calculate two exponential moving averages with different parameters:
```js
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
ema_12: {
$expMovingAvg: { input: "$price", N: 12 },
},
ema_26: {
$expMovingAvg: { input: "$price", N: 26 },
},
},
},
}
```
After we calculate two separate exponential moving averages, we need to apply the `$subtract` operation in the next stage of the aggregation pipeline:
```js
{ $addFields : {"macdLine" : {"$subtract" : ["$ema_12", "$ema_26"]}}}
```
After we’ve obtained the `macdLine` field, then we can apply another exponential moving average to this newly generated field (`macdLine`) to obtain MACD signal value:
```js
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
macdSignal: {
$expMovingAvg: { input: "$macdLine", N: 9 },
},
},
},
}
```
Therefore, we will have two more fields: `macdLine` and `macdSignal`. We can generate another field as `macdHistogram` that is calculated by subtracting the `macdSignal` from `macdLine` value:
```js
{ $addFields : {"macdHistogram" : {"$subtract" : ["$macdLine", "$macdSignal"]}}}
```
Now we have three derived fields: `macdLine`, `macdSignal`, and `macdHistogram`. Below, you can see how MACD is visualized together with Candlesticks:
*(Figure: MACD visualized together with candlesticks: the MACD line, signal, and histogram.)*
This is the complete aggregation pipeline:
```js
db.ticker.aggregate([
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5,
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
{
$project: {
_id: 1,
price: "$close",
},
},
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
ema_12: {
$expMovingAvg: { input: "$price", N: 12 },
},
ema_26: {
$expMovingAvg: { input: "$price", N: 26 },
},
},
},
},
{ $addFields: { macdLine: { $subtract: ["$ema_12", "$ema_26"] } } },
{
$setWindowFields: {
partitionBy: "_id.symbol",
sortBy: { "_id.time": 1 },
output: {
macdSignal: {
$expMovingAvg: { input: "$macdLine", N: 9 },
},
},
},
},
{
$addFields: { macdHistogram: { $subtract: ["$macdLine", "$macdSignal"] } },
},
]);
```
## RSI Indicator
RSI (Relative Strength Index) is another financial technical indicator that reveals whether the asset has been overbought or oversold. It usually uses a 14-period time frame window, and the value of RSI is measured on a scale of 0 to 100. If the value is closer to 100, then it indicates that the asset has been overbought within this time period. And if the value is closer to 0, then it indicates that the asset has been oversold within this time period. Mostly, 70 and 30 are used for upper and lower thresholds.
Calculation of RSI is a bit more complicated than MACD:
- For every data point, the gain and the loss values are set by comparing one previous data point.
- After we set gain and loss values for every data point, then we can get a moving average of both gain and loss over a 14-period window. (You don't have to use a 14-period window; set whatever works for you.)
- After we get the average gain and the average loss value, we can divide average gain by average loss.
- After that, we can smooth the value to normalize it between 0 and 100.
### Calculating Gain and Loss
Firstly, we need to define the gain and the loss value for each interval.
The gain and loss value are calculated by subtracting one previous price information from the current price information:
- If the difference is positive, it means there is a price increase and the value of the gain will be the difference between current price and previous price. The value of the loss will be 0.
- If the difference is negative, it means there is a price decline and the value of the loss will be the difference between previous price and current price. The value of the gain will be 0.
Consider the following input data set:
```js
{"_id": {"time": ISODate("20210101T17:00:00"), "symbol" : "BTC-USD"}, "price": 35050}
{"_id": {"time": ISODate("20210101T17:05:00"), "symbol" : "BTC-USD"}, "price": 35150}
{"_id": {"time": ISODate("20210101T17:10:00"), "symbol" : "BTC-USD"}, "price": 35280}
{"_id": {"time": ISODate("20210101T17:15:00"), "symbol" : "BTC-USD"}, "price": 34910}
```
Once we calculate the Gain and Loss, we will have the following data:
```js
{"_id": {"time": ISODate("20210101T17:00:00"), "symbol" : "BTC-USD"}, "price": 35050, "previousPrice": null, "gain":0, "loss":0}
{"_id": {"time": ISODate("20210101T17:05:00"), "symbol" : "BTC-USD"}, "price": 35150, "previousPrice": 35050, "gain":100, "loss":0}
{"_id": {"time": ISODate("20210101T17:10:00"), "symbol" : "BTC-USD"}, "price": 35280, "previousPrice": 35150, "gain":130, "loss":0}
{"_id": {"time": ISODate("20210101T17:15:00"), "symbol" : "BTC-USD"}, "price": 34910, "previousPrice": 35280, "gain":0, "loss":370}
```
But in the MongoDB Aggregation Pipeline, how can we refer to the previous document from the current document? How can we derive the new field (`$previousPrice`) from the previous document in the sorted window?
MongoDB 5.0 introduced the `$shift` operator that includes data from another document in the same partition at the given location, e.g., you can refer to the document that is three documents before the current document or two documents after the current document in the sorted window.
We set our window with partitioning and introduce new field as previousPrice:
```js
{
$setWindowFields: {
partitionBy: "$_id.symbol",
sortBy: { "_id.time": 1 },
output: {
previousPrice: { $shift: { by: -1, output: "$price" } },
},
},
}
```
`$shift` takes two parameters:
- `by` specifies the location of the document which we’ll include. Since we want to include the previous document, then we set it to `-1`. If we wanted to include one next document, then we would set it to `1`.
- `output` specifies the field of the document that we want to include in the current document.
After we set the `$previousPrice` information for the current document, then we need to subtract the previous value from current value. We will have another derived field “`diff`” that represents the difference value between current value and previous value:
```js
{
$addFields: {
diff: {
$subtract: "$price", { $ifNull: ["$previousPrice", "$price"] }],
},
},
}
```
We’ve set the `diff` value and now we will set two more fields, `gain` and `loss,` to use in the further stages. We just apply the gain/loss logic here:
```js
{
$addFields: {
gain: { $cond: { if: { $gte: ["$diff", 0] }, then: "$diff", else: 0 } },
loss: {
$cond: { if: { $lte: ["$diff", 0] }, then: { $abs: "$diff" }, else: 0 },
},
},
}
```
After we have enriched the symbol data with gain and loss information for every document, then we can apply further partitioning to get the moving average of gain and loss fields by considering the previous 14 data points:
```js
{
$setWindowFields: {
partitionBy: "$_id.symbol",
sortBy: { "_id.time": 1 },
output: {
avgGain: {
$avg: "$gain",
window: { documents: [-14, 0] },
},
avgLoss: {
$avg: "$loss",
window: { documents: [-14, 0] },
},
documentNumber: { $documentNumber: {} },
},
},
}
```
Here we also used another newly introduced operator, `$documentNumber`. While we do calculations over the window, we give a sequential number to each document, because we will filter out the documents that have a document number less than or equal to 14. (RSI is calculated only after at least 14 data points have arrived.) We will do the filtering out in the later stages. Here, we only set the number of the document.
After we calculate the average gain and average loss for every symbol, then we will find the relative strength value. That is calculated by dividing average gain value by average loss value. Since we apply the divide operation, then we need to anticipate the “divide by 0” problem as well:
```js
{
$addFields: {
relativeStrength: {
$cond: {
if: {
$gt: "$avgLoss", 0],
},
then: {
$divide: ["$avgGain", "$avgLoss"],
},
else: "$avgGain",
},
},
},
}
```
Relative strength value has been calculated and now it’s time to smooth the Relative Strength value to normalize the data between 0 and 100:
```js
{
$addFields: {
rsi: {
$cond: {
if: { $gt: ["$documentNumber", 14] },
then: {
$subtract: [
100,
{ $divide: [100, { $add: [1, "$relativeStrength"] }] },
],
},
else: null,
},
},
},
}
```
We basically set `null` to the first 14 documents. And for the others, RSI value has been set.
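If you would rather drop those first 14 warm-up documents entirely instead of returning `null` RSI values, you could append one more stage at the end. This stage is not part of the complete pipeline shown below; it is just an optional variation:
```js
// Optional extra stage: keep only documents where an RSI value could be calculated.
{
  $match: {
    rsi: { $ne: null }
  }
}
```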
Below, you can see a one-minute interval candlestick chart and RSI chart. After 14 data points, RSI starts to be calculated. For every interval, we calculated the RSI through aggregation queries by processing the previous data of that symbol:
*(Figure: one-minute candlestick chart with the corresponding RSI chart below it.)*
This is the complete aggregation pipeline:
```js
db.ticker.aggregate([
{
$match: {
symbol: "BTC-USD",
},
},
{
$group: {
_id: {
symbol: "$symbol",
time: {
$dateTrunc: {
date: "$time",
unit: "minute",
binSize: 5,
},
},
},
high: { $max: "$price" },
low: { $min: "$price" },
open: { $first: "$price" },
close: { $last: "$price" },
},
},
{
$sort: {
"_id.time": 1,
},
},
{
$project: {
_id: 1,
price: "$close",
},
},
{
$setWindowFields: {
partitionBy: "$_id.symbol",
sortBy: { "_id.time": 1 },
output: {
previousPrice: { $shift: { by: -1, output: "$price" } },
},
},
},
{
$addFields: {
diff: {
$subtract: ["$price", { $ifNull: ["$previousPrice", "$price"] }],
},
},
},
{
$addFields: {
gain: { $cond: { if: { $gte: ["$diff", 0] }, then: "$diff", else: 0 } },
loss: {
$cond: { if: { $lte: ["$diff", 0] }, then: { $abs: "$diff" }, else: 0 },
},
},
},
{
$setWindowFields: {
partitionBy: "$_id.symbol",
sortBy: { "_id.time": 1 },
output: {
avgGain: {
$avg: "$gain",
window: { documents: [-14, 0] },
},
avgLoss: {
$avg: "$loss",
window: { documents: [-14, 0] },
},
documentNumber: { $documentNumber: {} },
},
},
},
{
$addFields: {
relativeStrength: {
$cond: {
if: {
$gt: ["$avgLoss", 0],
},
then: {
$divide: ["$avgGain", "$avgLoss"],
},
else: "$avgGain",
},
},
},
},
{
$addFields: {
rsi: {
$cond: {
if: { $gt: ["$documentNumber", 14] },
then: {
$subtract: [
100,
{ $divide: [100, { $add: [1, "$relativeStrength"] }] },
],
},
else: null,
},
},
},
},
]);
```
## Conclusion
MongoDB Aggregation Framework provides a great toolset to transform any shape of data into a desired format. As you see in the examples, we use a wide variety of aggregation pipeline stages and operators. As we discussed in the previous blog posts, time-series collections and window functions are great tools to process time-based data over a window.
In this post, we've looked at the `$shift` and `$documentNumber` operators that were introduced with MongoDB 5.0. The `$shift` operator includes another document in the same window into the current document, so positional data can be processed together with the current data. In an RSI technical indicator calculation, it is commonly used to compare the current data point with previous data points, for example, to compute the price difference between the current and previous data points, and `$shift` makes it easier to refer to positional documents in a window.
Another newly introduced operator is `$documentNumber`. `$documentNumber` gives a sequential number for the sorted documents to be processed later in subsequent aggregation stages. In an RSI calculation, we need to skip calculating RSI value for the first 14 periods of data and $documentNumber helps us to identify and filter out these documents at later stages in the aggregation pipeline. | md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "Time series collections part 3: calculating MACD & RSI values",
"contentType": "Article"
} | Currency Analysis with Time Series Collections #3 — MACD and RSI Calculation | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-kotlin-0-6-0 | created | # Realm Kotlin 0.6.0.
We just released v0.6.0 of Realm Kotlin. It contains support for Kotlin/JVM, indexed fields as well as a number of bug fixes.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
Kotlin/JVM support
==================
The new Realm Kotlin SDK was designed from its inception to support Multiplatform. So far, we’ve been focusing on KMM targets i.e Android and iOS but there was a push from the community to add JVM support, this is now possible using 0.6.0 by enabling the following DSL into your project:
```
kotlin {
    jvm()
    // other targets …
}
```
Now your app can target:
Android, iOS, macOS and JVM (Linux _since Centos 7_, macOS _x86\_64_ and Windows _8.1 64_).
What to build with Kotlin/JVM?
==============================
* You can build desktop applications using Compose Desktop (see examples: MultiplatformDemo and FantasyPremierLeague).
* You can build a classic Java console application (see JVMConsole).
* You can run your Android tests on JVM (note there’s a current issue on IntelliJ where the execution of Android tests from the common source-set is not possible, see/upvote :) https://youtrack.jetbrains.com/issue/KTIJ-15152, alternatively you can still run them as a Gradle task).
Where is it installed?
======================
The native library dependency is extracted from the cinterop-jar and installed into a default location on your machine:
* _Linux_:
```
$HOME/.cache/io.realm.kotlin/
```
* _macOS:_
```
$HOME/Library/Caches/io.realm.kotlin/
```
* _Windows:_
```
%localappdata%\io-realm-kotlin\
```
Support Indexed fields
======================
To index a field, use the _@Index_ annotation. Like primary keys, this makes writes slightly slower, but makes reads faster. It’s best to only add indexes when you’re optimizing the read performance for specific situations.
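As a quick sketch of what this looks like in a model class (the class and field names here are made up, and the exact annotation packages may differ slightly between SDK versions):
```
import io.realm.RealmObject
import io.realm.annotations.Index
import io.realm.annotations.PrimaryKey

class Task : RealmObject {
    @PrimaryKey
    var id: Long = 0

    // Indexed: faster queries filtering on "owner", slightly slower writes.
    @Index
    var owner: String = ""

    var name: String = ""
}
```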
Abstracted public API into interfaces
=====================================
If you tried out the previous version, you will notice that we did an internal refactoring of the project in order to make public APIs consumable via interfaces instead of classes (ex: Realm and RealmConfiguration), this should increase decoupling and make mocking and testability easier for developers.
🎉 Thanks for reading. Now go forth and build amazing apps with Realm! As always, we’re around on GitHub, Twitter and #realm channel on the official Kotlin Slack.
See the full changelog for all the details. | md | {
"tags": [
"Realm",
"Kotlin"
],
"pageDescription": "We just released v0.6.0 of Realm Kotlin. It contains support for Kotlin/JVM, indexed fields as well as a number of bug fixes.",
"contentType": "News & Announcements"
} | Realm Kotlin 0.6.0. | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-sanic | created | # Getting Started with MongoDB and Sanic
Sanic is a Python 3.6+ async web server and web framework that's written to go fast. The project's goal is to provide a simple way to get up and running a highly performant HTTP server that is easy to build, to expand, and ultimately to scale.
Unfortunately, because of its name and dubious choices in ASCII art, Sanic wasn't seen by some as a serious framework, but it has matured. It is worth considering if you need a fast, async, Python framework.
In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Sanic projects.
## Prerequisites
- Python 3.9.0
- A MongoDB Atlas cluster. Follow the "Get Started with Atlas" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.
## Running the Example
To begin, you should clone the example code from GitHub.
``` shell
git clone [email protected]:mongodb-developer/mongodb-with-sanic.git
```
You will need to install a few dependencies: Sanic, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.
``` shell
cd mongodb-with-sanic
pip install -r requirements.txt
```
It may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.
Once you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.
``` shell
export MONGODB_URL="mongodb+srv://<username>:<password>@<cluster-url>/<database>?retryWrites=true&w=majority"
```
Remember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.
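For example, with direnv you could keep something like this in an `.envrc` file at the project root (the file name is direnv's convention; run `direnv allow` once to approve it):
``` shell
# .envrc: loaded automatically by direnv whenever you cd into the project
export MONGODB_URL="mongodb+srv://<username>:<password>@<cluster-url>/<database>?retryWrites=true&w=majority"
```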
The final step is to start your Sanic server.
``` shell
python app.py
```
Once the application has started, you can view it in your browser at http://localhost:8000. There won't be much to see at the moment as you do not have any data! We'll look at each of the end-points a little later in the tutorial, but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.
``` shell
curl -X "POST" "http://localhost:8000/" \
-H 'Accept: application/json' \
-H 'Content-Type: application/json; charset=utf-8' \
-d '{
"name": "Jane Doe",
"email": "[email protected]",
"gpa": "3.9"
}'
```
Try creating a few students via these `POST` requests, and then refresh your browser.
## Creating the Application
All the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.
### Setting Up Our App and MongoDB Connection
We're going to use the sanic-motor package to wrap our motor client for ease of use. So, we need to provide a couple of settings when creating our Sanic app.
``` python
app = Sanic(__name__)
settings = dict(
MOTOR_URI=os.environ["MONGODB_URL"],
LOGO=None,
)
app.config.update(settings)
BaseModel.init_app(app)
class Student(BaseModel):
__coll__ = "students"
```
Sanic-motor's models are unlikely to be very similar to any other database models you have used before. They do not describe the schema, for example. Instead, we only specify the collection name.
### Application Routes
Our application has five routes:
- POST / - creates a new student.
- GET / - view a list of all students.
- GET /{id} - view a single student.
- PUT /{id} - update a student.
- DELETE /{id} - delete a student.
#### Create Student Route
``` python
@app.route("/", methods=["POST"])
async def create_student(request):
student = request.json
student["_id"] = str(ObjectId())
new_student = await Student.insert_one(student)
created_student = await Student.find_one(
{"_id": new_student.inserted_id}, as_raw=True
)
return json_response(created_student)
```
Note how I am converting the `ObjectId` to a string before assigning it as the `_id`. MongoDB stores data as BSON. However, we are encoding and decoding our data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`. JSON does not. Because of this, for simplicity, we convert ObjectIds to strings before storing them.
The `create_student` route receives the new student data as a JSON string in a `POST` request. Sanic will automatically convert this JSON string back into a Python dictionary which we can then pass to the sanic-motor wrapper.
The `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and return it in the `json_response`.
sanic-motor returns the relevant model objects from any `find` method, including `find_one`. To override this behaviour, we specify `as_raw=True`.
##### Read Routes
The application has two read routes: one for viewing all students, and the other for viewing an individual student.
``` python
@app.route("/", methods="GET"])
async def list_students(request):
students = await Student.find(as_raw=True)
return json_response(students.objects)
```
In our example code, we are not placing any limits on the number of students returned. In a real application, you should use sanic-motor's `page` and `per_page` arguments to paginate the number of students returned.
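A paginated variation could look something like the sketch below. The `/paged` route name and the default values are made up for illustration; only the `page` and `per_page` argument names come from sanic-motor:
``` python
@app.route("/paged", methods=["GET"])
async def list_students_paged(request):
    # Read pagination parameters from the query string, with illustrative defaults.
    page = int(request.args.get("page", 1))
    per_page = int(request.args.get("per_page", 20))
    students = await Student.find(page=page, per_page=per_page, as_raw=True)
    return json_response(students.objects)
```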
``` python
@app.route("/", methods=["GET"])
async def show_student(request, id):
if (student := await Student.find_one({"_id": id}, as_raw=True)) is not None:
return json_response(student)
raise NotFound(f"Student {id} not found")
```
The student detail route has a path parameter of `id`, which Sanic passes as an argument to the `show_student` function. We use the `id` to attempt to find the corresponding student in the database. The conditional in this section is using an assignment expression, a recent addition to Python (introduced in version 3.8) and often referred to by the incredibly cute sobriquet "walrus operator."
If a document with the specified `id` does not exist, we raise a `NotFound` exception which will respond to the request with a `404` response.
##### Update Route
``` python
@app.route("/", methods="PUT"])
async def update_student(request, id):
student = request.json
update_result = await Student.update_one({"_id": id}, {"$set": student})
if update_result.modified_count == 1:
if (
updated_student := await Student.find_one({"_id": id}, as_raw=True)
) is not None:
return json_response(updated_student)
if (
existing_student := await Student.find_one({"_id": id}, as_raw=True)
) is not None:
return json_response(existing_student)
raise NotFound(f"Student {id} not found")
```
The `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the `id` of the document to update as well as the new data in the JSON body.
We attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.
If the `modified_count` is not equal to one, we still check to see if there is a document matching the `id`. A `modified_count` of zero could mean that there is no document with that `id`. It could also mean that the document does exist but it did not require updating as the current values are the same as those supplied in the `PUT` request.
It is only after that final `find` fails that we raise a `404` Not Found exception.
##### Delete Route
``` python
@app.route("/", methods=["DELETE"])
async def delete_student(request, id):
delete_result = await Student.delete_one({"_id": id})
if delete_result.deleted_count == 1:
return json_response({}, status=204)
raise NotFound(f"Student {id} not found")
```
Our final route is `delete_student`. Again, because this is acting upon a single document, we have to supply an `id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or "No Content." In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified id, then instead, we return a `404`.
## Wrapping Up
I hope you have found this introduction to Sanic with MongoDB useful. If you would like to find out more about Sanic, please see their documentation. Unfortunately, documentation for sanic-motor is entirely lacking at this time. But, it is a relatively thin wrapper around the MongoDB Motor driver—which is well documented—so do not let that discourage you.
To see how you can integrate MongoDB with other async frameworks, check out some of the other Python posts on the MongoDB developer portal.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Getting started with MongoDB and Sanic",
"contentType": "Quickstart"
} | Getting Started with MongoDB and Sanic | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/connect-atlas-cloud-kubernetes-peering | created | # Securely Connect MongoDB to Cloud-Offered Kubernetes Clusters
## Introduction
Containerized applications are becoming an industry standard for virtualization. When we talk about managing those containers, Kubernetes will probably be brought up extremely quickly.
Kubernetes is a known open-source system for automating the deployment, scaling, and management of containerized applications. Nowadays, all of the major cloud providers (AWS, Google Cloud, and Azure) have a managed Kubernetes offering to easily allow organizations to get started and scale their Kubernetes environments.
Not surprisingly, MongoDB Atlas also runs on all of those offerings to give your modern containerized applications the best database offering. However, ease of development might yield in missing some critical aspects, such as security and connectivity control to our cloud services.
In this article, I will guide you on how to properly secure your Kubernetes cloud services when connecting to MongoDB Atlas using the recommended and robust solutions we have.
## Prerequisites
You will need to have a cloud provider account and the ability to deploy one of the Kubernetes offerings:
* Amazon EKS
* Google Cloud GKE
* Azure AKS
And of course, you'll need a MongoDB Atlas project where you are a project owner.
> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post. Please note that for this tutorial you are required to have a M10+ cluster.
## Step 1: Set Up Networks
Atlas connections, by default, use credentials and end-to-end encryption to secure the connection. However, building a trusted network is a must for closing the security cycle between your application and the database.
No matter which cloud you decide to build your Kubernetes cluster in, the basic foundation of securing that deployment is creating its own network. You can look into the following guides to create your own network and gather the main information (names, IDs, and the subnet Classless Inter-Domain Routing, or CIDR).
##### Private Network Creation
| AWS | GCP | Azure |
| --- | --- | ----- |
| Create an AWS VPC | Create a GCP VPC | Create a VNET |
## Step 2: Create Network Peerings
Now, we'll configure connectivity of the virtual network that the Atlas region resides in to the virtual network we've created in Step 1. This connectivity is required to make sure the communication between networks is possible. We'll configure Atlas to allow connections from the virtual network from Step 1.
This process is called setting a Network Peering Connection. It's significant as it allows internal communication between networks of two different accounts (the Atlas cloud account and your cloud account).
The network peerings are established under our Projects > Network Access > Peering > "ADD PEERING CONNECTION." For more information, please read our documentation.
However, I will highlight the main points in each cloud for a successful peering setup:
##### Peering Setup Highlights
**AWS**
1. Allow outbound traffic to the Atlas CIDR on ports 27015-27017.
2. Obtain the VPC information (Account ID, VPC Name, VPC Region, VPC CIDR). Enable DNS and Hostname resolution on that VPC.
3. Using this information, initiate the VPC peering.
4. Approve the peering on the AWS side.
5. Add a peering route in the relevant subnet/s targeting the Atlas CIDR and add those subnets/security groups in the Atlas access list page.
**GCP**
1. Obtain the GCP VPC information (Project ID, VPC Name, VPC Region, and CIDR).
2. When you initiate a VPC peering on the Atlas side, it will generate the information you need to input on the GCP VPC network peering page (Atlas Project ID and Atlas VPC Name).
3. Submit the peering request approval on GCP and add the GCP CIDR in the Atlas access lists.
**Azure**
1. Obtain the following Azure details from your subscription (Subscription ID, Azure Active Directory Directory ID, VNET Resource Group Name, VNet Name, VNet Region).
2. Input the gathered information and get a list of commands to perform on the Azure console.
3. Open the Azure console and run the commands, which will create a custom role and permissions for peering.
4. Validate and initiate peering.
## Step 3: Deploy the Kubernetes Cluster in Our Networks
The Kubernetes clusters that we launch must be associated with the
peered network. I will highlight each cloud provider's specifics.
## AWS EKS
When we launch our EKS via the AWS console service, we need to configure
the peered VPC under the "Networking" tab.
Place the correct settings:
* VPC Name
* Relevant Subnets (Recommended to pick at least three availability
zones)
* Choose a security group with open 27015-27017 ports to the Atlas
CIDR.
* Optionally, you can add an IP range for your pods.
## GCP GKE
When we launch our GKE service, we need to configure the peered VPC under the "Networking" section.
Place the correct settings:
* VPC Name
* Subnet Name
* Optionally, you can add an IP range for your pod's internal network that cannot overlap with the peered CIDR.
## Azure AKS
When we launch our AKS service, we need to use the same resource group as the peered VNET and configure the peered VNET as the CNI network in the advanced networking tab.
Place the correct settings:
* Resource Group
* VNET Name under "Virtual Network"
* Cluster Subnet should be the peered subnet range.
* The other CIDR should be a non-overlapping CIDR from the peered network.
## Step 4: Deploy Containers and Test Connectivity
Once the cluster is up and running in your cloud provider, you can test the connectivity to our peered cluster.
First, we will need to get our connection string and method from the Atlas cluster UI. Please note that GCP and Azure have private connection strings for peering, and those must be used for peered networks.
Now, let's test our connection from one of the Kubernetes pods:
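The exact command depends on your setup, but as a rough sketch (assuming the public `mongo` image, which bundles the shell, and a placeholder connection string; remember to use the private connection string for GCP and Azure peerings):
``` sh
# Spin up a throwaway pod and ping the Atlas cluster from inside the Kubernetes network.
kubectl run atlas-connectivity-test --rm -it --image=mongo:5.0 -- \
  mongosh "mongodb+srv://<username>:<password>@<your-cluster>.mongodb.net/test" \
  --eval "db.runCommand({ ping: 1 })"
```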
That's it. We are securely connected!
## Wrap-Up
Kubernetes-managed clusters offer a simple and modern way to deploy containerized applications to the vendor of your choice. It's great that we can easily secure their connections to work with the best cloud database offering there is, MongoDB Atlas, unlocking other possibilities such as building cross-platform applications with MongoDB Realm and Realm Sync or using MongoDB Data Lake and Atlas Search to build incredible applications.
> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas",
"Kubernetes",
"Google Cloud"
],
"pageDescription": "A high-level guide on how to securely connect MongoDB Atlas with the Kubernetes offerings from Amazon AWS, Google Cloud (GCP), and Microsoft Azure.",
"contentType": "Tutorial"
} | Securely Connect MongoDB to Cloud-Offered Kubernetes Clusters | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/introduction-realm-sync-android | created | # Introduction to Atlas Device Sync for Android
* * *
> Atlas App Services (Formerly MongoDB Realm)
>
> Atlas Device Sync (Formerly Realm Sync)
>
* * *
Welcome back! We really appreciate you coming back and showing your interest in Atlas App Services. This is a follow-up article to Introduction to Realm Java SDK for Android. If you haven't read that yet, we recommend you go through it first.
This is a beginner-level article, where we introduce you to Atlas Device Sync. As always, we demonstrate its usage by building an Android app using the MVVM architecture.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## Prerequisites
> You have created at least one app using Android Studio.
## What Are We Trying to Solve?
In the previous article, we learned that the Realm Java SDK is easy to use when working with a local database. But in the world of the internet we want to share our data, so how do we do that with Realm?
> **MongoDB Atlas Device Sync**
>
> Atlas Device Sync is the solution to our problem. It's one of the many features provided by MongoDB Atlas App Services. It synchronizes the data between client-side Realms and the server-side cloud, MongoDB Atlas, without worrying about conflict resolution and error handling.
The illustration below demonstrates how MongoDB Atlas Device Sync has simplified the complex architecture:
To demonstrate how to use Atlas Device Sync, we will extend our previous application, which tracks app views, to use Atlas Device Sync.
## Step 1: Get the Base Code
Clone the original repo and rename it "HelloDeviceSync."
## Step 2: Enable Atlas Device Sync
Update the `syncEnabled` state as shown below in the Gradle file (at the module level):
``` kotlin
android {
// few other things
realm {
syncEnabled = true
}
}
```
Also, add the `buildConfigField` to `buildTypes` in the same file:
``` kotlin
buildTypes {
debug {
buildConfigField "String", "RealmAppId", "\"App Key\""
}
release {
buildConfigField "String", "RealmAppId", "\"App Key\""
}
}
```
You can ignore the value of `App Key` for now, as it will be covered in a later step.
## Step 3: Set Up Your Free MongoDB Atlas Cloud Database
Once this is done, we have a cloud database where all our mobile app data can be saved, i.e., MongoDB Atlas. Now we are left with linking our cloud database (in Atlas) with the mobile app.
## Step 4: Create a App Services App
In layman's terms, App Services apps on MongoDB Atlas are just links between the data flowing between the mobile apps (Realm Java SDK) and Atlas.
## Step 5: Add the App Services App ID to the Android Project
Copy the App ID and use it to replace `App Key` in the `build.gradle` file, which we added in **Step 2**.
With this done, MongoDB Atlas and your Android App are connected.
## Step 6: Enable Atlas Device Sync and Authentication
MongoDB Atlas App Services is a very powerful tool and has a bunch of cool features from data security to its manipulation. This is more than sufficient for one application. Let's enable authentication and sync.
### But Why Authentication?
Device Sync is designed to make apps secure by default, by not allowing an unknown user to access data.
We don't have to force a user to sign up for them to become a known user. We can enable anonymous authentication, which is a win-win for everyone.
So let's enable both of them:
Let's quickly recap what we have done so far.
In the Android app:
- Added App Services App ID to the Gradle file.
- Enabled Atlas Device Sync.
In MongoDB Atlas:
- Set up account.
- Created a free cluster for MongoDB Atlas.
- Created a App Services app.
- Enabled anonymous authentication.
- Enabled sync.
Now, the final piece is to make the necessary modifications to our Android app.
## Step 7: Update the Android App Code
The only code change is to get an instance of the Realm mobile database from the App Services app instance.
1. Get a App Services app instance from which the Realm instance can be derived:
``` kotlin
val realmSync by lazy {
App(AppConfiguration.Builder(BuildConfig.RealmAppId).build())
}
```
2. Update the creation of the View Model:
``` kotlin
private val homeViewModel: HomeViewModel by navGraphViewModels(
R.id.mobile_navigation,
factoryProducer = {
object : ViewModelProvider.Factory {
@Suppress("UNCHECKED_CAST")
override fun <T : ViewModel> create(modelClass: Class<T>): T {
val realmApp = (requireActivity().application as HelloRealmSyncApp).realmSync
return HomeViewModel(realmApp) as T
}
}
})
```
3. Update the View Model constructor to accept the App Services app instance:
``` kotlin
class HomeViewModel(private val realmApp: App) : ViewModel() {
}
```
4. Update the `updateData` method in `HomeViewModel`:
``` kotlin
private fun updateData() {
_isLoading.postValue(true)
fun onUserSuccess(user: User) {
val config = SyncConfiguration.Builder(user, user.id).build()
Realm.getInstanceAsync(config, object : Realm.Callback() {
override fun onSuccess(realm: Realm) {
realm.executeTransactionAsync {
var visitInfo = it.where(VisitInfo::class.java).findFirst()
visitInfo = visitInfo?.updateCount() ?: VisitInfo().apply {
partition = user.id
visitCount++
}
_visitInfo.postValue(it.copyFromRealm(visitInfo))
it.copyToRealmOrUpdate(visitInfo)
_isLoading.postValue(false)
}
}
override fun onError(exception: Throwable) {
super.onError(exception)
//TODO: Implementation pending
_isLoading.postValue(false)
}
})
}
realmApp.loginAsync(Credentials.anonymous()) {
if (it.isSuccess) {
onUserSuccess(it.get())
} else {
_isLoading.postValue(false)
}
}
}
```
In the above snippet, we are doing two primary things:
1. Getting a user instance by signing in anonymously.
2. Getting a Realm instance using `SyncConfiguration.Builder`.
``` kotlin
SyncConfiguration.Builder(user, user.id).build()
```
Where `user.id` is the partition key we defined in our Atlas Device Sync configuration (Step 6). In simple terms, a partition key is an identifier that determines exactly which data each client receives. For more details, please refer to the article on Atlas Device Sync Partitioning Strategies.
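For context, here is a minimal sketch of what a `VisitInfo` Realm object could look like for this app. The field and method names are taken from the snippet above; the rest (primary key type, defaults) is an assumption rather than the app's actual model:
``` kotlin
import io.realm.RealmObject
import io.realm.annotations.PrimaryKey
import java.util.UUID

open class VisitInfo : RealmObject() {
    @PrimaryKey
    var _id: String = UUID.randomUUID().toString()

    // Partition key: set to the user id so Device Sync routes this object to the right Realm.
    var partition: String = ""

    var visitCount: Int = 0

    fun updateCount(): VisitInfo {
        visitCount++
        return this
    }
}
```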
## Step 8: View Your Results in MongoDB Atlas
Thank you for reading. You can find the complete working code in our GitHub repo.
> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Realm",
"Kotlin",
"Android"
],
"pageDescription": "Learn how to use Atlas Device Sync with Android.",
"contentType": "News & Announcements"
} | Introduction to Atlas Device Sync for Android | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-network-compression | created | # MongoDB Network Compression: A Win-Win
An under-advertised feature of MongoDB is its ability to compress data between the client and the server. The CRM company Close has a really nice article on how compression reduced their network traffic from about 140 Mbps to 65 Mbps. As Close notes, with cloud data transfer costs ranging from $0.01 per GB and up, you can get a nice little savings with a simple configuration change.
MongoDB supports the following compressors:
* snappy
* zlib (Available starting in MongoDB 3.6)
* zstd (Available starting in MongoDB 4.2)
Enabling compression from the client simply involves installing the desired compression library and then passing the compressor as an argument when you connect to MongoDB. For example:
```PYTHON
client = MongoClient('mongodb://localhost', compressors='zstd')
```
This article provides two tuneable Python scripts, read-from-mongo.py and write-to-mongo.py, that you can use to see the impact of network compression yourself.
## Setup
### Client Configuration
Edit params.py and at a minimum, set your connection string. Other tunables include the amount of bytes to read and insert (default 10 MB) and the batch size to read (100 records) and insert (1 MB):
``` PYTHON
# Read to Mongo
target_read_database = 'sample_airbnb'
target_read_collection = 'listingsAndReviews'
megabytes_to_read = 10
batch_size = 100 # Batch size in records (for reads)
# Write to Mongo
drop_collection = True # Drop collection on run
target_write_database = 'test'
target_write_collection = 'network-compression-test'
megabytes_to_insert = 10
batch_size_mb = 1 # Batch size of bulk insert in megabytes
```
### Compression Library
The snappy compression in Python requires the `python-snappy` package.
```pip3 install python-snappy```
The zstd compression requires the zstandard package
```pip3 install zstandard```
The zlib compression is native to Python.
### Sample Data
My read-from-mongo.py script uses the Sample AirBnB Listings Dataset but ANY dataset will suffice for this test.
The write-to-mongo.py script generates sample data using the Python package
Faker.
```pip3 install faker ```
## Execution
### Read from Mongo
The cloud providers notably charge for data egress, so anything that reduces network traffic out is a win.
Let's first run the script without network compression (the default):
```ZSH
✗ python3 read-from-mongo.py
MongoDB Network Compression Test
Network Compression: Off
Now: 2021-11-03 12:24:00.904843
Collection to read from: sample_airbnb.listingsAndReviews
Bytes to read: 10 MB
Bulk read size: 100 records
1 megabytes read at 307.7 kilobytes/second
2 megabytes read at 317.6 kilobytes/second
3 megabytes read at 323.5 kilobytes/second
4 megabytes read at 318.0 kilobytes/second
5 megabytes read at 327.1 kilobytes/second
6 megabytes read at 325.3 kilobytes/second
7 megabytes read at 326.0 kilobytes/second
8 megabytes read at 324.0 kilobytes/second
9 megabytes read at 322.7 kilobytes/second
10 megabytes read at 321.0 kilobytes/second
8600 records read in 31 seconds (276.0 records/second)
MongoDB Server Reported Megabytes Out: 188.278 MB
```
_You've obviously noticed the reported Megabytes out (188 MB) are more than 18 times our test size of 10 MBs. There are several reasons for this, including other workloads running on the server, data replication to secondary nodes, and the TCP packet being larger than just the data. Focus on the delta between the different test runs._
The script accepts an optional compression argument, that must be either `snappy`, `zlib` or `zstd`. Let's run the test again using `snappy`, which is known to be fast, while sacrificing some compression:
```ZSH
✗ python3 read-from-mongo.py -c "snappy"
MongoDB Network Compression Test
Network Compression: snappy
Now: 2021-11-03 12:24:41.602969
Collection to read from: sample_airbnb.listingsAndReviews
Bytes to read: 10 MB
Bulk read size: 100 records
1 megabytes read at 500.8 kilobytes/second
2 megabytes read at 493.8 kilobytes/second
3 megabytes read at 486.7 kilobytes/second
4 megabytes read at 480.7 kilobytes/second
5 megabytes read at 480.1 kilobytes/second
6 megabytes read at 477.6 kilobytes/second
7 megabytes read at 488.4 kilobytes/second
8 megabytes read at 482.3 kilobytes/second
9 megabytes read at 482.4 kilobytes/second
10 megabytes read at 477.6 kilobytes/second
8600 records read in 21 seconds (410.7 records/second)
MongoDB Server Reported Megabytes Out: 126.55 MB
```
With `snappy` compression, our reported bytes out were about `62 MBs` fewer. That's a `33%` savings. But wait, the `10 MBs` of data was read in `10` fewer seconds. That's also a `33%` performance boost!
Let's try this again using `zlib`, which can achieve better compression, but at the expense of performance.
_zlib compression supports an optional compression level. For this test I've set it to `9` (max compression)._
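_For reference, the compression level can be passed to the client like this (this assumes PyMongo's `zlibCompressionLevel` option, which accepts -1 through 9):_
```PYTHON
from pymongo import MongoClient

# zlibCompressionLevel: -1 uses the default level, 9 is maximum compression.
client = MongoClient('mongodb://localhost',
                     compressors='zlib',
                     zlibCompressionLevel=9)
```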
```ZSH
✗ python3 read-from-mongo.py -c "zlib"
MongoDB Network Compression Test
Network Compression: zlib
Now: 2021-11-03 12:25:07.493369
Collection to read from: sample_airbnb.listingsAndReviews
Bytes to read: 10 MB
Bulk read size: 100 records
1 megabytes read at 362.0 kilobytes/second
2 megabytes read at 373.4 kilobytes/second
3 megabytes read at 394.8 kilobytes/second
4 megabytes read at 393.3 kilobytes/second
5 megabytes read at 398.1 kilobytes/second
6 megabytes read at 397.4 kilobytes/second
7 megabytes read at 402.9 kilobytes/second
8 megabytes read at 397.7 kilobytes/second
9 megabytes read at 402.7 kilobytes/second
10 megabytes read at 401.6 kilobytes/second
8600 records read in 25 seconds (345.4 records/second)
MongoDB Server Reported Megabytes Out: 67.705 MB
```
With `zlib` compression configured at its maximum compression level, we were able to achieve a `64%` reduction in network egress, although it took 4 seconds longer. However, that's still a `19%` performance improvement over using no compression at all.
Let's run a final test using `zstd`, which is advertised to bring together the speed of `snappy` with the compression efficiency of `zlib`:
```ZSH
✗ python3 read-from-mongo.py -c "zstd"
MongoDB Network Compression Test
Network Compression: zstd
Now: 2021-11-03 12:25:40.075553
Collection to read from: sample_airbnb.listingsAndReviews
Bytes to read: 10 MB
Bulk read size: 100 records
1 megabytes read at 886.1 kilobytes/second
2 megabytes read at 798.1 kilobytes/second
3 megabytes read at 772.2 kilobytes/second
4 megabytes read at 735.7 kilobytes/second
5 megabytes read at 734.4 kilobytes/second
6 megabytes read at 714.8 kilobytes/second
7 megabytes read at 709.4 kilobytes/second
8 megabytes read at 698.5 kilobytes/second
9 megabytes read at 701.9 kilobytes/second
10 megabytes read at 693.9 kilobytes/second
8600 records read in 14 seconds (596.6 records/second)
MongoDB Server Reported Megabytes Out: 61.254 MB
```
And sure enough, `zstd` lives up to its reputation, achieving a `68%` improvement in compression along with a `55%` improvement in performance!
### Write to Mongo
The cloud providers often don't charge us for data ingress. However, given the substantial performance improvements with read workloads, what can be expected from write workloads?
The write-to-mongo.py script writes a randomly generated document to the database and collection configured in params.py, the default being `test.network_compression_test`.
As before, let's run the test without compression:
```ZSH
python3 write-to-mongo.py
MongoDB Network Compression Test
Network Compression: Off
Now: 2021-11-03 12:47:03.658036
Bytes to insert: 10 MB
Bulk insert batch size: 1 MB
1 megabytes inserted at 614.3 kilobytes/second
2 megabytes inserted at 639.3 kilobytes/second
3 megabytes inserted at 652.0 kilobytes/second
4 megabytes inserted at 631.0 kilobytes/second
5 megabytes inserted at 640.4 kilobytes/second
6 megabytes inserted at 645.3 kilobytes/second
7 megabytes inserted at 649.9 kilobytes/second
8 megabytes inserted at 652.7 kilobytes/second
9 megabytes inserted at 654.9 kilobytes/second
10 megabytes inserted at 657.2 kilobytes/second
27778 records inserted in 15.0 seconds
MongoDB Server Reported Megabytes In: 21.647 MB
```
So it took `15` seconds to write `27,778` records. Let's run the same test with `zstd` compression:
```ZSH
✗ python3 write-to-mongo.py -c 'zstd'
MongoDB Network Compression Test
Network Compression: zstd
Now: 2021-11-03 12:48:16.485174
Bytes to insert: 10 MB
Bulk insert batch size: 1 MB
1 megabytes inserted at 599.4 kilobytes/second
2 megabytes inserted at 645.4 kilobytes/second
3 megabytes inserted at 645.8 kilobytes/second
4 megabytes inserted at 660.1 kilobytes/second
5 megabytes inserted at 669.5 kilobytes/second
6 megabytes inserted at 665.3 kilobytes/second
7 megabytes inserted at 671.0 kilobytes/second
8 megabytes inserted at 675.2 kilobytes/second
9 megabytes inserted at 675.8 kilobytes/second
10 megabytes inserted at 676.7 kilobytes/second
27778 records inserted in 15.0 seconds
MongoDB Server Reported Megabytes In: 8.179 MB
```
Our reported megabytes in are reduced by `62%`. However, our write performance remained identical. Personally, I think most of this is due to the time it takes the Faker library to generate the sample data. But having gained compression without a performance impact, it is still a win.
## Measurement
There are a couple of options for measuring network traffic. This script uses the db.serverStatus() `physicalBytesOut` and `physicalBytesIn` counters, reporting on the delta between the readings at the start and end of the test run. As mentioned previously, our measurements are skewed by other network traffic occurring on the server, but my tests have shown a consistent improvement when run. Visually, my results appear as follows:
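For reference, a rough sketch of reading those counters with PyMongo looks like this (the field names come from the `network` section of `serverStatus` on recent MongoDB versions):
```PYTHON
from pymongo import MongoClient

client = MongoClient('mongodb://localhost')
status = client.admin.command('serverStatus')

# Cumulative physical (on-the-wire, post-compression) bytes since the server started.
bytes_in = status['network']['physicalBytesIn']
bytes_out = status['network']['physicalBytesOut']
print(f'physicalBytesIn: {bytes_in}, physicalBytesOut: {bytes_out}')
```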
Another option would be using a network analysis tool like Wireshark. But that's beyond the scope of this article for now.
Bottom line, compression reduces network traffic by more than 60%, which is in line with the improvement seen by Close. More importantly, compression also had a dramatic improvement on read performance. That's a Win-Win.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "An under advertised feature of MongoDB is its ability to compress data between the client and the server. This blog will show you exactly how to enable network compression along with a script you can run to see concrete results. Not only will you save some $, but your performance will also likely improve - a true win-win.\n",
"contentType": "Tutorial"
} | MongoDB Network Compression: A Win-Win | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-swiftui-maps-location | created | # Using Maps and Location Data in Your SwiftUI (+Realm) App
## Introduction
Embedding Apple Maps and location functionality in SwiftUI apps used to be a bit of a pain. It required writing your own SwiftUI wrapper around UIKit code—see these examples from the O-FISH app:
* Location helper
* Map views
If you only need to support iOS14 and later, then you can **forget most of that messy code 😊**. If you need to support iOS13—sorry, you need to go the O-FISH route!
iOS 14 introduced the Map SwiftUI view (part of MapKit), allowing you to embed maps directly into your SwiftUI apps without messy wrapper code.
This article shows you how to embed Apple Maps into your app views using MapKit's Map view. We'll then look at how you can fetch the user's current location—with their permission, of course!
Finally, we'll see how to store the location data in Realm in a format that Atlas Device Sync can sync to MongoDB Atlas. Once in Atlas, you can add a geospatial index and use MongoDB Charts to plot the data on a map—we'll look at that too.
Most of the code snippets have been extracted from the RChat app. That app is a good place to see maps and location data in action. Building a Mobile Chat App Using Realm – The New and Easier Way is a good place to learn more about the RChat app—including how to enable MongoDB Atlas Device Sync.
## Prerequisites
* Realm-Cocoa 10.8.0+ (may work with some 10.7.X versions)
* iOS 14.5+ (MapKit was introduced in iOS 14.0 and so most features should work with earlier iOS 14.X versions)
* Xcode 12+
## How to Add an Apple Map to Your SwiftUI App
To begin, let's create a simple view that displays a map, the coordinates of the center of that map, and the zoom level.
With MapKit and SwiftUI, this only takes a few lines of code:
``` swift
import MapKit
import SwiftUI
struct MyMapView: View {
@State private var region: MKCoordinateRegion = MKCoordinateRegion(
center: CLLocationCoordinate2D(latitude: MapDefaults.latitude, longitude: MapDefaults.longitude),
span: MKCoordinateSpan(latitudeDelta: MapDefaults.zoom, longitudeDelta: MapDefaults.zoom))
private enum MapDefaults {
static let latitude = 45.872
static let longitude = -1.248
static let zoom = 0.5
}
var body: some View {
VStack {
Text("lat: \(region.center.latitude), long: \(region.center.longitude). Zoom: \(region.span.latitudeDelta)")
.font(.caption)
.padding()
Map(coordinateRegion: $region,
interactionModes: .all,
showsUserLocation: true)
}
}
}
```
Note that `showsUserLocation` won't work unless the user has already given the app permission to use their location—we'll get to that.
`region` is initialized to a starting location, but it's updated by the `Map` view as the user scrolls and zooms in and out.
### Adding Bells and Whistles to Your Maps (Pins at Least)
Pins can be added to a map in the form of "annotations." Let's start with a single pin:
Annotations are provided as an array of structs where each instance must contain the coordinates of the pin. The struct must also conform to the Identifiable protocol:
``` swift
struct MyAnnotationItem: Identifiable {
var coordinate: CLLocationCoordinate2D
let id = UUID()
}
```
We can now create an array of `MyAnnotationItem` structs:
``` swift
let annotationItems = [
MyAnnotationItem(coordinate: CLLocationCoordinate2D(
latitude: MapDefaults.latitude,
longitude: MapDefaults.longitude))]
```
We then pass `annotationItems` to the `MapView` and indicate that we want a `MapMarker` at the contained coordinates:
``` swift
Map(coordinateRegion: $region,
interactionModes: .all,
showsUserLocation: true,
annotationItems: annotationItems) { item in
MapMarker(coordinate: item.coordinate)
}
```
That gives us the result we wanted.
What if we want multiple pins? Not a problem. Just add more `MyAnnotationItem` instances to the array.
All of the pins will be the same default color. But, what if we want different colored pins? It's simple to extend our code to produce an embedded Apple Map showing red, yellow, and blue pins at different locations.
Firstly, we need to extend `MyAnnotationItem` to include an optional `color` and a `tint` that returns `color` if it's been defined and "red" if not:
``` swift
struct MyAnnotationItem: Identifiable {
var coordinate: CLLocationCoordinate2D
var color: Color?
var tint: Color { color ?? .red }
let id = UUID()
}
```
In our sample data, we can now choose to provide a color for each annotation:
``` swift
let annotationItems = [
MyAnnotationItem(
coordinate: CLLocationCoordinate2D(
latitude: MapDefaults.latitude,
longitude: MapDefaults.longitude)),
MyAnnotationItem(
coordinate: CLLocationCoordinate2D(
latitude: 45.8827419,
longitude: -1.1932383),
color: .yellow),
MyAnnotationItem(
coordinate: CLLocationCoordinate2D(
latitude: 45.915737,
longitude: -1.3300991),
color: .blue)
]
```
The `MapView` can then use the `tint`:
``` swift
Map(coordinateRegion: $region,
interactionModes: .all,
showsUserLocation: true,
annotationItems: annotationItems) { item in
MapMarker(
coordinate: item.coordinate,
tint: item.tint)
}
```
If you get bored of pins, you can use `MapAnnotation` to use any view you like for your annotations:
``` swift
Map(coordinateRegion: $region,
interactionModes: .all,
showsUserLocation: true,
annotationItems: annotationItems) { item in
MapAnnotation(coordinate: item.coordinate) {
Image(systemName: "gamecontroller.fill")
.foregroundColor(item.tint)
}
}
```
This is the result: an Apple Map showing red, yellow, and blue game controller icons at different locations.
You could also include the name of the system image to use with each annotation.
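For example, a hypothetical variant of `MyAnnotationItem` (not part of the original sample) could carry the SF Symbols name alongside the color:
``` swift
import MapKit
import SwiftUI

// Hypothetical variant: each annotation carries an SF Symbols name,
// falling back to a plain map pin when none is provided.
struct MyAnnotationItem: Identifiable {
    var coordinate: CLLocationCoordinate2D
    var color: Color?
    var systemImageName: String?
    var tint: Color { color ?? .red }
    var imageName: String { systemImageName ?? "mappin" }
    let id = UUID()
}

// The MapAnnotation closure would then use the per-item image:
// Image(systemName: item.imageName).foregroundColor(item.tint)
```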
This gist contains the final code for the view.
## Finding Your User's Location
### Asking for Permission
Apple is pretty vocal about respecting the privacy of their users, and so it shouldn't be a shock that your app will have to request permission before being able to access a user's location.
The first step is to add a key-value pair to your Xcode project to indicate that the app may request permission to access the user's location, and what text should be displayed in the alert. You can add the pair to the "Info.plist" file:
```
Privacy - Location When In Use Usage Description : We'll only use your location when you ask to include it in a message
```
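If you prefer editing the raw `Info.plist` source, that entry corresponds to the `NSLocationWhenInUseUsageDescription` key — shown here as a sketch:
```xml
<!-- Raw Info.plist equivalent of the setting above -->
<key>NSLocationWhenInUseUsageDescription</key>
<string>We'll only use your location when you ask to include it in a message</string>
```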
Once that setting has been added, the user should see an alert the first time that the app attempts to access their current location:
### Accessing Current Location
While MapKit has made maps simple and native in SwiftUI, the same can't be said for location data.
You need to create a SwiftUI wrapper for Apple's Core Location functionality. There's not a lot of value in explaining this boilerplate code—just copy this code from RChat's LocationHelper.swift file, and paste it into your app:
``` swift
import CoreLocation
class LocationHelper: NSObject, ObservableObject {
static let shared = LocationHelper()
static let DefaultLocation = CLLocationCoordinate2D(latitude: 45.8827419, longitude: -1.1932383)
static var currentLocation: CLLocationCoordinate2D {
guard let location = shared.locationManager.location else {
return DefaultLocation
}
return location.coordinate
}
private let locationManager = CLLocationManager()
private override init() {
super.init()
locationManager.delegate = self
locationManager.desiredAccuracy = kCLLocationAccuracyBest
locationManager.requestWhenInUseAuthorization()
locationManager.startUpdatingLocation()
}
}
extension LocationHelper: CLLocationManagerDelegate {
func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) { }
public func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
print("Location manager failed with error: \(error.localizedDescription)")
}
public func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) {
print("Location manager changed the status: \(status)")
}
}
```
Once added, you can access the user's location with this simple call:
``` swift
let location = LocationHelper.currentLocation
```
### Store Location Data in Your Realm Database
#### The Location Format Expected by MongoDB
Realm doesn't have a native type for a geographic location, and so it's up to us how we choose to store it in a Realm Object. That is, unless we want to synchronize the data to MongoDB Atlas using Device Sync, and go on to use MongoDB's geospatial functionality.
To make the best use of the location data in Atlas, we need to add a geospatial index to the field (which we'll see how to do soon). That means storing the location in a supported format. Not all options will work with Atlas Device Sync (e.g., it's not guaranteed that attributes will appear in the same order in your Realm Object and the synced Atlas document). The most robust approach is to use an array where the first element is longitude and the second is latitude:
``` json
location: [<longitude>, <latitude>]
```
#### Your Realm Object
The RChat app gives users the option to include their location in a chat message—this means that we need to include the location in the ChatMessage Object:
``` swift
class ChatMessage: Object, ObjectKeyIdentifiable {
…
@Persisted let location = List<Double>()
…
convenience init(author: String, text: String, image: Photo?, location: [Double] = []) {
...
location.forEach { coord in
self.location.append(coord)
}
...
}
}
….
}
```
The `location` array that's passed to that initializer is formed like this:
``` swift
let location = LocationHelper.currentLocation
self.location = [location.longitude, location.latitude]
```
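Putting those pieces together, creating a message that embeds the sender's location might look like the following — a sketch only, with placeholder author and text values:
``` swift
// Sketch: build a ChatMessage that includes the current location.
// The author and text are placeholder values.
let location = LocationHelper.currentLocation
let message = ChatMessage(author: "some-user",
                          text: "Meet me here",
                          image: nil,
                          location: [location.longitude, location.latitude])
```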
## Location Data in Your Backend MongoDB Atlas Application Services App
The easiest way to create your backend MongoDB Atlas Application Services schema is to enable Development Mode—that way, the schema is automatically generated from your Swift Realm Objects.
This is the generated schema for our "ChatMessage" collection:
``` swift
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
...
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
}
},
"required":
"_id",
...
],
"title": "ChatMessage"
}
```
This is a document that's been created from a synced Realm `ChatMessage` object. In Atlas, it includes an array field named `location` that holds the longitude and latitude values.
### Adding a Geospatial Index in Atlas
Now that you have location data stored in Atlas, it would be nice to be able to work with it—e.g., running geospatial queries. To enable this, you need to add a geospatial index to the `location` field.
From the Atlas UI, select the "Indexes" tab for your collection and click "CREATE INDEX":
You should then configure a `2dsphere` index:
Most chat messages won't include the user's location and so I set the `sparse` option for efficiency.
Note that you'll get an error message if your ChatMessage collection contains any documents where the value in the location attribute isn't in a valid geospatial format.
Atlas will then build the index. This will be very quick, unless you already have a huge number of documents containing the location field. Once it completes, you can run geospatial queries against the collection.
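For example, here's a sketch of a query that finds chat messages sent within five kilometers of a point — the coordinates and distance are arbitrary sample values:
```javascript
// Run against the RChat database.
// Remember the stored format is [longitude, latitude].
db.ChatMessage.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [-1.1932383, 45.8827419] },
      $maxDistance: 5000
    }
  }
});
```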
### Plotting Your Location Data in MongoDB Charts
MongoDB Charts is a simple way to visualize MongoDB data. You can access it through the same UI as Application Services and Atlas. Just click on the "Charts" button:
The first step is to click the "Add Data Source" button:
Select your Atlas cluster:
Select the `RChat.ChatMessage` collection:
Click “Finish.” You’ll be taken to the default Dashboards view, which is empty for now. Click "Add Dashboard":
In your new dashboard, click "ADD CHART":
Configure your chart as shown here by:
- Setting the chart type to "Geospatial" and the sub-type to "Scatter."
- Dragging the "location" attribute to the coordinates box.
- Dragging the "author" field to the "Color" box.
Once you've created your chart, you can embed it in web apps, etc. That's beyond the scope of this article, but check out the MongoDB Charts docs if you're interested.
## Conclusion
SwiftUI makes it easy to embed Apple Maps in your SwiftUI apps. As with most Apple frameworks, there are extra maps features available if you break out from SwiftUI, but I'd suggest that the simplicity of working with SwiftUI is enough incentive for you to avoid that unless you have a compelling reason.
Accessing location information from within SwiftUI still feels a bit of a hack, but in reality, you cut and paste the helper code once, and then you're good to go.
By storing the location as a `[longitude, latitude]` array (`List<Double>`) in your Realm database, it's simple to sync it with MongoDB Atlas. Once in Atlas, you have the full power of MongoDB's geospatial functionality to work with your location data.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "Learn how to use the new Map view from iOS Map Kit in your SwiftUI/Realm apps. Also see how to use iOS location in Realm, Atlas, and Charts.",
"contentType": "Tutorial"
} | Using Maps and Location Data in Your SwiftUI (+Realm) App | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/cidr-subnet-selection-atlas | created | # CIDR Subnet Selection for MongoDB Atlas
## Introduction
One of the best features of MongoDB
Atlas is the ability to peer your
host
VPC
on your own Amazon Web Services (AWS) account to your Atlas VPC. VPC
peering provides you with the ability to use the private IP range of
your hosts and MongoDB Atlas cluster. This allows you to reduce your
network exposure and improve security of your data. If you chose to use
peering there are some considerations you should think about first in
selecting the right IP block for your private traffic.
## Host VPC
The host VPC is where you configure the systems that your application
will use to connect to your MongoDB Atlas cluster. AWS provides your
account with a default VPC for your hosts. You may need to modify the
default VPC or create a new one to work alongside MongoDB Atlas.
MongoDB Atlas requires your host VPC to follow the
RFC-1918 standard for creating
private ranges. The Internet Assigned Numbers Authority (IANA) has
reserved the following three blocks of the IP address space for private
internets:
- 10.0.0.0 - 10.255.255.255 (10/8 prefix)
- 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
- 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
>
>
>Don't overlap your ranges!
>
>
The point of peering is to permit two private IP ranges to work in
conjunction to keep your network traffic off the public internet. This
will require you to use separate private IP ranges that do not conflict.
AWS standard states the following in their "Invalid VPC
Peering"
document:
>
>
>You cannot create a VPC peering connection between VPCs with matching or
>overlapping IPv4 CIDR blocks.
>
>
## MongoDB Atlas VPC
When you create a group in MongoDB Atlas, by default we provide you with
an AWS VPC which you can only modify before launching your first
cluster. Groups with an existing cluster CANNOT MODIFY their VPC CIDR
block - this is to comply with the AWS requirement for
peering.
By default we create a VPC with IP range 192.168.248.0/21. To specify
your IP block prior to configuring peering and launching your cluster,
follow these steps:
1. Sign up for MongoDB Atlas and
ensure your payment method is completed.
2. Click on the **Network Access** tab, then select **Peering**. You
should see a page such as this which shows you that you have not
launched a cluster yet:
3. Click on the **New Peering Connection** button. You will be given a
new "Peering Connection" window to add your peering details. At the
bottom of this page you'll see a section to modify "Your Atlas VPC"
4. If you would like to specify a different IP range, you may use one
of the RFC-1918 ranges with the appropriate subnet and enter it
here. It's extremely important to ensure that you choose two
distinct RFC-1918 ranges. These two cannot overlap their subnets:
5. Click on the **Initiate Peering** button and follow the directions
to add the appropriate subnet ranges.
## Conclusion
Using peering ensures that your database traffic remains off the public
network. This provides you with a much more secure solution allowing you
to easily scale up and down without specifying IP addresses each time,
and reduces costs on transporting your data from server to server. At
any time if you run into problems with this, our support team is always
available by clicking the SUPPORT link in the lower left of your window.
Our support team is happy to assist in ensuring your peering connection
is properly configured.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "VPC peering provides you with the ability to use the private IP range of your hosts and MongoDB Atlas cluster.",
"contentType": "Tutorial"
} | CIDR Subnet Selection for MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/sql-to-aggregation-pipeline | created | # MongoDB Aggregation Pipeline Queries vs SQL Queries
Let's be honest: Many devs coming to MongoDB are joining the community
with a strong background in SQL. I would personally include myself in
this subset of MongoDB devs. I think it's useful to map terms and
concepts you might be familiar with in SQL to help
"translate"
your work into MongoDB Query Language (MQL). More specifically, in this
post, I will be walking through translating the MongoDB Aggregation
Pipeline from SQL.
## What is the Aggregation Framework?
The aggregation framework allows you to analyze your data in real time.
Using the framework, you can create an aggregation pipeline that
consists of one or more
stages.
Each stage transforms the documents and passes the output to the next
stage.
If you're familiar with the Unix pipe (`|`), you can think of the
aggregation pipeline as a very similar concept. Just as output from one
command is passed as input to the next command when you use piping,
output from one stage is passed as input to the next stage when you use
the aggregation pipeline.
SQL is a declarative language. You have to declare what you want to
see—that's why SELECT comes first. You have to think in sets, which can
be difficult, especially for functional programmers. With MongoDB's
aggregation pipeline, you can have stages that reflect how you think—for
example, "First, let's group by X. Then, we'll get the top 5 from every
group. Then, we'll arrange by price." This is a difficult query to do in
SQL, but much easier using the aggregation pipeline framework.
The aggregation framework has a variety of
stages
available for you to use. Today, we'll discuss the basics of how to use
$match,
$group,
$sort,
and
$limit.
Note that the aggregation framework has many other powerful stages,
including
$count,
$geoNear,
$graphLookup,
$project,
$unwind,
and others.
>
>
>If you want to check out another great introduction to the MongoDB
>Aggregation Pipeline, be sure to check out Introduction to the MongoDB
>Aggregation
>Framework.
>
>
## Terminology and Concepts
The following table provides an overview of common SQL aggregation
terms, functions, and concepts and the corresponding MongoDB
aggregation
operators:
| **SQL Terms, Functions, and Concepts** | **MongoDB Aggregation Operators** |
|----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| WHERE | $match |
| GROUP BY | $group |
| HAVING | $match |
| SELECT | $project |
| LIMIT | $limit |
| OFFSET | $skip |
| ORDER BY | $sort |
| SUM() | $sum |
| COUNT() | $sum and $sortByCount |
| JOIN | $lookup |
| SELECT INTO NEW_TABLE | $out |
| MERGE INTO TABLE | $merge (Available starting in MongoDB 4.2) |
| UNION ALL | $unionWith (Available starting in MongoDB 4.4) |
Alright, now that we've covered the basics of MongoDB Aggregations,
let's jump into some examples.
## SQL Setup
The SQL examples assume *two* tables, *albums* and *songs*, that join on
the *songs.album_id* and the *albums.id* columns. Here's what the tables look like:
look like:
##### Albums
| **id** | **name** | **band_name** | **price** | **status** |
|--------|-----------------------------------|------------------|-----------|------------|
| 1 | lo-fi chill hop songs to study to | Silicon Infinite | 7.99 | A |
| 2 | Moon Rocks | Silicon Infinite | 1.99 | B |
| 3 | Flavour | Organical | 4.99 | A |
##### Songs
| **id** | **title** | **plays** | **album_id** |
|--------|-----------------------|-----------|--------------|
| 1 | Snow Beats | 133 | 1 |
| 2 | Rolling By | 242 | 1 |
| 3 | Clouds | 3191 | 1 |
| 4 | But First Coffee | 562 | 3 |
| 5 | Autumn | 901 | 3 |
| 6 | Milk Toast | 118 | 2 |
| 7 | Purple Mic | 719 | 2 |
| 8 | One Note Dinner Party | 1242 | 2 |
I used a site called SQL Fiddle,
and used PostgreSQL 9.6 for all of my examples. However, feel free to
run these sample SQL snippets wherever you feel most comfortable. In
fact, this is the code I used to set up and seed my tables with our
sample data:
``` SQL
-- Creating the main albums table
CREATE TABLE IF NOT EXISTS albums (
id BIGSERIAL NOT NULL UNIQUE PRIMARY KEY,
name VARCHAR(40) NOT NULL UNIQUE,
band_name VARCHAR(40) NOT NULL,
price float8 NOT NULL,
status VARCHAR(10) NOT NULL
);
-- Creating the songs table
CREATE TABLE IF NOT EXISTS songs (
id SERIAL PRIMARY KEY NOT NULL,
title VARCHAR(40) NOT NULL,
plays integer NOT NULL,
album_id BIGINT NOT NULL REFERENCES albums ON DELETE RESTRICT
);
INSERT INTO albums (name, band_name, price, status)
VALUES
('lo-fi chill hop songs to study to', 'Silicon Infinite', 7.99, 'A'),
('Moon Rocks', 'Silicon Infinite', 1.99, 'B'),
('Flavour', 'Organical', 4.99, 'A');
INSERT INTO songs (title, plays, album_id)
VALUES
('Snow Beats', 133, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),
('Rolling By', 242, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),
('Clouds', 3191, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),
('But First Coffee', 562, (SELECT id from albums WHERE name='Flavour')),
('Autumn', 901, (SELECT id from albums WHERE name='Flavour')),
('Milk Toast', 118, (SELECT id from albums WHERE name='Moon Rocks')),
('Purple Mic', 719, (SELECT id from albums WHERE name='Moon Rocks')),
('One Note Dinner Party', 1242, (SELECT id from albums WHERE name='Moon Rocks'));
```
## MongoDB Setup
The MongoDB examples assume *one* collection `albums` that contains
documents with the following schema:
``` json
{
name : 'lo-fi chill hop songs to study to',
band_name: 'Silicon Infinite',
price: 7.99,
status: 'A',
songs: [
{ title: 'Snow beats', 'plays': 133 },
{ title: 'Rolling By', 'plays': 242 },
{ title: 'Clouds', 'plays': 3191 }
]
}
```
For this post, I did all of my prototyping in a MongoDB Visual Studio
Code plugin playground. For more information on how to use a MongoDB
Playground in Visual Studio Code, be sure to check out this post: [How
To Use The MongoDB Visual Studio Code
Plugin.
Once you have your playground all set up, you can use this snippet to
set up and seed your collection. You can also follow along with this
demo by using the MongoDB Web
Shell.
``` javascript
// Select the database to use.
use('mongodbVSCodePlaygroundDB');
// The drop() command destroys all data from a collection.
// Make sure you run it against the correct database and collection.
db.albums.drop();
// Insert a few documents into the albums collection.
db.albums.insertMany([
{
'name' : 'lo-fi chill hop songs to study to', band_name: 'Silicon Infinite', price: 7.99, status: 'A',
songs: [
{ title: 'Snow beats', 'plays': 133 },
{ title: 'Rolling By', 'plays': 242 },
{ title: 'Clouds', 'plays': 3191 }
]
},
{
'name' : 'Moon Rocks', band_name: 'Silicon Infinite', price: 1.99, status: 'B',
songs: [
{ title: 'Milk Toast', 'plays': 118 },
{ title: 'Purple Mic', 'plays': 719 },
{ title: 'One Note Dinner Party', 'plays': 1242 }
]
},
{
'name' : 'Flavour', band_name: 'Organical', price: 4.99, status: 'A',
songs: [
{ title: 'But First Coffee', 'plays': 562 },
{ title: 'Autumn', 'plays': 901 }
]
},
]);
```
## Quick Reference
### Count all records from albums
#### SQL
``` SQL
SELECT COUNT(*) AS count
FROM albums
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{
$group: {
_id: null, // An _id value of null on the $group operator accumulates values for all the input documents as a whole.
count: { $sum: 1 }
}
}
] );
```
### Sum the price field from albums
#### SQL
``` SQL
SELECT SUM(price) AS total
FROM albums
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{
$group: {
_id: null,
total: { $sum: "$price" }
}
}
] );
```
### For each unique band_name, sum the price field
#### SQL
``` SQL
SELECT band_name,
SUM(price) AS total
FROM albums
GROUP BY band_name
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{
$group: {
_id: "$band_name",
total: { $sum: "$price" }
}
}
] );
```
### For each unique band_name, sum the price field, results sorted by sum
#### SQL
``` SQL
SELECT band_name,
SUM(price) AS total
FROM albums
GROUP BY band_name
ORDER BY total
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{
$group: {
_id: "$band_name",
total: { $sum: "$price" }
}
},
{ $sort: { total: 1 } }
] );
```
### For band_name with multiple albums, return the band_name and the corresponding album count
#### SQL
``` SQL
SELECT band_name,
count(*)
FROM albums
GROUP BY band_name
HAVING count(*) > 1;
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{
$group: {
_id: "$band_name",
count: { $sum: 1 }
}
},
{ $match: { count: { $gt: 1 } } }
] );
```
### Sum the price of all albums with status A and group by unique band_name
#### SQL
``` SQL
SELECT band_name,
SUM(price) as total
FROM albums
WHERE status = 'A'
GROUP BY band_name
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{ $match: { status: 'A' } },
{
$group: {
_id: "$band_name",
total: { $sum: "$price" }
}
}
] );
```
### For each unique band_name with status A, sum the price field and return only where the sum is greater than $5.00
#### SQL
``` SQL
SELECT band_name,
SUM(price) as total
FROM albums
WHERE status = 'A'
GROUP BY band_name
HAVING SUM(price) > 5.00;
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{ $match: { status: 'A' } },
{
$group: {
_id: "$band_name",
total: { $sum: "$price" }
}
},
{ $match: { total: { $gt: 5.00 } } }
] );
```
### For each unique band_name, sum the corresponding song plays field associated with the albums
#### SQL
``` SQL
SELECT band_name,
SUM(songs.plays) as total_plays
FROM albums,
songs
WHERE songs.album_id = albums.id
GROUP BY band_name;
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{ $unwind: "$songs" },
{
$group: {
_id: "$band_name",
qty: { $sum: "$songs.plays" }
}
}
] );
```
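The SQL query above needs a join because songs live in a separate table. In the MongoDB model used in this post, the songs are embedded, so `$unwind` is enough. If you did keep songs in their own collection (mirroring the SQL schema), the JOIN row from the mapping table applies and you'd reach for `$lookup` — a hypothetical sketch that assumes a separate `songs` collection where each document has an `album_id` referencing `albums._id`:
``` javascript
// Hypothetical: only applies if songs were stored in their own collection.
db.albums.aggregate( [
  {
    $lookup: {
      from: "songs",
      localField: "_id",
      foreignField: "album_id",
      as: "songs"
    }
  },
  { $unwind: "$songs" },
  {
    $group: {
      _id: "$band_name",
      qty: { $sum: "$songs.plays" }
    }
  }
] );
```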
### For each unique album, get the song from album with the most plays
#### SQL
``` SQL
SELECT name, title, plays
FROM songs s1 INNER JOIN albums ON (album_id = albums.id)
WHERE plays=(SELECT MAX(s2.plays)
FROM songs s2
WHERE s1.album_id = s2.album_id)
ORDER BY name;
```
#### MongoDB
``` javascript
db.albums.aggregate( [
{ $project:
{
name: 1,
plays: {
$filter: {
input: "$songs",
as: "item",
cond: { $eq: ["$item.plays", { $max: "$songs.plays" }] }
}
}
}
}
] );
```
## Wrapping Up
This post is in no way a complete overview of all the ways that MongoDB
can be used like a SQL-based database. This was only meant to help devs
in SQL land start to make the transition over to MongoDB with some basic
queries using the aggregation pipeline. The aggregation framework has
many other powerful stages, including
[$count,
$geoNear,
$graphLookup,
$project,
$unwind,
and others.
If you want to get better at using the MongoDB Aggregation Framework, be
sure to check out MongoDB University: M121 - The MongoDB Aggregation
Framework. Or,
better yet, try to use some advanced MongoDB aggregation pipeline
queries in your next project! If you have any questions, be sure to head
over to the MongoDB Community
Forums. It's the
best place to get your MongoDB questions answered.
## Resources:
- MongoDB University: M121 - The MongoDB Aggregation Framework:
- How to Use Custom Aggregation Expressions in MongoDB 4.4:
- Introduction to the MongoDB Aggregation Framework:
- How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4:
- Aggregation Framework with Node.js Tutorial:
- Aggregation Pipeline Quick Reference:
https://docs.mongodb.com/manual/meta/aggregation-quick-reference
- SQL to Aggregation Mapping Chart:
https://docs.mongodb.com/manual/reference/sql-aggregation-comparison
- SQL to MongoDB Mapping Chart:
https://docs.mongodb.com/manual/reference/sql-comparison
- Questions? Comments? We'd love to connect with you. Join the
conversation on the MongoDB Community Forums:
https://developer.mongodb.com/community/forums
| md | {
"tags": [
"MongoDB",
"SQL"
],
"pageDescription": "This is an overview of common SQL aggregation terms, functions, and concepts and the corresponding MongoDB aggregation operators.",
"contentType": "Tutorial"
} | MongoDB Aggregation Pipeline Queries vs SQL Queries | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/capturing-hacker-news-mentions-nodejs-mongodb | created | # Capturing Hacker News Mentions with Node.js and MongoDB
If you're in the technology space, you've probably stumbled upon Hacker News at some point or another. Maybe you're interested in knowing what's popular this week for technology or maybe you have something to share. It's a platform for information.
The problem is that you're going to find too much information on Hacker News without a particularly easy way to filter through it to find the topics that you're interested in. Let's say, for example, you want to know information about Bitcoin as soon as it is shared. How would you do that on the Hacker News website?
In this tutorial, we're going to learn how to parse through Hacker News data as it is created, filtering for only the topics that we're interested in. We're going to do a sentiment analysis on the potential matches to rank them, and then we're going to store this information in MongoDB so we can run reports from it. We're going to do it all with Node.js and some simple pipelines.
## The Requirements
You won't need a Hacker News account for this tutorial, but you will need a few things to be successful:
- Node.js 12.10 or more recent
- A properly configured MongoDB Atlas cluster
We'll be storing all of our matches in MongoDB Atlas. This will make it easier for us to run reports and not depend on looking at logs or similarly structured data.
>You can deploy and use a MongoDB Atlas M0 cluster for FREE. Learn more by clicking here.
Hacker News doesn't have an API that will allow us to stream data in real-time. Instead, we'll be using the Unofficial Hacker News Streaming API. For this particular example, we'll be looking at the comments stream, but your needs may vary.
## Installing the Project Dependencies in a New Node.js Application
Before we get into the interesting code and our overall journey toward understanding and storing the Hacker News data as it comes in, we need to bootstrap our project.
On your computer, create a new project directory and execute the following commands:
``` bash
npm init -y
npm install mongodb ndjson request sentiment through2 through2-filter --save
```
With the above commands, we are creating a **package.json** file and installing a few packages. We know mongodb will be used for storing our Hacker News Data, but the rest of the list is probably unfamiliar to you.
We'll be using the request package to consume raw data from the API. As we progress, you'll notice that we're working with streams of data rather than one-off requests to the API. This means that the data that we receive might not always be complete. To make sense of this, we use the ndjson package to get useable JSON from the stream. Since we're working with streams, we need to be able to use pipelines, so we can't just pass our JSON data through the pipeline as is. Instead, we need to use through2 and through2-filter to filter and manipulate our JSON data before passing it to another stage in the pipeline. Finally, we have sentiment for doing a sentiment analysis on our data.
We'll reiterate on a lot of these packages as we progress.
Before moving to the next step, make sure you create a **main.js** file in your project. This is where we'll add our code, which you'll see isn't too many lines.
## Connecting to a MongoDB Cluster to Store Hacker News Mentions
We're going to start by adding our downloaded dependencies to our code file and connecting to a MongoDB cluster or instance.
Open the project's **main.js** file and add the following code:
``` javascript
const stream = require("stream");
const ndjson = require("ndjson");
const through2 = require("through2");
const request = require("request");
const filter = require("through2-filter");
const sentiment = require("sentiment");
const util = require("util");
const pipeline = util.promisify(stream.pipeline);
const { MongoClient } = require("mongodb");
(async () => {
const client = new MongoClient(process.env["ATLAS_URI"], { useUnifiedTopology: true });
try {
await client.connect();
const collection = client.db("hacker-news").collection("mentions");
console.log("FINISHED");
} catch(error) {
console.log(error);
}
})();
```
In the above code, we've added all of our downloaded dependencies, plus some. Remember we're working with a stream of data, so we need to use pipelines in Node.js if we want to work with that data in stages.
When we run the application, we are connecting to a MongoDB instance or cluster as defined in our environment variables. The `ATLAS_URI` variable would look something like this:
``` none
mongodb+srv://<username>:<password>@plummeting-us-east-1.hrrxc.mongodb.net/
```
You can find the connection string in your MongoDB Atlas dashboard.
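For example, on macOS or Linux you could export the variable in your shell before starting the app — the value below is a placeholder:
``` bash
export ATLAS_URI="mongodb+srv://<username>:<password>@<your-cluster>.mongodb.net/"
```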
Test that the application can connect to the database by executing the following command:
``` bash
node main.js
```
If you don't want to use environment variables, you can hard-code the value in your project or use a configuration file. I personally prefer environment variables because we can set them externally on most cloud deployments for security (and there's no risk that we accidentally commit them to GitHub).
## Parsing and Filtering Hacker News Data in Real Time
At this point, the code we have will connect us to MongoDB. Now we need to focus on streaming the Hacker News data into our application and filtering it for the data that we actually care about.
Let's make the following changes to our **main.js** file:
``` javascript
(async () => {
const client = new MongoClient(process.env["ATLAS_URI"], { useUnifiedTopology: true });
try {
await client.connect();
const collection = client.db("hacker-news").collection("mentions");
await pipeline(
request("http://api.hnstream.com/comments/stream/"),
ndjson.parse({ strict: false }),
filter({ objectMode: true }, chunk => {
return chunk["body"].toLowerCase().includes("bitcoin") || chunk["article-title"].toLowerCase().includes("bitcoin");
})
);
console.log("FINISHED");
} catch(error) {
console.log(error);
}
})();
```
In the above code, after we connect, we create a pipeline of stages to complete. The first stage is a simple GET request to the streaming API endpoint. The results from our request should be JSON, but since we're working with a stream of data rather than expecting a single response, our result may be malformed depending on where we are in the stream. This is normal.
To get beyond, this we can either put the pieces of the JSON puzzle together on our own as they come in from the stream, or we can use the [ndjson package. This package acts as the second stage and parses the data coming in from the previous stage, being our streaming request.
By the time the `ndjson.parse` stage completes, we should have properly formed JSON to work with. This means we need to analyze it to see if it is JSON data we want to keep or toss. Remember, the streaming API gives us all data coming from Hacker News, not just what we're looking for. To filter, we can use the through2-filter package which allows us to filter on a stream like we would on an array in javaScript.
In our `filter` stage, we are returning true if the body of the Hacker News mention includes "bitcoin" or the title of the thread includes the "bitcoin" term. This means that this particular entry is what we're looking for and it will be passed to the next stage in the pipeline. Anything that doesn't match will be ignored for future stages.
## Performing a Sentiment Analysis on Matched Data
At this point, we should have matches on Hacker News data that we're interested in. However, Hacker News has a ton of bots and users posting potentially irrelevant data just to rank in people's searches. It's a good idea to analyze our match and score it to know the quality. Then later, we can choose to ignore matches with a low score as they will probably be a waste of time.
So let's adjust our pipeline a bit in the **main.js** file:
``` javascript
(async () => {
const client = new MongoClient(process.env["ATLAS_URI"], { useUnifiedTopology: true });
const textRank = new sentiment();
try {
await client.connect();
const collection = client.db("hacker-news").collection("mentions");
await pipeline(
request("http://api.hnstream.com/comments/stream/"),
ndjson.parse({ strict: false }),
filter({ objectMode: true }, chunk => {
return chunk["body"].toLowerCase().includes("bitcoin") || chunk["article-title"].toLowerCase().includes("bitcoin");
}),
through2.obj((row, enc, next) => {
let result = textRank.analyze(row.body);
row.score = result.score;
next(null, row);
})
);
console.log("FINISHED");
} catch(error) {
console.log(error);
}
})();
```
In the above code, we've added two parts related to the [sentiment package that we had previously installed.
We first initialize the package through the following line:
``` javascript
const textRank = new sentiment();
```
When looking at our pipeline stages, we make use of the through2 package for streaming object manipulation. Since this is a stream, we can't just take our JSON from the `ndjson.parse` stage and expect to be able to manipulate it like any other object in JavaScript.
When we manipulate the matched object, we are performing a sentiment analysis on the body of the mention. At this point, we don't care what the score is, but we plan to add it to the data which we'll eventually store in MongoDB.
The object as of now might look something like this:
``` json
{
"_id": "5ffcc041b3ffc428f702d483",
"body": "
this is the body from the streaming API
",
"author": "nraboy",
"article-id": 43543234,
"parent-id": 3485345,
"article-title": "Bitcoin: Is it worth it?",
"type": "comment",
"id": 24985379,
"score": 3
}
```
The only modification we've made to the data as of right now is the addition of a score from our sentiment analysis.
It's important to note that our data is not yet inside of MongoDB. We're just at the stage where we've made modifications to the stream of data that could be a match to our interests.
## Creating Documents and Performing Queries in MongoDB
With the data formatted how we want it, we can focus on storing it within MongoDB and querying it whenever we want.
Let's make a modification to our pipeline:
``` javascript
(async () => {
const client = new MongoClient(process.env["ATLAS_URI"], { useUnifiedTopology: true });
const textRank = new sentiment();
try {
await client.connect();
const collection = client.db("hacker-news").collection("mentions");
await pipeline(
request("http://api.hnstream.com/comments/stream/"),
ndjson.parse({ strict: false }),
filter({ objectMode: true }, chunk => {
return chunk["body"].toLowerCase().includes("bitcoin") || chunk["article-title"].toLowerCase().includes("bitcoin");
}),
through2.obj((row, enc, next) => {
let result = textRank.analyze(row.body);
row.score = result.score;
next(null, row);
}),
through2.obj((row, enc, next) => {
collection.insertOne({
...row,
"user-url": `https://news.ycombinator.com/user?id=${row["author"]}`,
"item-url": `https://news.ycombinator.com/item?id=${row["article-id"]}`
});
next();
})
);
console.log("FINISHED");
} catch(error) {
console.log(error);
}
})();
```
We're doing another transformation on our object. This could have been merged with the earlier transformation stage, but for code cleanliness, we are breaking them into two stages.
In this final stage, we are doing an `insertOne` operation with the MongoDB Node.js driver. We're taking the `row` of data from the previous stage and we're adding two new fields to the object before it is inserted. We're doing this so we have quick access to the URL and don't have to rebuild it later.
If we ran the application, it would run forever, collecting any data posted to Hacker News that matched our filter.
If we wanted to query our data within MongoDB, we could use an MQL query like the following:
``` javascript
use("hacker-news");
db.mentions.find({ "score": { "$gt": 3 } });
```
The above MQL query would find all documents that have a score greater than 3. With the sentiment analysis, you're not looking at a score of 0 to 10. It is best you read through the documentation to see how things are scored.
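You could also aggregate the stored mentions into simple reports. Here's a sketch that ranks authors by the number of matching mentions and shows their average sentiment score:
``` javascript
use("hacker-news");

// Sketch: top 10 authors by number of matching mentions, with average score.
db.mentions.aggregate([
  {
    $group: {
      _id: "$author",
      mentions: { $sum: 1 },
      avgScore: { $avg: "$score" }
    }
  },
  { $sort: { mentions: -1 } },
  { $limit: 10 }
]);
```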
## Conclusion
You just saw an example of using MongoDB and Node.js for capturing relevant data from Hacker News as it happens live. This could be useful for keeping your own feed of particular topics or it can be extended for other use-cases such as monitoring what people are saying about your brand and using the code as a feedback reporting tool.
This tutorial could be expanded beyond what we explored for this example. For example, we could add MongoDB Realm Triggers to look for certain scores and send a message on Twilio or Slack if a match on our criteria was found.
If you've got any questions or comments regarding this tutorial, take a moment to drop them in the MongoDB Community Forums. | md | {
"tags": [
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to stream data from Hacker News into MongoDB for analyzing with Node.js.",
"contentType": "Tutorial"
} | Capturing Hacker News Mentions with Node.js and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/realm-api-cache | created | # Build Offline-First Mobile Apps by Caching API Results in Realm
## Introduction
When building a mobile app, there's a good chance that you want it to pull in data from a cloud service—whether from your own or from a third party. While other technologies are growing (e.g., GraphQL and MongoDB Realm Sync), REST APIs are still prevalent.
It's easy to make a call to a REST API endpoint from your mobile app, but what happens when you lose network connectivity? What if you want to slice and dice that data after you've received it? How many times will your app have to fetch the same data (consuming data bandwidth and battery capacity each time)? How will your users react to a sluggish app that's forever fetching data over the internet?
By caching the data from API calls in Realm, the data is always available to your app. This leads to higher availability, faster response times, and reduced network and battery consumption.
This article shows how the RCurrency mobile app fetches exchange rate data from a public API, and then caches it in Realm for always-on, local access.
### Is Using the API from Your Mobile App the Best Approach?
This app only reads data through the API. Writing an offline-first app that needs to reliably update cloud data via an API is a **far** more complex affair. If you need to update cloud data when offline, then I'd strongly recommend you consider MongoDB Realm Sync.
Many APIs throttle your request rate or charge per request. That can lead to issues as your user base grows. A more scalable approach is to have your backend Realm app fetch the data from the API and store it in Atlas. Realm Sync then makes that data available locally on every user's mobile device—without the need for any additional API calls.
## Prerequisites
- Realm-Cocoa 10.13.0+
- Xcode 13
- iOS 15
## The RCurrency Mobile App
The RCurrency app is a simple exchange rate app. It's intended for uses such as converting currencies when traveling.
You choose a base currency and a list of other currencies you want to convert between.
When opened for the first time, RCurrency uses a REST API to retrieve exchange rates, and stores the data in Realm. From that point on, the app uses the data that's stored in Realm. Even if you force-close the app and reopen it, it uses the local data.
If the stored rates are older than today, the app will fetch the latest rates from the API and replace the Realm data.
The app supports pull-to-refresh to fetch and store the latest exchange rates from the API.
You can alter the amount of any currency, and the amounts for all other currencies are instantly recalculated.
## The REST API
I'm using the API provided by exchangerate.host. The API is a free service that provides a simple API to fetch currency exchange rates.
One of the reasons I picked this API is that it doesn't require you to register and then manage access keys/tokens. It's not rocket science to handle that complexity, but I wanted this app to focus on when to fetch data, and what to do once you receive it.
The app uses a single endpoint (where you can replace `USD` and `EUR` with the currencies you want to convert between):
```js
https://api.exchangerate.host/convert?from=USD&to=EUR
```
You can try calling that endpoint directly from your browser.
The endpoint responds with a JSON document:
```js
{
"motd": {
"msg": "If you or your company use this project or like what we doing, please consider backing us so we can continue maintaining and evolving this project.",
"url": "https://exchangerate.host/#/donate"
},
"success": true,
"query": {
"from": "USD",
"to": "EUR",
"amount": 1
},
"info": {
"rate": 0.844542
},
"historical": false,
"date": "2021-09-02",
"result": 0.844542
}
```
Note that the exchange rate for each currency is only updated once every 24 hours. That's fine for our app that's helping you decide whether you can afford that baseball cap when you're on vacation. If you're a currency day-trader, then you should look elsewhere.
## The RCurrency App Implementation
### Data Model
JSON is the language of APIs. That's great news as most modern programming languages (including Swift) make it super easy to convert between JSON strings and native objects.
The app stores the results from the API query in objects of type `Rate`. To make it as simple as possible to receive and store the results, I made the `Rate` class match the JSON format of the API results:
```swift
class Rate: Object, ObjectKeyIdentifiable, Codable {
var motd = Motd()
var success = false
@Persisted var query: Query?
var info = Info()
@Persisted var date: String
@Persisted var result: Double
}
class Motd: Codable {
var msg = ""
var url = ""
}
class Query: EmbeddedObject, ObjectKeyIdentifiable, Codable {
@Persisted var from: String
@Persisted var to: String
var amount = 0
}
class Info: Codable {
var rate = 0.0
}
```
Note that only the fields annotated with `@Persisted` will be stored in Realm.
Swift can automatically convert between `Rate` objects and the JSON strings returned by the API because we make the class comply with the `Codable` protocol.
There are two other top-level classes used by the app.
`Symbols` stores all of the supported currency symbols. In the app, the list is bootstrapped from a fixed list. For future-proofing, it would be better to fetch them from an API:
```swift
class Symbols {
var symbols = Dictionary<String, String>()
}
extension Symbols {
static var data = Symbols()
static func loadData() {
data.symbols"AED"] = "United Arab Emirates Dirham"
data.symbols["AFN"] = "Afghan Afghani"
data.symbols["ALL"] = "Albanian Lek"
...
}
}
```
`UserSymbols` is used to store the user's chosen base currency and the list of currencies they'd like to see exchange rates for:
```swift
class UserSymbols: Object, ObjectKeyIdentifiable {
@Persisted var baseSymbol: String
@Persisted var symbols: List<String>
}
```
An instance of `UserSymbols` is stored in Realm so that the user gets the same list whenever they open the app.
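The bootstrapping code isn't shown here, but a fetch-or-create helper could look something like this — a sketch under assumptions (the function name and default currencies are hypothetical, not taken from the RCurrency source):
```swift
import RealmSwift

// Hypothetical helper: return the stored UserSymbols object, creating one
// with default currencies on first launch.
func loadUserSymbols(in realm: Realm) throws -> UserSymbols {
    if let existing = realm.objects(UserSymbols.self).first {
        return existing
    }
    let defaults = UserSymbols()
    defaults.baseSymbol = "USD"
    defaults.symbols.append(objectsIn: ["EUR", "GBP", "JPY"])
    try realm.write {
        realm.add(defaults)
    }
    return defaults
}
```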
### `Rate` Data Lifecycle
This flowchart shows how the exchange rate for a single currency (represented by the `symbol` string) is managed when the `CurrencyRowContainerView` is used to render data for that currency:
The app fetches data from the API and stores it in Realm, and the mobile app's UI always renders what's stored in Realm. The following sections will describe each block in the flow diagram.
Note that the actual behavior is a little more subtle than the diagram suggests. SwiftUI ties the Realm data to the UI. If stage #2 finds the data in Realm, then it will immediately get displayed in the view (stage #8). The code will then make the extra checks and refresh the Realm data if needed. If and when the Realm data is updated, SwiftUI will automatically refresh the UI to render it.
Let's look at each of those steps in turn.
#### #1 `CurrencyContainerView` loaded for currency represented by `symbol`
`CurrencyListContainerView` iterates over each of the currencies that the user has selected. For each currency, it creates a `CurrencyRowContainerView` and passes in strings representing the base currency (`baseSymbol`) and the currency we want an exchange rate for (`symbol`):
```swift
List {
ForEach(userSymbols.symbols, id: \.self) { symbol in
CurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,
baseAmount: $baseAmount,
symbol: symbol,
refreshNeeded: refreshNeeded)
}
.onDelete(perform: deleteSymbol)
}
```
#### #2 `rate` = FetchFromRealm(`symbol`)
`CurrencyRowContainerView` then uses the `@ObservedResults` property wrapper to query all `Rate` objects that are already stored in Realm:
```swift
struct CurrencyRowContainerView: View {
@ObservedResults(Rate.self) var rates
...
}
```
The view then filters those results to find one for the requested `baseSymbol`/`symbol` pair:
```swift
var rate: Rate? {
rates.filter(
NSPredicate(format: "query.from = %@ AND query.to = %@",
baseSymbol, symbol)).first
}
```
#### #3 `rate` found?
The view checks whether `rate` is set or not (i.e., whether a matching object was found in Realm). If `rate` is set, then it's passed to `CurrencyRowDataView` to render the details (step #8). If `rate` is `nil`, then a placeholder "Loading Data..." `TextView` is rendered, and `loadData` is called to fetch the data using the API (step #4-3):
```swift
var body: some View {
if let rate = rate {
HStack {
CurrencyRowDataView(rate: rate, baseAmount: $baseAmount, action: action)
...
}
} else {
Text("Loading Data...")
.onAppear(perform: loadData)
}
}
```
#### #4-3 Fetch `rate` from API — No matching object found in Realm
The API URL is formed by inserting the base currency (`baseSymbol`) and the target currency (`symbol`) into a template string. `loadData` then sends the request to the API endpoint and handles the response:
```swift
private func loadData() {
guard let url = URL(string: "https://api.exchangerate.host/convert?from=\(baseSymbol)&to=\(symbol)") else {
print("Invalid URL")
return
}
let request = URLRequest(url: url)
print("Network request: \(url.description)")
URLSession.shared.dataTask(with: request) { data, response, error in
guard let data = data else {
print("Error fetching data: \(error?.localizedDescription ?? "Unknown error")")
return
}
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
// TODO: Step #5-3
} else {
print("No data received")
}
}
.resume()
}
```
#### #5-3 StoreInRealm(`rate`) — No matching object found in Realm
`Rate` objects stored in Realm are displayed in our SwiftUI views. Any data changes that impact the UI must be done on the main thread. When the API endpoint sends back results, our code receives them in a callback thread, and so we must use `DispatchQueue` to run our closure in the main thread so that we can add the resulting `Rate` object to Realm:
```swift
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
$rates.append(decodedResponse)
}
} else {
print("No data received")
}
```
Notice how simple it is to convert the JSON response into a Realm `Rate` object and store it in our local realm!
#### #6 Refresh Requested?
RCurrency includes a pull-to-refresh feature which will fetch fresh exchange rate data for each of the user's currency symbols. We add the refresh functionality by appending the `.refreshable` modifier to the `List` of rates in `CurrencyListContainerView`:
```swift
List {
...
}
.refreshable(action: refreshAll)
```
`refreshAll` sets the `refreshNeeded` variable to `true`, waits a second to allow SwiftUI to react to the change, and then sets it back to `false`:
```swift
private func refreshAll() {
refreshNeeded = true
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
refreshNeeded = false
}
}
```
`refreshNeeded` is passed to each instance of `CurrencyRowContainerView`:
```swift
CurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,
baseAmount: $baseAmount,
symbol: symbol,
refreshNeeded: refreshNeeded)
```
`CurrencyRowContainerView` checks `refreshNeeded`. If `true`, it displays a temporary refresh image and invokes `refreshData` (step #4-6):
```swift
if refreshNeeded {
Image(systemName: "arrow.clockwise.icloud")
.onAppear(perform: refreshData)
}
```
#### #4-6 Fetch `rate` from API — Refresh requested
`refreshData` fetches the data in exactly the same way as `loadData` in step #4-3:
```swift
private func refreshData() {
guard let url = URL(string: "https://api.exchangerate.host/convert?from=\(baseSymbol)&to=\(symbol)") else {
print("Invalid URL")
return
}
let request = URLRequest(url: url)
print("Network request: \(url.description)")
URLSession.shared.dataTask(with: request) { data, response, error in
guard let data = data else {
print("Error fetching data: \(error?.localizedDescription ?? "Unknown error")")
return
}
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
// TODO: #5-5
}
} else {
print("No data received")
}
}
.resume()
}
```
The difference is that in this case, there may already be a `Rate` object in Realm for this currency pair, and so the results are handled differently...
#### #5-6 StoreInRealm(`rate`) — Refresh requested
If the `Rate` object for this currency pair had been found in Realm, then we reference it with `existingRate`. `existingRate` is then updated with the API results:
```swift
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
if let existingRate = rate {
do {
let realm = try Realm()
try realm.write() {
guard let thawedrate = existingRate.thaw() else {
print("Couldn't thaw existingRate")
return
}
thawedrate.date = decodedResponse.date
thawedrate.result = decodedResponse.result
}
} catch {
print("Unable to update existing rate in Realm")
}
}
}
}
```
#### #7 `rate` stale?
The exchange rates available through the API are updated daily. The date that the rate applies to is included in the API response, and it’s stored in the Realm `Rate` object. When displaying the exchange rate data, `CurrencyRowDataView` invokes `loadData`:
```swift
var body: some View {
CurrencyRowView(value: (rate.result) * baseAmount,
symbol: rate.query?.to ?? "",
baseValue: $baseAmount,
action: action)
.onAppear(perform: loadData)
}
```
`loadData` checks that the existing Realm `Rate` object applies to today. If not, then it will refresh the data (stage 4-7):
```swift
private func loadData() {
if !rate.isToday {
// TODO: 4-7
}
}
```
`isToday` is a `Rate` method to check whether the stored data matches the current date:
```swift
extension Rate {
var isToday: Bool {
let today = Date().description.prefix(10)
return date == today
}
}
```
#### #4-7 Fetch `rate` from API — `rate` stale
By now, the code to fetch the data from the API should be familiar:
```swift
private func loadData() {
if !rate.isToday {
guard let query = rate.query else {
print("Query data is missing")
return
}
guard let url = URL(string: "https://api.exchangerate.host/convert?from=\(query.from)&to=\(query.to)") else {
print("Invalid URL")
return
}
let request = URLRequest(url: url)
URLSession.shared.dataTask(with: request) { data, response, error in
guard let data = data else {
print("Error fetching data: \(error?.localizedDescription ?? "Unknown error")")
return
}
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
// TODO: #5.7
}
} else {
print("No data received")
}
}
.resume()
}
}
```
#### #5-7 StoreInRealm(`rate`) — `rate` stale
`loadData` copies the new `date` and exchange rate (`result`) to the stored Realm `Rate` object:
```swift
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
$rate.date.wrappedValue = decodedResponse.date
$rate.result.wrappedValue = decodedResponse.result
}
}
```
#### #8 View rendered with `rate`
`CurrencyRowView` receives the raw exchange rate data and the amount to convert. It's responsible for calculating and rendering the results.
The number shown in this view is part of a `TextField`, which the user can overwrite:
```swift
@Binding var baseValue: Double
...
TextField("Amount", text: $amount)
.keyboardType(.decimalPad)
.onChange(of: amount, perform: updateValue)
.font(.largeTitle)
```
When the user overwrites the number, the `onChange` function is called which recalculates `baseValue` (the value of the base currency that the user wants to convert):
```swift
private func updateValue(newAmount: String) {
guard let newValue = Double(newAmount) else {
print("\(newAmount) cannot be converted to a Double")
return
}
baseValue = newValue / rate
}
```
As `baseValue` was passed in as a binding, the new value percolates up the view hierarchy, and all of the currency values are updated. Because the exchange rates are held in Realm, those values are recalculated without needing to call the API again.
## Conclusion
REST APIs let your mobile apps act on a vast variety of cloud data. The downside is that APIs can't help you when you don't have access to the internet. They can also make your app seem sluggish, and your users may get frustrated when they have to wait for data to be downloaded.
A common solution is to use Realm to cache data from the API so that it's always available and can be accessed locally in an instant.
This article has shown you a typical data lifecycle that you can reuse in your own apps. You've also seen how easy it is to store the JSON results from an API call in your Realm database:
```swift
if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {
DispatchQueue.main.async {
$rates.append(decodedResponse)
}
}
```
We've focussed on using a read-only API. Things get complicated very quickly when your app starts modifying data through the API. What should your app do when your device is offline?
- Don't allow users to do anything that requires an update?
- Allow local updates and maintain a list of changes that you iterate through when back online?
- Will some changes you accept from the user have to be backed out once back online and you discover conflicting changes from other users?
If you need to modify data that's accessed by other users or devices, consider MongoDB Realm Sync as an alternative to accessing APIs directly from your app. It will save you thousands of lines of tricky code!
The API you're using may throttle access or charge per request. You can create a backend MongoDB Realm app to fetch the data from the API just once, and then use Realm Sync to handle the fan-out to all instances of your mobile app.
If you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. To keep up with the latest Realm news, follow @realm on Twitter and join the Realm global community.
| md | {
"tags": [
"Swift",
"Realm",
"iOS",
"Mobile"
],
"pageDescription": "Learn how to make your mobile app always-on, even when you can't connect to your API.",
"contentType": "Code Example"
} | Build Offline-First Mobile Apps by Caching API Results in Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-atlas-mongodb-query-language-mql | created | # Getting Started with Atlas and the MongoDB Query API
> MQL is now MongoDB Query API! Learn more about this flexible, intuitive way to work with your data.
Depending where you are in your development career or the technologies
you've already become familiar with, MongoDB can seem quite
intimidating. Maybe you're coming from years of experience with
relational database management systems (RDBMS), or maybe you're new to
the topic of data persistence in general.
The good news is that MongoDB isn't as scary as you might think, and it
is definitely a lot easier when paired with the correct tooling.
In this tutorial, we're going to see how to get started with MongoDB
Atlas for hosting our database
cluster and the MongoDB Query Language (MQL) for interacting with our
data. We won't be exploring any particular programming technology, but
everything we see can be easily translated over.
## Hosting MongoDB Clusters in the Cloud with MongoDB Atlas
There are a few ways to get started with MongoDB. You could install a
single instance or a cluster of instances on your own hardware which you
manage yourself in terms of updates, scaling, and security, or you can
make use of MongoDB Atlas which is a database as a service (DBaaS) that
makes life quite a bit easier, and in many cases cheaper, or even free.
We're going to be working with an M0 sized Atlas cluster, which is part
of the free tier that MongoDB offers. There's no expiration to this
cluster and there's no credit card required in order to deploy it.
### Deploying a Cluster of MongoDB Instances
Before we can use MongoDB in our applications, we need to deploy a
cluster. Create a MongoDB Cloud account and sign into it.
Choose to **Create a New Cluster** if not immediately presented with the
option, and start selecting the features of your cluster.
You'll be able to choose between AWS, Google Cloud, and Azure for
hosting your cluster. It's important to note that these cloud providers
are for location only. You won't ever have to sign into the cloud
provider or manage MongoDB through them. The location is important for
latency reasons in case you have your applications hosted on a
particular cloud provider.
If you want to take advantage of a free cluster, make sure to choose M0
for the cluster size.
It may take a few minutes to finish creating your cluster.
### Defining Network Access Rules for the NoSQL Database Cluster
With the cluster created, you won't be able to access it from outside of
the web dashboard by default. This is a good thing because you don't
want random people on the internet attempting to gain unauthorized
access to your cluster.
To be able to access your cluster from the CLI, a web application, or
Visual Studio Code, which we'll be using later, you'll need to set up a
network rule that allows access from a particular IP address.
You have a few options when it comes to adding an IP address to the
allow list. You could add your current IP address which would be useful
for accessing from your local network. You could provide a specific IP
address which is useful for applications you host in the cloud
somewhere. You can also supply **0.0.0.0/0** which would allow full
network access to anyone, anywhere.
I'd strongly recommend not adding **0.0.0.0/0** as a network rule to
keep your cluster safe.
With IP addresses on the allow list, the final step is to create an
application user.
### Creating Role-Based Access Accounts to Interact with Databases in the Cluster
It is a good idea to create role-based access accounts to your MongoDB
Atlas cluster. This means instead of creating one super user like the
administrator account, you're creating a user account based on what the
user should be doing.
For example, maybe we create a user that has access to your accounting
databases and another user that has access to your employee database.
Within Atlas, choose the **Database Access** tab and click **Add New
Database User** to add a new user.
While you can give a user access to every database, current and future,
it is best if you create users that have more refined permissions.
It's up to you how you want to create your users, but the more specific
the permissions, the less likely your cluster will become compromised by
malicious activity.
Need some more guidance around creating an Atlas cluster? Check out
this
tutorial
by Maxime Beugnet on the subject.
With the cluster deployed, the network rules in place for your IP
address, and a user created, we can focus on some of the basics behind
the MongoDB Query Language (MQL).
## Querying Database Collections with the MongoDB Query Language (MQL)
To get the most out of MongoDB, you're going to need to become familiar
with the MongoDB Query Language (MQL). No, it is not like SQL if you're
familiar with relational database management systems (RDBMS), but it
isn't any more difficult. MQL can be used from the CLI, Visual Studio
Code, the development drivers, and more. You'll get the same experience
no matter where you're trying to write your queries.
In this section, we're going to focus on Visual Studio Code and the
MongoDB
Playground
extension for managing our data. We're doing this because Visual Studio
Code is common developer tooling and it makes for an easy to use
experience.
### Configuring Visual Studio Code for the MongoDB Playground
While we could write our queries out of the box with Visual Studio Code,
we won't be able to interact with MongoDB in a meaningful way until we
install the MongoDB
Playground
extension.
Within Visual Studio Code, bring up the extensions explorer and search
for **MongoDB**.
Install the official extension with MongoDB as the publisher.
With the extension installed, we'll need to interact with it from within
Visual Studio Code. There are a few ways to do this, but we're going to
use the command palette.
Open the command palette (Cmd + Shift + P if you're on macOS), and
enter **MongoDB: Connect** into the input box.
You'll be able to enter the information for your particular MongoDB
cluster. Once connected, we can proceed to creating a new Playground. If
you've already saved your information into the Visual Studio Code
extension and need to connect later, you can always enter **Show
MongoDB** in the command palette and connect.
Assuming we're connected, enter **Create MongoDB Playground** in the
command palette to create a new file with boilerplate MQL.
### Defining a Data Model and a Use Case for MongoDB
Rather than just creating random queries that may or may not be helpful
or any different from what you'd find in the documentation, we're going to
come up with a data model to work with and then interact with that data
model.
I'm passionate about gaming, so our example will be centered around some
game data that might look like this:
``` json
{
"_id": "nraboy",
"name": "Nic Raboy",
"stats": {
"wins": 5,
"losses": 10,
"xp": 300
},
"achievements":
{ "name": "Massive XP", "timestamp": 1598961600000 },
{ "name": "Instant Loss", "timestamp": 1598896800000 }
]
}
```
The above document is just one of an endless possibility of data models
for a document in any given collection. To make the example more
exciting, the above document has a nested object and a nested array of
objects, something that demonstrates the power of JSON, but without
sacrificing how easy it is to work with in MongoDB.
The document above is often referred to as a user profile document in
game development. You can learn more about user profile stores in game
development through a previous Twitch stream on the subject.
As of right now, it's alright if your cluster has no databases,
collections, or even documents that look like the above document. We're
going to get to that next.
### Create, Read, Update, and Delete (CRUD) Documents in a Collection
When working with MongoDB, you're going to get quite familiar with the
create, read, update, and delete (CRUD) operations necessary when
working with data. To reiterate, we'll be using Visual Studio Code to do
all this, but any CRUD operation you do in Visual Studio Code can be
taken into your application code, scripts, and similar.
Earlier you were supposed to create a new MongoDB Playground in Visual
Studio Code. Open it, remove all the boilerplate MQL, and add the
following:
``` javascript
use("gamedev");
db.profiles.insertOne({
"_id": "nraboy",
"name": "Nic Raboy",
"stats": {
"wins": 5,
"losses": 10,
"xp": 300
},
"achievements":
{ "name": "Massive XP", "timestamp": 1598961600000 },
{ "name": "Instant Loss", "timestamp": 1598896800000 }
]
});
```
In the above code we are declaring that we want to use a **gamedev**
database in our queries that follow. It's alright if such a database
doesn't already exist because it will be created at runtime.
Next we're using the `insertOne` operation in MongoDB to create a single
document. The `db` object references the **gamedev** database that we've
chosen to use. The **profiles** object references a collection that we
want to insert our document into.
The **profiles** collection does not need to exist prior to inserting
our first document.
It doesn't matter what we call our database or our collection, as long as the names make sense to you and the use case you're trying to fulfill.
Within Visual Studio Code, you can highlight the above MQL and choose
**Run Selected Lines From Playground** or use the command palette to
run the entire playground. After running the MQL, check out your MongoDB
Atlas cluster and you should see the database, collection, and document
created.
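If you want to seed the collection with several documents in one operation, the playground also supports `insertMany`, which takes an array of documents. Here's a quick sketch; the extra player profiles below are invented purely for illustration:

``` javascript
use("gamedev");

// Insert two additional (made up) player profiles in a single operation
db.profiles.insertMany([
    {
        "_id": "mraboy",
        "name": "Maria Raboy",
        "stats": { "wins": 12, "losses": 3, "xp": 950 },
        "achievements": []
    },
    {
        "_id": "jdoe",
        "name": "Jane Doe",
        "stats": { "wins": 0, "losses": 1, "xp": 25 },
        "achievements": []
    }
]);
```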
More information on the `insert` functions can be found in the official documentation.
If you'd rather verify the document was created without actually
navigating through MongoDB Atlas, we can move onto the next stage of the
CRUD operation journey.
Within the playground, add the following:
``` javascript
use("gamedev");
db.profiles.find({});
```
The above `find` operation will return all documents in the **profiles**
collection. If you wanted to narrow the result-set, you could provide
filter criteria instead of providing an empty object. For example, try
executing the following instead:
``` javascript
use("gamedev");
db.profiles.find({ "name": "Nic Raboy" });
```
The above `find` operation will only return documents where the `name`
field matches exactly `Nic Raboy`. We can do better though. What about
finding documents where certain fields sit within a certain range?
Take the following for example:
``` javascript
use("gamedev");
db.profiles.find(
{
"stats.wins": {
"$gt": 6
},
"stats.losses": {
"$lt": 11
}
}
);
```
The above `find` operation says that we only want documents that have
more than six wins and less than eleven losses. If we were running the
above query with the current dataset shown earlier, no results would be
returned because nothing satisfies the conditions.
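Filters can also be combined with cursor methods such as `sort` and `limit` to shape the results. The following is just a sketch; the 100 XP threshold is an arbitrary value chosen for the example:

``` javascript
use("gamedev");

// Return the three most experienced players, highest XP first
db.profiles.find({ "stats.xp": { "$gte": 100 } })
    .sort({ "stats.xp": -1 })
    .limit(3);
```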
You can learn more about the filter operators that can be used in the
official
documentation.
So we've got at least one document in our collection and have seen the
`insertOne` and `find` operators. Now we need to take a look at the
update and delete parts of CRUD.
Let's say that we finished a game and the `stats.wins` field needs to be
updated. We could do something like this:
``` javascript
use("gamedev")
db.profiles.update(
{ "_id": "nraboy" },
{ "$inc": { "stats.wins": 1 } }
);
```
The first object in the above `update` operation is the filter. This is
the same filter that can be used in a `find` operation. Once we've
filtered for documents to update, the second object is the mutation. In
the above example, we're using the `$inc` operator to increase the
`stats.wins` field by a value of one.
There are quite a few operators that can be used when updating
documents. You can find more information in the official
documentation.
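For example, the `$push` operator appends a new element to an array field, which suits the `achievements` array in our profile documents. A small sketch follows; the achievement itself is made up for the example:

``` javascript
use("gamedev");

// Append a (hypothetical) achievement to the player's achievements array
db.profiles.updateOne(
    { "_id": "nraboy" },
    { "$push": {
        "achievements": { "name": "First Win", "timestamp": 1599000000000 }
    } }
);
```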
Maybe we don't want to use an update operator at all and instead want to supply the replacement fields directly. We can do something like the following:
``` javascript
use("gamedev")
db.profiles.update(
{ "_id": "nraboy" },
{ "name": "Nicolas Raboy" }
);
```
The above query will filter for documents with an `_id` of `nraboy` and then replace the matched document's fields with the object provided, in this case a document containing only a `name` of "Nicolas Raboy". Be careful: because no update operator is used, any fields not included in the replacement, such as `stats` and `achievements`, are removed. If you only want to change or add the `name` field while leaving the rest of the document intact, use the `$set` operator instead.
Got a document you want to remove? Let's look at the final part of the
CRUD operators.
Add the following to your playground:
``` javascript
use("gamedev")
db.profiles.remove({ "_id": "nraboy" })
```
The above `remove` operation uses a filter, just like what we saw with
the `find` and `update` operations. We provide it a filter of documents
to find and in this circumstance, any matches will be removed from the
**profiles** collection.
To learn more about the `remove` function, check out the official
documentation.
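If you'd rather be explicit about how many documents get removed, the `deleteOne` and `deleteMany` functions accept the same style of filter. A quick sketch, where the XP threshold in the second call is arbitrary:

``` javascript
use("gamedev");

// Remove a single document matching the filter
db.profiles.deleteOne({ "_id": "nraboy" });

// Remove every document matching the filter
db.profiles.deleteMany({ "stats.xp": { "$lt": 10 } });
```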
### Complex Queries with the MongoDB Data Aggregation Pipeline
For a lot of applications, you might only need to ever use basic CRUD
operations when working with MongoDB. However, when you need to start
analyzing your data or manipulating your data for the sake of reporting,
running a bunch of CRUD operations might not be your best bet.
This is where a MongoDB data aggregation pipeline might come into use.
To get an idea of what a data aggregation pipeline is, think of it as a
series of data stages that must complete before you have your data.
Let's use a better example. Let's say that you want to look at your
**profiles** collection and determine all the players who received a
certain achievement after a certain date. However, you only want to know
the specific achievement and basic information about the player. You
don't want to know generic information that matched your query.
Take a look at the following:
``` javascript
use("gamedev")
db.profiles.aggregate([
{ "$match": { "_id": "nraboy" } },
{ "$unwind": "$achievements" },
{
"$match": {
"achievements.timestamp": {
"$gt": new Date().getTime() - (1000 * 60 * 60 * 24 * 1)
}
}
},
{ "$project": { "_id": 1, "achievements": 1 }}
]);
```
There are four stages in the above pipeline. First we're doing a
`$match` to find all documents that match our filter. Those documents
are pushed to the next stage of the pipeline. Rather than looking at and
trying to work with the `achievements` field which is an array, we are
choosing to `$unwind` it.
To get a better idea of what this looks like, at the end of the second
stage, any data that was found would look like this:
``` json
[
{
"_id": "nraboy",
"name": "Nic Raboy",
"stats": {
"wins": 5,
"losses": 10,
"xp": 300
},
"achievements": {
"name": "Massive XP",
"timestamp": 1598961600000
}
},
{
"_id": "nraboy",
"name": "Nic Raboy",
"stats": {
"wins": 5,
"losses": 10,
"xp": 300
},
"achievements": {
"name": "Instant Loss",
"timestamp": 1598896800000
}
}
]
```
Notice in the above JSON response that we are no longer working with an
array. We should have only matched on a single document, but the results
are actually two instead of one. That is because the `$unwind` split the
array into numerous objects.
So we've flattened the array, now we're onto the third stage of the
pipeline. We want to match any object in the result that has an
achievement timestamp greater than a specific time. The plan here is to
reduce the result-set of our flattened documents.
The final stage of our pipeline is to output only the fields that we're
interested in. With the `$project` we are saying we only want the `_id`
field and the `achievements` field.
Our final output for this aggregation might look like this:
``` json
[
{
"_id": "nraboy",
"achievements": {
"name": "Instant Loss",
"timestamp": 1598896800000
}
}
]
```
There are quite a few operators when it comes to the data aggregation
pipeline, many of which can do far more extravagant things than the four
pipeline stages that were used for this example. You can learn about the
other operators in the official
documentation.
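For instance, a `$group` stage can summarize data across documents rather than just reshaping them. The sketch below counts how many achievements each player has earned; the `achievementCount` field name is just an illustration:

``` javascript
use("gamedev");

db.profiles.aggregate([
    // Produce one document per achievement per player
    { "$unwind": "$achievements" },
    // Count the achievements for each player
    { "$group": {
        "_id": "$_id",
        "achievementCount": { "$sum": 1 }
    } }
]);
```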
## Conclusion
You just got a taste of what you can do with MongoDB Atlas and the
MongoDB Query Language (MQL). While the point of this tutorial was to
get you comfortable with deploying a cluster and interacting with your
data, you can extend your knowledge and this example by exploring the
programming drivers.
Take the following quick starts for example:
- Quick Start:
Golang
- Quick Start:
Node.js
- Quick Start:
Java
- Quick Start:
C#
In addition to the quick starts, you can also check out the MongoDB
University course,
M121, which focuses
on data aggregation.
As previously mentioned, you can take the same queries between languages
with minimal to no changes between them.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to get started with MongoDB Atlas and the MongoDB Query API.",
"contentType": "Quickstart"
} | Getting Started with Atlas and the MongoDB Query API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/polymorphic-document-validation | created | # Document Validation for Polymorphic Collections
In data modeling design reviews with customers, I often propose a schema where different documents in the same collection contain different types of data. This makes it efficient to fetch related documents in a single, indexed query. MongoDB's flexible schema is great for optimizing workloads in this way, but people can be concerned about losing control of what applications write to these collections.
Customers are often concerned about ensuring that only correctly formatted documents make it into a collection, and so I explain MongoDB's schema validation feature. The question then comes: "How does that work with a polymorphic/single-collection schema?" This post is intended to answer that question — and it's simpler than you might think.
## The banking application and its data
The application I'm working on manages customer and account details. There's a many-to-many relationship between customers and accounts. The app needs to be able to efficiently query customer data based on the customer id, and account data based on either the id of its customer or the account id.
Here's an example of customer and account documents where my wife and I share a checking account but each have our own savings account:
```json
{
"_id": "kjfgjebgjfbkjb",
"customerId": "CUST-123456789",
"docType": "customer",
"name": {
"title": "Mr",
"first": "Andrew",
"middle": "James",
"last": "Morgan"
},
"address": {
"street1": "240 Blackfriars Rd",
"city": "London",
"postCode": "SE1 8NW",
"country": "UK"
},
"customerSince": ISODate("2005-05-20")
}
{
"_id": "jnafjkkbEFejfleLJ",
"customerId": "CUST-987654321",
"docType": "customer",
"name": {
"title": "Mrs",
"first": "Anne",
"last": "Morgan"
},
"address": {
"street1": "240 Blackfriars Rd",
"city": "London",
"postCode": "SE1 8NW",
"country": "UK"
},
"customerSince": ISODate("2003-12-01")
}
{
"_id": "dksfmkpGJPowefjdfhs",
"accountNumber": "ACC1000000654",
"docType": "account",
"accountType": "checking",
"customerId":
"CUST-123456789",
"CUST-987654321"
],
"dateOpened": ISODate("2003-12-01"),
"balance": NumberDecimal("5067.65")
}
{
"_id": "kliwiiejeqydioepwj",
"accountNumber": "ACC1000000432",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-123456789"
],
"dateOpened": ISODate("2005-10-28"),
"balance": NumberDecimal("10341.21")
}
{
"_id": "djahspihhfheiphfipewe",
"accountNumber": "ACC1000000890",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-987654321"
],
"dateOpened": ISODate("2003-12-15"),
"balance": NumberDecimal("10341.89")
}
```
As an aside, these are the indexes I added to make those frequent queries I referred to more efficient:
```javascript
const indexKeys1 = { accountNumber: 1 };
const indexKeys2 = { customerId: 1, accountType: 1 };
const indexOptions1 = { partialFilterExpression: { docType: 'account' }};
const indexOptions2 = { partialFilterExpression: { docType: 'customer' }};
db.getCollection(collection).createIndex(indexKeys1, indexOptions1);
db.getCollection(collection).createIndex(indexKeys2, indexOptions2);
```
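For context, the frequent queries those partial indexes are intended to support look something like the following sketch; the id values are taken from the sample documents above:

```javascript
// Fetch an account by its account number
db.getCollection(collection).find({ docType: 'account', accountNumber: 'ACC1000000654' });

// Fetch a customer by their customer id
db.getCollection(collection).find({ docType: 'customer', customerId: 'CUST-123456789' });
```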
## Adding schema validation
To quote the docs…
> Schema validation lets you create validation rules for your fields, such as allowed data types and value ranges.
>
> MongoDB uses a flexible schema model, which means that documents in a collection do not need to have the same fields or data types by default. Once you've established an application schema, you can use schema validation to ensure there are no unintended schema changes or improper data types.
The validation rules are pretty simple to set up, and tools like Hackolade can make it simpler still — even reverse-engineering your existing documents.
It's simple to imagine setting up a JSON schema validation rule for a collection where all documents share the same attributes and types. But what about polymorphic collections? Even in polymorphic collections, there is structure to the documents. Fortunately, the syntax for setting up the validation rules allows for the required optionality.
I have two different types of documents that I want to store in my `Accounts` collection — `customer` and `account`. I included a `docType` attribute in each document to identify which type of entity it represents.
I start by creating a JSON schema definition for each type of document:
```javascript
const customerSchema = {
required: "docType", "customerId", "name", "customerSince"],
properties: {
docType: { enum: ["customer"] },
customerId: { bsonType: "string"},
name: {
bsonType: "object",
required: ["first", "last"],
properties: {
title: { enum: ["Mr", "Mrs", "Ms", "Dr"]},
first: { bsonType: "string" },
middle: { bsonType: "string" },
last: { bsonType: "string" }
}
},
address: {
bsonType: "object",
required: ["street1", "city", "postCode", "country"],
properties: {
street1: { bsonType: "string" },
street2: { bsonType: "string" },
postCode: { bsonType: "string" },
country: { bsonType: "string" }
}
},
customerSince: {
bsonType: "date"
}
}
};
const accountSchema = {
required: ["docType", "accountNumber", "accountType", "customerId", "dateOpened", "balance"],
properties: {
docType: { enum: ["account"] },
accountNumber: { bsonType: "string" },
accountType: { enum: ["checking", "savings", "mortgage", "loan"] },
customerId: { bsonType: "array" },
dateOpened: { bsonType: "date" },
balance: { bsonType: "decimal" }
}
};
```
Those definitions define what attributes should be in the document and what types they should take. Note that fields can be optional — such as `name.middle` in the `customer` schema.
It's then a simple matter of using the `oneOf` JSON schema operator to allow documents that match either of the two schema:
```javascript
const schemaValidation = {
$jsonSchema: { oneOf: [ customerSchema, accountSchema ] }
};
db.createCollection(collection, {validator: schemaValidation});
```
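If the collection already exists, there's no need to drop and recreate it; the same validator document can be applied (or updated later) with the `collMod` command. A minimal sketch, assuming the collection is named `Accounts`:

```javascript
db.runCommand({
    collMod: "Accounts",
    validator: schemaValidation,
    // Reject any insert or update that fails validation
    validationLevel: "strict",
    validationAction: "error"
});
```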
I wanted to go a stage further and add some extra, semantic validations:
* For `customer` documents, the `customerSince` value can't be later than the current time.
* For `account` documents, the `dateOpened` value can't be later than the current time.
* For savings accounts, the `balance` can't fall below zero.
These expressions implement those checks:
```javascript
const badCustomer = {
"$expr": { "$gt": ["$customerSince", "$$NOW"] }
};
const badAccount = {
$or: [
{
accountType: "savings",
balance: { $lt: 0}
},
{
"$expr": { "$gt": ["$dateOpened", "$$NOW"]}
}
]
};
const schemaValidation = {
"$and": [
{ $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},
{ $nor: [
badCustomer,
badAccount
]
}
]
};
```
I updated the collection validation rules to include these new checks:
```javascript
const schemaValidation = {
"$and": [
{ $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},
{ $nor: [
badCustomer,
badAccount
]
}
]
};
db.createCollection(collection, {validator: schemaValidation} );
```
If you want to recreate this in your own MongoDB database, then just paste this into your MongoDB Playground in VS Code:
```javascript
const cust1 = {
"_id": "kjfgjebgjfbkjb",
"customerId": "CUST-123456789",
"docType": "customer",
"name": {
"title": "Mr",
"first": "Andrew",
"middle": "James",
"last": "Morgan"
},
"address": {
"street1": "240 Blackfriars Rd",
"city": "London",
"postCode": "SE1 8NW",
"country": "UK"
},
"customerSince": ISODate("2005-05-20")
}
const cust2 = {
"_id": "jnafjkkbEFejfleLJ",
"customerId": "CUST-987654321",
"docType": "customer",
"name": {
"title": "Mrs",
"first": "Anne",
"last": "Morgan"
},
"address": {
"street1": "240 Blackfriars Rd",
"city": "London",
"postCode": "SE1 8NW",
"country": "UK"
},
"customerSince": ISODate("2003-12-01")
}
const futureCustomer = {
"_id": "nansfanjnDjknje",
"customerId": "CUST-666666666",
"docType": "customer",
"name": {
"title": "Mr",
"first": "Wrong",
"last": "Un"
},
"address": {
"street1": "240 Blackfriars Rd",
"city": "London",
"postCode": "SE1 8NW",
"country": "UK"
},
"customerSince": ISODate("2025-05-20")
}
const acc1 = {
"_id": "dksfmkpGJPowefjdfhs",
"accountNumber": "ACC1000000654",
"docType": "account",
"accountType": "checking",
"customerId":
"CUST-123456789",
"CUST-987654321"
],
"dateOpened": ISODate("2003-12-01"),
"balance": NumberDecimal("5067.65")
}
const acc2 = {
"_id": "kliwiiejeqydioepwj",
"accountNumber": "ACC1000000432",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-123456789"
],
"dateOpened": ISODate("2005-10-28"),
"balance": NumberDecimal("10341.21")
}
const acc3 = {
"_id": "djahspihhfheiphfipewe",
"accountNumber": "ACC1000000890",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-987654321"
],
"dateOpened": ISODate("2003-12-15"),
"balance": NumberDecimal("10341.89")
}
const futureAccount = {
"_id": "kljkdfgjkdsgjklgjdfgkl",
"accountNumber": "ACC1000000999",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-987654333"
],
"dateOpened": ISODate("2030-12-15"),
"balance": NumberDecimal("10341.89")
}
const negativeSavings = {
"_id": "shkjahsjdkhHK",
"accountNumber": "ACC1000000666",
"docType": "account",
"accountType": "savings",
"customerId": [
"CUST-9837462376"
],
"dateOpened": ISODate("2005-10-28"),
"balance": NumberDecimal("-10341.21")
}
const indexKeys1 = { accountNumber: 1 }
const indexKeys2 = { customerId: 1, accountType: 1 }
const indexOptions1 = { partialFilterExpression: { docType: 'account' }}
const indexOptions2 = { partialFilterExpression: { docType: 'customer' }}
const customerSchema = {
required: ["docType", "customerId", "name", "customerSince"],
properties: {
docType: { enum: ["customer"] },
customerId: { bsonType: "string"},
name: {
bsonType: "object",
required: ["first", "last"],
properties: {
title: { enum: ["Mr", "Mrs", "Ms", "Dr"]},
first: { bsonType: "string" },
middle: { bsonType: "string" },
last: { bsonType: "string" }
}
},
address: {
bsonType: "object",
required: ["street1", "city", "postCode", "country"],
properties: {
street1: { bsonType: "string" },
street2: { bsonType: "string" },
postCode: { bsonType: "string" },
country: { bsonType: "string" }
}
},
customerSince: {
bsonType: "date"
}
}
}
const accountSchema = {
required: ["docType", "accountNumber", "accountType", "customerId", "dateOpened", "balance"],
properties: {
docType: { enum: ["account"] },
accountNumber: { bsonType: "string" },
accountType: { enum: ["checking", "savings", "mortgage", "loan"] },
customerId: { bsonType: "array" },
dateOpened: { bsonType: "date" },
balance: { bsonType: "decimal" }
}
}
const badCustomer = {
"$expr": { "$gt": ["$customerSince", "$$NOW"] }
}
const badAccount = {
$or: [
{
accountType: "savings",
balance: { $lt: 0}
},
{
"$expr": { "$gt": ["$dateOpened", "$$NOW"]}
}
]
}
const schemaValidation = {
"$and": [
{ $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},
{ $nor: [
badCustomer,
badAccount
]
}
]
}
const database = 'MongoBank';
const collection = 'Accounts';
use(database);
db.getCollection(collection).drop();
db.createCollection(collection, {validator: schemaValidation} )
db.getCollection(collection).replaceOne({"_id": cust1._id}, cust1, {upsert: true});
db.getCollection(collection).replaceOne({"_id": cust2._id}, cust2, {upsert: true});
db.getCollection(collection).replaceOne({"_id": acc1._id}, acc1, {upsert: true});
db.getCollection(collection).replaceOne({"_id": acc2._id}, acc2, {upsert: true});
db.getCollection(collection).replaceOne({"_id": acc3._id}, acc3, {upsert: true});
// The following 3 operations should fail
db.getCollection(collection).replaceOne({"_id": negativeSavings._id}, negativeSavings, {upsert: true});
db.getCollection(collection).replaceOne({"_id": futureCustomer._id}, futureCustomer, {upsert: true});
db.getCollection(collection).replaceOne({"_id": futureAccount._id}, futureAccount, {upsert: true});
db.getCollection(collection).dropIndexes();
db.getCollection(collection).createIndex(indexKeys1, indexOptions1);
db.getCollection(collection).createIndex(indexKeys2, indexOptions2);
```
## Conclusion
I hope that this short article has shown how easy it is to use schema validations with MongoDB's polymorphic collections and single-collection design pattern.
I didn't go into much detail about why I chose the data model used in this example. If you want to know more (and you should!), then here are some great resources on data modeling with MongoDB:
* Daniel Coupal and Ken Alger’s excellent series of blog posts on MongoDB schema patterns
* Daniel Coupal and Lauren Schaefer’s equally excellent series of blog posts on MongoDB anti-patterns
* MongoDB University Course, M320 - MongoDB Data Modeling | md | {
"tags": [
"MongoDB"
],
"pageDescription": "A great feature of MongoDB is its flexible document model. But what happens when you want to combine that with controls on the content of the documents in a collection? This post shows how to use document validation on polymorphic collections.",
"contentType": "Article"
} | Document Validation for Polymorphic Collections | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-meetup-swiftui-testing-and-realm-with-projections | created | # Realm Meetup - SwiftUI Testing and Realm With Projections
Didn't get a chance to attend the SwiftUI Testing and Realm with Projections Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
:youtube[]{vid=fxar75-7ZbQ}
In this meetup, Jason Flax, Lead iOS Engineer, makes a return to explain how the testing landscape has changed for iOS apps using the new SwiftUI framework. Learn how to write unit tests with SwiftUI apps powered by Realm, where to put your business logic with either ViewModels or in an app following powered by Model-View-Intent, and witness the power of Realm's new Projection feature.
In this 50-minute recording, Jason spends about 40 minutes presenting:
- Testing Overview for iOS Apps
- What's Changed in Testing from UIKit to SwiftUI
- Unit Tests for Business Logic - ViewModels or MVI?
- Realm Projections - Live Realm Objects that Power your View
After this, we have about 10 minutes of live Q&A with Ian & Jason and our community. For those of you who prefer to read, below we have a full transcript of the meetup too.
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
### Transcript
(*As this is verbatim, please excuse any typos or punctuation errors!*)
**Ian:**
We’re going to talk about our integration into SwiftUI, around the Swift integration into SwiftUI and how we're making that really tight and eliminating boilerplate for developers. We're also going to show off a little bit of a feature that we're thinking about called Realm projections. And so we would love your feedback on this new functionality. We have other user group conference meetings coming up in the next few weeks. So, we're going to be talking about Realm JavaScript for React Native applications next week, following that we're talking about how to integrate with the Realm cloud and AWS EventBridge and then later on at the end of this month, we will have two other engineers from the iOS team talk about key path filtering and auto open, which will be new functionality that we deliver as part of the Realm Swift SDK.
We also have MongoDB.live, this is taking place on July 13th and 14th. This is a free event and we have a whole track of talks that are dedicated to mobile and mobile development. So, you don't need to know anything about MongoDB as a server or anything like that. These talks will be purely focused on mobile development. So you can definitely join that and get some benefit if you're just a mobile developer. A little bit about housekeeping here. This is the Bevy platform. In a few slides, I'm going to turn it back over to Jason. Jason's going to run through a presentation. If you have any questions during the presentation, there's a little chat box in the window, so just put them in there. We have other team members that are part of the Realm team that can answer them for you. And then at the end, I'll run through them as well as part of a Q&A session that you can ask any questions.
Also, if you want to make this more interactive, we're happy to have you come on to the mic and do your camera and you can ask a question live as well. So, please get connected with us. You can join our forums.realm.io. Ask any questions that you might have. We also have our community get hubs where you can file an issue. Also, if you want to win some free swag, you can go on our Twitter and tweet about this event or upcoming events. We will be sending swag for users that tweet about us. And without further ado, I will stop sharing my screen and turn it over to Jason.
**Jason:**
Hello, everyone. Hope everyone's doing well. I'm going to figure out how to share my screen with this contraption. Can people see my screen?
**Ian:**
I can see it.
**Jason:**
Cool stuff. All right. Is the thing full screen? Ian?
**Ian:**
Sorry. I was muted, but I was raising my finger.
**Jason:**
Yeah, I seem to always do these presentations when I'm visiting the States. I normally live in Dublin and so I'm out of my childhood bedroom right now. So, I don't have all of the tools I would normally have. For those of you that do not know me, my name is Jason Flax. I'm the lead engineer on the Realm Cocoa team. I've been at MongoDB for about five years, five years actually in three days, which is crazy. And we've been working with Realm for about two years now since the acquisition. And we've been trying to figure out how to better integrate Realm into SwiftUI, into all the new stuff coming out so that it's easier for people to use. We came up with a feature not too long ago to better integrate with the actual life cycle of SwiftUI ideas.
That's the ObservedRealmObject out observed results at state Realm object, the property rappers that hook into the view make things easy. We gave a presentation on the architectures that we want to see people using what SwiftUI, ruffled some feathers by saying that everybody should rewrite 50,000 lines of code and change the architecture that we see fit with SwiftUI. But a lot of people are mainly asking about testing. How do you test with SwiftUI? There aren't really good standards and practices out there yet. It's two years old. And to be honest, parts of it still feel a bit preview ish. So, what we want to do today is basically to go over, why do we test in the first place? How should you be testing with Realm?
How should you be testing the SwiftUI and Realm? What does that look like in a real world scenario? And what's coming next for Realm to better help you out in the future? Today's agenda. Why bother testing? We've all worked in places where testing doesn't happen. I encourage everybody to test their code. How to test a UI application? We're talking about iOS, macOS, TBOS, watchOS, they would be our primary users here. So we are testing a UI application. Unit and integration testing, what those are, how they differ, how Realm fits in? Testing your business logic with Realm. We at least internally have a pretty good idea of where we see business logic existing relative to the database where that sits between classes and whatnot. And then finally a sneak peek for Projections and a Q&A.
So, Projections is a standalone feature. I'll talk more about it later, but it should greatly assist in this case based on how we've seen people using Realm in SwiftUI. And for the Q&A part, we really want to hear from everybody in the audience. Testing, I wouldn't call it a hotly contested subject but it's something that we sometimes sit a bit too far removed from not building applications every day. So, it's really important that we get your feedback so that we can build better features or provide better guidance on how to better integrate Realm into your entire application life cycle. So why bother testing? Structural integrity, minimize bugs, prevent regressions, improve code quality and creating self-documenting code, though that turned to be dangerous as I have seen it used before, do not write documentation at all. I won't spend too long on this slide.
I can only assume that if you're here at the SwiftUI testing talk that you do enjoy the art of testing your code. But in general, it's going to create better code. I, myself and personal projects test my code. But I certainly didn't when I first started out as a software engineer, I was like, "Ah, sure. That's a simple function there. It'll never break." Lo and behold, three months later, I have no idea what that function is. I have no idea what it did, I was supposed to do and now I have a broken thing that I have to figure out how to fix and I'll spend an extra week on it. It's not fun for anybody involved. So, gesture code. How to test a UI application. That is a UI application. So, unit tests, unit tests are going to be your most basic test.
It tests functions that test simple bodies of work. They're going to be your smallest test, but you're going to a lot of them. And I promise that I'll try to go quickly through the basics testing for those that are more seasoned here. But unit tests are basically input and output. If I give you this, I expect this back in return. I expect the state of this to look like this based on these parameters. Probably, some of our most important tests, they keep the general structure of the code sound. Integration tests, these are going to, depending on the context of how you're talking about it could be integrating with the backend, it could be integrating with the database. Today, we're going to focus on the latter. But these are the tests that actually makes sure that some of the like external moving parts are working as you'd expect them to work.
Acceptance tests are kind of a looser version of integration tests. I won't really be going over them today. End-to-end tests, which can be considered what I was talking about earlier, hitting an actual backend. UI testing can be considered an end to end test if you want to have sort of a loose reasoning about it or UI testing that actually tests the back. And it basically does the whole system work? I have a method that sends this to the server and I get this back, does the thing I get back look correct and is the server state sound? And then smoke tests, these are your final tests where your manager comes to you at 8:00 PM on the day that you were supposed to ship the thing and he's like, "Got to get it out there.
Did you test it?" And you're like, "Oh, we smoke tested." It's the last few checks I suppose. And then performance testing, which is important in most applications, making sure that everything is running as it should, everything is running as quickly as it should. Nothing is slowing it down where it shouldn't. This can catch a lot of bugs in code. XC test provides some really simple mechanisms for performance testing that we use as well. It'd be fairly common as well, at least for libraries to have regression testing with performance testing, to make sure that code you introduced didn't slow things down 100X because that wouldn't be fun for anyone involved. So, let's start with unit tests. Again, unit tests focus on the smallest possible unit of code, typically a function or class, they're fast to run and you'll usually have a lot of them.
So, the example we're going to be going over today is a really simple application, a library application. There's going to be a library. There's going to be users. Users can borrow books and use just can return book. I don't know why I pick this, seemed like a nice thing that wasn't a to do app. Let's start going down and just explaining what's happening here. You have the library, you have an error enum, which is a fairly common thing to do in Swift. Sorry. You have an array of books. You have an array of member IDs. These are people with assumably library cards. I don't know, that's the thing in every country. You're going to have initialize it, that takes in an array of books and an array of member IDs that initialize the library to be in the correct state.
You're going to have a borrow method that is going to take an ISBAN, which is ... I don't exactly remember what it stands for. International something book number, it's the internationally recognized idea, the book and then the library memberUid, it is going to be a throwing method that returns a book. And just to go over to the book for a second, a book contains an ISBAN, ID, a title and an author. The borrow method is going to be the main thing that we look at here. This is an individual body of work, there's clear input and output. It takes an ISBAN and the library memberUid, and it gives you back a book if all was successful. Let's walk down what this method does and how we want to test it.
Again, receive an ISBAN, received a library memberUid. We're going to check if that book actually exists in the available books. If it doesn't, we throw an error, we're going to check if a member actually exists in our library memberUid, it doesn't, throw an error. If we've gotten to this point, our state is correct. We remove the book from the books array, and we return it back to the color. So, it can be a common mistake to only test the happy path there, I give you the right ISBAN, I give you the right Uid, I get the right book back. We also want to test the two cases where you don't have the correct book, you don't have the correct member. And that the correct error is thrown. So, go to import XC test and write our first unit test.
Throwing method, it is going to ... I'll go line by line. I'm not going to do this for every single slide. But because we're just kind of getting warmed up here, it'll make it clear what I'm talking about as we progress with the example because we're going to build on the example as the presentation goes on. So, we're going to create a new library. It's going to have an empty array of books and empty memberUids. We're going to try to borrow a book with an ISBAN that doesn't exist in the array and a random Uid which naturally does not exist in the empty number ID. That's going to throw an error. We're asserting that it throws an error. This is bad path, but it's good that we're testing it. We should also be checking that it's the correct error.
I did not do that to save space on the slide. The wonders of presenting. After that, we're going to create a library now with a book, but not a Uid, that book is going to make sure that the first check passes, but the lack of memberUids is going to make sure that the second check fails. So we're going to try to borrow that book again. That book is Neuromancer, which is great book. Everybody should read it. Add it to your summer reading lists, got plenty of time on our hands. We're going to assert that, that throws an error. After that we're going to actually create the array of memberUids finally, we're going to create another library with the Neuromancer book and the memberUids properly initialized this time. And we're going to finally successfully borrow the book using the first member of that members array of IDs.
That book, we're going to assert that has been the correct title and the correct author. We tested both bad paths in the happy path. There's probably more we could have tested here. We could have tested the library, initialized it to make sure that the state was set up soundly. That gets a bit murky though, when you have private fields, generally a big no, no in testing is to avoid unprivate things that should be private. That means that you're probably testing wrong or something was structured wrong. So for the most part, this test is sound, this is the basic unit test. Integration tests, integration tests ensure that the interlocking pieces of your application work together as designed. Sometimes this means testing layers between classes, and sometimes this means testing layer between your database and application. So considering that this is the Realm user group, let's consider Realm as your object model and the database that we will be using and testing against.
So, we're going to switch some things around to work with Realm. It's not going to be radically different than what we had before, but it's going to be different enough that it's worth going over. So our book and library classes are going to inherit an object now, which is a Realm type that you inherit from so that you can store that type in the database. Everything is going to have our wonderful Abruzzi Syntex attached to it, which is going away soon, by the way, everyone, which is great. The library class has changed slightly and so far is that has a library ID now, which is a Uid generated initialization. It has a Realm list of available books and a Realm list of library members. Library member is another Realm object that has a member ID, which is a Uid generated on initialization.
A list of borrowed books, as you can borrow books from the library and the member ID is the primary key there. We are going to change our borrow method on the library to work with Realm now. So it's still going to stick and it has been in a memberUid. This is mainly because we're slowly migrating to the world where the borrow function is going to get more complex. We're going to have a check here to make sure that the Realm is not invalidated. So every Realm object has an exposed Realm property on it that you can use. That is a Realm that is associated with that object. We're going to make sure that that's valid. We're going to check if the ISBAN exists within our available books list. If that passes, we're going to check that the member ID exists within our members list of library members. We're going to grab the book from the available books list. We're going to remove it from the available books list and we're going to return it to the color. As you can see, this actually isn't much different than the previous bit of code.
The main difference here is that we're writing to a Realm. Everything else is nearly the same, minor API differences. We're also going to add a return method to the library member class that is new. You should always return your library books. There's fines if you don't. So it's going to take a book and the library, we're going to, again, make sure that the Realm is not validated. We're going to make sure that our list of borrowed books because we're borrowing books from a library contains the correct book. If it does, we're going to remove it from our borrowed books list and we're going to append it back to the list of bell books in the library. So, what we're already doing here in these two methods is containing business logic. We're containing these things that actually change our data and in effect we'll eventually when we actually get to that part change the view.
So, let's test the borrow function now with Realm. Again, stepping through line by line, we're going to create an in-memory Realm because we don't actually want to store this stuff, we don't want state to linger between tests. We're going to open the Realm. We're going to create that Neuromancer book again. We're going to create a library member this time. We're going to create a library. We don't need to pass anything in this time as the state is going to be stored by the Realm and should be messed with from the appropriate locations, not necessarily on initialization, this is a choice.
This is not a mandate simplicity sake or a presentation. We're going to add that library to the Realm and we're going to, because there's no books in the library or members in the library assert that it's still froze that error. We don't have that book. Now, we're going to populate the library with the books in a right transaction. So, this is where Rome comes into play. We're going to try to borrow again, but because it doesn't have any members we're going to throw the air. Let's add members. Now we can successfully borrow the book with the given member and the given book, we're going to make sure that the ISBAN and title and author are sound, and that's it. It's nearly the same as the previous test.
But this is a super simple example and let's start including a view and figuring out how that plays in with your business logic and how Realm fits in all that. Testing business logic with Realm. Here's a really simple library view. There's two observed objects on it, a library and a library member. They should actually be observed Realm objects but it's not a perfect presentation. And so for each available book in the library, display a text for the title, a text for the author and a button to borrow the book. We're going to try to borrow, and do catch. If it succeeds, great. If it doesn't, we should actually show an error. I'm not going to put that in presentation code and we're going to tag the button with an identifier to be able to test against it later.
The main thing that we want to test in this view is the borrow button. It's the only thing that actually isn't read only. We should also test the read only things to make sure that the text user sound, but for again, second presentation, make sure that borrowing this book removes the book from the library and gives it to the member. So the thing that we at Realm have been talking about a lot recently is this MBI pattern, it meshes nicely with SwiftUI because of two-way data binding because of the simplicity of SwiftUI and the fact that we've been given all of the scaffolding to make things simpler, where we don't necessarily need few models, we don't necessarily need routers. And again, you might, I'm not mandating anything here, but this is the simplest way. And you can create a lot of small components and a lot of very clear methods on extensions on your model that make sure that this is fairly sound.
You have a user, the user has intent. They tap a button. That button changes something in the model. The model changes something in the view, the user sees the view fairly straightforward. It's a circular pattern, it's super useful in simpler circumstances. And as I found through my own dog fooding, in a new application, I can't speak to applications that have to migrate to SwiftUI, but in a new application, you can intentionally keep things simple regardless of the size of your code base, keep things small, keep your components small, create objects as you see fit, have loads of small functions that do exactly what they're supposed to do relative to that view, still a way to keep things simple. And in the case of our application, the user hits the borrow button. It's the tech button that we have.
It's going to borrow from the library from that function, that function is going to change our data. That data is going to be then reflected in the view via the Realm. The Realm is going to automatically update the view and the user's going to see that view. Fairly straightforward, fairly simple, again, works for many simple use cases. And yeah, so we're also going to add here a method for returning books. So it's the same exact thing. It's just for the member. I could have extracted this out, but wanted to show everybody it's the same thing. Member.borrowed books, texts for the title, text for the author, a return button with an accessibility identifier called return button that actually should have been used in the previous slide instead of tag. And that member is going to return that book to the library.
We also want to test that and for us in the case of the test that I'm about to show, it's kind of the final stage in the test where not only are we testing that we can borrow the book properly, but testing that we can back properly by pressing the borrow and return. So we're going to create a simple UI test here. The unit tests here that should be done are for the borrow and return methods. So, the borrow tests, we've already done. The return test, I'm going to avoid showing because it's the exact same as the borrow test, just in the case of the user. But having UI test is also really nice here because the UI in the case of MDI is the one that actually triggers the intent, they trigger what happens to the view model ... the view. Sorry, the model.
In the case of UI tests, it's actually kind of funky how you have to use it with Realm, you can't use your classes from the executable, your application. So, in the case of Realm, you'll actually have to not necessarily copy and paste, but you'll have to share a source file with your models. Realm is going to read those models and this is a totally different process. You have to think of it as the way that we're going to have to use Realm here is going to be a bit funky. That said, it's covered by about five lines of code.
We're going to use a Realm and the temporary directory, we're going to store that Realm path in the launch environment. That's going to be an environment variable that you can read from your application. I wouldn't consider that test code in your app. I would just consider it an injection required for a better structured application. The actual last line there is M stakes, everyone. But we're going to read that Realm from the application and then use it as we normally would. We're going to then write to that Realm from the rest of this test.
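To make that a little more concrete, here is a rough sketch of the kind of injection being described (the environment key, file name, and configuration details are illustrative assumptions, not the exact code from the talk):

```swift
import XCTest
import RealmSwift

// UI test side: pick a Realm file in the temporary directory and hand its path
// to the app through the launch environment.
func launchAppWithTemporaryRealm() -> XCUIApplication {
    let app = XCUIApplication()
    let realmPath = FileManager.default.temporaryDirectory
        .appendingPathComponent("uitest.realm").path
    app.launchEnvironment["realm_path"] = realmPath // assumed key name
    app.launch()
    return app
}

// App side: if a path was injected, open the Realm at that location instead of the default one.
func appRealm() throws -> Realm {
    var config = Realm.Configuration.defaultConfiguration
    if let path = ProcessInfo.processInfo.environment["realm_path"] {
        config.fileURL = URL(fileURLWithPath: path)
    }
    return try Realm(configuration: config)
}
```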
And on the right there is a little GIF of the test running. It clicks the borrow button, it then clicks the return button and moves very quickly and they don't move as slow as they used to move. But let's go over the test. So, we create a new library. We create a new library member. We create a new book. At the library, we add the member and we append the book to the library. We then launch the application. Now we're going to launch this application with all of that state already stored. So we know exactly what that should look like. We know that the library has a book, but the user doesn't have a book. So, UI testing with SwiftUI is pretty much the same as UIKit. The downside is that it doesn't always do what you expect it to do.
If you have a heavily nested view, sometimes the identifier isn't properly exposed and you end up having to do some weird things just for the sake of UI testing your application. I think those are actually bugs though. I don't think that that's how it's supposed to work, I guess keep your eyes peeled after WWDC. But yeah, so we're going to tap the borrow.button. That's the tag that you saw before? That's going to trigger the fact that that available book is going to move to the member, so that list is going to be empty. We're going to assert then that the library.members.first.borrowedBooks.first is the same as the book that has been injected.
So, the first and only member of the library's first and only book is the same as the book that we've injected into this application. We're then going to hit the return button, that's going to return the book to the library and run through that return function that you saw as an extension on the library member class. We're going to check that the library.members.first.borrowedBooks is empty. So, the first and only member of the library no longer has a borrowed book, and that library.availableBooks.first, now the only available book in the library, is the same as the book that we inject into the application state. Right. So, we did it, we tested everything, the application's great. We're invincible. We beat the game, we got the high score and that's it.
But what about more complex apps, you say? You can't convert your 50,000 line app to this simple MVI design pattern now? It's really easy to present information in this really sterile, simple environment. It's kind of the nature of the beast when it comes to giving a presentation in the first place. And unfortunately, sometimes it can also infect the mind when coming up with features and coming up with ways to use Realm. We don't get to work with these crazy complex applications every day, especially ones that are 10 years old.
Occasionally, we actually do get sent people's apps and it's super interesting for us. And we've got enough feedback at this point that we are trying to work towards having Realm be more integrated with more complex architectures. We don't want people to have to work around Realm, which is something we've seen, there are people that completely detach their app from Realm and use Realm as this dummy data store. That's totally fine, but there's often not a point at this point in doing something like that. There's so many better ways to use Realm that we want to introduce features that make it really obvious that you don't have to do some of these crazy things that people do. And yes, we have not completely lost our minds. We know there are more complex apps out there. So let's talk about MVVM.
It is just totally off the top of my head, not based on any factual truth and only anecdotal evidence, but it seems to be the most popular architecture these days. It is model view view model. So, the view gives commands to the view model, the view model updates the model, the view model reads from the model and it binds it to the view. I have contested in the past that it doesn't make as much sense with SwiftUI because it's two way data binding because what ends up happening with the models in SwiftUI is that you write from the view to the view model and then the view model just passes that information off to the model without doing anything to it. There's not really a transformation that generally happens between the view model and the model anymore. And then you have to then manually update the view, and especially with Realm where we're trying to do all that stuff for you, where you update the Realm and that updates the view without you having to do anything outside of placing a property wrapper on your view, it kind of breaks what we're trying to do.
But that said, we do understand that there is a nice separation here. And not only that, sometimes what is in your model isn't necessarily what you want to display on the view. Probably, more times than not, your model is not perfectly aligned with your view. What happens if you have multiple models? Realm doesn't support joins. More often than not, you have views with like a bunch of different pieces. Even in the example I showed, you have a library and you have a library member, somebody doing MVVM would want only a view model property and any like super simple state variables on that view. They wouldn't want to have their objects directly supplanted onto the view like that. They'd have a library view view model with a library member and a library. Or even simpler than that. They can take it beyond that and do just the available books and the borrowed books, since those are actually the only things that we're working with in that view.
So this is one thing that we've seen people do, and this is probably the simplest way to do view models with Realm. In this case, because this view specifically only handles available books and borrowed books, those are the things that we're going to read from the library and the library member. We're going to initialize the library view view model with those two things. So you'd probably do that in the view before, and then pass that into the next view. You're going to assign the properties of that from the library available books and the member borrowed books, you're then going to observe the available books and observe the borrowed books because of the way that ... now that you're abstracting out some of the functionality that we added in, as far as observation, you're going to have to manually update the view from the view model.
So in that case, you're going to observe, you don't care, what's changing. You just care that there's change. You're going to send that to objectWillChange, which is a synthesized property on an observable object. That's going to tell the view, please update. Your borrow function is going to look slightly different now. In this case, you're going to check for any available books, if the ISBN exists you can still have the same errors. You're going to get the Realm off of the available books which, again, if the Realm has been invalidated or something happened, you are going to have to throw an error. You're going to grab the book out of the available books and you're going to remove it from the available books and then append it to the borrowed books in a write transaction on the Realm, and then return the book.
So, it's really not that different in this case. The return function, similarly, it does the opposite, but the same checks and now it even has the advantage of both of these are on the singular model associated with a view. And assuming that this is the only view that does this thing, that's actually not a bad setup. I would totally understand this as a design pattern for simplifying your view and not separating things too much and keeping like concepts together. But then we've seen users do some pretty crazy things, like totally map everything out of Realm and just make their view model totally Realm agnostic. I get why in certain circumstances this happens. I couldn't name a good reason why to do this outside of like there are people that totally abstract out the database layer in case they don't want to be tied to Realm.
That's understandable. We don't want people to be handcuffed to us. We want people to want to use us and assume that we will be around to continue to deliver great features and work with everyone to make building apps with Realm great. But we have seen this where ... Sure, you have some of the similar setup here where you're going to have a library and a library member, but you're going to save out the library ID and the member ID for lookup later. You're going to observe the Realm object still, but you're going to map out the books from the lists and put them into plain old Swift arrays.
And then basically what you're going to end up doing is it's going to get a lot more complex or you're going to have to look up the primary keys in the Realm. You're going to have to make sure that those objects are still sound, you're then going to have to modify the Realm objects anyway, in a write transaction. And then you're going to have to re-map out the Realm lists back into their arrays and it gets really messy and it ends up becoming quintessential spaghetti code and also hard to test, which is the point of this presentation. So, this is not something we'd recommend unless there's good reason for it. So there's a big cancel sign for you.
We understand that there are infinite use cases and 1,000 design patterns and so many different ways that you can write code, these design patterns are social constructs, man. There's no quick and easy way to do this stuff. So we're trying to come up with ways to better fit in. And for us that's projections, this is a pre-alpha feature. It's only just been scoped out. We still have to design it fully. But this is from the prototype that we have now. So what is a projection? So in database land, a projection is when you grab a bunch of data from different sources and put it into a single structure, but it's not actually stored in the database. So, if I have a person and that person has a name and I have a dog and that dog has a name and I want to project those two names into a single structure I would have like a structure called person and dog name.
I would do queries on the database to grab those two things. And in Mongo, there's a project operator that you can use to make sure that that object comes out with the appropriate fields and values. For us, it's going to look slightly different. At some point in the future, we would like a similar super loose projection syntax, where you can join across multiple objects and get whatever object you want back. That's kind of far future for us. So in the near future, we want to come up with something a bit simpler where you're essentially reattaching existing properties onto this new arbitrary structure. And arbitrary is kind of the key word here. It's not going to be directly associated with a single Realm object. It's going to be this thing that you associate with whatever you want to associate it with.
So if you want to associate it with the view, we've incidentally been working on sort of a view model for people then it becomes your view model. If the models are one-to-one with the view, you grab the data from the sources that you want to grab it from. And you stick it on that projection and associate that with the view. Suddenly, you have a view model. In this case, we have our library view view model, it inherits from the projection class or protocol, we're not sure yet. It's going to have two projected properties. It's going to have available books and borrowed books. These are going to be read directly from the library and member classes. These properties are going to be live. Think of this as a Realm object. This is effectively a Realm object with reattached accessors.
It should be treated no differently, but it's much more flexible and lightweight. You can attach to anything on here, and you could attach the member IDs on here, if you had overdue fees and that was supposed to go on this view, you could attach overdue fees to it. There's things you can query. Right now we're trying to stick mainly to things that we can access with key paths. So, for those familiar with key paths, which I think was Swift 5.2.
I can't remember which version of Swift it was, but it was a really neat feature that you can basically access a chain of key paths on an object and then read those properties out. The initial version of projections will be able to do that where that available books is going to be read from that library and any updates to the library will update available books, same thing with borrowed books and library member. And it's going to have a similar borrow function that the other view model had, in this case it's just slightly more integrated with Realm, but I think the code is nearly identical. Same thing with return, the code is nearly identical, slightly more integrated with Realm.
And the view is nearly the same, except now you have the view model, sorry for some of the formatting there. In this case, you call borrow on the view model and you call return on the view model. It is very close to what we had. It's still a Realmy thing that's going to automatically update your view when any of the things that you have attached update so that if the library updates, if the user ... Oops, sorry, not user. If the member updates, if the books update, if the borrowed books update, that view is again going to automatically update. And now we've also created a single structure, which is easier to test or for you to test. Integration testing is going to be, again, very, very similar. The difference is that instead of just creating a library and a member, we're also creating a library view model.
We're going to borrow from that, we're going to make sure that it throws the appropriate error. We're going to refill the state, mess with the state, do all the same stuff, except this time on the view model. And now what we've done here is that if this is the only place where you need to return and borrow, we've created this nice standalone structure that does that for you. And it's associated with Realm, which means that it's closer to your model, since we are encouraging people to have Realm be the model as a concept. Your testing is the exact same because this is a view associated thing and not actually a Realm object, you don't need to change these tests at all. They're the same. That's pretty much it. I hope I've left enough time for questions. I have not had a chance to look at the chat yet.
**Ian:**
I'm going to see to that, Jason, but thank you so much. I guess one of the comments here, Sebastian has never seen the objective C declaration that we have in our Realm models. Maybe tell them a little bit about the history there and then tell him what's in plan for future.
**Jason:**
Sure. So, just looking at the question now, I've never used @objcMembers. Obviously, @objcMembers prevents you from having to put @objc on all of your properties that need to use Objective-C reflection. The reason that you have to do that with Realm and Swift is because we need to take advantage of Objective-C reflection. It's the only way that we're able to do that. When you put that tag there, sorry, annotation. When you put that there, it gives Objective-C, the Objective-C runtime, access to that property. And we still need that. However, in the future, we are going to be taking advantage of property wrappers to make it a much nicer, cleaner, more obvious syntax. Also, it's going to have compile time checks. That's going to look like Swift instead of an Objective-C whatever. That is actually coming sooner than later. I hesitate to ever promise a date, but that one should be pretty, pretty soon.
**Ian:**
Excellent. Yeah, we're looking forward to being able to clean up those Realm model definitions to make it more swifty. Richard had a question here regarding if there's a recommendation or proper way to automate the user side input for some of the UI testing?
**Jason:**
UI testing, for UI test proper, is there a way to automate the user input side of the equation since you weren't going to? I'm not entirely sure what you mean, Richard. If you could explain a bit more.
**Ian:**
I mean, I think maybe this is about having variable input into what the user puts into the field. Could this also be maybe something around a fuzzer, having different inputs and testing different fields and how they accept certain inputs and how it goes through different tests?
**Jason:**
Yeah. I mean, I didn't go over fuzz testing, but that's absolutely something that you should do. There's no automated mouse input on text input. You can automate some of that stuff. There's no mouse touch yet. You can touch any location on the screen, you can make it so that if you want to really, not load test is the wrong word, but batch up your application, just have it touch everywhere and see what happens and make sure nothing crashes, you could do that. It's actually really interesting if you have these UI tests. So yes, you can do that, Richard. I don't know if there's a set of best standards and practices, but at least with macOS, for instance, it was bizarre the first time I ran it. When you run a UI test on macOS it actually completely takes control from your mouse, and it will go all over the screen wherever you tell it to and click anywhere. Obviously, on the iPhone simulator, it has a limited space of where it can touch, but yes, that can be automated. But I guess it depends on what you're trying to test.
**Ian:**
I guess, another question for me is what's your opinion on test coverage? I think a lot of people would look to have everything be unit tested, but then there's also integration tests. Should everything be having integration tests? And then end to end tests, there's kind of a big, a wide berth of different things you can test there. So, what's your opinion on how much coverage you should have for each type of test?
**Jason:**
That's a tough question, because at Realm, I suppose we tell ourselves that there can never be too many tests, so if it were up to us, every single thing would be tested within reason. You can't really go overkill unless you start doing weird things to your code to accommodate weird testing patterns. I couldn't give you a number as to what appropriate test coverage is. Most things I know for us at Realm, we don't make it so that every single method needs to be tested. So, if you have a bunch of private methods, those don't need to be tested, but for us, anything in the public API needs to be heavily tested, every single method and that's not an exaggeration. We're also a library. So in a UI application, you have to look at it a bit differently and consider what your public API is, which, for UI applications, is really the entry points to the model, any entry point that transforms data. And in my opinion, all of those should be tested. So, I don't know if that properly answers the question, for integration tests and end to end tests, same thing. What?
**Ian:**
Yeah, I think so. I mean, I think it says where's your public API and then a mobile application, your public API is a lot of the UI interfaces that they can interact with and that's how they get into your code paths. Right?
**Jason:**
Yeah.
**Ian:**
I guess another question from me, and this is another opinion question is what's your opinion on flaky tests? And so these are tests that sometimes pass sometimes fail and is it okay? A lot of times relate to, should we release, should we not release? Maybe you could give us a little bit of your thoughts on that.
**Jason:**
Yeah. That's a tricky one because even on the Realm Cocoa, if you follow the pull requests, we still have a couple of flaky tests. To be honest, those tests are probably revealing some race condition. They could be in the test themselves though, which I think in the case of some of the recent ones, that was the case. More often flaky tests are revealing something wrong. I don't want to be on a recording thing that that's okay. But very occasionally, yes, you do have to look at a test and be like, "Maybe this is specific to the testing circumstance," but if you're trying to come out with like the most high quality product, you should have all your tests passing, you should make sure that there's no race conditions, you should make sure that everything is clean cut sound, all that kind of thing.
**Ian:**
Yeah. Okay, perfect. There's a final question here, will there be docs on best practices for testing? I think this presentation is a little bit of our, I wouldn't say docs, but a presentation on best practices for testing. It is something potentially in the future we can look to add to our docs. So yeah, I think if we have other things covered, we can look to add testing best practices to our docs as well. And then last question here from Shane: what are we hoping for from WWDC next week?
**Jason:**
Sure. Well, just to add one thing to Ian's question, if there's ever a question that you or anybody else here has, feel free to ask on GitHub or forums or something like that. For things that we can't offer through API or features or things that might take a long time to work on, we're happy to offer guidance. We do have an idea of what those best practices are and are happy to share them. As far as WWDC, what we're hoping for is ..., yes, we should add more docs, Richard, sorry. There are definitely some things there that are gotchas. But with WWDC next week, and this ties to best practices on multi-threading using Realm, we're hoping for async/await which we've been playing with for a few weeks now. We're hoping for actors, we're hoping for a few minor features as well like property wrappers in function parameters and property wrappers in pretty much lambdas and everywhere. We're hoping for Sendable as well, Sendable will prevent you from passing unsafe things into thread safe areas, basically. But yeah, that's probably our main wishlist right now.
**Ian:**
Wow. Okay. That's a substantial wishlist. Well, I hope you get everything you wish for. Perfect. Well, if there's no other questions, thank you so much, everyone, and thank you so much, Jason. This has been very informative yet again.
**Jason:**
Thanks everyone for coming. I always have-
**Ian:**
Thank you.
**Jason:**
... to thank everyone.
**Ian:**
Bye.
| md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "Learn how the testing landscape has changed for iOS apps using the new SwiftUI framework.",
"contentType": "Article"
} | Realm Meetup - SwiftUI Testing and Realm With Projections | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/code-examples/java/java-spring-data-client-side-field-level-encryption | created | # How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB
## GitHub Repository
The source code of this template is available on GitHub:
```bash
git clone [email protected]:mongodb-developer/mongodb-java-spring-boot-csfle.git
```
To get started, you'll need:
- Java 17.
- A MongoDB cluster v7.0.2 or higher.
- MongoDB Automatic Encryption Shared Library
v7.0.2 or higher.
See the README.md file for more
information.
## Video
This content is also available in video format.
:youtube[]{vid=YePIQimYnxI}
## Introduction
This post will explain the key details of the integration of
MongoDB [Client-Side Field Level Encryption (CSFLE)
with Spring Data MongoDB.
However, this post will *not* explain the basic mechanics of CSFLE
or Spring Data MongoDB.
If you feel like you need a refresher on CSFLE before working on this more complicated piece, I can recommend a few
resources for CSFLE:
- My tutorial: CSFLE with the Java Driver (
without Spring Data)
- CSFLE MongoDB documentation
- CSFLE encryption schemas
- CSFLE quick start
And for Spring Data MongoDB:
- Spring Data MongoDB - Project
- Spring Data MongoDB - Documentation
- Baeldung Spring Data MongoDB Tutorial
- Spring Initializr
This template is *significantly* larger than other CSFLE templates you can find online. It tries to provide
reusable code for a real production environment using:
- Multiple encrypted collections.
- Automated JSON Schema generation.
- Server-side JSON Schema.
- Separated clusters for DEKs and encrypted collections.
- Automated data encryption keys generation or retrieval.
- SpEL Evaluation Extension.
- Auto-implemented repositories.
- Open API documentation 3.0.1.
While I was coding, I also tried to respect the SOLID Principles as much
as possible to increase the code's readability, usability, and reusability.
## High-Level Diagrams
Now that we are all on board, here is a high-level diagram of the different moving parts required to create a correctly-configured CSFLE-enabled MongoClient which can encrypt and decrypt fields automatically.
```java
/**
* This class initialize the Key Vault (collection + keyAltNames unique index) using a dedicated standard connection
* to MongoDB.
* Then it creates the Data Encryption Keys (DEKs) required to encrypt the documents in each of the
* encrypted collections.
*/
@Component
public class KeyVaultAndDekSetup {
private static final Logger LOGGER = LoggerFactory.getLogger(KeyVaultAndDekSetup.class);
private final KeyVaultService keyVaultService;
private final DataEncryptionKeyService dataEncryptionKeyService;
@Value("${spring.data.mongodb.vault.uri}")
private String CONNECTION_STR;
public KeyVaultAndDekSetup(KeyVaultService keyVaultService, DataEncryptionKeyService dataEncryptionKeyService) {
this.keyVaultService = keyVaultService;
this.dataEncryptionKeyService = dataEncryptionKeyService;
}
@PostConstruct
public void postConstruct() {
LOGGER.info("=> Start Encryption Setup.");
LOGGER.debug("=> MongoDB Connection String: {}", CONNECTION_STR);
MongoClientSettings mcs = MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(CONNECTION_STR))
.build();
try (MongoClient client = MongoClients.create(mcs)) {
LOGGER.info("=> Created the MongoClient instance for the encryption setup.");
LOGGER.info("=> Creating the encryption key vault collection.");
keyVaultService.setupKeyVaultCollection(client);
LOGGER.info("=> Creating the Data Encryption Keys.");
EncryptedCollectionsConfiguration.encryptedEntities.forEach(dataEncryptionKeyService::createOrRetrieveDEK);
LOGGER.info("=> Encryption Setup completed.");
} catch (Exception e) {
LOGGER.error("=> Encryption Setup failed: {}", e.getMessage(), e);
}
}
}
```
In production, you could choose to create the key vault collection and its unique index on the `keyAltNames` field
manually once and remove the code as it's never going to be executed again. I guess it only makes sense to keep it if
you are running this code in a CI/CD pipeline.
One important thing to note here is the dependency on a completely standard (i.e., not CSFLE-enabled) and ephemeral `MongoClient` (hence the
try-with-resources block), as all we are doing here is creating a collection and an index in our MongoDB cluster.
KeyVaultServiceImpl.java
```java
/**
* Initialization of the Key Vault collection and keyAltNames unique index.
*/
@Service
public class KeyVaultServiceImpl implements KeyVaultService {
private static final Logger LOGGER = LoggerFactory.getLogger(KeyVaultServiceImpl.class);
private static final String INDEX_NAME = "uniqueKeyAltNames";
@Value("${mongodb.key.vault.db}")
private String KEY_VAULT_DB;
@Value("${mongodb.key.vault.coll}")
private String KEY_VAULT_COLL;
public void setupKeyVaultCollection(MongoClient mongoClient) {
LOGGER.info("=> Setup the key vault collection {}.{}", KEY_VAULT_DB, KEY_VAULT_COLL);
MongoDatabase db = mongoClient.getDatabase(KEY_VAULT_DB);
MongoCollection<Document> vault = db.getCollection(KEY_VAULT_COLL);
boolean vaultExists = doesCollectionExist(db, KEY_VAULT_COLL);
if (vaultExists) {
LOGGER.info("=> Vault collection already exists.");
if (!doesIndexExist(vault)) {
LOGGER.info("=> Unique index created on the keyAltNames");
createKeyVaultIndex(vault);
}
} else {
LOGGER.info("=> Creating a new vault collection & index on keyAltNames.");
createKeyVaultIndex(vault);
}
}
private void createKeyVaultIndex(MongoCollection<Document> vault) {
Bson keyAltNamesExists = exists("keyAltNames");
IndexOptions indexOpts = new IndexOptions().name(INDEX_NAME)
.partialFilterExpression(keyAltNamesExists)
.unique(true);
vault.createIndex(new BsonDocument("keyAltNames", new BsonInt32(1)), indexOpts);
}
private boolean doesCollectionExist(MongoDatabase db, String coll) {
return db.listCollectionNames().into(new ArrayList<>()).stream().anyMatch(c -> c.equals(coll));
}
private boolean doesIndexExist(MongoCollection<Document> coll) {
return coll.listIndexes()
.into(new ArrayList<>())
.stream()
.map(i -> i.get("name"))
.anyMatch(n -> n.equals(INDEX_NAME));
}
}
```
When it's done, we can close the standard MongoDB connection.
## Creation of the Data Encryption Keys
We can now create the Data Encryption Keys (DEKs) using the `ClientEncryption` connection.
MongoDBKeyVaultClientConfiguration.java
```java
/**
* ClientEncryption used by the DataEncryptionKeyService to create the DEKs.
*/
@Configuration
public class MongoDBKeyVaultClientConfiguration {
private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBKeyVaultClientConfiguration.class);
private final KmsService kmsService;
@Value("${spring.data.mongodb.vault.uri}")
private String CONNECTION_STR;
@Value("${mongodb.key.vault.db}")
private String KEY_VAULT_DB;
@Value("${mongodb.key.vault.coll}")
private String KEY_VAULT_COLL;
private MongoNamespace KEY_VAULT_NS;
public MongoDBKeyVaultClientConfiguration(KmsService kmsService) {
this.kmsService = kmsService;
}
@PostConstruct
public void postConstructor() {
this.KEY_VAULT_NS = new MongoNamespace(KEY_VAULT_DB, KEY_VAULT_COLL);
}
/**
* MongoDB Encryption Client that can manage Data Encryption Keys (DEKs).
*
* @return ClientEncryption MongoDB connection that can create or delete DEKs.
*/
@Bean
public ClientEncryption clientEncryption() {
LOGGER.info("=> Creating the MongoDB Key Vault Client.");
MongoClientSettings mcs = MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(CONNECTION_STR))
.build();
ClientEncryptionSettings ces = ClientEncryptionSettings.builder()
.keyVaultMongoClientSettings(mcs)
.keyVaultNamespace(KEY_VAULT_NS.getFullName())
.kmsProviders(kmsService.getKmsProviders())
.build();
return ClientEncryptions.create(ces);
}
}
```
We can instantiate directly a `ClientEncryption` bean using
the KMS and use it to
generate our DEKs (one for each encrypted collection).
DataEncryptionKeyServiceImpl.java
```java
/**
* Service responsible for creating and remembering the Data Encryption Keys (DEKs).
* We need to retrieve the DEKs when we evaluate the SpEL expressions in the Entities to create the JSON Schemas.
*/
@Service
public class DataEncryptionKeyServiceImpl implements DataEncryptionKeyService {
private static final Logger LOGGER = LoggerFactory.getLogger(DataEncryptionKeyServiceImpl.class);
private final ClientEncryption clientEncryption;
private final Map<String, String> dataEncryptionKeysB64 = new HashMap<>();
@Value("${mongodb.kms.provider}")
private String KMS_PROVIDER;
public DataEncryptionKeyServiceImpl(ClientEncryption clientEncryption) {
this.clientEncryption = clientEncryption;
}
public Map<String, String> getDataEncryptionKeysB64() {
LOGGER.info("=> Getting Data Encryption Keys Base64 Map.");
LOGGER.info("=> Keys in DEK Map: {}", dataEncryptionKeysB64.entrySet());
return dataEncryptionKeysB64;
}
public String createOrRetrieveDEK(EncryptedEntity encryptedEntity) {
Base64.Encoder b64Encoder = Base64.getEncoder();
String dekName = encryptedEntity.getDekName();
BsonDocument dek = clientEncryption.getKeyByAltName(dekName);
BsonBinary dataKeyId;
if (dek == null) {
LOGGER.info("=> Creating Data Encryption Key: {}", dekName);
DataKeyOptions dko = new DataKeyOptions().keyAltNames(of(dekName));
dataKeyId = clientEncryption.createDataKey(KMS_PROVIDER, dko);
LOGGER.debug("=> DEK ID: {}", dataKeyId);
} else {
LOGGER.info("=> Existing Data Encryption Key: {}", dekName);
dataKeyId = dek.get("_id").asBinary();
LOGGER.debug("=> DEK ID: {}", dataKeyId);
}
String dek64 = b64Encoder.encodeToString(dataKeyId.getData());
LOGGER.debug("=> Base64 DEK ID: {}", dek64);
LOGGER.info("=> Adding Data Encryption Key to the Map with key: {}",
encryptedEntity.getEntityClass().getSimpleName());
dataEncryptionKeysB64.put(encryptedEntity.getEntityClass().getSimpleName(), dek64);
return dek64;
}
}
```
One thing to note here is that we are storing the DEKs in a map, so we don't have to retrieve them again later when we
need them for the JSON Schemas.
## Entities
One of the key functional areas of Spring Data MongoDB is the POJO-centric model it relies on to implement the
repositories and map the documents to the MongoDB collections.
PersonEntity.java
```java
/**
* This is the entity class for the "persons" collection.
* The SpEL expression of the @Encrypted annotation is used to determine the DEK's keyId to use for the encryption.
*
* @see com.mongodb.quickstart.javaspringbootcsfle.components.EntitySpelEvaluationExtension
*/
@Document("persons")
@Encrypted(keyId = "#{mongocrypt.keyId(#target)}")
public class PersonEntity {
@Id
private ObjectId id;
private String firstName;
private String lastName;
@Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic")
private String ssn;
@Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Random")
private String bloodType;
// Constructors
@Override
// toString()
// Getters & Setters
}
```
As you can see above, this entity contains all the information we need to fully automate CSFLE. We have the information
we need to generate the JSON Schema:
- Using the SpEL expression `#{mongocrypt.keyId(#target)}`, we can populate dynamically the DEK that was generated or
retrieved earlier.
- `ssn` is a `String` that requires a deterministic algorithm.
- `bloodType` is a `String` that requires a random algorithm.
The generated JSON Schema looks like this:
```json
{
"encryptMetadata": {
"keyId":
{
"$binary": {
"base64": "WyHXZ+53SSqCC/6WdCvp0w==",
"subType": "04"
}
}
]
},
"type": "object",
"properties": {
"ssn": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
}
},
"bloodType": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
}
}
}
}
```
## SpEL Evaluation Extension
The evaluation of the SpEL expression is only possible because of this class we added in the configuration:
```java
/**
* Will evaluate the SpEL expressions in the Entity classes like this: #{mongocrypt.keyId(#target)} and insert
* the right encryption key for the right collection.
*/
@Component
public class EntitySpelEvaluationExtension implements EvaluationContextExtension {
private static final Logger LOGGER = LoggerFactory.getLogger(EntitySpelEvaluationExtension.class);
private final DataEncryptionKeyService dataEncryptionKeyService;
public EntitySpelEvaluationExtension(DataEncryptionKeyService dataEncryptionKeyService) {
this.dataEncryptionKeyService = dataEncryptionKeyService;
}
@Override
@NonNull
public String getExtensionId() {
return "mongocrypt";
}
@Override
@NonNull
public Map<String, Function> getFunctions() {
try {
return Collections.singletonMap("keyId", new Function(
EntitySpelEvaluationExtension.class.getMethod("computeKeyId", String.class), this));
} catch (NoSuchMethodException e) {
throw new RuntimeException(e);
}
}
public String computeKeyId(String target) {
String dek = dataEncryptionKeyService.getDataEncryptionKeysB64().get(target);
LOGGER.info("=> Computing dek for target {} => {}", target, dek);
return dek;
}
}
```
Note that it's the place where we are retrieving the DEKs and matching them with the `target`: "PersonEntity", in this case.
## JSON Schemas and the MongoClient Connection
JSON Schemas are actually not trivial to generate in a Spring Data MongoDB project.
As a matter of fact, to generate the JSON Schemas, we need the MappingContext (the entities, etc.) which is created by
the automatic configuration of Spring Data which creates the `MongoClient` connection and the `MongoTemplate`...
But to create the MongoClient — with the automatic encryption enabled — you need JSON Schemas!
It took me a significant amount of time to find a solution to this deadlock, and you can just enjoy the solution now!
The solution is to inject the JSON Schema creation in the autoconfiguration process by instantiating
the `MongoClientSettingsBuilderCustomizer` bean.
[MongoDBSecureClientConfiguration.java
```java
/**
* Spring Data MongoDB Configuration for the encrypted MongoClient with all the required configuration (jsonSchemas).
* The big trick in this file is the creation of the JSON Schemas before the creation of the entire configuration as
* we need the MappingContext to resolve the SpEL expressions in the entities.
*
* @see com.mongodb.quickstart.javaspringbootcsfle.components.EntitySpelEvaluationExtension
*/
@Configuration
@DependsOn("keyVaultAndDekSetup")
public class MongoDBSecureClientConfiguration {
private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBSecureClientConfiguration.class);
private final KmsService kmsService;
private final SchemaService schemaService;
@Value("${crypt.shared.lib.path}")
private String CRYPT_SHARED_LIB_PATH;
@Value("${spring.data.mongodb.storage.uri}")
private String CONNECTION_STR_DATA;
@Value("${spring.data.mongodb.vault.uri}")
private String CONNECTION_STR_VAULT;
@Value("${mongodb.key.vault.db}")
private String KEY_VAULT_DB;
@Value("${mongodb.key.vault.coll}")
private String KEY_VAULT_COLL;
private MongoNamespace KEY_VAULT_NS;
public MongoDBSecureClientConfiguration(KmsService kmsService, SchemaService schemaService) {
this.kmsService = kmsService;
this.schemaService = schemaService;
}
@PostConstruct
public void postConstruct() {
this.KEY_VAULT_NS = new MongoNamespace(KEY_VAULT_DB, KEY_VAULT_COLL);
}
@Bean
public MongoClientSettings mongoClientSettings() {
LOGGER.info("=> Creating the MongoClientSettings for the encrypted collections.");
return MongoClientSettings.builder().applyConnectionString(new ConnectionString(CONNECTION_STR_DATA)).build();
}
@Bean
public MongoClientSettingsBuilderCustomizer customizer(MappingContext mappingContext) {
LOGGER.info("=> Creating the MongoClientSettingsBuilderCustomizer.");
return builder -> {
MongoJsonSchemaCreator schemaCreator = MongoJsonSchemaCreator.create(mappingContext);
Map<String, BsonDocument> schemaMap = schemaService.generateSchemasMap(schemaCreator)
.entrySet()
.stream()
.collect(toMap(e -> e.getKey().getFullName(),
Map.Entry::getValue));
Map<String, Object> extraOptions = Map.of("cryptSharedLibPath", CRYPT_SHARED_LIB_PATH,
"cryptSharedLibRequired", true);
MongoClientSettings mcs = MongoClientSettings.builder()
.applyConnectionString(
new ConnectionString(CONNECTION_STR_VAULT))
.build();
AutoEncryptionSettings oes = AutoEncryptionSettings.builder()
.keyVaultMongoClientSettings(mcs)
.keyVaultNamespace(KEY_VAULT_NS.getFullName())
.kmsProviders(kmsService.getKmsProviders())
.schemaMap(schemaMap)
.extraOptions(extraOptions)
.build();
builder.autoEncryptionSettings(oes);
};
}
}
```
> One thing to note here is the option to separate the DEKs from the encrypted collections in two completely separated
> MongoDB clusters. This isn't mandatory, but it can be a handy trick if you choose to have a different backup retention
> policy for your two clusters. This can be interesting for the GDPR Article 17 "Right to erasure," for instance, as you
> can then guarantee that a DEK can completely disappear from your systems (backup included). I talk more about this
> approach in
> my Java CSFLE post.
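For reference, the connection strings and key vault settings injected through the `@Value` annotations above could be declared in `application.properties` along these lines (every value below is a placeholder, not the template's actual configuration):

```properties
# Cluster holding the encrypted collections
spring.data.mongodb.storage.uri=mongodb+srv://user:[email protected]/
# Cluster (possibly a different one) holding the key vault and the DEKs
spring.data.mongodb.vault.uri=mongodb+srv://user:[email protected]/
# Key vault namespace
mongodb.key.vault.db=encryption
mongodb.key.vault.coll=__keyVault
# KMS provider used to wrap the DEKs
mongodb.kms.provider=local
# Path to the MongoDB Automatic Encryption Shared Library
crypt.shared.lib.path=/path/to/mongo_crypt_v1.so
```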
Here is the JSON Schema service which stores the generated JSON Schemas in a map:
SchemaServiceImpl.java
```java
@Service
public class SchemaServiceImpl implements SchemaService {
private static final Logger LOGGER = LoggerFactory.getLogger(SchemaServiceImpl.class);
private Map<MongoNamespace, BsonDocument> schemasMap;
@Override
public Map<MongoNamespace, BsonDocument> generateSchemasMap(MongoJsonSchemaCreator schemaCreator) {
LOGGER.info("=> Generating schema map.");
List<EncryptedEntity> encryptedEntities = EncryptedCollectionsConfiguration.encryptedEntities;
return schemasMap = encryptedEntities.stream()
.collect(toMap(EncryptedEntity::getNamespace,
e -> generateSchema(schemaCreator, e.getEntityClass())));
}
@Override
public Map<MongoNamespace, BsonDocument> getSchemasMap() {
return schemasMap;
}
private BsonDocument generateSchema(MongoJsonSchemaCreator schemaCreator, Class<?> entityClass) {
BsonDocument schema = schemaCreator.filter(MongoJsonSchemaCreator.encryptedOnly())
.createSchemaFor(entityClass)
.schemaDocument()
.toBsonDocument();
LOGGER.info("=> JSON Schema for {}:\n{}", entityClass.getSimpleName(),
schema.toJson(JsonWriterSettings.builder().indent(true).build()));
return schema;
}
}
```
We are storing the JSON Schemas because this template also implements one of the good practices of CSFLE: server-side
JSON Schemas.
## Create or Update the Encrypted Collections
Indeed, to make the automatic encryption and decryption of CSFLE work, you do not require the server-side JSON Schemas.
Only the client-side ones are required for the Automatic Encryption Shared Library. But then nothing would prevent
another misconfigured client or an admin connected directly to the cluster from inserting or updating some documents without
encrypting the fields.
To enforce this you can use the server-side JSON Schema as you would to enforce a field type in a document, for instance.
But given that the JSON Schema will evolve with the different versions of your application, the JSON Schemas need to be
updated accordingly each time you restart your application.
```java
/**
* Create or update the encrypted collections with a server side JSON Schema to secure the encrypted field in the MongoDB database.
* This prevents any other client from inserting or editing the fields without encrypting the fields correctly.
*/
@Component
public class EncryptedCollectionsSetup {
private static final Logger LOGGER = LoggerFactory.getLogger(EncryptedCollectionsSetup.class);
private final MongoClient mongoClient;
private final SchemaService schemaService;
public EncryptedCollectionsSetup(MongoClient mongoClient, SchemaService schemaService) {
this.mongoClient = mongoClient;
this.schemaService = schemaService;
}
@PostConstruct
public void postConstruct() {
LOGGER.info("=> Setup the encrypted collections.");
schemaService.getSchemasMap()
.forEach((namespace, schema) -> createOrUpdateCollection(mongoClient, namespace, schema));
}
private void createOrUpdateCollection(MongoClient mongoClient, MongoNamespace ns, BsonDocument schema) {
MongoDatabase db = mongoClient.getDatabase(ns.getDatabaseName());
String collStr = ns.getCollectionName();
if (doesCollectionExist(db, ns)) {
LOGGER.info("=> Updating {} collection's server side JSON Schema.", ns.getFullName());
db.runCommand(new Document("collMod", collStr).append("validator", jsonSchemaWrapper(schema)));
} else {
LOGGER.info("=> Creating encrypted collection {} with server side JSON Schema.", ns.getFullName());
db.createCollection(collStr, new CreateCollectionOptions().validationOptions(
new ValidationOptions().validator(jsonSchemaWrapper(schema))));
}
}
public BsonDocument jsonSchemaWrapper(BsonDocument schema) {
return new BsonDocument("$jsonSchema", schema);
}
private boolean doesCollectionExist(MongoDatabase db, MongoNamespace ns) {
return db.listCollectionNames()
.into(new ArrayList<>())
.stream()
.anyMatch(c -> c.equals(ns.getCollectionName()));
}
}
```
## Multi-Entities Support
One big feature of this template as well is the support of multiple entities. As you probably noticed already, there is
a `CompanyEntity` and all its related components, but the code is generic enough to handle any number of entities, which
isn't usually the case in all the other online tutorials.
In this template, if you want to support a third type of entity, you just have to create the components of the
three-tier architecture as usual and add your entry in the `EncryptedCollectionsConfiguration` class.
EncryptedCollectionsConfiguration.java
```java
/**
* Information about the encrypted collections in the application.
* As I need the information in multiple places, I decided to create a configuration class with a static list of
* the encrypted collections and their information.
*/
public class EncryptedCollectionsConfiguration {
public static final List<EncryptedEntity> encryptedEntities = List.of(
new EncryptedEntity("mydb", "persons", PersonEntity.class, "personDEK"),
new EncryptedEntity("mydb", "companies", CompanyEntity.class, "companyDEK"));
}
```
Everything else from the DEK generation to the encrypted collection creation with the server-side JSON Schema is fully
automated and taken care of transparently. All you have to do is specify
the `@Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic")` annotation in the entity class and the field
will be encrypted and decrypted automatically for you when you are using the auto-implemented repositories (courtesy of
Spring Data MongoDB, of course!).
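For example, if you added a hypothetical `DeviceEntity` following the same pattern, registering it would be a one-line change (the entity, collection, and DEK names below are made up for illustration):

```java
public class EncryptedCollectionsConfiguration {
    public static final List<EncryptedEntity> encryptedEntities = List.of(
            new EncryptedEntity("mydb", "persons", PersonEntity.class, "personDEK"),
            new EncryptedEntity("mydb", "companies", CompanyEntity.class, "companyDEK"),
            // hypothetical third encrypted collection
            new EncryptedEntity("mydb", "devices", DeviceEntity.class, "deviceDEK"));
}
```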
## Query by an Encrypted Field
Maybe you noticed but this template implements the `findFirstBySsn(ssn)` method which means that it's possible to
retrieve a person document by its SSN number, even if this field is encrypted.
> Note that it only works because we are using a deterministic encryption algorithm.
PersonRepository.java
```java
/**
* Spring Data MongoDB repository for the PersonEntity
*/
@Repository
public interface PersonRepository extends MongoRepository<PersonEntity, ObjectId> {
PersonEntity findFirstBySsn(String ssn);
}
```
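As an illustration of how this could be consumed, a service calling this repository might look like the following sketch (this class is not part of the template):

```java
@Service
public class PersonService {

    private final PersonRepository personRepository;

    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    public PersonEntity findPersonBySsn(String ssn) {
        // The SSN is encrypted client-side before the query is sent to MongoDB,
        // which is why the deterministic algorithm is required for this field.
        return personRepository.findFirstBySsn(ssn);
    }
}
```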
## Wrapping Up
Thanks for reading my post!
If you have any questions about it, please feel free to open a question in the GitHub repository or ask a question in
the MongoDB Community Forum.
Feel free to ping me directly in your post: @MaBeuLux88.
Pull requests and improvement ideas are very welcome!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt871453c21d6d0fd6/65415752d8b7e20407a86241/Spring-Data-MongoDB-CSFLE.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3a98733accb502eb/654157524ed3b2001a90c1fb/Controller-Service-Repos.png | md | {
"tags": [
"Java",
"Spring"
],
"pageDescription": "In this advanced MongoDB CSFLE Java template, you'll learn all the tips and tricks for a successful deployment of CSFLE with Spring Data MongoDB.",
"contentType": "Code Example"
} | How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/new-time-series-collections | created | # MongoDB's New Time Series Collections
## What is Time Series Data?
Time-series data are measurements taken at time intervals. Sometimes time-series data will come into your database at high frequency - use-cases like financial transactions, stock market data, readings from smart meters, or metrics from services you're hosting over hundreds or even thousands of servers. In other cases, each measurement may only come in every few minutes. Maybe you're tracking the number of servers that you're running every few minutes to estimate your server costs for the month. Perhaps you're measuring the soil moisture of your favourite plant once a day.
| | Frequent | Infrequent |
| --------- | ------------------------------------- | ------------------------------------------- |
| Regular | Service metrics | Number of sensors providing weather metrics |
| Irregular | Financial transactions, Stock prices? | LPWAN data |
However, when it comes to time-series data, it isn't all about frequency; the only thing that truly matters is the presence of time. Whether your data comes in every second, every 5 minutes, or every hour isn't important for using MongoDB to store and work with time-series data.
### Examples of Time-Series Data
From the very beginning, developers have been using MongoDB to store time-series data. MongoDB can be an extremely efficient engine for storing and processing time-series data, but you had to know how to model it correctly to get a performant solution, and that wasn't as straightforward as it could have been.
Starting in MongoDB 5.0 there is a new collection type, time-series collections, which are specifically designed for storing and working with time-series data without the hassle or need to worry about low-level model optimization.
## What are Time series Collections?
Time series collections are a new collection type introduced in MongoDB 5.0. On the surface, these collections look and feel like every other collection in MongoDB. You can read and write to them just like you do regular collections and even create secondary indexes with the createIndex command. However, internally, they are natively supported and optimized for storing and working with time-series data.
Under the hood, the creation of a time series collection results in a collection and an automatically created writable non-materialized view which serves as an abstraction layer. This abstraction layer allows you to always work with your data as single documents in their raw form without worrying about performance implications, as the actual time series collection implements a form of the bucket pattern you may already know when persisting data to disk, but these details are something you no longer need to care about when designing your schema or reading and writing your data. Users will always be able to work with the abstraction layer and not with a complicated compressed bucketed document.
## Why Use MongoDB's Time Series Collections?
Well because you have time-series data, right?
Of course that may be true, but there are so many more reasons to use the new time series collections over regular collections for time-series data.
Ease of use, performance, and storage efficiency were paramount goals when creating time series collections. Time series collections allow you to work with your data model like any other collection as single documents with rich data types and structures. They eliminate the need to model your time-series data in a way that it can be performant ahead of time - they take care of all this for you!
You can design your document models more intuitively, the way you would with other types of MongoDB collections. The database then optimizes the storage schema for ingestion, retrieval, and storage by providing native compression to allow you to efficiently store your time-series data without worry about duplicated fields alongside your measurements.
Despite being implemented in a different way from the collections you've used before, to optimize for time-stamped documents, it's important to remember that you can still use the MongoDB features you know and love, including things like nesting data within documents, secondary indexes, and the full breadth of analytics and data transformation functions within the aggregation framework, including joining data from other collections, using the `$lookup` operator, and creating materialized views using `$merge`.
## How to Create a Time-Series Collection
### All It Takes is Time
Creating a time series collection is straightforward: all it takes is a field in your data that corresponds to time. Just pass the new "timeseries" option to the createCollection command and you're off and running. However, before we get too far ahead, let's walk through just how to do this and all of the options that allow you to optimize time series collections.
Throughout this post, we'll show you how to create a time series collection to store documents that look like the following:
```js
{
"_id" : ObjectId("60c0d44894c10494260da31e"),
"source" : {sensorId: 123, region: "americas"},
"airPressure" : 99 ,
"windSpeed" : 22,
"temp" : { "degreesF": 39,
"degreesC": 3.8
},
"ts" : ISODate("2021-05-20T10:24:51.303Z")
}
```
As mentioned before, a time series collection can be created with just a simple time field. In order to store documents like this in a time series collection, we can pass the following to the *createCollection* command:
```js
db.createCollection("weather", {
timeseries: {
timeField: "ts",
},
});
```
You probably won't be surprised to learn that the timeField option declares the name of the field in your documents that stores the time. In the example above, "ts" is the name of the timeField. The value of the field specified by timeField must be a date type.
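For instance, inserting a measurement matching the sample document above could look like this in mongosh (note that the value of `ts` is a proper date):

```js
db.weather.insertOne({
  source: { sensorId: 123, region: "americas" },
  airPressure: 99,
  windSpeed: 22,
  temp: { degreesF: 39, degreesC: 3.8 },
  ts: new Date("2021-05-20T10:24:51.303Z")
});
```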
Pretty fast right? While timeseries collections only require a timeField, there are other optional parameters that can be specified at creation or in some cases at modification time which will allow you to get the most from your data and time series collections. Those optional parameters are metaField, granularity, and expireAfterSeconds.
### metaField
While not a required parameter, metaField allows for better optimization when specified, including the ability to create secondary indexes.
```js
db.createCollection("weather", {
timeseries: {
timeField: "ts",
metaField: "source",
}});
```
In the example above, the metaField would be the "source" field:
```js
"source" : {sensorId: 123, region: "americas"}
```
This is an object consisting of key-value pairs which describe our time-series data. In this example, an identifying ID and location for a sensor collecting weather data.
The metaField field can be a complicated document with nested fields, an object, or even simply a single GUID or string. The important point here is that the metaField is really just metadata which serves as a label or tag which allows you to uniquely identify the source of a time-series, and this field should never or rarely change over time.
It is recommended to always specify a metaField, but you would especially want to use this when you have multiple sources of data such as sensors or devices that share common measurements.
The metaField, if present, should partition the time-series data, so that measurements with the same metadata relate over time. Measurements with a common metaField for periods of time will be grouped together internally to eliminate the duplication of this field at the storage layer. The order of metadata fields is ignored in order to accommodate drivers and applications representing objects as unordered maps. Two metadata fields with the same contents but different order are considered to be identical.
As with the timeField, the metaField is specified as the top-level field name when creating a collection. However, the metaField can be of any BSON data type except *array* and cannot match the timeField required by timeseries collections. When specifying the metaField, specify the top level field name as a string no matter its underlying structure or data type.
Data in the same time period and with the same metaField will be colocated on disk/SSD, so choice of metaField field can affect query performance.
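As a sketch of what that looks like in practice, you could create a secondary index on a metaField subfield and the timeField to support queries for a single sensor over a time range:

```js
db.weather.createIndex({ "source.sensorId": 1, ts: 1 });

// A query for one sensor over a time range can then use this index.
db.weather.find({
  "source.sensorId": 123,
  ts: { $gte: ISODate("2021-05-20T00:00:00Z") }
});
```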
### Granularity
The granularity parameter represents a string with the following options:
- "seconds"
- "minutes"
- "hours"
```js
db.createCollection("weather", {
timeseries: {
timeField: "ts",
metaField: "source",
granularity: "minutes",
},
});
```
Granularity should be set to the unit that is closest to the rate of ingestion for a unique metaField value. So, for example, if the collection described above is expected to receive a measurement every 5 minutes from a single source, you should use the "minutes" granularity, because source has been specified as the metaField.
In the first example, where only the timeField was specified and no metaField was identified (try to avoid this!), the granularity would need to be set relative to the *total* rate of ingestion, across all sources.
The granularity should be thought about in relation to your metadata ingestion rate, not just your overall ingestion rate. Specifying an appropriate value allows the time series collection to be optimized for your usage.
By default, MongoDB defines the granularity to be "seconds", indicative of a high-frequency ingestion rate or where no metaField is specified.
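If your ingestion rate turns out to be slower than expected, the granularity can be coarsened after creation with collMod (it can be increased, but not decreased). A minimal mongosh sketch, assuming the weather collection created above:

```js
// Move the weather collection from "minutes" to "hours" granularity.
db.runCommand({
  collMod: "weather",
  timeseries: { granularity: "hours" }
});
```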
### expireAfterSeconds
Time series data often grows at very high rates and becomes less useful as it ages. Much like last week's leftovers or expired milk, you will want to manage your data lifecycle, and often that takes the form of expiring old data.
Just like TTL indexes, time series collections allow you to manage your data lifecycle with the ability to automatically delete old data at a specified interval in the background. However, unlike TTL indexes on regular collections, time series collections do not require you to create an index to do this.
Simply specify your retention rate in seconds during creation time, as seen below, or modify it at any point in time after creation with collMod.
```js
db.createCollection("weather", {
timeseries: {
timeField: "ts",
metaField: "source",
granularity: "minutes"
},
expireAfterSeconds: 9000
});
```
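And, as mentioned, the retention period isn't fixed at creation time. Here is a minimal mongosh sketch that changes it later with collMod, again assuming the weather collection above:

```js
// Change the automatic expiry of documents in the weather collection to 7 days.
db.runCommand({
  collMod: "weather",
  expireAfterSeconds: 604800
});
```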
The expiry of data is only one of the ways MongoDB natively offers to manage your data lifecycle. In a future post, we will discuss ways to automatically archive your data and efficiently read data stored in multiple locations for long periods of time using MongoDB Online Archive.
### Putting it all Together
Putting it all together, we’ve walked you through how to create a timeseries collection and the different options you can and should specify to get the most out of your data.
```js
{
"_id" : ObjectId("60c0d44894c10494260da31e"),
"source" : {sensorId: 123, region: "americas"},
"airPressure" : 99 ,
"windSpeed" : 22,
"temp" : { "degreesF": 39,
"degreesC": 3.8
},
"ts" : ISODate("2021-05-20T10:24:51.303Z")
}
```
The above document can now be efficiently stored and accessed from a time series collection using the below createCollection command.
```js
db.createCollection("weather", {
timeseries: {
timeField: "ts",
metaField: "source",
granularity: "minutes"
},
expireAfterSeconds: 9000
});
```
While this is just an example, your document can look like nearly anything. Your schema is your choice to make, with the freedom of not needing to worry about how that data is compressed and persisted to disk. Optimizations will be made automatically and natively for you.
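Writing to the collection is also no different from writing to any other collection. A minimal mongosh sketch that inserts the example measurement above (the _id is generated automatically when omitted):

```js
db.weather.insertOne({
  source: { sensorId: 123, region: "americas" },
  airPressure: 99,
  windSpeed: 22,
  temp: { degreesF: 39, degreesC: 3.8 },
  ts: ISODate("2021-05-20T10:24:51.303Z")
});
```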
## Limitations of Time Series Collections in MongoDB 5.0
In the initial MongoDB 5.0 release of time series collections, there are some limitations. The most notable of these is that time series collections are considered append-only, so we do not have support at the abstraction level for update and/or delete operations. Update and/or delete operations can still be performed on time series collections, but they must go directly to the collection stored on disk using the optimized storage format, and a user must have the proper permissions to perform these operations.
In addition to the append-only nature, in the initial release, time series collections will not work with Change Streams, Realm Sync, or Atlas Search. Lastly, time series collections allow for the creation of secondary indexes, as discussed above. However, these secondary indexes can only be defined on the metaField and/or timeField.
For a full list of limitations, please consult the official MongoDB documentation page.
While we know some of these limitations may be impactful to your current use case, we promise we're working on this right now and would love for you to provide your feedback!
## Next Steps
Now that you know what time series data is, and when and how to create a time series collection and set its parameters, why don't you go create one now? Our next blog post will go into more detail on how to optimize your time series collection for specific use cases.
You may be interested in migrating to a time series collection from an existing collection! We'll be covering this in a later post, but in the meantime, you should check out the official documentation for a list of migration tools and examples.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn all about MongoDB's new time series collection type! This post will teach you what time series data looks like, and how to best configure time series collections to store your time series data.",
"contentType": "News & Announcements"
} | MongoDB's New Time Series Collections | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/building-a-mobile-chat-app-using-realm-new-way | created | # Building a Mobile Chat App Using Realm – The New and Easier Way
In my last post, I walked through how to integrate Realm into a mobile chat app in Building a Mobile Chat App Using Realm – Integrating Realm into Your App. Since then, the Realm engineering team has been busy, and Realm-Swift 10.6 introduced new features that make the SDK way more "SwiftUI-native." For developers, that makes integrating Realm into SwiftUI views much simpler and more robust. This article steps through building the same chat app using these new features. Everything in Building a Mobile Chat App Using Realm – Integrating Realm into Your App still works, and it's the best starting point if you're building an app with UIKit rather than SwiftUI.
Both of these articles follow-up on Building a Mobile Chat App Using Realm – Data Architecture. Read that post first if you want to understand the Realm data/partitioning architecture and the decisions behind it.
This article targets developers looking to build the Realm mobile database into their SwiftUI mobile apps and use MongoDB Atlas Device Sync.
If you've already read Building a Mobile Chat App Using Realm – Integrating Realm into Your App, then you'll find some parts unchanged here. As an example, there are no changes to the backend Realm application. I'll label those sections with "Unchanged" so that you know it's safe to skip over them.
RChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. This version is an iOS (Swift and SwiftUI) app, but we will use the same data model and backend Realm application to build an Android version in the future.
If you're looking to add a chat feature to your mobile app, you can repurpose the article's code and the associated repo. If not, treat it as a case study that explains the reasoning behind the data model and partitioning/syncing decisions taken. You'll likely need to make similar design choices in your apps.
>
>
>Watch this demo of the app in action.
>
>:youtube[]{vid=BlV9El_MJqk}
>
>
>
>
>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.
>
>
## Prerequisites
If you want to build and run the app for yourself, this is what you'll need:
- iOS14.2+
- XCode 12.3+
- Realm-Swift 10.6+ (recommended to use the Swift Package Manager (SPM) rather than Cocoa Pods)
- MongoDB Atlas account and a (free) Atlas cluster
## Walkthrough
The iOS app uses MongoDB Atlas Device Sync to share data between instances of the app (e.g., the messages sent between users). This walkthrough covers both the iOS code and the backend Realm app needed to make it work. Remember that all of the code for the final app is available in the GitHub repo.
### Create a Backend Atlas App (Unchanged)
From the Atlas UI, select the "App Services" tab (formerly "Realm"). Select the options to indicate that you're creating a new iOS mobile app and then click "Start a New App".
Name the app "RChat" and click "Create Application".
Copy the "App ID." You'll need to use this in your iOS app code:
### Connect iOS App to Your App (Unchanged)
The SwiftUI entry point for the app is RChatApp.swift. This is where you define your link to your Realm application (named `app`) using the App ID from your new backend Atlas App Services app:
``` swift
import SwiftUI
import RealmSwift
let app = RealmSwift.App(id: "rchat-xxxxx") // TODO: Set the Realm application ID
@main
struct RChatApp: SwiftUI.App {
@StateObject var state = AppState()
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(state)
}
}
}
```
Note that we created an instance of AppState and pass it into our top-level view (ContentView) as an `environmentObject`. This is a common SwiftUI pattern for making state information available to every view without the need to explicitly pass it down every level of the view hierarchy:
``` swift
import SwiftUI
import RealmSwift
let app = RealmSwift.App(id: "rchat-xxxxx") // TODO: Set the Realm application ID
@main
struct RChatApp: SwiftUI.App {
@StateObject var state = AppState()
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(state)
}
}
}
```
### Realm Model Objects
These are largely as described in Building a Mobile Chat App Using Realm – Data Architecture. I'll highlight some of the key changes using the User Object class as an example:
``` swift
import Foundation
import RealmSwift
class User: Object, ObjectKeyIdentifiable {
@Persisted var _id = UUID().uuidString
@Persisted var partition = "" // "user=_id"
@Persisted var userName = ""
@Persisted var userPreferences: UserPreferences?
@Persisted var lastSeenAt: Date?
@Persisted var conversations = List<Conversation>()
@Persisted var presence = "Off-Line"
override static func primaryKey() -> String? {
return "_id"
}
}
```
`User` now conforms to Realm-Cocoa's `ObjectKeyIdentifiable` protocol, automatically adding identifiers to each instance that are used by SwiftUI (e.g., when iterating over results in a `ForEach` loop). It's like `Identifiable` but integrated into Realm to handle events such as Atlas Device Sync adding a new object to a result set or list.
`conversations` is now a `var` rather than a `let`, allowing us to append new items to the list.
### Application-Wide State: AppState
The `AppState` class is so much simpler now. Wherever possible, the opening of a Realm is now handled when opening the view that needs it.
Views can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the AppState class.
A lot is going on in `AppState.swift`, and you can view the full file in the repo.
As part of adopting the latest Realm-Cocoa SDK feature, I no longer need to store open Realms in `AppState` (as Realms are now opened as part of loading the view that needs them). `AppState` contains the `user` attribute to represent the user currently logged into the app (and Realm). If `user` is set to `nil`, then no user is logged in:
``` swift
class AppState: ObservableObject {
...
var user: User?
...
}
```
The app uses the Realm SDK to interact with the back end Atlas App Services application to perform actions such as logging into Realm. Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. Instead, we use Combine publishers and subscribers to handle these events. `loginPublisher`, `logoutPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening Realms for a user:
``` swift
class AppState: ObservableObject {
...
let loginPublisher = PassthroughSubject<RealmSwift.User, Error>()
let logoutPublisher = PassthroughSubject<Void, Error>()
let userRealmPublisher = PassthroughSubject<Realm, Error>()
...
}
```
When an `AppState` class is instantiated, the actions are assigned to each of the Combine publishers:
``` swift
init() {
_ = app.currentUser?.logOut()
initLoginPublisher()
initUserRealmPublisher()
initLogoutPublisher()
}
```
We'll later see that an event is sent to `loginPublisher` when a user has successfully logged in. In `AppState`, we define what should be done when those events are received. Events received on `loginPublisher` trigger the opening of a realm with the partition set to `user=<userId>`, which in turn sends an event to `userRealmPublisher`:
``` swift
func initLoginPublisher() {
loginPublisher
.receive(on: DispatchQueue.main)
.flatMap { user -> RealmPublishers.AsyncOpenPublisher in
self.shouldIndicateActivity = true
let realmConfig = user.configuration(partitionValue: "user=\(user.id)")
return Realm.asyncOpen(configuration: realmConfig)
}
.receive(on: DispatchQueue.main)
.map {
return $0
}
.subscribe(userRealmPublisher)
.store(in: &self.cancellables)
}
```
When the Realm has been opened and the Realm sent to `userRealmPublisher`, `user` is initialized with the `User` object retrieved from the Realm. The user's presence is set to `onLine`:
``` swift
func initUserRealmPublisher() {
userRealmPublisher
.sink(receiveCompletion: { result in
if case let .failure(error) = result {
self.error = "Failed to log in and open user realm: \(error.localizedDescription)"
}
}, receiveValue: { realm in
print("User Realm User file location: \(realm.configuration.fileURL!.path)")
self.userRealm = realm
self.user = realm.objects(User.self).first
do {
try realm.write {
self.user?.presenceState = .onLine
}
} catch {
self.error = "Unable to open Realm write transaction"
}
self.shouldIndicateActivity = false
})
.store(in: &cancellables)
}
```
After logging out of Realm, we simply set `user` to nil:
``` swift
func initLogoutPublisher() {
logoutPublisher
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: { _ in
}, receiveValue: { _ in
self.user = nil
})
.store(in: &cancellables)
}
```
### Enabling Email/Password Authentication in the Atlas App Services App (Unchanged)
After seeing what happens **after** a user has logged into Realm, we need to circle back and enable email/password authentication in the backend Atlas App Services app. Fortunately, it's straightforward to do.
From the Atlas UI, select "Authentication" from the lefthand menu, followed by "Authentication Providers." Click the "Edit" button for "Email/Password":
Enable the provider and select "Automatically confirm users" and "Run a password reset function." Select "New function" and save without making any edits:
Don't forget to click on "REVIEW & DEPLOY" whenever you've made a change to the backend Realm app.
### Create `User` Document on User Registration (Unchanged)
When a new user registers, we need to create a `User` document in Atlas that will eventually synchronize with a `User` object in the iOS app. Atlas provides authentication triggers that can automate this.
Select "Triggers" and then click on "Add a Trigger":
Set the "Trigger Type" to "Authentication," provide a name, set the "Action Type" to "Create" (user registration), set the "Event Type" to "Function," and then select "New Function":
Name the function `createNewUserDocument` and add the code for the function:
``` javascript
exports = function({user}) {
const db = context.services.get("mongodb-atlas").db("RChat");
const userCollection = db.collection("User");
const partition = `user=${user.id}`;
const defaultLocation = context.values.get("defaultLocation");
const userPreferences = {
displayName: ""
};
const userDoc = {
_id: user.id,
partition: partition,
userName: user.data.email,
userPreferences: userPreferences,
location: context.values.get("defaultLocation"),
lastSeenAt: null,
presence:"Off-Line",
conversations: []
};
return userCollection.insertOne(userDoc)
.then(result => {
console.log(`Added User document with _id: ${result.insertedId}`);
}, error => {
console.log(`Failed to insert User document: ${error}`);
});
};
```
Note that we set the `partition` to `user=<userId>`, which matches the partition used when the iOS app opens the User Realm.
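For a newly registered user, the inserted document would look something like the sketch below. The _id and userName values are purely illustrative, and location holds whatever the defaultLocation App Services value contains:

``` javascript
{
  "_id": "60a1c2b3f1a2b3c4d5e6f7a8",
  "partition": "user=60a1c2b3f1a2b3c4d5e6f7a8",
  "userName": "rob@example.com",
  "userPreferences": { "displayName": "" },
  "location": "<defaultLocation value>",
  "lastSeenAt": null,
  "presence": "Off-Line",
  "conversations": []
}
```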
"Save" then "REVIEW & DEPLOY."
### Define Schema (Unchanged)
Refer to Building a Mobile Chat App Using Realm – Data Architecture to better understand the app's schema and partitioning rules. This article skips the analysis phase and just configures the schema.
Browse to the "Rules" section in the App Services UI and click on "Add Collection." Set "Database Name" to `RChat` and "Collection Name" to `User`. We won't be accessing the `User` collection directly through App Services, so don't select a "Permissions Template." Click "Add Collection":
At this point, I'll stop reminding you to click "REVIEW & DEPLOY!"
Select "Schema," paste in this schema, and then click "SAVE":
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"conversations": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"displayName": {
"bsonType": "string"
},
"id": {
"bsonType": "string"
},
"members": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"membershipStatus": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required":
"membershipStatus",
"userName"
],
"title": "Member"
}
},
"unreadCount": {
"bsonType": "long"
}
},
"required": [
"unreadCount",
"id",
"displayName"
],
"title": "Conversation"
}
},
"lastSeenAt": {
"bsonType": "date"
},
"partition": {
"bsonType": "string"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
},
"userPreferences": {
"bsonType": "object",
"properties": {
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
}
},
"required": [],
"title": "UserPreferences"
}
},
"required": [
"_id",
"partition",
"userName",
"presence"
],
"title": "User"
}
```
Repeat for the `Chatster` schema:
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
},
"lastSeenAt": {
"bsonType": "date"
},
"partition": {
"bsonType": "string"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required": [
"_id",
"partition",
"presence",
"userName"
],
"title": "Chatster"
}
```
And for the `ChatMessage` collection:
``` javascript
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"author": {
"bsonType": "string"
},
"image": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
},
"partition": {
"bsonType": "string"
},
"text": {
"bsonType": "string"
},
"timestamp": {
"bsonType": "date"
}
},
"required": [
"_id",
"partition",
"text",
"timestamp"
],
"title": "ChatMessage"
}
```
### Enable Atlas Device Sync (Unchanged)
We use Atlas Device Sync to synchronize objects between instances of the iOS app (and we'll extend this app also to include Android). It also syncs those objects with Atlas collections. Note that there are three options to create a schema:
1. Manually code the schema as a JSON schema document.
2. Derive the schema from existing data stored in Atlas. (We don't yet have any data and so this isn't an option here.)
3. Derive the schema from the Realm objects used in the mobile app.
We've already specified the schema and so will stick to the first option.
Select "Sync" and then select your Atlas cluster. Set the "Partition Key" to the `partition` attribute (it appears in the list as it's already in the schema for all three collections), and the rules for whether a user can sync with a given partition:
The "Read" rule controls whether a user can establish a one-way read-only sync relationship to the mobile app for a given user and partition. In this case, the rule delegates this to an Atlas Function named `canReadPartition`:
``` json
{
"%%true": {
"%function": {
"arguments": [
"%%partition"
],
"name": "canReadPartition"
}
}
}
```
The "Write" rule delegates to the `canWritePartition`:
``` json
{
"%%true": {
"%function": {
"arguments": [
"%%partition"
],
"name": "canWritePartition"
}
}
}
```
Once more, we've already seen those functions in Building a Mobile Chat App Using Realm – Data Architecture, but I'll include the code here for completeness.
canReadPartition:
``` javascript
exports = function(partition) {
console.log(`Checking if can sync a read for partition = ${partition}`);
const db = context.services.get("mongodb-atlas").db("RChat");
const chatsterCollection = db.collection("Chatster");
const userCollection = db.collection("User");
const chatCollection = db.collection("ChatMessage");
const user = context.user;
let partitionKey = "";
let partitionVale = "";
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
partitionKey = splitPartition[0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return false;
}
switch (partitionKey) {
case "user":
console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionValue === user.id}`);
return partitionValue === user.id;
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
console.log(`Checking if conversation.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
case "all-users":
console.log(`Any user can read all-users partitions`);
return true;
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
};
```
canWritePartition:
``` javascript
exports = function(partition) {
console.log(`Checking if can sync a write for partition = ${partition}`);
const db = context.services.get("mongodb-atlas").db("RChat");
const chatsterCollection = db.collection("Chatster");
const userCollection = db.collection("User");
const chatCollection = db.collection("ChatMessage");
const user = context.user;
let partitionKey = "";
let partitionVale = "";
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
partitionKey = splitPartition[0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return false;
}
switch (partitionKey) {
case "user":
console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionValue === user.id}`);
return partitionValue === user.id;
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
console.log(`Checking if conversation.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
case "all-users":
console.log(`No user can write to an all-users partitions`);
return false;
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
};
```
To create these functions, select "Functions" and click "Create New Function." Make sure you type the function name precisely, set "Authentication" to "System," and turn on the "Private" switch (which means it can't be called directly from external services such as our mobile app):
### Linking User and Chatster Documents (Unchanged)
As described in Building a Mobile Chat App Using Realm – Data Architecture, there are relationships between different `User` and `Chatster` documents. Now that we've defined the schemas and enabled Device Sync, it's convenient to add the Atlas Function and Trigger to maintain those relationships.
Create a Function named `userDocWrittenTo`, set "Authentication" to "System," and make it private. This article is aiming to focus on the iOS app more than the backend app, and so we won't delve into this code:
``` javascript
exports = function(changeEvent) {
const db = context.services.get("mongodb-atlas").db("RChat");
const chatster = db.collection("Chatster");
const userCollection = db.collection("User");
const docId = changeEvent.documentKey._id;
const user = changeEvent.fullDocument;
let conversationsChanged = false;
console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);
switch (changeEvent.operationType) {
case "insert":
case "replace":
case "update":
console.log(`Writing data for ${user.userName}`);
let chatsterDoc = {
_id: user._id,
partition: "all-users=all-the-users",
userName: user.userName,
lastSeenAt: user.lastSeenAt,
presence: user.presence
};
if (user.userPreferences) {
const prefs = user.userPreferences;
chatsterDoc.displayName = prefs.displayName;
if (prefs.avatarImage && prefs.avatarImage._id) {
console.log(`Copying avatarImage`);
chatsterDoc.avatarImage = prefs.avatarImage;
console.log(`id of avatarImage = ${prefs.avatarImage._id}`);
}
}
chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })
.then (() => {
console.log(`Wrote Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);
});
if (user.conversations && user.conversations.length > 0) {
for (i = 0; i < user.conversations.length; i++) {
let membersToAdd = [];
if (user.conversations[i].members.length > 0) {
for (j = 0; j < user.conversations[i].members.length; j++) {
if (user.conversations[i].members[j].membershipStatus == "User added, but invite pending") {
membersToAdd.push(user.conversations[i].members[j].userName);
user.conversations[i].members[j].membershipStatus = "Membership active";
conversationsChanged = true;
}
}
}
if (membersToAdd.length > 0) {
userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})
.then (result => {
console.log(`Updated ${result.modifiedCount} other User documents`);
}, error => {
console.log(`Failed to copy new conversation to other users: ${error}`);
});
}
}
}
if (conversationsChanged) {
userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});
}
break;
case "delete":
chatster.deleteOne({_id: docId})
.then (() => {
console.log(`Deleted Chatster document for _id: ${docId}`);
}, error => {
console.log(`Failed to delete Chatster document for _id=${docId}: ${error}`);
});
break;
}
};
```
Set up a database trigger to execute the new function whenever anything in the `User` collection changes:
### Registering and Logging in from the iOS App
This section is virtually unchanged. As part of using the new Realm SDK features, there is now less in `AppState` (including fewer publishers), and so fewer attributes need to be set up as part of the login process.
We've now created enough of the backend app that mobile apps can register new Realm users and use them to log into the app.
The app's top-level SwiftUI view is ContentView, which decides which sub-view to show based on whether our `AppState` environment object indicates that a user is logged in or not:
``` swift
@EnvironmentObject var state: AppState
...
if state.loggedIn {
if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {
SetProfileView(isPresented: $showingProfileView)
.environment(\.realmConfiguration, app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
} else {
ConversationListView()
.environment(\.realmConfiguration, app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
.navigationBarTitle("Chats", displayMode: .inline)
.navigationBarItems(
trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(
photo: state.user?.userPreferences?.avatarImage,
online: true) { showingProfileView.toggle() } : nil
)
}
} else {
LoginView()
}
...
```
When first run, no user is logged in, and so `LoginView` is displayed.
Note that `AppState.loggedIn` checks whether a user is currently logged into the Realm `app`:
``` swift
var loggedIn: Bool {
app.currentUser != nil && user != nil && app.currentUser?.state == .loggedIn
}
```
The UI for LoginView contains cells to provide the user's email address and password, a radio button to indicate whether this is a new user, and a button to register or log in a user:
Clicking the button executes one of two functions:
``` swift
...
CallToActionButton(
title: newUser ? "Register User" : "Log In",
action: { self.userAction(username: self.username, password: self.password) })
...
private func userAction(username: String, password: String) {
state.shouldIndicateActivity = true
if newUser {
signup(username: username, password: password)
} else {
login(username: username, password: password)
}
}
```
`signup` makes an asynchronous call to the Realm SDK to register the new user. Through a Combine pipeline, `signup` receives an event when the registration completes, which triggers it to invoke the `login` function:
``` swift
private func signup(username: String, password: String) {
if username.isEmpty || password.isEmpty {
state.shouldIndicateActivity = false
return
}
self.state.error = nil
app.emailPasswordAuth.registerUser(email: username, password: password)
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
login(username: username, password: password)
})
.store(in: &state.cancellables)
}
```
The `login` function uses the Realm SDK to log in the user asynchronously. If/when the Realm login succeeds, the Combine pipeline sends the Realm user to the `loginPublisher` publisher (recall that we've seen how that's handled within the `AppState` class):
``` swift
private func login(username: String, password: String) {
if username.isEmpty || password.isEmpty {
state.shouldIndicateActivity = false
return
}
self.state.error = nil
app.login(credentials: .emailPassword(email: username, password: password))
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
state.loginPublisher.send($0)
})
.store(in: &state.cancellables)
}
```
### Saving the User Profile
On being logged in for the first time, the user is presented with SetProfileView. (They can also return here later by clicking on their avatar.) This is a SwiftUI sheet where the user can set their profile and preferences by interacting with the UI and then clicking "Save User Profile":
When the view loads, the UI is populated with any existing profile information found in the `User` object in the `AppState` environment object:
``` swift
...
@EnvironmentObject var state: AppState
...
.onAppear { initData() }
...
private func initData() {
displayName = state.user?.userPreferences?.displayName ?? ""
photo = state.user?.userPreferences?.avatarImage
}
```
As the user updates the UI elements, the Realm `User` object isn't changed. It's not until they click "Save User Profile" that we update the `User` object. `state.user` is an object that's being managed by Realm, and so it must be updated within a Realm transaction. Using one of the new Realm SDK features, the Realm for this user's partition is made available in `SetProfileView` by injecting it into the environment from `ContentView`:
``` swift
SetProfileView(isPresented: $showingProfileView)
.environment(\.realmConfiguration,
app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
```
`SetProfileView` receives `userRealm` through the environment and uses it to create a transaction (the `userRealm.write` call below):
``` swift
...
@EnvironmentObject var state: AppState
@Environment(\.realm) var userRealm
...
CallToActionButton(title: "Save User Profile", action: saveProfile)
...
private func saveProfile() {
state.shouldIndicateActivity = true
do {
try userRealm.write {
state.user?.userPreferences?.displayName = displayName
if photoAdded {
guard let newPhoto = photo else {
print("Missing photo")
state.shouldIndicateActivity = false
return
}
state.user?.userPreferences?.avatarImage = newPhoto
}
state.user?.presenceState = .onLine
}
} catch {
state.error = "Unable to open Realm write transaction"
}
}
```
Once saved to the local Realm, Device Sync copies changes made to the `User` object to the associated `User` document in Atlas.
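If you want to see that round trip for yourself, you can check the synced document on the Atlas side. This is a mongosh sketch that assumes you're connected to the cluster backing the app; substitute the actual Realm user id for the placeholder:

``` javascript
// The User document in RChat.User should now reflect the saved profile.
db.getSiblingDB("RChat")
  .getCollection("User")
  .findOne(
    { _id: "<realm user id>" }, // placeholder - use the real user id
    { "userPreferences.displayName": 1, presence: 1 }
  );
```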
### List of Conversations
Once the user has logged in and set up their profile information, they're presented with the `ConversationListView`. Again, we use the new SDK feature to implicitly open the Realm for this user partition and pass it through the environment from `ContentView`:
``` swift
if state.loggedIn {
if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {
SetProfileView(isPresented: $showingProfileView)
.environment(\.realmConfiguration,
app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
} else {
ConversationListView()
.environment(\.realmConfiguration,
app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
.navigationBarTitle("Chats", displayMode: .inline)
.navigationBarItems(
trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(
photo: state.user?.userPreferences?.avatarImage,
online: true) { showingProfileView.toggle() } : nil
)
}
} else {
LoginView()
}
```
ConversationListView receives the Realm through the environment and then uses another new Realm SDK feature (`@ObservedResults`) to set `users` to be a live result set of all `User` objects in the partition (as each user has their own partition, there will be exactly one `User` document in `users`):
``` swift
@ObservedResults(User.self) var users
```
ConversationListView displays a list of all the conversations that the user is currently a member of (initially none) by looping over `conversations` within their `User` Realm object:
``` swift
@ObservedResults(User.self) var users
...
private let sortDescriptors = [
SortDescriptor(keyPath: "unreadCount", ascending: false),
SortDescriptor(keyPath: "displayName", ascending: true)
]
...
if let conversations = users[0].conversations.sorted(by: sortDescriptors) {
List {
ForEach(conversations) { conversation in
Button(action: {
self.conversation = conversation
showConversation.toggle()
}) { ConversationCardView(conversation: conversation, isPreview: isPreview) }
}
}
...
}
```
At any time, another user can include you in a new group conversation. This view needs to reflect those changes as they happen:
When the other user adds us to a conversation, our `User` document is updated automatically through the magic of Atlas Device Sync and our Atlas Trigger. Prior to Realm-Cocoa 10.6, we needed to observe the Realm and trick SwiftUI into refreshing the view when changes were received. The Realm/SwiftUI integration now refreshes the view automatically.
### Creating New Conversations
When you click on the new conversation button in `ConversationListView`, a SwiftUI sheet is activated to host `NewConversationView`. This time, we implicitly open and pass in the `Chatster` Realm (for the universal partition `all-users=all-the-users`):
``` swift
.sheet(isPresented: $showingAddChat) {
NewConversationView()
.environmentObject(state)
.environment(\.realmConfiguration, app.currentUser!.configuration(partitionValue: "all-users=all-the-users"))
```
NewConversationView creates a live Realm result set (`chatsters`) from the Realm passed through the environment:
``` swift
@ObservedResults(Chatster.self) var chatsters
```
`NewConversationView` is similar to `SetProfileView` in that it lets the user provide a number of details which are then saved to Realm when the "Save" button is tapped.
In order to use the "Realm injection" approach, we now need to delegate the saving of the `User` object to another view (`NewConversationView` received the `Chatster` Realm but the updated `User` object needs be saved in a transaction for the `User` Realm):
``` swift
SaveConversationButton(name: name, members: members, done: { presentationMode.wrappedValue.dismiss() })
.environment(\.realmConfiguration,
app.currentUser!.configuration(partitionValue: "user=\(state.user?._id ?? "")"))
```
Something that we haven't covered yet is applying a filter to the live Realm search results. Here we filter on the `userName` within the Chatster objects:
``` swift
@ObservedResults(Chatster.self) var chatsters
...
private func searchUsers() {
var candidateChatsters: Results<Chatster>
if candidateMember == "" {
candidateChatsters = chatsters
} else {
let predicate = NSPredicate(format: "userName CONTAINS[cd] %@", candidateMember)
candidateChatsters = chatsters.filter(predicate)
}
candidateMembers = []
candidateChatsters.forEach { chatster in
if !members.contains(chatster.userName) && chatster.userName != state.user?.userName {
candidateMembers.append(chatster.userName)
}
}
}
```
### Conversation Status (Unchanged)
When the status of a conversation changes (users go online/offline or new messages are received), the card displaying the conversation details should update.
We already have a Function to set the `presence` status in `Chatster` documents/objects when users log on or off. All `Chatster` objects are readable by all users, and so ConversationCardContentsView can already take advantage of that information.
The `conversation.unreadCount` is part of the `User` object, and so we need another Atlas Trigger to update that whenever a new chat message is posted to a conversation.
We add a new Atlas Function `chatMessageChange` that's configured as private and with "System" authentication (just like our other functions). This is the function code that will increment the `unreadCount` for all `User` documents for members of the conversation:
``` javascript
exports = function(changeEvent) {
if (changeEvent.operationType != "insert") {
console.log(`ChatMessage ${changeEvent.operationType} event – currently ignored.`);
return;
}
console.log(`ChatMessage Insert event being processed`);
let userCollection = context.services.get("mongodb-atlas").db("RChat").collection("User");
let chatMessage = changeEvent.fullDocument;
let conversation = "";
if (chatMessage.partition) {
const splitPartition = chatMessage.partition.split("=");
if (splitPartition.length == 2) {
conversation = splitPartition[1];
console.log(`Partition/conversation = ${conversation}`);
} else {
console.log("Couldn't extract the conversation from partition ${chatMessage.partition}");
return;
}
} else {
console.log("partition not set");
return;
}
const matchingUserQuery = {
conversations: {
$elemMatch: {
id: conversation
}
}
};
const updateOperator = {
$inc: {
"conversations.$[element].unreadCount": 1
}
};
const arrayFilter = {
arrayFilters:[
{
"element.id": conversation
}
]
};
userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)
.then ( result => {
console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);
}, error => {
console.log(`Failed to match and update User docs: ${error}`);
});
};
```
That function should be invoked by a new database trigger (`ChatMessageChange`) to fire whenever a document is inserted into the `RChat.ChatMessage` collection.
### Within the Chat Room
ChatRoomView has a lot of similarities with `ConversationListView`, but with one fundamental difference. Each conversation/chat room has its own partition, and so when opening a conversation, you need to open a new Realm. Again, we use the new SDK feature to open and pass in the Realm for the appropriate conversation partition:
``` swift
ChatRoomBubblesView(conversation: conversation)
.environment(\.realmConfiguration, app.currentUser!.configuration(partitionValue: "conversation=\(conversation.id)"))
```
If you worked through Building a Mobile Chat App Using Realm – Integrating Realm into Your App, then you may have noticed that I had to introduce an extra view layer—`ChatRoomBubblesView`—in order to open the Conversation Realm. This is because you can only pass in a single Realm through the environment, and `ChatRoomView` needed the User Realm. On the plus side, we no longer need all of the boilerplate code to explicitly open the Realm from the view's `onAppear` method.
ChatRoomBubblesView sorts the Realm result set by timestamp (we want the most recent chat message to appear at the bottom of the List):
``` swift
@ObservedResults(ChatMessage.self,
sortDescriptor: SortDescriptor(keyPath: "timestamp", ascending: true)) var chats
```
The Realm/SwiftUI integration means that the UI will automatically refresh whenever a new chat message is added to the Realm, but I also want to scroll to the bottom of the list so that the latest message is visible. We can achieve this by monitoring the Realm. Note that we only open a `Conversation` Realm when the user opens the associated view because having too many realms open concurrently can exhaust resources. It's also important that we stop observing the Realm by setting it to `nil` when leaving the view:
``` swift
@State private var realmChatsNotificationToken: NotificationToken?
@State private var latestChatId = ""
...
ScrollView(.vertical) {
ScrollViewReader { (proxy: ScrollViewProxy) in
VStack {
ForEach(chats) { chatMessage in
ChatBubbleView(chatMessage: chatMessage,
authorName: chatMessage.author != state.user?.userName ? chatMessage.author : nil,
isPreview: isPreview)
}
}
.onAppear {
scrollToBottom()
withAnimation(.linear(duration: 0.2)) {
proxy.scrollTo(latestChatId, anchor: .bottom)
}
}
.onChange(of: latestChatId) { target in
withAnimation {
proxy.scrollTo(target, anchor: .bottom)
}
}
}
}
...
.onAppear { loadChatRoom() }
.onDisappear { closeChatRoom() }
...
private func loadChatRoom() {
scrollToBottom()
realmChatsNotificationToken = chats.thaw()?.observe { _ in
scrollToBottom()
}
}
private func closeChatRoom() {
if let token = realmChatsNotificationToken {
token.invalidate()
}
}
private func scrollToBottom() {
latestChatId = chats.last?._id ?? ""
}
```
Note that we clear the notification token when leaving the view, ensuring that resources aren't wasted.
To send a message, all the app needs to do is to add the new chat message to Realm. Atlas Device Sync will then copy it to Atlas, where it is then synced to the other users. Note that we no longer need to explicitly open a Realm transaction to append the new chat message to the Realm that was received through the environment:
``` swift
@ObservedResults(ChatMessage.self, sortDescriptor: SortDescriptor(keyPath: "timestamp", ascending: true)) var chats
...
private func sendMessage(chatMessage: ChatMessage) {
guard let conversationString = conversation else {
print("conversation not set")
return
}
chatMessage.conversationId = conversationString.id
$chats.append(chatMessage)
}
```
## Summary
Since the release of Building a Mobile Chat App Using Realm – Integrating Realm into Your App, Realm-Swift 10.6 added new features that make working with Realm and SwiftUI simpler. Simply by passing the Realm configuration through the environment, the Realm is opened and made available to the view, and that view can go on to make updates without explicitly starting a transaction. This article has shown how those new features can be used to simplify your code. It has gone through the key steps you need to take when building a mobile app using Realm, including:
- Managing the user lifecycle: registering, authenticating, logging in, and logging out.
- Managing and storing user profile information.
- Adding objects to Realm.
- Performing searches on Realm data.
- Syncing data between your mobile apps and with MongoDB Atlas.
- Reacting to data changes synced from other devices.
- Adding some backend magic using Atlas Triggers and Functions.
We've skipped a lot of code and functionality in this article, and it's worth looking through the rest of the app to see how to use features such as these from a SwiftUI iOS app:
- Location data
- Maps
- Camera and photo library
- Actions when minimizing your app
- Notifications
We wrote the iOS version of the app first, but we plan on adding an Android (Kotlin) version soon—keep checking the developer hub and the repo for updates.
## References
- GitHub Repo for this app
- Read Building a Mobile Chat App Using Realm – Data Architecture to understand the data model and partitioning strategy behind the RChat app
- Read Building a Mobile Chat App Using Realm – Integrating Realm into Your App if you want to know how to build Realm into your app without using the new SwiftUI features in Realm-Cocoa 10.6 (for example, if you need to use UIKit)
- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine
- GitHub Repo for Realm-Cocoa SDK
- Realm Swift SDK documentation
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Swift",
"Realm",
"iOS",
"Mobile"
],
"pageDescription": "How to incorporate Realm into your iOS App. Building a chat app with SwiftUI and Realm Swift – the new and easier way to work with Realm and SwiftUI",
"contentType": "Code Example"
} | Building a Mobile Chat App Using Realm – The New and Easier Way | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-starlette-stitch | created |
Jinja2 template excerpts for the MongoBnB pages: a property listing page (each property's name, guest capacity, address, summary, nightly price, and cleaning fee, with Details and Book links), a property detail page (the same fields plus the property's amenities), and a booking confirmation page showing the booking confirmation code for the requested property.
| md | {
"tags": [
"Python"
],
"pageDescription": "Learn how to build a property booking website in Python with Starlette, MongoDB, and Twilio.",
"contentType": "Tutorial"
} | Build a Property Booking Website with Starlette, MongoDB, and Twilio | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-api-excel-power-query | created | # Using the Atlas Data API from Excel with Power Query
## Data Science and the Ubiquity of Excel
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
When you ask what tools you should learn to be a data scientist, you will hear names like *Spark, Jupyter notebooks, R, Pandas*, and *Numpy* mentioned. Many enterprise data wranglers, on the other hand, have been using, and continue to use, industry heavyweights like SAS, SPSS, and Matlab as they have for the last few decades.
The truth is, though, that the majority of back-office data science is still performed using the ubiquitous *Microsoft Excel*.
Excel has been the go-to choice for importing, manipulating, analysing, and visualising data for 34 years and has more capabilities and features than most of us would ever believe. It would therefore be wrong to have a series on accessing data in MongoDB with the data API without including how to get data into Excel.
This is also unique in this series of articles in not requiring any imperative coding at all. We will use the Power Query functionality in Excel both to fetch raw data and to push summarization tasks down to MongoDB and retrieve the results.
The MongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas, where a MongoDB driver library is either not available or not desirable. In this article, we will see how a business analyst or other back-office user, who often may not be a professional Developer, can access data from, and record data, in Atlas. The Atlas Data API can easily be used by users, unable to create or configure back-end services, who simply want to work with data in tools they know like Google Sheets or Excel.
## Prerequisites
To access the data API using Power Query in Excel, we will need a version of Excel that supports it. Power Query is only available on the Windows desktop version, not on a Mac or via the browser-based Office 365 version of Excel.
We will also need an Atlas cluster for which we have enabled the data API, and our **endpoint URL** and **API key**. You can learn how to get these in this article or this video if you do not have them already.
A common use-case of Atlas with Microsoft Excel sheets might be to retrieve some subset of business data to analyse or to produce an export for a third party. To demonstrate this, we first need to have some business data available in MongoDB Atlas, this can be added by selecting the three dots next to our cluster name and choosing "Load Sample Dataset" or following instructions here.
## Using Excel Power Query with HTTPS POST Requests
If we open up a new blank Excel workbook and then go to the **Data** ribbon, we can see on the left-hand side an option to get data **From Web**. Unfortunately, Microsoft has chosen in the wizard that this launches, to restrict data retrieval to API's that use *GET* rather than *POST* as the HTTP verb to request data.
> An HTTP GET request is passed all of its data as part of the URL, the values after the website and path encodes additional parts to the request, normally in a simple key-value format. A POST request sends the data as a second part of the request and is not subject to the same length and security limitations a GET has.
HTTP *GET* is used for many simple read-only APIs, but the richness and complexity of queries and aggregations possible using the Atlas Data API do not lend themselves to passing data in a GET rather than in the body of a *POST*, so we are required to use a *POST* request instead.
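For reference, this is roughly the POST request we will build up in Power Query, expressed here as a Node.js sketch (Node 18+ for the built-in fetch; the app ID and API key are placeholders):

``` javascript
// The same Data API find call, made as a plain HTTPS POST.
const response = await fetch(
  "https://data.mongodb-api.com/app/YOUR-APP-ID/endpoint/data/beta/action/find",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "YOUR-API-KEY"
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "sample_airbnb",
      collection: "listingsAndReviews",
      filter: { property_type: "House" }
    })
  }
);
const { documents } = await response.json(); // matching documents come back in a "documents" array
```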
Fortunately, Excel and Power Query do support *POST* requests when creating a query from scratch using what Microsoft calls a **Blank Query**.
To call a web service with a *POST* from Excel, start with a new **Blank Workbook**.
Click on **Data** on the menu bar to show the Data Ribbon. Then click **Get Data** on the far left and choose **From Other Sources->Blank Query**. It's right at the bottom of the ribbon bar dropdown.
We are then presented with the *Query Editor*.
We now need to use the *Advanced Editor* to define our 'JSON' payload, and send it via an HTTP *POST* request. Click **Advanced Editor** on the left to show the existing *Blank* Query.
This has two blocks. The *let* part is a set of transformations to fetch and manipulate data and the *in* part defines what the final data set should be called.
This is using *Power Query M* syntax. To help understand the next steps, let's summarise the syntax for that.
## Power Query M syntax in a nutshell
Power Query M can have constant strings and numbers. Constant strings are denoted by double quotes like "MongoDB." Numbers are just the unquoted number alone, i.e., 5.23. Constants cannot be on the left side of an assignment.
Something not surrounded by quotes is a variable—e.g., *People* or *Source* and can be used either side of an assignment. To allow variable names to contain any character, including spaces, without ambiguity variables can also be declared as a hash symbol followed by double quotes so ` #"Number of runs"` is a variable name, not a constant.
*Power Query M* defines arrays/lists of values as a comma separated list enclosed in braces (a.k.a. curly brackets) so `#"State Names" = { "on", "off", "broken" }` defines a variable called *State Names* as a list of three string values.
*Power Query M* defines *Records* (Dynamic Key->Value mappings) using a comma separated set of `variable=value` statements inside square brackets, for example `Person = [Name="John",Dogs=3]`. These data types can be nested—for example, `Person = [Name="John",Dogs={ [name="Brea",age=10],[name="Harvest",age=5],[name="Bramble",age=1] }]`.
If you are used to pretty much any other programming language, you may find the contrarian syntax of *Power Query M* either amusing or difficult.
## Defining a JSON Object to POST to the Atlas Data API with Power Query M
We can set the value of the variable Source to an explicit JSON object by passing a Power Query M Record to the function Json.FromValue like this.
```
let
postData = Json.FromValue([filter=[property_type="House"],dataSource="Cluster0", database="sample_airbnb",collection="listingsAndReviews"]),
Source = postData
in
Source
```
This is the request we are going to send to the Data API. This request will search the collection *listingsAndReviews* in a Cluster called *Cluster0* for documents where the field *property\_type* equals "*House*".
We paste the code above into the advanced Editor, and verify that there is a green checkmark at the bottom with the words "No syntax errors have been detected," and then we can click **Done**. We see a screen like this.
The small CSV icon in the grey area represents our single JSON Document. Double click it and Power Query will apply a basic transformation to a table with JSON fields as values as shown below.
## Posting payload JSON to the Find Endpoint in Atlas from Excel
To get our results from Atlas, we need to post this payload to our Atlas *API find endpoint* and parse the response. Click **Advanced Editor** again and change the contents to those in the box below changing the value "**data-amzuu**" in the endpoint to match your endpoint and the value of **YOUR-API-KEY** to match your personal API key. You will also need to change **Cluster0** if your database cluster has a different name.
You will notice that two additional steps were added to the Query to convert it to the CSV we saw above. Overwrite these so the box just contains the lines below and click Done.
```
let
postData = Json.FromValue([filter=[property_type="House"],dataSource="Cluster0", database="sample_airbnb",collection="listingsAndReviews"]),
response = Web.Contents( "https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find",
[ Headers = [#"Content-Type" = "application/json",
#"api-key"="YOUR-API-KEY"] ,
Content=postData]),
Source = Json.Document(response)
in
Source
```
You will now see this screen, which is telling us it has retrieved a list of JSON documents.
Before we go further and look at how to parse this result into our worksheet, let us first review the connection we have just set up.
The first line, as before, is defining *postData* as a JSON string containing the payload for the Atlas API.
The next line, seen below, makes an HTTPS call to Atlas by calling the Web.Contents function and puts the return value in the variable *response*.
```
response = Web.Contents(
"https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find",
Headers = [#"Content-Type" = "application/json",
#"api-key"="YOUR-API-KEY"] ,
Content=postData]),
```
The first parameter to *Web.Contents* is our endpoint URL as a string.
The second parameter is a *record* specifying options for the request. We are specifying two options: *Headers* and *Content*.
*Headers* is a *record* used to specify the HTTP Headers to the request. In our case, we specify *Content-Type* and also explicitly include our credentials using a header named *api-key.*
> Ideally, we would use the functionality built into Excel to handle web authentication and not need to include the API key in the query, but Microsoft has disabled this for POST requests out of security concerns with Windows federated authentication (DataSource.Error: Web.Contents with the Content option is only supported when connecting anonymously). We therefore, unfortunately, need to supply it explicitly as a header.
We also specify `Content=postData` , this is what makes this become a POST request rather than a GET request and pass our JSON payload to the HTTP API.
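To make the request easier to visualise, here is roughly the same call expressed as a curl command, using the placeholder endpoint and API key from this article:
```
curl -X POST 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find' \
  -H 'Content-Type: application/json' \
  -H 'api-key: YOUR-API-KEY' \
  -d '{"filter":{"property_type":"House"},"dataSource":"Cluster0","database":"sample_airbnb","collection":"listingsAndReviews"}'
```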
The next line `Source = Json.Document(response)` parses the JSON that gets sent back in the response, creating a Power Query *record* from the JSON data and assigning it to a variable named *Source.*
## Converting documents from MongoDB Atlas into Excel Rows
So, getting back to parsing our returned data, we are now looking at something like this.
The parsed JSON has returned a single record with one value, *documents*, which is a list. In JSON, it would look like this: `{ documents : [ { … }, { … }, { … } ] }`
How do we parse it? The first step is to press the **Into Table** button in the Ribbon bar which converts the record into a *table*.
Now we have a table with one value 'Documents' of type list. We need to break that down.
Right click the second column (**value**) and select **Drill Down** from the menu. As we do each of these stages, we see it being added to the list of transformations in the *Applied Steps* list on the right-hand side.
We now have a list of JSON documents but we want to convert that into rows.
First, we want to right-click on the word **list** in row 1 and select **Drill Down** from the menu again.
Now that we have a set of records, convert them to a table by clicking the **To Table** button and setting the delimiter to **None** in the dialog that appears. We now see a table, but with a single column called *Column1*.
Next, select the small icon at the right-hand end of the column header to choose which columns you want. Select all the columns, then click **OK**.
Finally, click **Close and Load** from the ribbon bar to write the results back to the sheet and save the Query.
## Parameterising Power Queries using JSON Parameters
We hardcoded this to fetch properties of type "House," but what if we want to perform different queries? We can use Excel Power Query parameters to do this.
Select the **Data** Tab on the worksheet. Then, on the left, **Get Data->Launch Power Query Editor**.
From the ribbon of the editor, click **Manage Parameters** to open the parameter editor. Parameters are variables you can edit via the GUI or populate from functions. Click **New** (it's not clear that it is clickable) and rename the new parameter to **Mongo Query**. Set the *type* to **Text** and the *current value* to **{ beds: 2 }**, then click **OK**.
Now select **Query1** again on the left side of the window and click **Advanced Editor** in the ribbon bar. Change the source to match the code below. *Note that we are only changing the postData line.*
```
let
postData = Json.FromValue([filter=Json.Document(#"Mongo Query"),dataSource="Cluster0", database="sample_airbnb",collection="listingsAndReviews"]),
response = Web.Contents("https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find",
[ Headers = [#"Content-Type" = "application/json",
#"api-key"= "YOUR-API-KEY"] , Content=postData]),
Source = Json.Document(response),
documents = Source[documents],
#"Converted to Table" = Table.FromList(documents, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"_id", "listing_url", "name", "summary", "space", "description", "neighborhood_overview", "notes", "transit", "access", "interaction", "house_rules", "property_type", "room_type", "bed_type", "minimum_nights", "maximum_nights", "cancellation_policy", "last_scraped", "calendar_last_scraped", "first_review", "last_review", "accommodates", "bedrooms", "beds", "number_of_reviews", "bathrooms", "amenities", "price", "security_deposit", "cleaning_fee", "extra_people", "guests_included", "images", "host", "address", "availability", "review_scores", "reviews"}, {"Column1._id", "Column1.listing_url", "Column1.name", "Column1.summary", "Column1.space", "Column1.description", "Column1.neighborhood_overview", "Column1.notes", "Column1.transit", "Column1.access", "Column1.interaction", "Column1.house_rules", "Column1.property_type", "Column1.room_type", "Column1.bed_type", "Column1.minimum_nights", "Column1.maximum_nights", "Column1.cancellation_policy", "Column1.last_scraped", "Column1.calendar_last_scraped", "Column1.first_review", "Column1.last_review", "Column1.accommodates", "Column1.bedrooms", "Column1.beds", "Column1.number_of_reviews", "Column1.bathrooms", "Column1.amenities", "Column1.price", "Column1.security_deposit", "Column1.cleaning_fee", "Column1.extra_people", "Column1.guests_included", "Column1.images", "Column1.host", "Column1.address", "Column1.availability", "Column1.review_scores", "Column1.reviews"})
in
#"Expanded Column1"
```
What we have done is make *postData* take the value in the *Mongo Query* parameter, and parse it as JSON. This lets us create arbitrary filters by specifying MongoDB queries in the Mongo Query Parameter. The changed line is shown below.
```
postData = Json.FromValue([filter=Json.Document(#"Mongo Query"), dataSource="Cluster0",database="sample_airbnb",collection="listingsAndReviews"]),
```
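For example, pasting a value such as the one below into the *Mongo Query* parameter would return apartments with at least two beds, without editing the query code again (both fields appear in the sample_airbnb documents):
```
{ "property_type": "Apartment", "beds": { "$gte": 2 } }
```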
## Running MongoDB Aggregation Pipelines from Excel
We can apply this same technique to run arbitrary MongoDB Aggregation Pipelines. Right click on Query1 in the list on the left and select Duplicate. Then right-click on Query1(2) and rename it to Aggregate. Select it and then click Advanced Editor on the ribbon. Change the word find in the URL to aggregate and the word filter in the payload to pipeline.
You will get an error at first like this.
This is because the parameter Mongo Query is not a valid aggregation pipeline. Click **Manage Parameters** on the ribbon and change the value to **[{ $sortByCount : "$beds" }]**. Then click the X next to *Expanded Column1* on the right of the screen, as the expansion is now incorrect.
Again, click on the icon next to **Column1** and select all the columns to see how many properties there are for each number of beds - this query is processed by an aggregation pipeline on the server.
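For reference, assuming the parameter holds the pipeline above, the JSON payload now posted to the aggregate endpoint should look roughly like this:
```
{
  "pipeline": [ { "$sortByCount": "$beds" } ],
  "dataSource": "Cluster0",
  "database": "sample_airbnb",
  "collection": "listingsAndReviews"
}
```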
## Putting it all together
Using Power Query with parameters, we can specify the cluster, collection, and database, as well as parameters such as the query, fields returned, sort order, and limit. We can also choose, by changing the endpoint, to perform a simple query or run an aggregation pipeline.
To simplify this, there is an Excel workbook available here which has all of these things parameterised, so you can simply set the parameters required and run the Power Query to query your Atlas cluster. You can use this as a starting point in exploring how to further use Excel and Power Query to access data in MongoDB Atlas. | md | {
"tags": [
"Atlas",
"JavaScript",
"Excel"
],
"pageDescription": "This Article shows you how to run Queries and Aggregations again MongoDB Atlas using the Power Query function in Microsoft Excel.",
"contentType": "Quickstart"
} | Using the Atlas Data API from Excel with Power Query | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/multiple-mongodb-connections-in-a-single-application | created | # Multiple MongoDB Connections in a Single Application
MongoDB, a popular NoSQL database, is widely used in various applications and scenarios. While a single database connection can adequately serve the needs of numerous projects, there are specific scenarios and various real-world use cases that highlight the advantages of employing multiple connections.
In this article, we will explore the concept of establishing multiple MongoDB connections within a single Node.js application.
## Exploring the need for multiple MongoDB connections: Use cases & examples ##
In the world of MongoDB and data-driven applications, the demand for multiple MongoDB connections is on the rise. Let's explore why this need arises and discover real-world use cases and examples where multiple connections provide a vital solution.
Sectors such as e-commerce, gaming, financial services, media, entertainment, and the Internet of Things (IoT) frequently contend with substantial data volumes or data from diverse sources.
For instance, imagine a web application that distributes traffic evenly across several MongoDB servers using multiple connections or a microservices architecture where each microservice accesses the database through its dedicated connection. Perhaps in a data processing application, multiple connections allow data retrieval from several MongoDB servers simultaneously. Even a backup application can employ multiple connections to efficiently back up data from multiple MongoDB servers to a single backup server.
Moreover, consider a multi-tenant application where different tenants or customers share the same web application but require separate, isolated databases. In this scenario, each tenant can have their own dedicated MongoDB connection. This ensures data separation, security, and customization for each tenant while all operating within the same application. This approach simplifies management and provides an efficient way to scale as new tenants join the platform without affecting existing ones.
Before we delve into practical implementation, let's introduce some key concepts that will be relevant in the upcoming use case examples. Consider various use cases such as load balancing, sharding, read replicas, isolation, and fault tolerance. These concepts play a crucial role in scenarios where multiple MongoDB connections are required for efficient data management and performance optimization.
## Prerequisites ##
Throughout this guide, we'll be using Node.js, Express.js, and the Mongoose NPM package for managing MongoDB interactions. Before proceeding, ensure that your development environment is ready and that you have these dependencies installed.
If you are new to MongoDB or haven't set up MongoDB before, the first step is to set up a MongoDB Atlas account. You can find step-by-step instructions on how to do this in the MongoDB Getting Started with Atlas article.
> This post uses MongoDB 6.3.2 and Node.js 18.17.1
If you're planning to create a new project, start by creating a fresh directory for your project. Then, initiate a new project using the `npm init` command.
If you already have an existing project and want to integrate these dependencies, ensure you have the project's directory open. In this case, you only need to install the dependencies Express and Mongoose if you haven’t already, making sure to specify the version numbers to prevent any potential conflicts.
`npm i express@<version> mongoose@<version>`
> Please be aware that Mongoose is not the official MongoDB driver but a
> popular Object Data Modelling (ODM) library for MongoDB. If you prefer
> to use the official MongoDB driver, you can find relevant
> documentation on the MongoDB official
> website.
The next step is to set up the environment `.env` file if you haven't already. We will define variables for the MongoDB connection strings that we will use throughout this article. The `PRIMARY_CONN_STR` variable is for the primary MongoDB connection string, and the `SECONDARY_CONN_STR` variable is for the secondary MongoDB connection string.
```javascript
PRIMARY_CONN_STR=mongodb+srv://…
SECONDARY_CONN_STR=mongodb+srv://…
```
If you are new to MongoDB and need guidance on obtaining a MongoDB connection string from Atlas, please refer to the Get Connection String article.
Now, we'll break down the connection process into two parts: one for the primary connection and the other for the secondary connection.
Now, let's begin by configuring the primary connection.
## Setting up the primary MongoDB connection ##
The primary connection process might be familiar to you if you've already implemented it in your application. However, I'll provide a detailed explanation for clarity. Readers who are already familiar with this process can skip this section.
We commonly utilize the mongoose.connect() method to establish the primary MongoDB database connection for our application, as it efficiently manages a single connection pool for the entire application.
In a separate file named `db.primary.js`, we define a connection method that we'll use in our main application file (for example, `index.js`). This method, shown below, configures the MongoDB connection and handles events:
```javascript
const mongoose = require("mongoose");
module.exports = (uri, options = {}) => {
// By default, Mongoose skips properties not defined in the schema (strictQuery). Adjust it based on your configuration.
mongoose.set('strictQuery', true);
// Connect to MongoDB
mongoose.connect(uri, options)
.then()
.catch(err => console.error("MongoDB primary connection failed, " + err));
// Event handling
mongoose.connection.once('open', () => console.info("MongoDB primary connection opened!"));
mongoose.connection.on('connected', () => console.info("MongoDB primary connection succeeded!"));
mongoose.connection.on('error', (err) => {
console.error("MongoDB primary connection failed, " + err);
mongoose.disconnect();
});
mongoose.connection.on('disconnected', () => console.info("MongoDB primary connection disconnected!"));
// Graceful exit
process.on('SIGINT', () => {
mongoose.connection.close().then(() => {
console.info("Mongoose primary connection disconnected through app termination!");
process.exit(0);
});
});
}
```
The next step is to create schemas for performing operations in your application. We will write the schema in a separate file named `product.schema.js` and export it. Let's take an example schema for products in a stores application:
```javascript
const mongoose = require("mongoose");
module.exports = (options = {}) => {
// Schema for Product
return new mongoose.Schema(
{
store: {
_id: mongoose.Types.ObjectId, // Reference-id to the store collection
name: String
},
name: String
// add required properties
},
options
);
}
```
Now, let’s import the `db.primary.js` file in our main file (for example, `index.js`) and use the method defined there to establish the primary MongoDB connection. You can also pass an optional connection options object if needed.
After setting up the primary MongoDB connection, you import the `product.schema.js` file to access the Product Schema. This enables you to create a model and perform operations related to products in your application:
```javascript
// Primary Connection (Change the variable name as per your .env configuration!)
// Establish the primary MongoDB connection using the connection string variable declared in the Prerequisites section.
require("./db.primary.js")(process.env.PRIMARY_CONN_STR, {
// (optional) connection options
});
// Import Product Schema
const productSchema = require("./product.schema.js")({
collection: "products",
// Pass configuration options if needed
});
// Create Model
const ProductModel = mongoose.model("Product", productSchema);
// Execute Your Operations Using ProductModel Object
(async function () {
let product = await ProductModel.findOne();
console.log(product);
})();
```
Now, let's move on to setting up a secondary or second MongoDB connection for scenarios where your application requires multiple MongoDB connections.
## Setting up secondary MongoDB connections ##
Depending on your application's requirements, you can configure secondary MongoDB connections for various use cases. But before that, we'll create a connection code in a `db.secondary.js` file, specifically utilizing the mongoose.createConnection() method. This method allows us to establish separate connection pools each tailored to a specific use case or data access pattern, unlike the `mongoose.connect()` method that we used previously for the primary MongoDB connection:
```javascript
const mongoose = require("mongoose");
module.exports = (uri, options = {}) => {
// Connect to MongoDB
const db = mongoose.createConnection(uri, options);
// By default, Mongoose skips properties not defined in the schema (strictQuery). Adjust it based on your configuration.
db.set('strictQuery', true);
// Event handling
db.once('open', () => console.info("MongoDB secondary connection opened!"));
db.on('connected', () => console.info(`MongoDB secondary connection succeeded!`));
db.on('error', (err) => {
console.error(`MongoDB secondary connection failed, ` + err);
db.close();
});
db.on('disconnected', () => console.info(`MongoDB secondary connection disconnected!`));
// Graceful exit
process.on('SIGINT', () => {
db.close().then(() => {
console.info(`Mongoose secondary connection disconnected through app termination!`);
process.exit(0);
});
});
// Export db object
return db;
}
```
Now, let’s import the `db.secondary.js` file in our main file (for example, `index.js`), create the connection object with a variable named `db`, and use the method defined there to establish the secondary MongoDB connection. You can also pass an optional connection options object if needed:
```javascript
// Secondary Connection (Change the variable name as per your .env configuration!)
// Establish the secondary MongoDB connection using the connection string variable declared in the Prerequisites section.
const db = require("./db.secondary.js")(process.env.SECONDARY_CONN_STR, {
// (optional) connection options
});
```
Now that we are all ready with the connection, you can use that `db` object to create a model. We explore different scenarios and examples to help you choose the setup that best aligns with your specific data access and management needs:
### 1. Using the existing schema ###
You can choose to use the same schema `product.schema.js` file that was employed in the primary connection. This is suitable for scenarios where both connections will operate on the same data model.
Import the `product.schema.js` file to access the Product Schema. This enables you to create a model using `db` object and perform operations related to products in your application:
```javascript
// Import Product Schema
const secondaryProductSchema = require("./product.schema.js")({
collection: "products",
// Pass configuration options if needed
});
// Create Model
const SecondaryProductModel = db.model("Product", secondaryProductSchema);
// Execute Your Operations Using SecondaryProductModel Object
(async function () {
let product = await SecondaryProductModel.findOne();
console.log(product);
})();
```
To see a practical code example and available resources for using the existing schema of a primary database connection into a secondary MongoDB connection in your project, visit the GitHub repository.
### 2. Setting schema flexibility ###
When working with multiple MongoDB connections, it's essential to have the flexibility to adapt your schema based on specific use cases. While the primary connection may demand a strict schema with validation to ensure data integrity, there are scenarios where a secondary connection serves a different purpose. For instance, a secondary connection might store data for analytics on an archive server, with varying schema requirements driven by past use cases. In this section, we'll explore how to configure schema flexibility for your secondary connection, allowing you to meet the distinct needs of your application.
If you prefer to have schema flexibility in mongoose, you can pass the `strict: false` property in the options when configuring your schema for the secondary connection. This allows you to work with data that doesn't adhere strictly to the schema.
Import the `product.schema.js` file to access the Product Schema. This enables you to create a model using `db` object and perform operations related to products in your application:
```javascript
// Import Product Schema
const secondaryProductSchema = require("./product.schema.js")({
collection: "products",
strict: false
// Pass configuration options if needed
});
// Create Model
const SecondaryProductModel = db.model("Product", secondaryProductSchema);
// Execute Your Operations Using SecondaryProductModel Object
(async function () {
let product = await SecondaryProductModel.findOne();
console.log(product);
})();
```
To see a practical code example and available resources for setting schema flexibility in a secondary MongoDB connection in your project, visit the GitHub repository.
### 3. Switching databases within the same connection ###
Within your application's database setup, you can seamlessly switch between different databases using the db.useDb() method. This method enables you to create a new connection object associated with a specific database while sharing the same connection pool.
This approach allows you to efficiently manage multiple databases within your application, using a single connection while maintaining distinct data contexts for each database.
Import the `product.schema.js` file to access the Product Schema. This enables you to create a model using `db` object and perform operations related to products in your application.
Now, to provide an example where a store can have its own database containing users and products, you can include the following scenario.
**Example use case: Store with separate database**
Imagine you're developing an e-commerce platform where multiple stores operate independently. Each store has its database to manage its products. In this scenario, you can use the `db.useDb()` method to switch between different store databases while maintaining a shared connection pool:
```javascript
// Import Product Schema
const secondaryProductSchema = require("./product.schema.js")({
collection: "products",
// strict: false // that doesn't adhere strictly to the schema!
// Pass configuration options if needed
});
// Create a connection for 'Store A'
const storeA = db.useDb('StoreA');
// Create Model
const SecondaryStoreAProductModel = storeA.model("Product", secondaryProductSchema);
// Execute Your Operations Using SecondaryStoreAProductModel Object
(async function () {
let product = await SecondaryStoreAProductModel.findOne();
console.log(product);
})();
// Create a connection for 'Store B'
const storeB = db.useDb('StoreB');
// Create Model
const SecondaryStoreBProductModel = storeB.model("Product", secondaryProductSchema);
// Execute Your Operations Using SecondaryStoreBProductModel Object
(async function () {
let product = await SecondaryStoreBProductModel.findOne();
console.log(product);
})();
```
In this example, separate database connections have been established for `Store A` and `Store B`, each containing its product data. This approach provides a clear separation of data while efficiently utilizing a single shared connection pool for all stores, enhancing data management in a multi-store e-commerce platform.
In the previous section, we demonstrated a static approach where connections were explicitly created for each store, and each connection was named accordingly (e.g., `StoreA`, `StoreB`).
To introduce a dynamic approach, you can create a function that accepts a store's ID or name as a parameter and returns a connection object. This dynamic function allows you to switch between different stores by providing their identifiers, and it efficiently reuses existing connections when possible.
```javascript
// Function to get connection object for particular store's database
function getStoreConnection(storeId) {
return db.useDb("Store"+storeId, { useCache: true });
}
// Create a connection for 'Store A'
const store = getStoreConnection("A");
// Create Model
const SecondaryStoreProductModel = store.model("Product", secondaryProductSchema);
// Execute Your Operations Using SecondaryStoreProductModel Object
(async function () {
let product = await SecondaryStoreProductModel.findOne();
console.log(product);
})();
```
In the dynamic approach, connection instances are created and cached as needed, eliminating the need for manually managing separate connections for each store. This approach enhances flexibility and resource efficiency in scenarios where you need to work with multiple stores in your application.
By exploring these examples, we've covered a range of scenarios for managing multiple databases within the same connection, providing you with the flexibility to tailor your database setup to your specific application needs. You're now equipped to efficiently manage distinct data contexts for various use cases within your application.
To see a practical code example and available resources for switching databases within the same connection into a secondary MongoDB connection in your project, visit the GitHub repository.
## Best practices ##
In the pursuit of a robust and efficient MongoDB setup within your Node.js application, I recommend the following best practices. These guidelines serve as a foundation for a reliable implementation, and I encourage you to consider and implement them:
- **Connection pooling**: Make the most of connection pooling to efficiently manage MongoDB connections, enabling connection reuse and reducing overhead (a short sketch follows this list). Read more about connection pooling.
- **Error handling**: Robust error-handling mechanisms, comprehensive logging, and contingency plans ensure the reliability of your MongoDB setup in the face of unexpected issues.
- **Security**: Prioritize data security with authentication, authorization, and secure communication practices, especially when dealing with sensitive information. Read more about MongoDB Security.
- **Scalability**: Plan for scalability from the outset, considering both horizontal and vertical scaling strategies to accommodate your application's growth.
- **Testing**: Comprehensive testing in various scenarios, such as failover, high load, and resource constraints, validates the resilience and performance of your multiple MongoDB connection setup.
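To illustrate the connection pooling point, both helpers defined earlier accept an options object, so pool sizes can be tuned without touching the helper code. The numbers below are placeholders rather than recommendations; size the pools for your workload:
```javascript
// Example only: pass pool options through the helpers' options parameter
require("./db.primary.js")(process.env.PRIMARY_CONN_STR, {
  maxPoolSize: 20, // upper limit of sockets kept open for this pool
  minPoolSize: 2   // keep a couple of connections warm for bursts
});

const db = require("./db.secondary.js")(process.env.SECONDARY_CONN_STR, {
  maxPoolSize: 10
});
```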
## Conclusion ##
Leveraging multiple MongoDB connections in a Node.js application opens up a world of possibilities for diverse use cases, from e-commerce to multi-tenant systems. Whether you need to enhance data separation, scale your application efficiently, or accommodate different data access patterns, these techniques empower you to tailor your database setup to the unique needs of your project. With the knowledge gained in this guide, you're well-prepared to manage multiple data contexts within a single application, ensuring robust, flexible, and efficient MongoDB interactions.
## Additional resources ##
- **Mongoose documentation**: For an in-depth understanding of Mongoose connections, explore the official Mongoose documentation.
- **GitHub repository**: To dive into the complete implementation of multiple MongoDB connections in a Node.js application that we have performed above, visit the GitHub repository. Feel free to clone the repository and experiment with different use cases in your projects.
If you have any questions or feedback, check out the MongoDB Community Forums and let us know what you think.
| md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Multiple MongoDB Connections in a Single Application | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/atlas/cloudflare-worker-rest-api | created | # Create a REST API with Cloudflare Workers and MongoDB Atlas
## Introduction
Cloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure.
MongoDB Atlas allows you to create, manage, and monitor MongoDB clusters in the cloud provider of your choice (AWS, GCP, or Azure) while the Web SDK can provide a layer of authentication and define access rules to the collections.
In this blog post, we will combine all these technologies together and create a REST API with a Cloudflare worker using a MongoDB Atlas cluster to store the data.
> Note: In this tutorial, the worker isn't using any form of caching. While the connection between MongoDB and the Atlas serverless application is established and handled automatically in the Atlas App Services back end, each new query sent to the worker will require the user to go through the authentication and authorization process before executing any query. In this tutorial, we are using API keys to handle this process but Atlas App Services offers many different authentication providers.
## TL;DR!
The worker is in this GitHub repository. The README will get you up and running in no time, if you know what you are doing. Otherwise, I suggest you follow this step-by-step blog post. ;-)
```shell
$ git clone git@github.com:mongodb-developer/cloudflare-worker-rest-api-atlas.git
```
## Prerequisites
- NO credit card! You can run this entire tutorial for free!
- Git and cURL.
- MongoDB Atlas account.
- MongoDB Atlas Cluster (a free M0 cluster is fine).
- Cloudflare account (free plan is fine) with a `*.workers.dev` subdomain for the workers. Follow steps 1 to 3 from this documentation to get everything you need.
We will create the Atlas App Services application (formerly known as a MongoDB Realm application) together in the next section. This will provide you the AppID and API key that we need.
To deploy our Cloudflare worker, we will need:
- The application ID (top left corner in your app—see next section).
- The Cloudflare account login/password.
- The Cloudflare account ID (in Workers tab > Overview).
To test (or interact with) the REST API, we need:
- The authentication API key (more about that below, but it's in Authentication tab > API Keys).
- The Cloudflare `*.workers.dev` subdomain (in Workers tab > Overview).
It was created during this step of your setup.
## Create and Configure the Atlas Application
To begin with, head to your MongoDB Atlas main page where you can see your cluster and access the 'App Services' tab at the top.
Create an empty application (no template) as close as possible to your MongoDB Atlas cluster to avoid latency between your cluster and app. My app is "local" in Ireland (eu-west-1) in my case.
Now that our app is created, we need to set up two things: authentication via API keys and collection rules. Before that, note that you can retrieve your app ID in the top left corner of your new application.
### Authentication Via API Keys
Head to Authentication > API Keys.
Activate the provider and save the draft.
We need to create an API key, but we can only do so if the provider is already deployed. Click on review and deploy.
Now you can create an API key and **save it somewhere**! It will only be displayed **once**. If you lose it, discard this one and create a new one.
We only have a single user in our application as we only created a single API key. Note that this tutorial would work with any other authentication method if you update the authentication code accordingly in the worker.
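For example, if you enabled the email/password provider instead of API keys, only the credentials line in the worker would need to change, since realm-web exposes the other providers through the same `Credentials` class. The email and password below are placeholders that you would have to pass to the worker securely:
```typescript
// Hypothetical alternative: swap the API-key credentials for email/password ones
const credentials = Realm.Credentials.emailPassword("someone@example.com", "aStrongPassword");
const user = await App.logIn(credentials);
```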
### Collection Rules
By default, your application cannot access any collection from your MongoDB Atlas cluster. To define how users can interact with the data, you must define roles and permissions.
In our case, we want to create a basic REST API where each user can read and write their own data in a single collection `todos` in the `cloudflare` database.
Head to the Rules tab and let's create this new `cloudflare.todos` collection.
First, click "create a collection".
Next, name your database `cloudflare` and collection `todos`. Click create!
Each document in this collection will belong to a unique user defined by the `owner_id` field. This field will contain the user ID that you can see in the `App Users` tab.
To limit users to only reading and writing their own data, click on your new `todos` collection in the Rules UI. Add the rule `readOwnWriteOwn` in the `Other presets`.
After adding this preset role, you can double-check the rule by clicking on the `Advanced view`. It should contain the following:
```json
{
"roles":
{
"name": "readOwnWriteOwn",
"apply_when": {},
"document_filters": {
"write": {
"owner_id": "%%user.id"
},
"read": {
"owner_id": "%%user.id"
}
},
"read": true,
"write": true,
"insert": true,
"delete": true,
"search": true
}
]
}
```
You can now click one more time on `Review Draft and Deploy`. Our application is now ready to use.
## Set Up and Deploy the Cloudflare Worker
The Cloudflare worker is available in this GitHub repository. Let's clone the repository.
```shell
$ git clone git@github.com:mongodb-developer/cloudflare-worker-rest-api-atlas.git
$ cd cloudflare-worker-rest-api-atlas
$ npm install
```
Now that we have the worker template, we just need to change the configuration to deploy it on your Cloudflare account.
Edit the file `wrangler.toml`:
- Replace `CLOUDFLARE_ACCOUNT_ID` with your real Cloudflare account ID.
- Replace `MONGODB_ATLAS_APPID` with your real MongoDB Atlas App Services app ID.
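The exact file in the repository may contain more settings, but given the `ATLAS_APPID` binding the worker reads, the entries you are editing should look something like this (the `name` value is an assumption):
```toml
name = "cloudflare-worker-rest-api-atlas"  # assumption: the template may use a different name
account_id = "CLOUDFLARE_ACCOUNT_ID"       # replace with your Cloudflare account ID

[vars]
ATLAS_APPID = "MONGODB_ATLAS_APPID"        # replace with your App Services app ID
```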
You can now deploy your worker to your Cloudflare account using Wrangler:
```shell
$ npm i wrangler -g
$ wrangler login
$ wrangler deploy
```
Head to your Cloudflare account. You should now see your new worker in the Workers tab > Overview.
## Check Out the REST API Code
Before we test the API, please take a moment to read the code of the REST API we just deployed, which is in the `src/index.ts` file:
```typescript
import * as Realm from 'realm-web';
import * as utils from './utils';
// The Worker's environment bindings. See `wrangler.toml` file.
interface Bindings {
// MongoDB Atlas Application ID
ATLAS_APPID: string;
}
// Define type alias; available via `realm-web`
type Document = globalThis.Realm.Services.MongoDB.Document;
// Declare the interface for a "todos" document
interface Todo extends Document {
owner_id: string;
done: boolean;
todo: string;
}
let App: Realm.App;
const ObjectId = Realm.BSON.ObjectID;
// Define the Worker logic
const worker: ExportedHandler = {
async fetch(req, env) {
const url = new URL(req.url);
App = App || new Realm.App(env.ATLAS_APPID);
const method = req.method;
const path = url.pathname.replace(/[/]$/, '');
const todoID = url.searchParams.get('id') || '';
if (path !== '/api/todos') {
return utils.toError(`Unknown '${path}' URL; try '/api/todos' instead.`, 404);
}
const token = req.headers.get('authorization');
if (!token) return utils.toError(`Missing 'authorization' header; try to add the header 'authorization: ATLAS_APP_API_KEY'.`, 401);
try {
const credentials = Realm.Credentials.apiKey(token);
// Attempt to authenticate
var user = await App.logIn(credentials);
var client = user.mongoClient('mongodb-atlas');
} catch (err) {
return utils.toError('Error with authentication.', 500);
}
// Grab a reference to the "cloudflare.todos" collection
const collection = client.db('cloudflare').collection('todos');
try {
if (method === 'GET') {
if (todoID) {
// GET /api/todos?id=XXX
return utils.reply(
await collection.findOne({
_id: new ObjectId(todoID)
})
);
}
// GET /api/todos
return utils.reply(
await collection.find()
);
}
// POST /api/todos
if (method === 'POST') {
const {todo} = await req.json();
return utils.reply(
await collection.insertOne({
owner_id: user.id,
done: false,
todo: todo,
})
);
}
// PATCH /api/todos?id=XXX&done=true
if (method === 'PATCH') {
return utils.reply(
await collection.updateOne({
_id: new ObjectId(todoID)
}, {
$set: {
done: url.searchParams.get('done') === 'true'
}
})
);
}
// DELETE /api/todos?id=XXX
if (method === 'DELETE') {
return utils.reply(
await collection.deleteOne({
_id: new ObjectId(todoID)
})
);
}
// unknown method
return utils.toError('Method not allowed.', 405);
} catch (err) {
const msg = (err as Error).message || 'Error with query.';
return utils.toError(msg, 500);
}
}
}
// Export for discoverability
export default worker;
```
## Test the REST API
Now that you are a bit more familiar with this REST API, let's test it!
Note that we decided to pass the values as parameters and the authorization API key as a header like this:
```
authorization: API_KEY_GOES_HERE
```
You can use Postman or anything you want to test your REST API, but to make it easy, I made some bash scripts in the `api_tests` folder.
In order to make them work, we need to edit the file `api_tests/variables.sh` and provide them with:
- The Cloudflare worker URL: Replace `YOUR_SUBDOMAIN`, so the final worker URL matches yours.
- The MongoDB Atlas App Service API key: Replace `YOUR_ATLAS_APP_AUTH_API_KEY` with your auth API key.
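Under the hood, each request is just an HTTPS call to the worker with the API key in an `authorization` header. For example, you could create a todo by hand with something like this (the hostname depends on your worker name and subdomain):
```shell
curl -X POST 'https://YOUR_WORKER.YOUR_SUBDOMAIN.workers.dev/api/todos' \
  --header 'Content-Type: application/json' \
  --header 'authorization: YOUR_ATLAS_APP_AUTH_API_KEY' \
  --data-raw '{"todo": "Buy dog food"}'
```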
Finally, we can execute all the scripts like this, for example:
```shell
$ cd api_tests
$ ./post.sh "Write a good README.md for Github"
{
"insertedId": "618615d879c8ad6d1129977d"
}
$ ./post.sh "Commit and push"
{
"insertedId": "618615e479c8ad6d11299e12"
}
$ ./findAll.sh
[
{
"_id": "618615d879c8ad6d1129977d",
"owner_id": "6186154c79c8ad6d11294f60",
"done": false,
"todo": "Write a good README.md for Github"
},
{
"_id": "618615e479c8ad6d11299e12",
"owner_id": "6186154c79c8ad6d11294f60",
"done": false,
"todo": "Commit and push"
}
]
$ ./findOne.sh 618615d879c8ad6d1129977d
{
"_id": "618615d879c8ad6d1129977d",
"owner_id": "6186154c79c8ad6d11294f60",
"done": false,
"todo": "Write a good README.md for Github"
}
$ ./patch.sh 618615d879c8ad6d1129977d true
{
"matchedCount": 1,
"modifiedCount": 1
}
$ ./findAll.sh
[
{
"_id": "618615d879c8ad6d1129977d",
"owner_id": "6186154c79c8ad6d11294f60",
"done": true,
"todo": "Write a good README.md for Github"
},
{
"_id": "618615e479c8ad6d11299e12",
"owner_id": "6186154c79c8ad6d11294f60",
"done": false,
"todo": "Commit and push"
}
]
$ ./deleteOne.sh 618615d879c8ad6d1129977d
{
"deletedCount": 1
}
$ ./findAll.sh
[
{
"_id": "618615e479c8ad6d11299e12",
"owner_id": "6186154c79c8ad6d11294f60",
"done": false,
"todo": "Commit and push"
}
]
```
As you can see, the REST API works like a charm!
## Wrap Up
Cloudflare offers a Workers KV product that _can_ make for a quick combination with Workers, but it's still a simple key-value datastore, and most applications will outgrow it. By contrast, MongoDB is a powerful, full-featured database that unlocks the ability to store, query, and index your data without compromising the security or scalability of your application.
As demonstrated in this blog post, it is possible to take full advantage of both technologies. As a result, we built a powerful and secure serverless REST API that will scale very well.
> Another option for connecting to Cloudflare is the MongoDB Atlas Data API. The Atlas Data API provides a lightweight way to connect to MongoDB Atlas that can be thought of as similar to a REST API. To learn more, view this tutorial from my fellow developer advocate Mark Smith!
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. If your question is related to Cloudflare, I encourage you to join their active Discord community.
| md | {
"tags": [
"Atlas",
"TypeScript",
"Serverless",
"Cloudflare"
],
"pageDescription": "Learn how to create a serverless REST API using Cloudflare workers and MongoDB Atlas.",
"contentType": "Tutorial"
} | Create a REST API with Cloudflare Workers and MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/authentication-ios-apps-apple-sign-in-atlas-app-services | created | # Authentication for Your iOS Apps with Apple Sign-in and Atlas App Services
Mobile device authentication serves as the crucial first line of defense against potential intruders who aim to exploit personal information, financial data, or private details. As our mobile phones store a wealth of sensitive information, it is imperative to prioritize security while developing apps that ensure user safety.
Apple sign-in is a powerful solution that places user privacy at the forefront by implementing private email relay functionality. This enables users to shield their email addresses, granting them greater control over their data. Combining this with Atlas App Services provides developers with a streamlined and secure authentication experience. It also simplifies app development, service integration, and data connectivity, eliminating operational overhead.
In the following tutorial, I will show you how with only a few steps and a little code, you can bring this seamless implementation to your iOS apps! If you also want to follow along and check the code that I’ll be explaining in this article, you can find it in the Github repository.
## Context
This sample application consists of an iOS app with a “Sign in with Apple” button. When the user taps it, the native authentication sheet appears, letting the user choose whether to hide or share their email address when signing in to your app. Once the sign-up process is completed, sign-in is handled by Apple's Authentication API.
## Prerequisites
Since this tutorial’s main focus is on the code implementation with Apple sign-in, a few previous steps are required for it.
- Have the latest stable version of Xcode installed on your macOS computer, and make sure that the OS is compatible with the version.
- Have a setup of a valid Apple Developer Account and configure your App ID. You can follow the official Apple documentation.
- Have the Apple Sign-In Capability added to your project. Check out the official Apple sign-in official resources.
- Have the Realm Swift SDK installed on your project and an Atlas App Services app linked to your cluster. Please follow the steps in our Realm Swift SDK documentation on how to create an Alas App Services app.
## Configuring Apple provider on Atlas App Services
In order to follow this tutorial, you will need to have an **Atlas App Services app** created. If not, please follow the steps in our MongoDB documentation. It’s quite easy to set it up!
First, in your Atlas App Services app, go to **Data Access** -> **Authentication** on the sidebar.
In the **Authentication Providers** section, enable the **Apple** provider when tapping on the **Edit** button. You’ll see a screen like the one in the screenshot below:
You will have now to fill the corresponding fields with the following information:
- **Client ID:** Enter your application’s Bundle ID for the App Services Client ID.
- **Client Secret:** Choose or create a new secret, which is stored in Atlas App Services' back end.
- **Redirect URIs:** You will have to use a URI in order to redirect the authentication. You can use your own custom domain, but if you have a paid tier cluster in Atlas, you can benefit from our Hosting Service!
Click on the “Save Draft” button and your changes will be deployed.
### Implementing the Apple sign-in authentication functionality
Now, before continuing with this section, please make sure that you have followed our quick start guide to make sure that you have our Realm Swift SDK installed. Moving on to the fun part, it’s time to code!
This is a pretty simple UIKit project, where *LoginViewController.swift* will implement the authentication functionality of Apple sign-in, and if the authenticated user is valid, then a segue will transition to *WelcomeViewController.swift*.
On top of the view controller code, make sure that you import both the AuthenticationServices and RealmSwift frameworks so you have access to their methods. In your Storyboard, add a UIButton of type *ASAuthorizationAppleIDButton* to the *LoginViewController* and link it to its corresponding Swift file.
In the *viewDidLoad()* function of *LoginViewController*, we are going to call *setupAppleSignInButton()*, which is a private function that lays out the Apple sign-in button, provided by the AuthenticationServices API. Here is the code of the functionality.
```swift
// Mark: - IBOutlets
@IBOutlet weak var appleSignInButton: ASAuthorizationAppleIDButton!
// MARK: - View Lifecycle
override func viewDidLoad() {
super.viewDidLoad()
setupAppleSignInButton()
}
// MARK: - Private helper
private func setupAppleSignInButton() {
appleSignInButton.addTarget(self, action: #selector(handleAppleIdRequest), for: .touchUpInside)
appleSignInButton.cornerRadius = 10
}
```
The private function adds a target to the *appleSignInButton* and gives it a radius of 10 to its corners. The screenshot below shows how the button is laid out in the testing device.
Now, moving to *handleAppleIdRequest*, here is the implementation for it:
```swift
@objc func handleAppleIdRequest() {
let appleIDProvider = ASAuthorizationAppleIDProvider()
let request = appleIDProvider.createRequest()
request.requestedScopes = [.fullName, .email]
let authorizationController = ASAuthorizationController(authorizationRequests: [request])
authorizationController.delegate = self
authorizationController.performRequests()
}
```
This function is a method that handles the initialization of Apple ID authorization using *ASAuthorizationAppleIDProvider* and *ASAuthorizationController* classes. Here is a breakdown of what the function does:
1. It creates an instance of *ASAuthorizationAppleIDProvider*, which is responsible for generating requests to authenticate users based on their Apple ID.
2. Using the *appleIDProvider* instance, it creates an authorization request when calling *createRequest()*.
3. The request is used to configure the specific data that the app needs to access from the user’s Apple ID. In this case, we are requesting fullName and email.
4. We create an instance of *ASAuthorizationController* that will manage the authorization requests and will also handle any user interactions related to the Apple ID authentication.
5. The *authorizationController* has to set its delegate to self, as the current object will have to conform to the *ASAuthorizationControllerDelegate* protocol.
6. Finally, the specified authorization flows are performed by calling *performRequests()*. This method triggers the system to present the Apple ID login interface to the user.
As we just mentioned, the view controller has to conform to the *ASAuthorizationControllerDelegate*. To do that, I created an extension of *LoginViewController*, where the implementation of the *didCompleteWithAuthorization* delegate method is where we will handle the successful authentication with the Swift Realm SDK.
``` swift
func authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {
if let appleIDCredential = authorization.credential as? ASAuthorizationAppleIDCredential {
let userIdentifier = appleIDCredential.user
let fullName = appleIDCredential.fullName
let email = appleIDCredential.email
guard let identityToken = appleIDCredential.identityToken else {
return
}
let decodedToken = String(decoding: identityToken, as: UTF8.self)
print(decodedToken)
realmSignIn(appleToken: decodedToken)
}
}
```
To resume it in a few lines, this code retrieves the necessary user information from the Apple ID credential if the credentials of the user are successful. We also obtain the *identityToken*, which is the vital piece of information that is needed to use it on the Atlas App Services authentication.
However, note that this token **has to be decoded** in order to be used on Atlas App Services, and for that, you can use the *String(decoding:, as:)* method.
Once the token is decoded, it is a JWT that contains claims about the user signed by Apple Authentication Service. Then the *realmSignIn()* private method is called and the decoded token is passed as a parameter so the authentication can be handled.
```swift
private func realmSignIn(appleToken: String) {
let credentials = Credentials.apple(idToken: appleToken)
app.login(credentials: credentials) { (result) in
switch result {
case .failure(let error):
print("Realm Login failed: \(error.localizedDescription)")
case .success(_):
DispatchQueue.main.async {
print("Successful Login")
self.performSegue(withIdentifier: "goToWelcomeViewController", sender: nil)
}
}
}
}
```
The *realmSignIn()* private function handles the login into Atlas App Services. This function will allow you to authenticate your users that will be connected to your app without any additional hassle. First, the credentials are generated by *Credentials.apple(idToken:)*, where the decoded Apple token is passed as a parameter.
If the login is successful, then the code performs a segue and goes to the main screen of the project, *WelcomeViewController*. If it fails, then it will print an error message. Of course, feel free to adapt this error to whatever suits you better for your use case (i.e., an alert message).
Another interesting delegate method in terms of error handling is the *didCompleteWithError()* delegate function, which will get triggered if there is an error during the Apple ID authentication. You can use this one to provide some feedback to the user and improve the UX of your application.
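A minimal implementation, assuming you just want to log the failure and show a simple alert (the alert copy here is only a placeholder), could look like this:
```swift
func authorizationController(controller: ASAuthorizationController, didCompleteWithError error: Error) {
    print("Apple ID authorization failed: \(error.localizedDescription)")
    // Give the user some feedback instead of failing silently
    let alert = UIAlertController(title: "Sign in failed",
                                  message: "Something went wrong while signing in with Apple. Please try again.",
                                  preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .default))
    present(alert, animated: true)
}
```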
## Important note
One of the biggest perks of Apple sign-in authentication, as it was mentioned earlier, is the flexibility it gives to the user regarding what gets shared with your app. This means that if the user decides to hide their email address and not to share their full name as the code was requested earlier through the *requestedScopes* definition, you will receive **an empty string** in the response. In the case of the email address, it will be a *nil* value.
If your iOS application has a use case where you want to establish communication with your users, you will need to implement communication using Apple's private email relay service. You should also avoid asking the user for their email in other parts of the app, as it could potentially lead to a rejection during App Store review.
## Repository
The code for this project can be found in the Github repository.
I hope you found this tutorial helpful. I encourage you to explore our Realm Swift SDK documentation to discover all the benefits that it can offer to you when building iOS apps. We have plenty of resources available to help you learn and implement these features. So go ahead, dive in, and see what Atlas App Services has in store for your app development journey.
If you have any questions or comments don’t hesitate to head over to our Community Forums to continue the conversation. Happy coding! | md | {
"tags": [
"Realm",
"Swift",
"Mobile",
"iOS"
],
"pageDescription": "Learn how to implement Apple sign-in within your own iOS mobile applications using Swift and MongoDB Atlas App Services.",
"contentType": "Tutorial"
} | Authentication for Your iOS Apps with Apple Sign-in and Atlas App Services | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-api-introduction | created | # An Introduction to the MongoDB Atlas Data API
There are a lot of options for connecting to MongoDB Atlas as an application developer. One of the newest options is the MongoDB Atlas Data API. The Atlas Data API provides a lightweight way to connect to MongoDB Atlas that can be thought of as similar to a REST API. This tutorial will show you how to enable the Data API and perform basic CRUD operations using curl. It’s the first in a series showing different uses for the Data API and how you can use it to build data-centric applications and services faster.
Access the full API reference.
This post assumes you already have an Atlas cluster. You can either use an existing one or you can sign up for a cloud account and create your first database cluster by following the instructions.
## Enabling the Atlas Data API
Enabling the Data API is very easy once you have a cluster in Atlas.
First, Click "Data API" in the bar on the left of your Atlas deployment.
Then select which data source or sources you want the Data API to have access to. For this example, I am selecting just the default Cluster0.
Then, select the large "Enable the Data API" button.
You will then have a screen confirming what clusters you have enabled for the Data API.
In the "Data API Access" column, select "Read and Write" for now, and then click on the button at the top left that says "Create API Key." Choose a name. It's not important what name you choose, as long as it's useful to you.
Finally, click "Generate API Key" and take a note of the key displayed in a secure place as you will not be able to see it again in Atlas. You can click the "Copy" button to copy it to your clipboard. I pasted mine into a .envrc file in my project.
If you want to test out a simple command, you can select one of your database collections in the dropdowns and copy-paste some code into your terminal to see some results. While writing this post, I did it just to check that I got some results back. When you're done, click "Close" to go back to the Data API screen. If you need to manage the keys you've created, you can click the "API Keys" tab on this screen.
You are now ready to call the Data API!
## Be careful with your API key!
The API key you've just created should never be shared with anyone, or sent to the browser. Anyone who gets hold of the key can use it to make changes to the data in your database! In fact, the Data API blocks browser access, because there's currently no secure way to make Data API requests securely without sharing an API key.
## Calling the Data API
All the Data API endpoints use HTTPS POST. Though it might seem logical to use GET when reading data, GET requests are intended to be cached and many platforms will do so automatically. To ensure you never have stale query results, all of the API endpoints use POST. Time to get started!
### Adding data to Atlas
To add documents to MongoDB, you will use the InsertOne or InsertMany action endpoints.
### InsertOne
When you insert a document with the API, you must provide the "dataSource" (which is your cluster name), "database," "collection," and "document" as part of a JSON payload document.
For authentication, you will need to pass the API key as a header. The API always uses HTTPS, so this is safe and secure from network snooping.
To call with curl, use the following command:
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-YOUR_ID/endpoint/data/v1/action/insertOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: YOUR_API_KEY" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"document" : { "name": "Harvest",
"breed": "Labrador",
"age": 5 }
}'
```
For example, my call looks like this:
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/insertOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"document" : { "name": "Harvest",
"breed": "Labrador",
"age": 5 }
}'
```
Note that the URL I'm using is my Data API URL endpoint, with `/action/insertOne` appended. When I ran this command with my values for `YOUR_ID` and `YOUR_API_KEY`, curl printed the following:
```json
{"insertedId":"62c6da4f0836cbd6ebf68589"}
```
This means you've added a new document to a collection called “pets” in a database called “household.” Due to MongoDB’s flexible dynamic model, neither the database nor collection needed to be defined in advance.
This API call returned a JSON document with the _id of the new document. As I didn't explicitly supply any value for _id (the primary key in MongoDB), one was created for me and it was of type ObjectId. The API returns standard JSON by default, so this is displayed as a string.
### FindOne
To look up the document I just added by _id, I'll need to provide the _id that was just printed by curl. In the document that was printed, the value looks like a string, but it isn't. It's an ObjectId, which is the type of value that's created by MongoDB when no value is provided for the _id.
When querying for the ObjectId value, you need to wrap this string as an EJSON ObjectId type, like this: `{ "$oid": "<your id string>" }`. If you don't provide this wrapper, MongoDB will mistakenly believe you are looking for a string value, not the ObjectId that's actually there.
The findOne query looks much like the insertOne query, except that the action name in the URL is now findOne, and this call takes a "filter" field instead of a "document" field.
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/findOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"filter" : { "_id": { "$oid": "62c6da4f0836cbd6ebf68589" } }
}'
```
This printed out the following JSON for me:
```json
{"document":{
"_id":"62c6da4f0836cbd6ebf68589",
"name":"Harvest",
"breed":"Labrador",
"age":5}}
```
### Getting Extended JSON from the API
Note that in the output above, the _id is again being converted to "plain" JSON, and so the "_id" value is being converted to a string. Sometimes, it's useful to keep the type information, so you can specify that you would like Extended JSON (EJSON) output, for any Data API call, by supplying an "Accept" header, with the value of "application/ejson":
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/findOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header 'Accept: application/ejson' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"filter" : { "_id": { "$oid": "62c6da4f0836cbd6ebf68589" } }
}'
```
When I ran this, the "_id" value was provided with the "$oid" wrapper, to declare that it's an ObjectId value:
```json
{"document":{
"_id":{"$oid":"62c6da4f0836cbd6ebf68589"},
"name":"Harvest",
"breed":"Labrador",
"age":{"$numberInt":"5"}}}
```
### InsertMany
If you're inserting several documents into a collection, it’s much more efficient to make a single HTTPS call with the insertMany action. This endpoint works in a very similar way to the insertOne action, but it takes a "documents" field instead of a single "document" field, containing an array of documents:
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/insertMany' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"documents" : {
"name": "Brea",
"breed": "Labrador",
"age": 9,
"colour": "black"
},
{
"name": "Bramble",
"breed": "Labrador",
"age": 1,
"colour": "black"
}]
}'
```
When I ran this, the output looked like this:
```json
{"insertedIds":["62c6e8a15a3411a70813c21e","62c6e8a15a3411a70813c21f"]}
```
This endpoint returns JSON with an array of the values for _id for the documents that were added.
### Querying data
Querying for more than one document is done with the find endpoint, which returns an array of results. The following query looks up all the labradors that are older than two years, sorted by age:
```shell
curl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/find' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource":"Cluster0",
"database":"household",
"collection":"pets",
"filter": { "breed": "Labrador",
"age": { "$gt" : 2} },
"sort": { "age": 1 } }'
```
When I ran this, I received documents for the two oldest dogs, Harvest and Brea:
```json
{"documents":[
{"_id":"62c6da4f0836cbd6ebf68589","name":"Harvest","breed":"Labrador","age":5},
{"_id":"62c6e8a15a3411a70813c21e","name":"Brea","breed":"Labrador","age":9,"colour":"black"}]}
```
This object contains a field, "documents", that is an array of everything that matched. If I wanted to fetch a subset of the results in pages, I could use the skip and limit parameters to set which result to start at and how many to return.
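As a rough illustration of paging — not code from the original article — here's how that find call might be made from Python with the `requests` package, where the endpoint URL, API key, and paging values are placeholders:
```python
# paged_find.py -- illustrative sketch only; the URL, API key, and paging
# values are placeholders, not values from this article.
import requests

URL = "https://data.mongodb-api.com/app/data-YOUR_ID/endpoint/data/v1/action/find"
API_KEY = "YOUR_API_KEY"

page_size = 10
page_number = 2  # zero-based page index, so this fetches the third page

payload = {
    "dataSource": "Cluster0",
    "database": "household",
    "collection": "pets",
    "filter": {"breed": "Labrador", "age": {"$gt": 2}},
    "sort": {"age": 1},
    "skip": page_number * page_size,  # how many matching documents to skip over
    "limit": page_size,               # maximum number of documents to return
}

response = requests.post(URL, json=payload, headers={"api-key": API_KEY})
response.raise_for_status()
print(response.json()["documents"])
```
The skip and limit fields sit alongside filter, sort, and projection in the find action's request body, so pagination doesn't change the shape of the call at all.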
### UpdateOne
To change a document that's already in a collection, use the updateOne action. It takes a "filter" to select the document and an "update" specification describing the changes to make:
```shell
curl --location --request POST https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/updateOne \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource": "Cluster0",
"database": "household",
"collection": "pets",
"filter" : { "name" : "Harvest"},
"update" : { "$set" : { "colour": "yellow" }}
}'
```
Because this both matched one document and changed its content, my output looked like this:
```json
{"matchedCount":1,"modifiedCount":1}
```
I only wanted to update a single document (because I only expected to find one document for Harvest). To change all matching documents, I would call updateMany with the same parameters.
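Purely as a hedged sketch — the URL, API key, and the "vaccinated" field below are invented for illustration — an updateMany call from Python could look like this:
```python
# update_many.py -- illustrative sketch only; the URL, API key, and the
# "vaccinated" field are invented for this example.
import requests

URL = "https://data.mongodb-api.com/app/data-YOUR_ID/endpoint/data/v1/action/updateMany"
API_KEY = "YOUR_API_KEY"

payload = {
    "dataSource": "Cluster0",
    "database": "household",
    "collection": "pets",
    "filter": {"breed": "Labrador"},            # match every Labrador
    "update": {"$set": {"vaccinated": True}},   # apply the same change to all of them
}

response = requests.post(URL, json=payload, headers={"api-key": API_KEY})
print(response.json())  # e.g. {"matchedCount": 3, "modifiedCount": 3}
```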
### Run an aggregation pipeline to compute something
You can also run aggregation pipelines. As a simple example of how to call the aggregate endpoint, let's determine the count and average age for each colour of labrador.
Aggregation pipelines are the more powerful part of the MongoDB Query API. As well as looking up documents, a pipeline allows you to calculate aggregate values across multiple documents. The following example extracts all labrador documents from the "pets" collection, groups them by their "colour" field, and then calculates the number of dogs ($sum of 1 for each dog document) and the average age of the dogs (using $avg) for each colour.
```shell
curl --location --request POST https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/aggregate \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg" \
--data-raw '{
"dataSource": "Cluster0",
"database": "household",
"collection": "pets",
"pipeline" : { "$match": {"breed": "Labrador"}},
{ "$group": { "_id" : "$colour",
"count" : { "$sum" : 1},
"average_age": {"$avg": "$age" }}}]}'
}'
```
When I ran the above query, the result looked like this:
```json
{"documents":[{"_id":"yellow","count":1,"average_age":5},{"_id":"black","count":2,"average_age":5}]}
```
It's worth noting that there are some limitations when running aggregation pipelines through the Data API.
## Advanced features
When it comes to authentication and authorization, or just securing access to the Data API in general, you have a few options. These options take advantage of the fact that your Data API is a MongoDB Atlas Application Services app behind the scenes!
You can access the application by clicking on "Advanced Settings" on your Data API console page.
The rest of this section will use the features of this Atlas Application Services app, rather than the high level Data API pages.
### Restrict access by IP address
Restricting access to your API endpoint from only the servers that should have access is a relatively straightforward but effective way of locking down your API. You can change the list of IP addresses by clicking on "App Settings" in the left-hand navigation bar, and then clicking on the "IP Access List" tab on the settings pane.
By default, all IP addresses are allowed to access your API endpoint (that's what the 0.0.0.0/0 entry means). If you want to lock down access to your API, you should delete this entry and add entries for the servers that should be able to access your data. There's a convenient button to add your current IP address for when you're writing code against your API endpoint.
### Authentication using JWTs and JWK
In all the examples in this post, I've shown you how to use an API key to access your data. But by using the Atlas Application Services app, you can instead lock down access to your data using JSON Web Tokens (JWTs) and email/password credentials. JWTs have the benefit that you can use an external authentication service or identity provider, like Auth0 or Okta, to authenticate the users of your application. The auth service can provide a JWT that your application can use to make authenticated queries using the Data API, and it provides a JWK (JSON Web Key) URL that the Data API can use to verify that incoming requests have been authenticated by that service.
My colleague Jesse (you may know him as codeSTACKr) has written a great tutorial for getting this up and running with the Data API and Auth0, and the same process applies for accepting JWTs with the Data API. By first clicking on "Advanced Settings" to access the configuration of the app that provides your Data API endpoints behind the scenes and going into “Authentication,” you can enable the provider with the appropriate signing key and algorithm.
Instead of setting up a trigger to create a new user document when a new JWT is encountered, however, set "Create User Upon Authentication" in the User Settings panel on the Data API configuration to "on."
### Giving role-based access to the Data API
For each cluster, you can set high-level access permissions like Read-Only Access, Read & Write Access, or No Access. However, you can also take this one step further by setting custom role-based access-control with the App Service Rules.
Selecting Custom Access will allow you to set up additional roles on who can access what data, either at the cluster, collection, document, or field level.
For example, you can restrict certain API key holders to only be able to insert documents but not delete them. Each API key created is associated with a `user.id` value that these rules can reference.
### Add additional business logic with custom API endpoints
The Data API provides the basic CRUD and aggregation endpoints I've described above for accessing and manipulating the data in your MongoDB database. Because the Data API is provided by an Atlas App Services application, you also get all the goodness that goes with that, including the ability to add more API endpoints yourself, backed by the full power available to MongoDB Atlas Functions.
For example, I could write a serverless function that would look up a user's tweets using the Twitter API, combine those with a document looked up in MongoDB, and return the result:
```javascript
exports = async function({ query, headers, body }, response) {
    const collection = context.services.get("mongodb-atlas").db("user_database").collection("twitter_users");
    const username = query.user;
    // findOne() returns a Promise in an Atlas Function, so await the result:
    const userDoc = await collection.findOne({ "username": username });
    // This function is for illustration only!
    const tweets = await twitter_api.get_tweets(userDoc.twitter_id);
    return {
        user: userDoc,
        tweets: tweets
    };
};
```
By configuring this as an HTTPS endpoint, I can set things like the:
1. API route.
2. HTTPS method.
3. Custom authentication or authorization logic.
In this example, I’ve made this function available via a straightforward HTTPS GET request.
In this way, you can build an API to handle all of your application's data service requirements, all in one place. The endpoint above could be accessed with the following curl command:
```shell
curl --location --request GET 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/aggregate?user=mongodb' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header "api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg"
```
And the results would look something like this:
```json
{"user": { "username": "mongodb", "twitter_id": "MongoDB" },
"tweets": { "count": 10, "tweet_data": [...]}}
```
## Conclusion
The Data API is a powerful new MongoDB Atlas feature, giving you the ability to query your database from any environment that supports HTTPS. It also supports powerful social authentication possibilities using the standard JWT and JWK technologies. And finally, you can extend your API using App Services features like Rules, Authentication, and HTTPS Endpoints. | md | {
"tags": [
"Atlas"
],
"pageDescription": "This article introduces the Atlas Data API and describes how to enable it and then call it from cURL.",
"contentType": "Article"
} | An Introduction to the MongoDB Atlas Data API | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-fle | created | # Store Sensitive Data With Python & MongoDB Client-Side Field Level Encryption
With a combination of legislation around customer data protection (such as GDPR) and increasing legislation around money laundering, it's increasingly necessary to be able to store sensitive customer data *securely*. While MongoDB's default security is based on modern industry standards, such as TLS for the transport layer and SCRAM-SHA-256 for password exchange, it's still possible for someone to get into your database, either by attacking your server through a different vector, or by somehow obtaining your security credentials.
In these situations, you can add an extra layer of security to the most sensitive fields in your database using client-side field level encryption (CSFLE). CSFLE encrypts certain fields that you specify, within the driver, on the client, so that it is never transmitted unencrypted, nor seen unencrypted by the MongoDB server. CSFLE makes it nearly impossible to obtain sensitive information from the database server either directly through intercepting data from the client, or from reading data directly from disk, even with DBA or root credentials.
There are two ways to use CSFLE in MongoDB: *explicit*, where your code has to manually encrypt each sensitive value using helper methods before it is inserted or updated; and *implicit*, where you declare in your collection which fields should be encrypted using an extended JSON Schema, and the Python driver handles encryption and decryption without any further code changes. This tutorial will cover *implicit* CSFLE, which is only available in MongoDB Enterprise and MongoDB Atlas. If you're running MongoDB Community Server, you'll need to use explicit CSFLE, which won't be covered here.
## Prerequisites
- A recent release of Python 3. The code in this post was written for 3.8, but any release of Python 3.6+ should be fine.
- A MongoDB Atlas cluster running MongoDB 4.2 or later.
## Getting Set Up
There are two things you need to have installed on your app server to enable CSFLE in the PyMongo driver. The first is a Python library called pymongocrypt, which you can install by running the following with your virtualenv enabled:
``` bash
python -m pip install "pymongoencryption,srv]~=3.11"
```
The `[encryption]` extra in square brackets tells pip to install the optional dependencies required to encrypt data within the PyMongo driver.
The second thing you'll need to have installed is mongocryptd, which is an application that is provided as part of [MongoDB Enterprise. Follow the instructions to install mongocryptd on to the machine you'll be using to run your Python code. In a production environment, it's recommended to run mongocryptd as a service at startup on your VM or container.
Test that you have mongocryptd installed in your path by running `mongocryptd`, ensuring that it prints out some output. You can then shut it down again with `Ctrl-C`.
## Creating a Key to Encrypt and Decrypt Your Data
First, I'll show you how to write a script to generate a new secret master key which will be used to protect individual field keys. In this tutorial, we will be using a "local" master key which will be stored on the application side either in-line in code or in a local key file. Note that a local key file should only be used in development. For production, it's strongly recommended to either use one of the integrated native cloud key management services or retrieve the master key from a secrets manager such as Hashicorp Vault. This Python script will generate some random bytes to be used as a secret master key. It will then create a new field key in MongoDB, encrypted using the master key. The master key will be written out to a file so it can be loaded by other python scripts, along with a JSON schema document that will tell PyMongo which fields should be encrypted and how.
>All of the code described in this post is on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!
First, here's a few imports you'll need. Paste these into a file called `create_key.py`.
``` python
# create_key.py
import os
from pathlib import Path
from secrets import token_bytes
from bson import json_util
from bson.binary import STANDARD
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption
from pymongo.encryption_options import AutoEncryptionOpts
```
The first thing you need to do is to generate 96 bytes of random data. Fortunately, Python ships with a module for exactly this purpose, called `secrets`. You can use the `token_bytes` method for this:
``` python
# create_key.py
# Generate a secure 96-byte secret key:
key_bytes = token_bytes(96)
```
Next, here's some code that creates a MongoClient, configured with a local key management system (KMS).
>**Note**: Storing the master key, unencrypted, on a local filesystem (which is what I do in this demo code) is insecure. In production you should use a secure KMS, such as AWS KMS, Azure Key Vault, or Google's Cloud KMS.
>
>I'll cover this in a later blog post, but if you want to get started now, you should read the documentation.
Add this code to your `create_key.py` script:
``` python
# create_key.py
# Configure a single, local KMS provider, with the saved key:
kms_providers = {"local": {"key": key_bytes}}
csfle_opts = AutoEncryptionOpts(
kms_providers=kms_providers, key_vault_namespace="csfle_demo.__keystore"
)
# Connect to MongoDB with the key information generated above:
with MongoClient(os.environ"MDB_URL"], auto_encryption_opts=csfle_opts) as client:
print("Resetting demo database & keystore ...")
client.drop_database("csfle_demo")
# Create a ClientEncryption object to create the data key below:
client_encryption = ClientEncryption(
kms_providers,
"csfle_demo.__keystore",
client,
CodecOptions(uuid_representation=STANDARD),
)
print("Creating key in MongoDB ...")
key_id = client_encryption.create_data_key("local", key_alt_names=["example"])
```
Once the client is configured in the code above, it's used to drop any existing "csfle_demo" database, just to ensure that running this or other scripts doesn't result in your database being left in a weird state.
The configuration and the client is then used to create a ClientEncryption object that you'll use once to create a data key in the `__keystore` collection in the `csfle_demo` database. `create_data_key` will create a document in the `__keystore` collection that will look a little like this:
``` python
{
'_id': UUID('00c63aa2-059d-4548-9e18-54452195acd0'),
'creationDate': datetime.datetime(2020, 11, 24, 11, 25, 0, 974000),
'keyAltNames': ['example'],
'keyMaterial': b'W\xd2"\xd7\xd4d\x02e/\x8f|\x8f\xa2\xb6\xb1\xc0Q\xa0\x1b\xab ...',
'masterKey': {'provider': 'local'},
'status': 0,
'updateDate': datetime.datetime(2020, 11, 24, 11, 25, 0, 974000)
}
```
Now you have two keys! One is the 96 random bytes you generated with `token_bytes` - that's the master key (which remains outside the database). And there's another key in the `__keystore` collection! This is because MongoDB CSFLE uses [envelope encryption. The key that is actually used to encrypt field values is stored in the database, but it is stored encrypted with the master key you generated.
To make sure you don't lose the master key, here's some code you should add to your script which will save it to a file called `key_bytes.bin`.
``` python
# create_key.py
Path("key_bytes.bin").write_bytes(key_bytes)
```
Finally, you need a JSON schema structure that will tell PyMongo which fields need to be encrypted, and how. The schema needs to reference the key you created in `__keystore`, and you have that in the `key_id` variable, so this script is a good place to generate the JSON file. Add the following to the end of your script:
``` python
# create_key.py
schema = {
"bsonType": "object",
"properties": {
"ssn": {
"encrypt": {
"bsonType": "string",
# Change to "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" in order to filter by ssn value:
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random",
"keyId": key_id], # Reference the key
}
},
},
}
json_schema = json_util.dumps(
schema, json_options=json_util.CANONICAL_JSON_OPTIONS, indent=2
)
Path("json_schema.json").write_text(json_schema)
```
Now you can run this script. First, set the environment variable `MDB_URL` to the URL for your Atlas cluster. The script should create two files locally: `key_bytes.bin`, containing your master key; and `json_schema.json`, containing your JSON schema. In your database, there should be a `__keystore` collection containing your new (encrypted) field key! The easiest way to check this out is to go to cloud.mongodb.com, find your cluster, and click on `Collections`.
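If you'd rather verify this from a script than in the Atlas UI, here's a small sketch (my own addition, assuming the same `MDB_URL` environment variable) that looks the key document up by its `keyAltNames` value; the key material it prints is itself encrypted with your master key:
``` python
# check_key.py - optional sanity check that the field key was created.
import os
from pymongo import MongoClient

with MongoClient(os.environ["MDB_URL"]) as client:
    # Collection names starting with an underscore need dict-style access:
    key_doc = client.csfle_demo["__keystore"].find_one({"keyAltNames": "example"})
    print(key_doc)
```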
## Run Queries Using Your Key and Schema
Create a new file, called `csfle_main.py`. This script will connect to your MongoDB cluster using the key and schema created by running `create_key.py`. I'll then show you how to insert a document, and retrieve it both with and without CSFLE configuration, to show how it is stored encrypted and transparently decrypted by PyMongo when the correct configuration is provided.
Start with some code to import the necessary modules and load the saved files:
``` python
# csfle_main.py
import os
from pathlib import Path
from pymongo import MongoClient
from pymongo.encryption_options import AutoEncryptionOpts
from pymongo.errors import EncryptionError
from bson import json_util
# Load the master key from 'key_bytes.bin':
key_bin = Path("key_bytes.bin").read_bytes()
# Load the 'person' schema from "json_schema.json":
collection_schema = json_util.loads(Path("json_schema.json").read_text())
```
Add the following configuration needed to connect to MongoDB:
``` python
# csfle_main.py
# Configure a single, local KMS provider, with the saved key:
kms_providers = {"local": {"key": key_bin}}
# Create a configuration for PyMongo, specifying the local master key,
# the collection used for storing key data, and the json schema specifying
# field encryption:
csfle_opts = AutoEncryptionOpts(
kms_providers,
"csfle_demo.__keystore",
schema_map={"csfle_demo.people": collection_schema},
)
```
The code above is very similar to the configuration created in `create_key.py`. Note that this time, `AutoEncryptionOpts` is passed a `schema_map`, mapping the loaded JSON schema against the `people` collection in the `csfle_demo` database. This will let PyMongo know which fields to encrypt and decrypt, and which algorithms and keys to use.
At this point, it's worth taking a look at the JSON schema that you're loading. It's stored in `json_schema.json`, and it should look a bit like this:
``` json
{
"bsonType": "object",
"properties": {
"ssn": {
"encrypt": {
"bsonType": "string",
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random",
"keyId":
{
"$binary": {
"base64": "4/p3dLgeQPyuSaEf+NddHw==",
"subType": "04"}}]
}}}}
```
This schema specifies that the `ssn` field, used to store a social security number, is a string which should be stored encrypted using the [AEAD_AES_256_CBC_HMAC_SHA_512-Random algorithm.
If you don't want to store the schema in a file when you generate your field key in MongoDB, you can load the key ID at any time using the values you set for `keyAltNames` when you created the key. In my case, I set `keyAltNames` to `["example"]`, so I could look it up using the following line of code:
``` python
key_id = db["__keystore"].find_one({"keyAltNames": "example"})["_id"]
```
Because my code in `create_key.py` writes out the schema at the same time as generating the key, it already has access to the key's ID so the code doesn't need to look it up.
Add the following code to connect to MongoDB using the configuration you added above:
``` python
# csfle_main.py
# Add a new document to the "people" collection, and then read it back out
# to demonstrate that the ssn field is automatically decrypted by PyMongo:
with MongoClient(os.environ["MDB_URL"], auto_encryption_opts=csfle_opts) as client:
client.csfle_demo.people.delete_many({})
client.csfle_demo.people.insert_one({
"full_name": "Sophia Duleep Singh",
"ssn": "123-12-1234",
})
print("Decrypted find() results: ")
print(client.csfle_demo.people.find_one())
```
The code above connects to MongoDB and clears any existing documents from the `people` collection. It then adds a new person document, for Sophia Duleep Singh, with a fictional `ssn` value.
Just to prove the data can be read back from MongoDB and decrypted by PyMongo, the last line of code queries back the record that was just added and prints it to the screen. When I ran this code, it printed:
``` none
{'_id': ObjectId('5fc12f13516b61fa7a99afba'), 'full_name': 'Sophia Duleep Singh', 'ssn': '123-12-1234'}
```
To prove that the data is encrypted on the server, you can connect to your cluster using [Compass or at cloud.mongodb.com, but it's not a lot of code to connect again without encryption configuration, and query the document:
``` python
# csfle_main.py
# Connect to MongoDB, but this time without CSFLE configuration.
# This will print the document with ssn *still encrypted*:
with MongoClient(os.environ"MDB_URL"]) as client:
print("Encrypted find() results: ")
print(client.csfle_demo.people.find_one())
```
When I ran this, it printed out:
``` none
{
'_id': ObjectId('5fc12f13516b61fa7a99afba'),
'full_name': 'Sophia Duleep Singh',
'ssn': Binary(b'\x02\xe3\xfawt\xb8\x1e@\xfc\xaeI\xa1\x1f\xf8\xd7]\x1f\x02\xd8+,\x9el ...', 6)
}
```
That's a very different result from '123-12-1234'! Unfortunately, when you use the Random encryption algorithm, you lose the ability to filter on the field. You can see this if you add the following code to the end of your script and execute it:
``` python
# csfle_main.py
# The following demonstrates that if the ssn field is encrypted as
# "Random" it cannot be filtered:
try:
with MongoClient(os.environ["MDB_URL"], auto_encryption_opts=csfle_opts) as client:
# This will fail if ssn is specified as "Random".
# Change the algorithm to "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
# in client_schema_create_key.py (and run it again) for this to succeed:
print("Find by ssn: ")
print(client.csfle_demo.people.find_one({"ssn": "123-12-1234"}))
except EncryptionError as e:
# This is expected if the field is "Random" but not if it's "Deterministic"
print(e)
```
When you execute this block of code, it will print an exception saying, "Cannot query on fields encrypted with the randomized encryption algorithm...". `AEAD_AES_256_CBC_HMAC_SHA_512-Random` is the correct algorithm to use for sensitive data you won't have to filter on, such as medical conditions, security questions, etc. It also provides better protection against frequency analysis recovery, and so should probably be your default choice for encrypting sensitive data, especially data that is high-cardinality, such as a credit card number, phone number, or ... yes ... a social security number. But there's a distinct probability that you might want to search for someone by their Social Security number, given that it's a unique identifier for a person, and you can do this by encrypting it using the "Deterministic" algorithm.
In order to fix this, open up `create_key.py` again and change the algorithm in the schema definition from `Random` to `Deterministic`, so it looks like this:
``` python
# create_key.py
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
```
Re-run `create_key.py` to generate a new master key, field key, and schema file. (This operation will also delete your `csfle_demo` database!) Run `csfle_main.py` again. This time, the block of code that failed before should instead print out the details of Sophia Duleep Singh.
The problem with this way of configuring your client is that if some other code is misconfigured, it can either save unencrypted values in the database or save them using the wrong key or algorithm. Here's an example of some code to add a second record, for Dora Thewlis. Unfortunately, this time, the configuration has not provided a `schema_map`! What this means is that the SSN for Dora Thewlis will be stored in plaintext.
``` python
# Configure encryption options with the same key, but *without* a schema:
csfle_opts_no_schema = AutoEncryptionOpts(
kms_providers,
"csfle_demo.__keystore",
)
with MongoClient(
os.environ["MDB_URL"], auto_encryption_opts=csfle_opts_no_schema
) as client:
print("Inserting Dora Thewlis, without configured schema.")
# This will insert a document *without* encrypted ssn, because
# no schema is specified in the client or server:
client.csfle_demo.people.insert_one({
"full_name": "Dora Thewlis",
"ssn": "234-23-2345",
})
# Connect without CSFLE configuration to show that Sophia Duleep Singh is
# encrypted, but Dora Thewlis has her ssn saved as plaintext.
with MongoClient(os.environ["MDB_URL"]) as client:
print("Encrypted find() results: ")
for doc in client.csfle_demo.people.find():
print(" *", doc)
```
If you paste the above code into your script and run it, it should print out something like this, demonstrating that one of the documents has an encrypted SSN, and the other's is plaintext:
``` none
* {'_id': ObjectId('5fc12f13516b61fa7a99afba'), 'full_name': 'Sophia Duleep Singh', 'ssn': Binary(b'\x02\xe3\xfawt\xb8\x1e@\xfc\xaeI\xa1\x1f\xf8\xd7]\x1f\x02\xd8+,\x9el\xfe\xee\xa7\xd9\x87+\xb9p\x9a\xe7\xdcjY\x98\x82]7\xf0\xa4G[]\xd2OE\xbe+\xa3\x8b\xf5\x9f\x90u6>\xf3(6\x9c\x1f\x8e\xd8\x02\xe5\xb5h\xc64i>\xbf\x06\xf6\xbb\xdb\xad\xf4\xacp\xf1\x85\xdbp\xeau\x05\xe4Z\xe9\xe9\xd0\xe9\xe1n<', 6)}
* {'_id': ObjectId('5fc12f14516b61fa7a99afc0'), 'full_name': 'Dora Thewlis', 'ssn': '234-23-2345'}
```
*Fortunately*, MongoDB provides the ability to attach a validator to a collection, to ensure that the data stored is encrypted according to the schema.
In order to have a schema defined on the server-side, return to your `create_key.py` script, and instead of writing out the schema to a JSON file, provide it to the `create_collection` method as a JSON Schema validator:
``` python
# create_key.py
print("Creating 'people' collection in 'csfle_demo' database (with schema) ...")
client.csfle_demo.create_collection(
"people",
codec_options=CodecOptions(uuid_representation=STANDARD),
validator={"$jsonSchema": schema},
)
```
Providing a validator attaches the schema to the created collection, so there's no need to save the file locally, no need to read it into `csfle_main.py`, and no need to provide it to MongoClient anymore. It will be stored and enforced by the server. This simplifies both the key generation code and the code to query the database, *and* it ensures that the SSN field will always be encrypted correctly. Bonus!
The definition of `csfle_opts` becomes:
``` python
# csfle_main.py
csfle_opts = AutoEncryptionOpts(
kms_providers,
"csfle_demo.__keystore",
)
```
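To see the server-side schema doing its job, you could optionally add a check like the following to the end of `csfle_main.py`. This is my own sketch, not part of the original scripts: with the validator attached, inserting a plaintext `ssn` from a client that has no CSFLE configuration at all should be rejected by the server. The "Plaintext Pete" document is invented purely for this illustration.
``` python
# csfle_main.py (optional check)
import os
from pymongo import MongoClient
from pymongo.errors import OperationFailure

# Connect with *no* encryption configuration and try to store a plaintext ssn.
# The $jsonSchema validator on the collection should reject the write:
with MongoClient(os.environ["MDB_URL"]) as client:
    try:
        client.csfle_demo.people.insert_one({
            "full_name": "Plaintext Pete",
            "ssn": "999-99-9999",
        })
        print("Unexpected: the server accepted a plaintext ssn.")
    except OperationFailure as e:
        # Expected: "Document failed validation" (WriteError is a subclass
        # of OperationFailure, so this catches it either way).
        print("Rejected by the server-side schema:", e)
```
Running `csfle_main.py` with the encryption-enabled client should still work exactly as before, because PyMongo now picks the schema up from the server.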
## In Conclusion
By completing this quick start, you've learned how to:
- Create a secure random key for encrypting data keys in MongoDB.
- Use local key storage to store a key during development.
- Create a Key in MongoDB (encrypted with your local key) to encrypt data in MongoDB.
- Use a JSON Schema to define which fields should be encrypted.
- Assign the JSON Schema to a collection to validate encrypted fields on the server.
As mentioned earlier, you should *not* use local key storage to manage your key - it's insecure. You can store the key manually in a KMS of your choice, such as Hashicorp Vault, or if you're using one of the three major cloud providers, their KMS services are already integrated into PyMongo. Read the documentation to find out more.
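As a rough sketch of what that switch looks like — the field names below follow the PyMongo CSFLE documentation for the AWS provider, but the region, key ARN, and environment variables are placeholders you'd replace with your own, so treat this as illustrative rather than definitive — creating a data key protected by AWS KMS mostly means changing the `kms_providers` dictionary and passing a `master_key` reference to `create_data_key`:
``` python
# aws_kms_sketch.py - illustrative only; prefer IAM roles over hard-coded
# credentials and check the PyMongo docs for the current options.
import os
from bson.binary import STANDARD
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption

kms_providers = {
    "aws": {
        "accessKeyId": os.environ["AWS_ACCESS_KEY_ID"],
        "secretAccessKey": os.environ["AWS_SECRET_ACCESS_KEY"],
    }
}

with MongoClient(os.environ["MDB_URL"]) as client:
    client_encryption = ClientEncryption(
        kms_providers,
        "csfle_demo.__keystore",
        client,
        CodecOptions(uuid_representation=STANDARD),
    )
    # The master key now lives in AWS KMS; reference it by region and ARN
    # (both values here are placeholders):
    key_id = client_encryption.create_data_key(
        "aws",
        master_key={
            "region": "eu-west-1",
            "key": "arn:aws:kms:eu-west-1:123456789012:key/EXAMPLE-KEY-ID",
        },
        key_alt_names=["example-aws"],
    )
    print("Created AWS-protected data key:", key_id)
```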
>I hope you enjoyed this post! Let us know what you think on the MongoDB Community Forums.
There is a lot of documentation about Client-Side Field-Level Encryption, in different places. Here are the docs I found useful when writing this post:
- PyMongo CSFLE Docs
- Client-Side Field Level Encryption docs
- Schema Validation
- MongoDB University CSFLE Guides Repository
If CSFLE doesn't quite fit your security requirements, you should check out our other security docs, which cover encryption at rest and configuring transport encryption, among other things.
As always, if you have any questions, or if you've built something cool, let us know on the MongoDB Community Forums! | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Store data securely in MongoDB using Client-Side Field-Level Encryption",
"contentType": "Quickstart"
} | Store Sensitive Data With Python & MongoDB Client-Side Field Level Encryption | 2024-05-20T17:32:23.501Z |