devcenter | https://www.mongodb.com/developer/languages/javascript/node-aggregation-framework | created | # Aggregation Framework with Node.js Tutorial
When you want to analyze data stored in MongoDB, you can use MongoDB's powerful aggregation framework to do so. Today, I'll give you a high-level overview of the aggregation framework and show you how to use it.
>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.
>
>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
If you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! So far, we've covered how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. The code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.
And, with that, let's dive into the aggregation framework!
>If you are more of a video person than an article person, fear not. I've made a video just for you! The video below covers the same content as this article.
>
>:youtube[]{vid=iz37fDe1XoM}
>
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
## What is the Aggregation Framework?
The aggregation framework allows you to analyze your data in real time. Using the framework, you can create an aggregation pipeline that consists of one or more stages. Each stage transforms the documents and passes the output to the next stage.
If you're familiar with the Linux pipe ( `|` ), you can think of the aggregation pipeline as a very similar concept. Just as output from one command is passed as input to the next command when you use piping, output from one stage is passed as input to the next stage when you use the aggregation pipeline.
The aggregation framework has a variety of stages available for you to use. Today, we'll discuss the basics of how to use $match, $group, $sort, and $limit. Note that the aggregation framework has many other powerful stages including $count, $geoNear, $graphLookup, $project, $unwind, and others.
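To make that stage-to-stage flow concrete, here's a quick, illustrative pipeline shape (the field names come from the listing data we'll work with below; the real pipeline for this tutorial is built step by step later):
``` js
const pipeline = [
  // Stage 1: only one-bedroom listings continue down the pipeline
  { $match: { bedrooms: 1 } },
  // Stage 2: receives only what Stage 1 emitted and counts listings per market
  { $group: { _id: "$address.market", count: { $sum: 1 } } }
];
```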
## How Do You Use the Aggregation Framework?
I'm hoping to visit the beautiful city of Sydney, Australia soon. Sydney is a huge city with many suburbs, and I'm not sure where to start looking for a cheap rental. I want to know which Sydney suburbs have, on average, the cheapest one-bedroom Airbnb listings.
I could write a query to pull all of the one-bedroom listings in the Sydney area and then write a script to group the listings by suburb and calculate the average price per suburb. Or, I could write a single command using the aggregation pipeline. Let's use the aggregation pipeline.
There are a variety of ways you can create aggregation pipelines. You can write them manually in a code editor or create them visually inside of MongoDB Atlas or MongoDB Compass. In general, I don't recommend writing pipelines manually, as it's much easier to understand what your pipeline is doing and spot errors when you use a visual editor. Since you're already set up to use MongoDB Atlas for this blog series, we'll create our aggregation pipeline in Atlas.
### Navigate to the Aggregation Pipeline Builder in Atlas
The first thing we need to do is navigate to the Aggregation Pipeline Builder in Atlas.
1. Navigate to Atlas and authenticate if you're not already authenticated.
2. In the **Organizations** menu in the upper-left corner, select the organization you are using for this Quick Start series.
3. In the **Projects** menu (located beneath the Organizations menu), select the project you are using for this Quick Start series.
4. In the right pane for your cluster, click **COLLECTIONS**.
5. In the list of databases and collections that appears, select **listingsAndReviews**.
6. In the right pane, select the **Aggregation** view to open the Aggregation Pipeline Builder.
The Aggregation Pipeline Builder provides you with a visual representation of your aggregation pipeline. Each stage is represented by a new row. You can put the code for each stage on the left side of a row, and the Aggregation Pipeline Builder will automatically provide a live sample of results for that stage on the right side of the row.
## Build an Aggregation Pipeline
Now we are ready to build an aggregation pipeline.
### Add a $match Stage
Let's begin by narrowing down the documents in our pipeline to one-bedroom listings in the Sydney, Australia market where the room type is "Entire home/apt." We can do so by using the $match stage.
1. On the row representing the first stage of the pipeline, choose **$match** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$match` operator in the code box for the stage.
2. Now we can input a query in the code box. The query syntax for `$match` is the same as the `findOne()` syntax that we used in a previous post. Replace the code in the `$match` stage's code box with the following:
``` json
{
bedrooms: 1,
"address.country": "Australia",
"address.market": "Sydney",
"address.suburb": { $exists: 1, $ne: "" },
room_type: "Entire home/apt"
}
```
Note that we will be using the `address.suburb` field later in the pipeline, so we are filtering out documents where `address.suburb` does not exist or is represented by an empty string.
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$match` stage is executed.
### Add a $group Stage
Now that we have narrowed our documents down to one-bedroom listings in the Sydney, Australia market, we are ready to group them by suburb. We can do so by using the $group stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$group** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$group` operator in the code box for the stage.
3. Now we can input code for the `$group` stage. We will provide an `_id`, which is the field that the Aggregation Framework will use to create our groups. In this case, we will use `$address.suburb` as our `_id`. Inside of the $group stage, we will also create a new field named `averagePrice`. We can use the $avg aggregation pipeline operator to calculate the average price for each suburb. Replace the code in the $group stage's code box with the following:
``` json
{
_id: "$address.suburb",
averagePrice: {
"$avg": "$price"
}
}
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$group` stage is executed. Note that the documents have been transformed. Instead of having a document for each listing, we now have a document for each suburb. The suburb documents have only two fields: `_id` (the name of the suburb) and `averagePrice`.
### Add a $sort Stage
Now that we have the average prices for suburbs in the Sydney, Australia market, we are ready to sort them to discover which are the least expensive. We can do so by using the $sort stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$sort** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$sort` operator in the code box for the stage.
3. Now we are ready to input code for the `$sort` stage. We will sort on the `$averagePrice` field we created in the previous stage. We will indicate we want to sort in ascending order by passing `1`. Replace the code in the `$sort` stage's code box with the following:
``` json
{
"averagePrice": 1
}
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$sort` stage is executed. Note that the documents have the same shape as the documents in the previous stage; the documents are simply sorted from least to most expensive.
### Add a $limit Stage
Now we have the average prices for suburbs in the Sydney, Australia market sorted from least to most expensive. We may not want to work with all of the suburb documents in our application. Instead, we may want to limit our results to the 10 least expensive suburbs. We can do so by using the $limit stage.
1. Click **ADD STAGE**. A new stage appears in the pipeline.
2. On the row representing the new stage of the pipeline, choose **$limit** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$limit` operator in the code box for the stage.
3. Now we are ready to input code for the `$limit` stage. Let's limit our results to 10 documents. Replace the code in the $limit stage's code box with the following:
``` json
10
```
The Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 10 documents that will be included in the results after the `$limit` stage is executed. Note that the documents have the same shape as the documents in the previous stage; we've simply limited the number of results to 10.
## Execute an Aggregation Pipeline in Node.js
Now that we have built an aggregation pipeline, let's execute it from inside of a Node.js script.
### Get a Copy of the Node.js Template
To make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.
1. Download a copy of template.js.
2. Open `template.js` in your favorite code editor.
3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.
4. Save the file as `aggregation.js`.
You can run this file by executing `node aggregation.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.
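If you'd like a mental model of what that template contains, it's essentially a script that opens a connection, leaves room for your database calls, and closes the connection. Here's a rough sketch (not the exact template, so prefer the downloaded file; the placeholder URI is something you replace with your own connection string):
``` js
const { MongoClient } = require('mongodb');

async function main() {
    // Replace the following with the connection URI for your Atlas cluster
    const uri = 'mongodb+srv://<username>:<password>@<your-cluster-url>/test?retryWrites=true&w=majority';
    const client = new MongoClient(uri);

    try {
        await client.connect();
        // Make the appropriate DB calls
    } finally {
        await client.close();
    }
}

main().catch(console.error);
```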
### Create a Function
Let's create a function whose job it is to print the cheapest suburbs for a given market.
1. Continuing to work in `aggregation.js`, create an asynchronous function named `printCheapestSuburbs` that accepts a connected MongoClient, a country, a market, and the maximum number of results to print as parameters.
``` js
async function printCheapestSuburbs(client, country, market, maxNumberToPrint) {
}
```
2. We can execute a pipeline in Node.js by calling Collection's aggregate(). Paste the following in your new function:
``` js
const pipeline = [];
const aggCursor = client.db("sample_airbnb")
.collection("listingsAndReviews")
.aggregate(pipeline);
```
3. The first param for `aggregate()` is the pipeline to run: an array of stage objects. We could manually create the pipeline here. Since we've already created a pipeline inside of Atlas, let's export the pipeline from there. Return to the Aggregation Pipeline Builder in Atlas. Click the **Export pipeline code to language** button.
*(Screenshot: the Export pipeline code to language button in Atlas.)*
4. The **Export Pipeline To Language** dialog appears. In the **Export Pipeline To** selection box, choose **NODE**.
5. In the Node pane on the right side of the dialog, click the **copy** button.
6. Return to your code editor and paste the `pipeline` in place of the empty array currently assigned to the pipeline constant.
``` js
const pipeline = [
{
'$match': {
'bedrooms': 1,
'address.country': 'Australia',
'address.market': 'Sydney',
'address.suburb': {
'$exists': 1,
'$ne': ''
},
'room_type': 'Entire home/apt'
}
}, {
'$group': {
'_id': '$address.suburb',
'averagePrice': {
'$avg': '$price'
}
}
}, {
'$sort': {
'averagePrice': 1
}
}, {
'$limit': 10
}
];
```
7. This pipeline would work fine as written. However, it is hardcoded to search for 10 results in the Sydney, Australia market. We should update this pipeline to be more generic. Make the following replacements in the pipeline definition:
1. Replace `'Australia'` with `country`
2. Replace `'Sydney'` with `market`
3. Replace `10` with `maxNumberToPrint`
8. `aggregate()` will return an AggregationCursor, which we are storing in the `aggCursor` constant. An AggregationCursor allows traversal over the aggregation pipeline results. We can use AggregationCursor's forEach() to iterate over the results. Paste the following inside `printCheapestSuburbs()` below the definition of `aggCursor`.
``` js
await aggCursor.forEach(airbnbListing => {
console.log(`${airbnbListing._id}: ${airbnbListing.averagePrice}`);
});
```
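Putting steps 1 through 8 together, the completed function (after the generic replacements above) looks roughly like this:
``` js
async function printCheapestSuburbs(client, country, market, maxNumberToPrint) {
    const pipeline = [
        {
            '$match': {
                'bedrooms': 1,
                'address.country': country,
                'address.market': market,
                'address.suburb': { '$exists': 1, '$ne': '' },
                'room_type': 'Entire home/apt'
            }
        },
        { '$group': { '_id': '$address.suburb', 'averagePrice': { '$avg': '$price' } } },
        { '$sort': { 'averagePrice': 1 } },
        { '$limit': maxNumberToPrint }
    ];

    const aggCursor = client.db("sample_airbnb")
        .collection("listingsAndReviews")
        .aggregate(pipeline);

    await aggCursor.forEach(airbnbListing => {
        console.log(`${airbnbListing._id}: ${airbnbListing.averagePrice}`);
    });
}
```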
### Call the Function
Now we are ready to call our function to print the 10 cheapest suburbs in the Sydney, Australia market. Add the following call in the `main()` function beneath the comment that says `Make the appropriate DB calls`.
``` js
await printCheapestSuburbs(client, "Australia", "Sydney", 10);
```
Running aggregation.js results in the following output:
``` json
Balgowlah: 45.00
Willoughby: 80.00
Marrickville: 94.50
St Peters: 100.00
Redfern: 101.00
Cronulla: 109.00
Bellevue Hill: 109.50
Kingsgrove: 112.00
Coogee: 115.00
Neutral Bay: 119.00
```
Now I know what suburbs to begin searching as I prepare for my trip to Sydney, Australia.
## Wrapping Up
The aggregation framework is an incredibly powerful way to analyze your data. Learning to create pipelines may seem a little intimidating at first, but it's worth the investment. The aggregation framework can get results to your end-users faster and save you from a lot of scripting.
Today, we only scratched the surface of the aggregation framework. I highly recommend MongoDB University's free course specifically on the aggregation framework: M121: The MongoDB Aggregation Framework. The course has a more thorough explanation of how the aggregation framework works and provides detail on how to use the various pipeline stages.
This post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.
Now you're ready to move on to the next post in this series all about change streams and triggers. In that post, you'll learn how to automatically react to changes in your database.
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums. | md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Discover how to analyze your data using MongoDB's Aggregation Framework and Node.js.",
"contentType": "Quickstart"
} | Aggregation Framework with Node.js Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/real-time-chat-phaser-game-mongodb-socketio | created | md | {
"tags": [
"MongoDB",
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to add real-time chat to a Phaser game with Socket.io and MongoDB.",
"contentType": "Tutorial"
} | Real-Time Chat in a Phaser Game with MongoDB and Socket.io | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/jetpack-compose-experience-android | created | # Unboxing Jetpack Compose: My First Compose App
# Introduction
### What is Jetpack Compose?
As per Google, *“Jetpack Compose is Android’s modern toolkit for building native UI. It simplifies and accelerates UI development on Android. Quickly bring your app to life with less code, powerful tools, and intuitive Kotlin APIs”.*
In my words, it’s a revolutionary **declarative way** of creating (or should I say composing 😄) UI in Android using Kotlin. Until now, we created layouts using XML and never dared to create via code (except for custom views, no choice) due to its complexity, non-intuitiveness, and maintenance issues.
But now it’s different!
### What is Declarative UI?
> You know, imperative is like **how** you do something, and declarative is more like **what** you do, or something.
Doesn’t that make sense? It didn’t make sense to me either on the first go 😄. In my opinion, imperative is more like an algorithm performing an operation step by step, and declarative is the code built on top of that algorithm, focused on ***what*** should happen.
In Android, we normally create an XML of a layout and then update (sync) each element every time based on the input received, business rules using findViewById/Kotlin Semantics/View Binding/ Data Binding 😅.
But with **Compose**, we simply write a function that has both elements and rules, which is called whenever information gets updated. In short, a part of the UI is recreated every time **without** **performance** **issues**.
This philosophy or mindset will in turn help you write smaller (Single Responsibility principle) and reusable functions.
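To see what that looks like in practice, here's a tiny illustrative composable (the names are made up for this example; imports from `androidx.compose.runtime`, `androidx.compose.foundation.layout`, and `androidx.compose.material` are omitted for brevity, as in the other snippets). When the state changes, only this function is re-executed:
```kotlin
@Composable
fun LikeCounter() {
    // remember keeps the state across recompositions;
    // changing it triggers recomposition of this function only
    val likes = remember { mutableStateOf(0) }

    Column {
        Text(text = "Likes: ${likes.value}")
        Button(onClick = { likes.value++ }) {
            Text(text = "Like")
        }
    }
}
```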
### Why is Compose Getting So Popular?
I’m not really sure, but out of the many awesome features, the ones I’ve loved most are:
1. **Faster release cycle**: Bi-weekly, so now there is a real chance that if you hit any issue with **composing**, it can be fixed soon. Hopefully!
2. **Interoperable**: Similar to Kotlin, Compose is also interoperable with earlier UI design frameworks.
3. **Jetpack library and material component built-in support**: Reduce developer efforts and time in building beautiful UI with fewer lines of code ❤️.
4. **Declarative UI**: With a new way of building UI, we are now in harmony with all other major frontend development frameworks like SwiftUI, Flutter, and React Native, making it easier for the developer to use concepts/paradigms from other platforms.
### Current state
As of 29th July, the first stable version, 1.0, was released, meaning **Compose is production-ready**.
# Get Started with Compose
### For using Compose, we need to set up a few things:
1. Kotlin v*1.5.10* and above, so let’s update our dependency in the project-level `build.gradle` file.
```kotlin
plugins {
id 'org.jetbrains.kotlin.android' version '1.5.10'
}
```
2. Minimum *API level 21*
```kotlin
android {
defaultConfig {
...
minSdkVersion 21
}
}
```
3. Enable Compose
```kotlin
android {
defaultConfig {
...
minSdkVersion 21
}
buildFeatures {
// Enables Jetpack Compose for this module
compose true
}
}
```
4. Others like min Java or Kotlin compiler and compose compiler
```kotlin
android {
defaultConfig {
...
minSdkVersion 21
}
buildFeatures {
// Enables Jetpack Compose for this module
compose true
}
...
// Set both the Java and Kotlin compilers to target Java 8.
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
kotlinOptions {
jvmTarget = "1.8"
}
composeOptions {
kotlinCompilerExtensionVersion '1.0.0'
}
}
```
5. At last compose dependency for build UI
```kotlin
dependencies {
implementation 'androidx.compose.ui:ui:1.0.0'
// Tooling support (Previews, etc.)
implementation 'androidx.compose.ui:ui-tooling:1.0.0'
// Foundation (Border, Background, Box, Image, Scroll, shapes, animations, etc.)
implementation 'androidx.compose.foundation:foundation:1.0.0'
// Material Design
implementation 'androidx.compose.material:material:1.0.0'
// Material design icons
implementation 'androidx.compose.material:material-icons-core:1.0.0'
implementation 'androidx.compose.material:material-icons-extended:1.0.0'
// Integration with activities
implementation 'androidx.activity:activity-compose:1.3.0'
// Integration with ViewModels
implementation 'androidx.lifecycle:lifecycle-viewmodel-compose:1.0.0-alpha07'
// Integration with observables
implementation 'androidx.compose.runtime:runtime-livedata:1.0.0'
}
```
### Mindset
While composing UI, you need to unlearn various types of layouts and remember just one thing: Everything is a composition of *rows* and *columns*.
But what about ConstraintLayout, which makes life so easy and is very useful for building complex UI? We can still use it ❤️, but in a little different way.
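For reference, ConstraintLayout has its own Compose artifact (`androidx.constraintlayout:constraintlayout-compose`). The following is only a minimal sketch, assuming that dependency has been added, to show the general shape of constraints in Compose:
```kotlin
@Composable
fun constraintProfileView() {
    ConstraintLayout(modifier = Modifier.fillMaxWidth()) {
        // Create references for the composables we want to constrain
        val (image, name) = createRefs()

        Image(
            painter = painterResource(id = R.drawable.ic_profile),
            contentDescription = "Profile Image",
            modifier = Modifier.constrainAs(image) {
                top.linkTo(parent.top)
                start.linkTo(parent.start)
            }
        )
        Text(
            text = "Name",
            modifier = Modifier.constrainAs(name) {
                top.linkTo(image.bottom)
                start.linkTo(parent.start)
            }
        )
    }
}
```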
### First Compose Project — Tweet Details Screen
For our learning curve experience, I decided to re-create this screen in Compose.
So let’s get started.
Create a new project with the Compose activity template and open MainActivity.
If you don’t see the Compose project, then update Android Studio to the latest version.
```kotlin
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
ComposeTweetTheme {
....
}
}
}
}
```
Now to add a view to the UI, we need to create a function with `@Composable` annotation, which makes it a Compose function.
Creating our first layout of the view, toolbar
```kotlin
@Composable
fun getAppTopBar() {
TopAppBar(
title = {
Text(text = stringResource(id = R.string.app_name))
},
elevation = 0.dp
)
}
```
To preview the UI rendered in Android Studio, we can use `@Preview` annotation.
TopAppBar is an inbuilt material component for adding a topbar to our application.
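For example, a preview for the top bar above could look like this (the `@Preview` annotation comes from `androidx.compose.ui.tooling.preview`, and the previewed function must take no parameters):
```kotlin
@Preview(showBackground = true)
@Composable
fun previewAppTopBar() {
    ComposeTweetTheme {
        getAppTopBar()
    }
}
```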
Let’s create a little more complex view, user profile view
As discussed earlier, in Compose, we have only rows and columns, so let’s break our UI 👇, where the red border represents columns and green is rows, and complete UI as a row in the screen.
So let’s create our compose function for user profile view with our root row.
You will notice the modifier argument in the Row function. This is the Compose way of adding formatting to the elements, which is uniform across all the elements.
Creating a round imageview is very simple now. No need for any library or XML drawable as an overlay.
```kotlin
Image(
painter = painterResource(id = R.drawable.ic_profile),
contentDescription = "Profile Image",
modifier = Modifier
.size(36.dp)
.clip(CircleShape)
.border(1.dp, Color.Transparent, CircleShape),
contentScale = ContentScale.Crop
)
```
Again we have a `modifier` for updating our Image (AKA ImageView) with `clip` to make it rounded and `contentScale` to scale the image.
Similarly, adding a label will be a piece of cake now.
```kotlin
Text(text = userName, fontSize = 20.sp)
```
Now let’s put it all together in rows and columns to complete the view.
```kotlin
@Composable
fun userProfileView(userName: String, userHandle: String) {
Row(
modifier = Modifier
.fillMaxWidth()
.wrapContentHeight()
.padding(all = 12.dp),
verticalAlignment = Alignment.CenterVertically
) {
Image(
painter = painterResource(id = R.drawable.ic_profile),
contentDescription = "Profile Image",
modifier = Modifier
.size(36.dp)
.clip(CircleShape)
.border(1.dp, Color.Transparent, CircleShape),
contentScale = ContentScale.Crop
)
Column(
modifier = Modifier
.padding(start = 12.dp)
) {
Text(text = userName, fontSize = 20.sp, fontWeight = FontWeight.Bold)
Text(text = userHandle, fontSize = 14.sp)
}
}
}
```
Another great example is to create a Text Label with two styles. We know that traditionally doing that is very painful.
Let’s see the Compose way of doing it.
```kotlin
Text(
text = buildAnnotatedString {
withStyle(style = SpanStyle(fontWeight = FontWeight.ExtraBold)) {
append("3")
}
append(" ")
withStyle(style = SpanStyle(fontWeight = FontWeight.Normal)) {
append(stringResource(id = R.string.retweets))
}
},
modifier = Modifier.padding(end = 8.dp)
)
```
That’s it!! I hope you’ve seen the ease of use and benefit of using Compose for building UI.
Just remember everything in Compose is rows and columns, and the order of attributes matters. You can check out the complete example in my GitHub repo, which also demonstrates rendering data using a `viewModel`.
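As a quick illustration of that last point, a composable can observe `LiveData` exposed by a `viewModel` via `observeAsState()` (from the `runtime-livedata` dependency we added earlier). `TweetViewModel` and its `tweet` property below are hypothetical stand-ins for whatever your view model actually exposes:
```kotlin
@Composable
fun tweetScreen(viewModel: TweetViewModel) {
    // Recomposes whenever the view model posts a new value
    val tweet = viewModel.tweet.observeAsState()

    tweet.value?.let {
        userProfileView(userName = it.userName, userHandle = it.userHandle)
    }
}
```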
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Realm",
"Kotlin",
"Android"
],
"pageDescription": "Learn how to get started with Jetpack Compose on Android",
"contentType": "Quickstart"
} | Unboxing Jetpack Compose: My First Compose App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/searching-on-your-location-atlas-search-geospatial-operators | created |
Bed and Breakfast [40.7128, -74.0060]:
| md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Learn how to compound Atlas Search operators and do autocomplete searches with geospatial criteria.",
"contentType": "Tutorial"
} | Searching on Your Location with Atlas Search and Geospatial Operators | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/adding-real-time-notifications-ghost-cms-using-mongodb-server-sent-events | created | # Adding Real-Time Notifications to Ghost CMS Using MongoDB and Server-Sent Events
## About Ghost
Ghost is an open-source blogging platform. Unlike other content management systems like WordPress, its focus lies on professional publishing.
This ensures the core of the system remains lean. To integrate third-party applications, you don't even need to install plugins. Instead, Ghost offers a feature called Webhooks which runs while you work on your publication.
These webhooks send particular items, such as a post or a page, to an HTTP endpoint defined by you and thereby provide an excellent base for our real-time service.
## Server-sent events
You are likely familiar with the concept of an HTTP session. A client sends a request, the server responds and then closes the connection. When using server-sent events (SSEs), said connection remains open. This allows the server to continue writing messages into the response.
Like Websockets (WS), apps and websites use SSEs for real-time communication. Where WSs use a dedicated protocol and work in both directions, SSEs are unidirectional. They use plain HTTP endpoints to write a message whenever an event occurs on the server side.
Clients can subscribe to these endpoints using the EventSource browser API:
```javascript
const subscription = new EventSource("https://example.io/subscribe")
```
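The subscription object then emits events you can react to in the browser. For example:
```javascript
// Fired once the connection to the SSE endpoint is established
subscription.addEventListener('open', () => {
  console.log('Subscribed to server-sent events');
});

// Fired for messages the server writes without a custom event name (or with "event: message")
subscription.addEventListener('message', (event) => {
  const payload = JSON.parse(event.data); // event.data is the string after "data:"
  console.log('New event received', payload);
});
```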
## MongoDB Change Streams
Now that we’ve looked at the periphery of our application, it's time to present its core. We'll use MongoDB to store a subset of the received Ghost Webhook data. On top of that, we'll use MongoDB Change Streams to watch our webhook collection.
In a nutshell, Change Streams register data flowing into our database. We can subscribe to this data stream and react to it. Reacting means sending out SSE messages to connected clients whenever a new webhook is received and stored.
The following Javascript code showcases a simple Change Stream subscription.
```javascript
import {MongoClient} from 'mongodb';
const client = new MongoClient("");
const ghostDb = client.db('ghost');
const ghostCollection = ghostDb.collection('webhooks');
const ghostChangeStream = ghostCollection.watch();
ghostChangeStream.on('change', document => {
/* document is the MongoDB collection entry, e.g. our webhook */
});
```
Its event-based nature matches perfectly with webhooks and SSEs. We can react to newly received webhooks where the data is created, ensuring data integrity over our whole application.
## Build a real-time endpoint
We need an extra application layer to propagate these changes to connected clients. I've decided to use Typescript and Express.js, but you can use any other server-side framework. You will also need a dedicated MongoDB instance*. For a quick start, you can sign up for MongoDB Atlas. Then, create a free cluster and connect to it.
Let's get started by cloning the `1-get-started` branch from this Github repository:
```bash
# ssh
$ git clone [email protected]:tq-bit/mongodb-article-mongo-changestreams.git
# HTTP(s)
$ git clone https://github.com/tq-bit/mongodb-article-mongo-changestreams.git
# Change to the starting branch
$ git checkout 1-get-started
# Install NPM dependencies
$ npm install
# Make a copy of .env.example
$ cp .env.example .env
```
> Make sure to fill out the MONGO_HOST environment variable with your connection string!
Express and the database client are already implemented. So in the following, we'll focus on adding MongoDB change streams and server-sent events.
Once everything is set up, you can start the server on `http://localhost:3000` by typing
```bash
npm run dev
```
The application uses two important endpoints which we will extend in the next sections:
- `/api/notification/subscribe` <- Used by EventSource to receive event messages
- `/api/notification/article/create` <- Used as a webhook target by Ghost
\* If you are not using MongoDB Atlas, make sure your deployment is running as a replica set (change streams require one).
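For a local instance, a minimal single-node replica set can be set up roughly like this (the data path and replica set name are just examples):
```bash
# Start mongod as a (single-node) replica set member
mongod --dbpath /path/to/your/data --replSet rs0

# Then, from a mongo/mongosh shell connected to it, initialize the set once:
# rs.initiate()
```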
## Add server-sent events
Open the cloned project in your favorite code editor. We'll add our SSE logic under `src/components/notification/notification.listener.ts`.
In a nutshell, implementing SSE requires three steps:
- Write out an HTTP status 200 header.
- Write out an opening message.
- Add event-based response message handlers.
We’ll start sending a static message and revisit this module after adding ChangeStreams.
> You can also `git checkout 2-add-sse` to see the final result.
### Write the HTTP header
Writing the HTTP header informs clients of a successful connection. It also propagates the response's content type and makes sure events are not cached.
Add the following code to the function `subscribeToArticleNotification` inside:
```javascript
// Replace
// TODO: Add function to write the head
// with
console.log('Step 1: Write the response head and keep the connection open');
res.writeHead(200, {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive'
});
```
### Write an opening message
The first message sent should have an event type of 'open'. It is not mandatory but helps to determine whether the subscription was successful.
Append the following code to the function `subscribeToArticleNotification`:
```javascript
// Replace
// TODO: Add functionality to write the opening message
// with
console.log('Step 2: Write the opening event message');
res.write('event: open\n');
res.write('data: Connection opened!\n'); // Data can be any string
res.write(`id: ${crypto.randomUUID()}\n\n`);
```
### Add response message handlers
We can customize the content and timing of all further messages sent. Let's add a placeholder function that sends messages out every five seconds for now. And while we’re at it, let’s also add a handler to close the client connection:
Append the following code to the function `subscribeToArticleNotification`:
```javascript
setInterval(() => {
console.log('Step 3: Send a message every five seconds');
res.write(`event: message\n`);
res.write(`data: ${JSON.stringify({ message: 'Five seconds have passed' })}\n`);
res.write(`id: ${crypto.randomUUID()}\n\n`);
}, 5000);
// Step 4: Handle request events such as client disconnect
// Clean up the Change Stream connection and close the connection stream to the client
req.on('close', () => {
console.log('Step 4: Handle request events such as client disconnect');
res.end();
});
```
To check if everything works, visit `http://localhost:3000/api/notification/subscribe`.
## Add a POST endpoint for Ghost
Let's visit `src/components/notification/notification.model.ts` next. We'll add a simple `insert` command for our database into the function `createNotificiation`:
> You can also `git checkout 3-webhook-handler` to see the final result.
```javascript
// Replace
// TODO: Add insert one functionality for DB
// with
return notificationCollection.insertOne(notification);
```
And on to `src/components/notification/notification.controller.ts`. To process incoming webhooks, we'll add a handler function into `handleArticleCreationNotification`:
```javascript
// Replace
// TODO: ADD handleArticleCreationNotification
// with
const incomingWebhook: GhostWebhook = req.body;
await NotificationModel.createNotificiation({
id: crypto.randomUUID(),
ghostId: incomingWebhook.post?.current?.id,
ghostOriginalUrl: incomingWebhook.post?.current?.url,
ghostTitle: incomingWebhook.post?.current?.title,
ghostVisibility: incomingWebhook.post?.current?.visibility,
type: NotificationEventType.PostPublished,
});
res.status(200).send('OK');
```
This handler will pick data from the incoming webhook and insert a new notification.
```bash
curl -X POST -H 'Content-Type: application/json' -d '{
"post": {
"current": {
"id": "sj7dj-lnhd1-kabah9-107gh-6hypo",
"url": "http://localhost:2368/how-to-create-realtime-notifications",
"title": "How to create realtime notifications",
"visibility": "public"
}
}
}' http://localhost:3000/api/notification/article/create
```
You can also test the insert functionality by using Postman or VSCode REST client and then check your MongoDB collection. There is an example request under `/test/notification.rest` in the project's directory, for your convenience.
## Trigger MongoDB Change Streams
So far, we can send SSEs and insert Ghost notifications. Let's put these two features together now.
Earlier, we added a static server message sent every five seconds. Let's revisit `src/components/notification/notification.listener.ts` and make it more dynamic.
First, let's get rid of the whole `setInterval` and its callback. Instead, we'll use our `notificationCollection` and its built-in method `watch`. This method returns a `ChangeStream`.
You can create a change stream by adding the following code above the `export default` code segment:
```javascript
const notificationStream = notificationCollection.watch();
```
The stream fires an event whenever its related collection changes. This includes the `insert` event from the previous section.
We can register callback functions for each of these. The event that fires when a document inside the collection changes is 'change':
```javascript
notificationStream.on('change', (next) => {
console.log('Step 3.1: Change in Database detected!');
});
```
The variable passed into the callback function is a change stream document. It includes two important information for us:
- The document that's inserted, updated, or deleted.
- The type of operation on the collection.
Let's assign them to one variable each inside the callback:
```javascript
notificationStream.on('change', (next) => {
// ... previous code
const {
// @ts-ignore, fullDocument is not part of the next type (yet)
fullDocument /* The newly inserted fullDocument */,
operationType /* The MongoDB operation Type, e.g. insert */,
} = next;
});
```
Let's write the notification to the client. We can do this by repeating the method we used for the opening message.
```javascript
notificationStream.on('change', (next) => {
// ... previous code
console.log('Step 3.2: Writing out response to connected clients');
res.write(`event: ${operationType}\n`);
res.write(`data: ${JSON.stringify(fullDocument)}\n`);
res.write(`id: ${crypto.randomUUID()}\n\n`);
});
```
And that's it! You can test if everything is functional by:
1. Opening your browser under `http://localhost:3000/api/notification/subscribe`.
2. Using the file under `test/notification.rest` with VSCode's HTTP client.
3. Checking if your browser includes an opening and a Ghost Notification.
For an HTTP webhook implementation, you will need a running Ghost instance. I have added a dockerfile to this repo for your convenience. You could also install Ghost yourself locally.
To start Ghost with the dockerfile, make sure you have Docker Engine or Docker Desktop with support for `docker compose` installed.
For a local installation and the first-time setup, you should follow the official Ghost installation guide.
After your Ghost instance is up and running, open your browser at `http://localhost:2368/ghost`. You can set up your site however you like, give it a name, enter details, and so on.
In order to create a webhook, you must first create a custom integration. To do so, navigate into your site’s settings and click on the “Integrations” menu point. Click on “Add Webhook,” enter a name, and click on “Create.”
Inside the newly created integration, you can now configure a webhook to point at your application under `http://<host>:<port>/api/notification/article/create`*.
\* This URL might vary based on your local Ghost setup. For example, if you run Ghost in a container, you can find your machine's local IP using the terminal and `ifconfig` on Linux or `ipconfig` on Windows.
And that’s it. Now, whenever a post is published, its contents will be sent to our real-time endpoint. After being inserted into MongoDB, an event message will be sent to all connected clients.
## Subscribe to Change Streams from your Ghost theme
There are a few ways to add real-time notifications to your Ghost theme. Going into detail is beyond the scope of this article. I have prepared two files, a `plugin.js` and a `plugin.css` file you can inject into the default Casper theme.
Try this out by starting a local Ghost instance using the provided dockerfile.
You must then instruct your application to serve the JS and CSS assets. Add the following to your `index.ts` file:
```javascript
// ... other app.use hooks
app.use(express.static('public'));
// ... app.listen()
```
Finally, navigate to Code Injection and add the following two entries in the 'Site Header':
```html
```
> The core piece of the plugin is the EventSource browser API. You will want to use it when integrating this application with other themes.
When going back into your Ghost publication, you should now see a small bell icon on the upper right side.
## Moving ahead
If you’ve followed along, congratulations! You now have a working real-time notification service for your Ghost blog. And if you haven’t, what are you waiting for? Sign up for a free account on MongoDB Atlas and start building. You can use the final branch of this repository to get started and explore the full power of MongoDB’s toolkit. | md | {
"tags": [
"MongoDB",
"JavaScript",
"Docker"
],
"pageDescription": "Learn how to work with MongoDB change streams and develop a server-sent event application that integrates with Ghost CMS.",
"contentType": "Tutorial"
} | Adding Real-Time Notifications to Ghost CMS Using MongoDB and Server-Sent Events | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/php/php-setup | created | # Getting Set Up to Run PHP with MongoDB
Welcome to this quickstart guide for MongoDB and PHP. I know you're probably excited to get started writing code and building applications with PHP and MongoDB. We'll get there, I promise. Let's go through some necessary set-up first, however.
This guide is organized into a few sections over a few articles. This first article addresses the installation and configuration of your development environment. PHP is an integrated web development language. There are several components you typically use in conjunction with the PHP programming language. If you already have PHP installed and you simply want to get started with PHP and MongoDB, feel free to skip to the next article in this series.
Let's start with an overview of what we'll cover in this series.
1. Prerequisites
2. Installation
3. Installing Apache
4. Installing PHP
5. Installing the PHP Extension
6. Installing the MongoDB PHP Library
7. Start a MongoDB Cluster on Atlas
8. Securing Usernames and Passwords
A brief note on PHP and Apache: Because PHP is primarily a web language — that is to say that it's built to work with a web server — we will spend some time at the beginning of this article ensuring that you have PHP and the Apache web server installed and configured properly. There are alternatives, but we're going to focus on PHP and Apache.
PHP was developed and first released in 1994 by Rasmus Lerdorf. While it has roots in the C language, PHP syntax looked much like Perl early on. One of the major reasons for its massive popularity was its simplicity and the dynamic, interpreted nature of its implementation.
# Prerequisites
You'll need the following installed on your computer to follow along with this tutorial:
* MacOS Catalina or later: You can run PHP on earlier versions but I'll be keeping to MacOS for this tutorial.
* Homebrew Package Manager: The missing package manager for MacOS.
* PECL: The repository for PHP Extensions.
* A code editor of your choice: I recommend Visual Studio Code.
# Installation
First, let's install the command line tools as these will be used by Homebrew:
``` bash
xcode-select --install
```
Next, we're going to use a package manager to install things. This ensures that our dependencies will be met. I prefer `Homebrew`, or `brew` for short. To begin using `brew`, open your `terminal app` and type:
``` bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
```
This leverages `curl` to pull down the latest installation scripts and binaries for `brew`.
The installation prompts are fairly straightforward. Enter your password where required to assume root privileges for the install. When it's complete, simply type the following to verify that `brew` is installed correctly:
``` bash
brew --version
```
If you experience trouble at this point and are unable to get `brew` running, visit the Homebrew installation docs.
You can also verify your homebrew installation using `brew doctor`. Confirm that any issues or error messages are resolved prior to moving forward. You may find warnings, and those can usually be safely ignored.
## Installing Apache
The latest macOS 11.0 Big Sur comes with Apache 2.4 pre-installed but Apple removed some critical scripts, which makes it difficult to use.
So, to be sure we're all on the same page, let's install Apache 2.4 via Homebrew and then have it to run on the standard ports (80/443).
When I was writing this tutorial, I wasted a lot of time trying to figure out what was happening with the pre-installed version. So, I think it's best if we install from scratch using Homebrew.
``` bash
sudo apachectl stop # stop the existing apache just to be safe
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist # Remove the configuration to run httpd daemons
```
Now, let's install the latest version of Apache:
``` bash
brew install httpd
```
Once installed, let's start up the service.
``` bash
brew services start httpd
```
You should now be able to open a web browser and visit `http://localhost:8080` and see something similar to the following:
The standard Apache web server doesn't have support for PHP built in. Therefore, we need to install PHP and the PHP Extension to recognize and interpret PHP files.
## Installing PHP
> If you've installed previous versions of PHP, I highly recommend that you clean things up by removing older versions. If you have previous projects that depend on these versions, you'll need to be careful, and back up your configurations and project files.
Homebrew is a good way for MacOS users to install PHP.
``` bash
brew install php
```
Once this completes, you can test whether it's been installed properly by issuing the following command from your command-line prompt in the terminal.
``` bash
php --version
```
You should see something similar to this:
``` bash
$ php --version
PHP 8.0.0 (cli) (built: Nov 30 2020 13:47:29) ( NTS )
Copyright (c) The PHP Group
Zend Engine v4.0.0-dev, Copyright (c) Zend Technologies
with Zend OPcache v8.0.0, Copyright (c), by Zend Technologies
```
## Installing the PHP extension
Now that we have `php` installed, we can configure Apache to use `PHP` to interpret our web content, translating our `php` commands instead of displaying the source code.
> PECL (PHP Extension Community Library) is a repository for PHP Extensions, providing a directory of all known extensions and hosting facilities or the downloading and development of PHP extensions. `pecl` is the binary or command-line tool (installed by default with PHP) you can use to install and manage PHP extensions. We'll do that in this next section.
Install the PHP MongoDB extension before installing the PHP Library for MongoDB. It's worth noting that full MongoDB driver experience is provided by installing both the low-level extension (which integrates with our C driver) and high-level library, which is written in PHP.
You can install the extension using PECL on the command line:
``` bash
pecl install mongodb
```
Next, we need to modify the main `php.ini` file to include the MongoDB extension. To locate your `php.ini` file, use the following command:
``` bash
$ php --ini
Configuration File (php.ini) Path: /usr/local/etc/php/8.3
```
To install the extension, copy the following line and place it at the end of your `php.ini` file.
``` bash
extension=mongodb.so
```
After saving php.ini, restart the Apache service and to verify installation, you can use the following command.
``` bash
brew services restart httpd
php -i | grep mongo
```
You should see output similar to the following:
``` bash
$ php -i | grep mongo
mongodb
libmongoc bundled version => 1.25.2
libmongoc SSL => enabled
libmongoc SSL library => OpenSSL
libmongoc crypto => enabled
libmongoc crypto library => libcrypto
libmongoc crypto system profile => disabled
libmongoc SASL => enabled
libmongoc SRV => enabled
libmongoc compression => enabled
libmongoc compression snappy => enabled
libmongoc compression zlib => enabled
libmongoc compression zstd => enabled
libmongocrypt bundled version => 1.8.2
libmongocrypt crypto => enabled
libmongocrypt crypto library => libcrypto
mongodb.debug => no value => no value
```
You are now ready to begin using PHP to manipulate and manage data in your MongoDB databases. Next, we'll focus on getting your MongoDB cluster prepared.
## Troubleshooting your PHP configuration
If you are experiencing issues with installing the MongoDB extension, there are some tips to help you verify that everything is properly installed.
First, you can check that Apache and PHP have been successfully installed by creating an info.php file at the root of your web directory. To locate the root web directory, use the following command:
```
$ brew info httpd
==> httpd: stable 2.4.58 (bottled)
Apache HTTP server
https://httpd.apache.org/
/usr/local/Cellar/httpd/2.4.58 (1,663 files, 31.8MB) *
Poured from bottle using the formulae.brew.sh API on 2023-11-09 at 18:19:19
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/h/httpd.rb
License: Apache-2.0
==> Dependencies
Required: apr ✔, apr-util ✔, brotli ✔, libnghttp2 ✔, openssl@3 ✔, pcre2 ✔
==> Caveats
DocumentRoot is /usr/local/var/www
```
In the file, add the following content:
```
``` php
<?php
echo 'Hello World';
?>
```
Then navigate to http://localhost:8080/info.php and you should see a blank page with just the Hello World text.
Next, edit the info.php file content to:
```
``` php
<?php
phpinfo();
?>
```
Save, and then refresh the info.php page. You should see a page with a large table of PHP information like this:
IMPORTANT: In production servers, it’s **unsafe to expose information displayed by phpinfo()** on a publicly accessible page
The information that we’re interested could be in these places:
* **“Configuration File (php.ini) Path”** property shows where your PHP runtime is getting its php.ini file from. It can happen that the mongodb.so extension was added in the wrong php.ini file as there may be more than one.
* **“Additional .ini files parsed”** shows potential extra PHP configuration files that may impact your specific configuration. These files are in the directory listed by the “Scan this dir for additional .ini files” section in the table.
There’s also a whole “mongodb” table that looks like this:
Its presence indicates that the MongoDB extension has been properly loaded and is functioning. You can also see its version number to make sure that’s the one you intended to use.
If you don’t see this section, it’s likely the MongoDB extension failed to load. If that’s the case, look for the “error_log” property in the table to see where the PHP error log file is, as it may contain crucial clues. Make sure that “log_errors” is set to ON. Both are located in the “Core” PHP section.
If you are upgrading to a newer version of PHP, or have multiple versions installed, keep in mind that each version needs to have its own MongoDB extension and php.ini files.
### Start a MongoDB Cluster on Atlas
Now that you've got your local environment set up, it's time to create a MongoDB database to work with, and to load in some sample data you can explore and modify.
> Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
It will take a couple of minutes for your cluster to be provisioned, so while you're waiting, you can move on to the next step.
### Set Up Your MongoDB Instance
Hopefully, your MongoDB cluster should have finished starting up now and has probably been running for a few minutes.
The following instructions were correct at the time of writing but may change, as we're always improving the Atlas user interface:
In the Atlas web interface, you should see a green button at the bottom-left of the screen, saying "Get Started." If you click on it, it'll bring up a checklist of steps for getting your database set up. Click on each of the items in the list (including the "Load Sample Data" item—we'll use this later to test the PHP library), and it'll help you through the steps to get set up.
The fastest way to get access to data is to load the sample datasets into your cluster right in the Atlas console. If you're brand new, the new user wizard will actually walk you through the process and prompt you to load these.
If you already created your cluster and want to go back to load the sample datasets, click the ellipsis (three dots) next to your cluster connection buttons (see below image) and then select `Load Sample Dataset`.
Now, let's move on to setting the configuration necessary to access your data in the MongoDB Cluster. You will need to create a database user and configure your IP Address Access List.
## Create a User
Following the "Get Started" steps, create a user with "Read and write access to any database." You can give it the username and password of your choice. Make a copy of them, because you'll need them in a minute. Use the "autogenerate secure password" button to ensure you have a long, random password which is also safe to paste into your connection string later.
## Add Your IP Address to the Access List
When deploying an app with sensitive data, you should only whitelist the IP address of the servers which need to connect to your database. To whitelist the IP address of your development machine, select "Network Access," click the "Add IP Address" button, and then click "Add Current IP Address" and hit "Confirm."
## Connect to Your Database
The last step of the "Get Started" checklist is "Connect to your Cluster." Select "Connect your application" and select "PHP" with a version of "PHPLIB 1.8."
Click the "Copy" button to copy the URL to your paste buffer. Save it to the same place you stored your username and password. Note that the URL has `` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder, including the `<` and `>` characters.
Now it's time to actually write some PHP code to connect to your MongoDB database! Up until now, we've only installed the supporting system components. Before we begin to connect to our database and use PHP to manipulate data, we need to install the MongoDB PHP Library.
Composer is the recommended installation tool for the MongoDB library. Composer is a tool for dependency management in PHP. It allows you to declare the libraries your project depends on and it will manage (install/update) them for you.
To install `composer`, we can use Homebrew.
``` bash
brew install composer
```
## Installing the MongoDB PHP Library
Once you have `composer` installed, you can move forward to installing the MongoDB Library.
Installation of the library should take place in the root directory of your project. Composer is not a package manager in the same sense as Yum or Apt are. Composer installs packages in a directory inside your project. By default, it does not install anything globally.
``` bash
$ composer require mongodb/mongodb
Using version ^1.8 for mongodb/mongodb
./composer.json has been created
Running composer update mongodb/mongodb
Loading composer repositories with package information
Updating dependencies
Lock file operations: 4 installs, 0 updates, 0 removals
- Locking composer/package-versions-deprecated (1.11.99.1)
- Locking jean85/pretty-package-versions (1.6.0)
- Locking mongodb/mongodb (1.8.0)
- Locking symfony/polyfill-php80 (v1.22.0)
Writing lock file
Installing dependencies from lock file (including require-dev)
Package operations: 4 installs, 0 updates, 0 removals
- Installing composer/package-versions-deprecated (1.11.99.1): Extracting archive
- Installing symfony/polyfill-php80 (v1.22.0): Extracting archive
- Installing jean85/pretty-package-versions (1.6.0): Extracting archive
- Installing mongodb/mongodb (1.8.0): Extracting archive
Generating autoload files
composer/package-versions-deprecated: Generating version class...
composer/package-versions-deprecated: ...done generating version class
2 packages you are using are looking for funding.
```
Make sure you're in the same directory as you were when you used `composer` above to install the library.
In your code editor, create a PHP file in your project directory called quickstart.php. If you're referencing the example, enter in the following code:
``` php
<?php

require_once __DIR__ . '/vendor/autoload.php';

$client = new MongoDB\Client(
    'mongodb+srv://<username>:<password>@myfirstcluster.zbcul.mongodb.net/dbname?retryWrites=true&w=majority');
$customers = $client->selectCollection('sample_analytics', 'customers');
$document = $customers->findOne(['username' => 'wesley20']);
var_dump($document);
?>
```
`<username>` and `<password>` are the username and password you created in Atlas, and the cluster address is specific to the cluster you launched in Atlas.
Save and close your `quickstart.php` program and run it from the command line:
``` bash
$ php quickstart.php
```
If all goes well, you should see something similar to the following:
``` javascript
$ php quickstart.php
object(MongoDB\Model\BSONDocument)#12 (1) {
["storage":"ArrayObject":private]=>
array(8) {
["_id"]=>
object(MongoDB\BSON\ObjectId)#16 (1) {
["oid"]=>
string(24) "5ca4bbcea2dd94ee58162a72"
}
["username"]=>
string(8) "wesley20"
["name"]=>
string(13) "James Sanchez"
["address"]=>
string(45) "8681 Karen Roads Apt. 096 Lowehaven, IA 19798"
["birthdate"]=>
object(MongoDB\BSON\UTCDateTime)#15 (1) {
["milliseconds"]=>
string(11) "95789846000"
}
["email"]=>
string(24) "[email protected]"
["accounts"]=>
object(MongoDB\Model\BSONArray)#14 (1) {
["storage":"ArrayObject":private]=>
array(1) {
[0]=>
int(987709)
}
}
["tier_and_details"]=>
object(MongoDB\Model\BSONDocument)#13 (1) {
["storage":"ArrayObject":private]=>
array(0) {
}
}
}
}
```
You just connected your PHP program to MongoDB and queried a single document from the `sample_analytics` database in your cluster! If you don't see this data, then you may not have successfully loaded sample data into your cluster. You may want to go back a couple of steps until running this command shows the document above.
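As a next step, you might want to query for several documents at once. Here's an illustrative example using the library's `find()` method, which returns a cursor you can iterate over (the account number comes from the sample document above; swap in your own credentials and cluster address as before):
``` php
<?php

require_once __DIR__ . '/vendor/autoload.php';

$client = new MongoDB\Client('mongodb+srv://<username>:<password>@myfirstcluster.zbcul.mongodb.net/?retryWrites=true&w=majority');
$customers = $client->selectCollection('sample_analytics', 'customers');

// Find up to five customers holding account 987709, returning only a few fields
$cursor = $customers->find(
    ['accounts' => 987709],
    ['projection' => ['username' => 1, 'name' => 1], 'limit' => 5]
);

foreach ($cursor as $customer) {
    echo $customer['username'], ' - ', $customer['name'], PHP_EOL;
}
```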
## Securing Usernames and Passwords
Storing usernames and passwords in your code is **never** a good idea. So, let's take one more step to secure those a bit better. It's general practice to put these types of sensitive values into an environment file such as `.env`. The trick, then, will be to get your PHP code to read those values in. Fortunately, Vance Lucas came up with a great solution called `phpdotenv`. To begin using Vance's solution, let's leverage `composer`.
``` bash
$ composer require vlucas/phpdotenv
```
Now that we have the library installed, let's create our `.env` file which contains our sensitive values. Open your favorite editor and create a file called `.env`, placing the following values in it. Be sure to replace `your user name` and `your password` with the actual values you created when you added a database user in Atlas.
``` bash
MDB_USER="your user name"
MDB_PASS="your password"
```
Next, we need to modify our quickstart.php program to pull in the values using `phpdotenv`. Let's add a call to the library and modify our quickstart program to look like the following. Notice the changes on lines 5, 6, and 9.
``` php
<?php

require_once __DIR__ . '/vendor/autoload.php';

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@tasktracker.zbcul.mongodb.net/sample_analytics?retryWrites=true&w=majority'
);
$customers = $client->selectCollection('sample_analytics', 'customers');
$document = $customers->findOne(['username' => 'wesley20']);
var_dump($document);
```
Next, to ensure that you're not publishing your credentials into `git` or whatever source code repository you're using, be certain to add a .gitignore (or equivalent) to prevent storing this file in your repo. Here's my `.gitignore` file:
``` bash
composer.phar
/vendor/
.env
```
My `.gitignore` includes files that are leveraged as part of our libraries—these should not be stored in our project.
Should you want to leverage my project files, please feel free to visit my [github repository, clone, fork, and share your feedback in the Community.
This quick start was intended to get you set up to use PHP with MongoDB. You should now be ready to move onto the next article in this series. Please feel free to contact me in the Community should you have any questions about this article, or anything related to MongoDB.
Please be sure to visit, star, fork, and clone the companion repository for this article.
## References
* MongoDB PHP Quickstart Source Code Repository
* MongoDB PHP Driver Documentation provides thorough documentation describing how to use PHP with your MongoDB cluster.
* MongoDB Query Document documentation details the full power available for querying MongoDB collections. | md | {
"tags": [
"PHP",
"MongoDB"
],
"pageDescription": "Getting Started with MongoDB and PHP - Part 1 - Setup",
"contentType": "Quickstart"
} | Getting Set Up to Run PHP with MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/storing-large-objects-and-files | created | # Storing Large Objects and Files in MongoDB
Large objects, or "files", are easily stored in MongoDB. It is no problem to store 100MB videos in the database.
This has a number of advantages over files stored in a file system. Unlike a file system, the database will have no problem dealing with millions of objects. Additionally, we get the power of the database when dealing with this data: we can do advanced queries to find a file, using indexes; we can also do neat things like replication of the entire file set.
MongoDB stores objects in a binary format called BSON. BinData is a BSON data type for a binary byte array. However, MongoDB documents are limited to 16 MB in size. To deal with this, files are "chunked" into multiple objects that are less than 255 KiB each. This has the added advantage of letting us efficiently retrieve a specific range of the given file.
While we could write our own chunking code, a standard format for this chunking is predefined, called GridFS. GridFS support is included in all official MongoDB drivers and also in the mongofiles command line utility.
A good way to do a quick test of this facility is to try out the mongofiles utility. See the MongoDB documentation for more information on GridFS.
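To get a feel for what GridFS looks like in application code, here is a minimal sketch using the Node.js driver's `GridFSBucket`; the connection string, database, bucket, and file names are placeholders rather than anything from this article.
```javascript
// A minimal GridFS sketch using the MongoDB Node.js driver (placeholder names throughout).
const { MongoClient, GridFSBucket } = require('mongodb');
const fs = require('fs');
async function run() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const db = client.db('mediaLibrary');
  // GridFS stores file metadata in <bucket>.files and the chunked data in <bucket>.chunks
  const bucket = new GridFSBucket(db, { bucketName: 'videos' });
  // Upload a local file into GridFS
  await new Promise((resolve, reject) => {
    fs.createReadStream('./demo.mp4')
      .pipe(bucket.openUploadStream('demo.mp4', { metadata: { category: 'demo' } }))
      .on('finish', resolve)
      .on('error', reject);
  });
  // The files collection can be queried (and indexed) like any other collection
  const fileDoc = await db.collection('videos.files').findOne({ 'metadata.category': 'demo' });
  console.log(fileDoc.filename, fileDoc.length);
  // Stream the file back out by name (close the client once the streams are done)
  bucket.openDownloadStreamByName('demo.mp4').pipe(fs.createWriteStream('./demo-copy.mp4'));
}
run().catch(console.error);
```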
## More Information
- GridFS Docs
- Building MongoDB Applications with Binary Files Using GridFS: Part 1
- Building MongoDB Applications with Binary Files Using GridFS: Part 2
- MongoDB Architecture Guide | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Discover how to store large objects and files in MongoDB.",
"contentType": "Tutorial"
} | Storing Large Objects and Files in MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-summary | created | # A Summary of Schema Design Anti-Patterns and How to Spot Them
We've reached the final post in this series on MongoDB schema design anti-patterns. You're an expert now, right? We hope so. But don't worry—even if you fall into the trap of accidentally implementing an anti-pattern, MongoDB Atlas can help you identify it.
## The Anti-Patterns
Below is a brief description of each of the schema design anti-patterns we've covered in this series.
- Massive arrays: storing massive, unbounded arrays in your documents.
- Massive number of collections: storing a massive number of collections (especially if they are unused or unnecessary) in your database.
- Unnecessary indexes: storing an index that is unnecessary because it is (1) rarely used if at all or (2) redundant because another compound index covers it.
- Bloated documents: storing large amounts of data together in a document when that data is not frequently accessed together.
- Separating data that is accessed together: separating data between different documents and collections that is frequently accessed together.
- Case-insensitive queries without case-insensitive indexes: frequently executing a case-insensitive query without having a case-insensitive index to cover it.
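As a concrete illustration of the last anti-pattern above, a case-insensitive query can only be covered efficiently when both the index and the query use a case-insensitive collation. Here is a minimal mongosh sketch; the collection and field names are made up for the example.
```javascript
// Build a case-insensitive index (collation strength 2 ignores case differences).
db.customers.createIndex(
  { last_name: 1 },
  { collation: { locale: "en", strength: 2 } }
);
// The query only uses that index if it specifies the same collation.
db.customers.find({ last_name: "ride" }).collation({ locale: "en", strength: 2 });
```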
>
>
>:youtube]{vid=8CZs-0it9r4 list=PL4RCxklHWZ9uluV0YBxeuwpEa0FWdmCRy}
>
>If you'd like to learn more about each of the anti-patterns, check out this YouTube playlist.
>
>
## Building Your Data Modeling Foundation
Now that you know what **not** to do, let's talk about what you **should** do instead. Begin by learning the MongoDB schema design patterns. [Ken Alger and Daniel Coupal wrote a fantastic blog series that details each of the 12 patterns. Daniel also co-created a free MongoDB University Course that walks you through how to model your data.
Once you have built your data modeling foundation on schema design patterns and anti-patterns, carefully consider your use case:
- What data will you need to store?
- What data is likely to be accessed together?
- What queries will be run most frequently?
- What data is likely to grow at a rapid, unbounded pace?
The great thing about MongoDB is that it has a flexible schema. You have the power to rapidly make changes to your data model when you use MongoDB. If your initial data model turns out to be not so great or your application's requirements change, you can easily update your data model. And you can make those updates without any downtime! Check out the Schema Versioning Pattern for more details.
If and when you're ready to lock down part or all of your schema, you can add schema validation. Don't worry—the schema validation is flexible too. You can configure it to throw warnings or errors. You can also choose if the validation should apply to all documents or just documents that already pass the schema validation rules. All of this flexibility gives you the ability to validate documents with different shapes in the same collection, helping you migrate your schema from one version to the next.
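As a small illustration of what that might look like in mongosh (the collection name and rules here are made up for the example, not taken from a specific application):
```javascript
db.runCommand({
  collMod: "people",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["first_name", "last_name"],
      properties: {
        first_name: { bsonType: "string" },
        last_name: { bsonType: "string" }
      }
    }
  },
  // "moderate" only validates documents that already match the rules;
  // "warn" records violations in the logs instead of rejecting the write.
  validationLevel: "moderate",
  validationAction: "warn"
});
```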
## Spotting Anti-Patterns in Your Database
Hopefully, you'll keep all of the schema design patterns and anti-patterns top-of-mind while you're planning and modifying your database schema. But maybe that's wishful thinking. We all make mistakes.
If your database is hosted on MongoDB Atlas, you can get some help spotting anti-patterns. Navigate to the Performance Advisor (available in M10 clusters and above) or the Data Explorer (available in all clusters) and look for the Schema Anti-Patterns panel. These Schema Anti-Patterns panels will display a list of anti-patterns in your collections and provide pointers on how to fix the issues.
To learn more, check out Marissa Jasso's blog post that details this handy schema suggestion feature or watch her demo below.
:youtube]{vid=XFJcboyDSRA}
## Summary
Every use case is unique, so every schema will be unique. No formula exists for determining the "right" model for your data in MongoDB.
Give yourself a solid data modeling foundation by learning the MongoDB schema design patterns and anti-patterns. Then begin modeling your data, carefully considering the details of your particular use case and leveraging the principles of the patterns and anti-patterns.
So, get pumped, have fun, and model some data!
>When you're ready to build a schema in MongoDB, check out [MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.
## Related Links
Check out the following resources for more information:
- Blog Series: Building with Patterns: A Summary
- MongoDB University Course M320: Data Modeling
- MongoDB Docs: Schema Validation
- Blog Post: JSON Schema Validation - Locking Down Your Model the Smart Way
- Blog Post: Schema Suggestions in MongoDB Atlas: Years of Best Practices, Instantly Available To You
- MongoDB Docs: Improve Your Schema
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.",
"contentType": "Article"
} | A Summary of Schema Design Anti-Patterns and How to Spot Them | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/visualize-mongodb-atlas-database-audit-logs | created | # Visualize MongoDB Atlas Database Audit Logs
MongoDB Atlas has advanced security capabilities, and audit logs are one of them. Simply put, enabling audit logs in an Atlas cluster allows you to track what happened in the database by whom and when.
In this blog post, I’ll walk you through how you can visualize MongoDB Atlas Database Audit Logs with MongoDB Atlas Charts.
## High level architecture
1. In Atlas App Services Values, Atlas Admin API public and private keys and AWS API access key id and secret access have been defined.
2. aws-sdk node package has been added as a dependency to Atlas Functions.
3. Atlas Data Federation has been configured to query the data in a cloud object storage bucket - Amazon S3 or Microsoft Azure Blob Storage.
4. Atlas Function retrieves both Atlas Admin API and AWS API credentials.
5. Atlas Function calls the Atlas Admin API with the credentials and other relevant parameters (time interval for the audit logs) and fetches the compressed audit logs.
6. Atlas Function uploads the compressed audit logs as a zip file into a cloud object storage bucket where Atlas has read access.
7. Atlas Charts visualize the data in S3 through Atlas Data Federation.
## Prerequisites
The following items must be completed before working on the steps.
* Provision an Atlas cluster where the tier is at least M10. The reason for this is auditing is not supported by free (M0) and shared tier (M2, M5) clusters.
* You need to set up database auditing in the Atlas cluster where you want to track activities.
* Under the Security section on the left hand menu on the main dashboard, select Advanced. Then toggle Database Auditing and click Audit Filter Settings.
* For the sake of simplicity, check All actions to be tracked in Audit Configuration as shown in the below screenshot.
* If you don’t have your own load generator to generate load in the database in order to visualize through MongoDB Charts later, you can review this load generator in the Github repository of this blog post.
* Create an app in Atlas App Services that will implement our functions inside it. If you haven’t created an app in Atlas App Services before, please follow this tutorial.
* Create an AWS account along with the following credentials — AWS Access Key and AWS Secret Access Secret.
* Set an AWS IAM Role that has privilege to write into the cloud object storage bucket.
* Later, Atlas will assume this role to make write operations inside S3 bucket.
## Step 1: configuring credentials
Atlas Admin API allows you to automate your Atlas environment. With a REST client, you can execute a wide variety of management activities such as retrieving audit logs.
In order to utilize Atlas Admin API, we need to create keys and use these keys later in Atlas Functions. Follow the instructions to create an API key for your project.
### Creating app services values and app services secrets
After you’ve successfully created public and private keys for the Atlas project, we can store the Atlas Admin API keys and AWS credentials in App Services Values and Secrets.
App Services Values and App Services Secrets are static, server-side constants that you can access or link to from other components of your application.
In the following part, we’ll create four App Services Values and two App Services Secrets to manage both MongoDB Atlas Admin API and AWS credentials. In order to create App Services Values and Secrets, navigate to your App Services app, and on the left hand menu, select **Values**. This will bring you to a page showing the secrets and values available in your App Services app.
#### Setting up Atlas Admin API credentials
In this section, we’ll create two App Services Values and one App Services Secrets to store Atlas Admin API Credentials.
**Value 1: AtlasAdminAPIPublicKey**
This Atlas App Services value keeps the value of the public key of Atlas Admin API. Values should be wrapped in double quotes as shown in the following example.
**Secret 1: AtlasAdminAPIPrivateKey**
This Atlas App Services Secret keeps the value of the private key of Atlas Admin API. You should not wrap the secret in quotation marks.
**Value 2: AtlasAdminAPIPrivateKeyLinkToSecret**
We can’t directly access secrets in our Atlas Functions. That’s why we have to create a new value and link it to the secret containing our private key.
Until now, we’ve defined necessary App Services Values and Atlas App Services Secrets to access Atlas Admin API from an App Services App.
In order to access our S3 bucket, we need to utilize AWS SDK. Therefore, we need to do a similar configuration for AWS SDK keys.
### Setting up AWS credentials
In this section, we’ll create two App Services Values and one App Services Secret to store AWS Credentials. Learn how to get your AWS Credentials.
**Value 3: AWSAccessKeyID**
This Atlas App Services Value keeps the value of the access key id of AWS SDK.
**Secret 2: AWSSecretAccessKey**
This Atlas App Services Secret keeps the value of the secret access key of AWS SDK.
**Value 4: AWSSecretAccessKeyLinkToSecret**
This Atlas App Services Value keeps the link of Atlas App Services Secret that keeps the secret key of AWS SDK.
And after you have all these values and secrets as shown below, you can deploy the changes to make it permanent.
## Step 2: adding an external dependency
An external dependency is an external library that includes logic you'd rather not implement yourself, such as string parsing, convenience functions for array manipulations, and data structure or algorithm implementations. You can upload external dependencies from the npm repository to App Services and then import those libraries into your functions with a `require('external-module')` statement.
In order to work with AWS S3, we will add the official **aws-sdk** npm package.
In your App Services app, on the left-side menu, navigate to **Functions**. And then, navigate to the **Dependencies** pane in this page.
Click **Add Dependency**.
Provide **aws-sdk** as the package name and keep the package version empty. That will install the latest version of the aws-sdk node package.
Now, the **aws-sdk** package is ready to be used in our Atlas App Services App.
## Step 3: configuring Atlas Data Federation to consume cloud object storage data through MongoDB Query Language (MQL)
In this tutorial, we’ll not go through all the steps to create a federated database instance in Atlas. Please check out our Atlas Data Federation resources to go through all steps to create Atlas Data Federated Instance.
As an output of this step, we’d expect a ready Federated Database Instance as shown below.
I have already added the S3 bucket (the name of the bucket is **fuat-sungur-bucket**) that I own into this Federated Database Instance as a data source and I created the collection **auditlogscollection** inside the database **auditlogs** in this Federated Database Instance.
Now, if I have the files in this S3 bucket (fuat-sungur-bucket), I’ll be able to query it using the MongoDB aggregation framework or Atlas SQL.
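For example, once compressed log files land in the bucket, an aggregation like the following (run from mongosh against the federated collection) would summarize audit events by action type; the `atype` field comes from the sample audit documents shown later in this article.
```javascript
db.auditlogscollection.aggregate([
  { $group: { _id: "$atype", count: { $sum: 1 } } },
  { $sort: { count: -1 } },
  { $limit: 10 }
]);
```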
## Step 4: creating an Atlas function to retrieve credentials from Atlas App Services Values and Secrets
Let’s create an Atlas function, give it the name **RetrieveAndUploadAuditLogs**, and choose **System** for authentication.
We also provide the following piece of code in the **Function Editor** and **Run** the function. We’ll see the credentials have been printed out in the console.
```
exports = async function(){
const atlasAdminAPIPublicKey = context.values.get("AtlasAdminAPIPublicKey");
const atlasAdminAPIPrivateKey = context.values.get("AtlasAdminAPIPrivateKeyLinkToSecret");
const awsAccessKeyID = context.values.get("AWSAccessKeyID")
const awsSecretAccessKey = context.values.get("AWSSecretAccessKeyLinkToSecret")
console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)
console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)
}
```
## Step 5: retrieving audit logs in the Atlas function
We now continue to enhance our existing Atlas function, **RetrieveAndUploadAuditLogs**. Now, we’ll execute the HTTP/S request to retrieve audit logs into the Atlas function.
The following piece of code generates an HTTP GET request, calls the relevant Atlas Admin API resource to retrieve the audit logs for a given time window (the code below uses the last 20 minutes), and converts the compressed audit data to the Buffer class in JavaScript.
```
exports = async function(){
const atlasAdminAPIPublicKey = context.values.get("AtlasAdminAPIPublicKey");
const atlasAdminAPIPrivateKey = context.values.get("AtlasAdminAPIPrivateKeyLinkToSecret");
const awsAccessKeyID = context.values.get("AWSAccessKeyID")
const awsSecretAccessKey = context.values.get("AWSSecretAccessKeyLinkToSecret")
console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)
console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)
//////////////////////////////////////////////////////////////////////////////////////////////////
// Atlas Cluster information
const groupId = '5ca48430014b76f34448bbcf';
const host = "exchangedata-shard-00-01.5tka5.mongodb.net";
const logType = "mongodb-audit-log"; // the other option is "mongodb" -> that allows you to download database logs
// defining startDate and endDate of Audit Logs
const endDate = new Date();
const durationInMinutes = 20;
const durationInMilliSeconds = durationInMinutes * 60 * 1000
const startDate = new Date(endDate.getTime()-durationInMilliSeconds)
const auditLogsArguments = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${groupId}/clusters/${host}/logs/${logType}.gz`,
username: atlasAdminAPIPublicKey,
password: atlasAdminAPIPrivateKey,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['application/gzip']},
digestAuth:true,
query: {
"startDate": [Math.round(startDate / 1000).toString()],
"endDate": [Math.round(endDate / 1000).toString()]
}
};
console.log(`Arguments:${JSON.stringify(auditLogsArguments)}`)
const response = await context.http.get(auditLogsArguments)
auditData = response.body;
console.log("AuditData:"+(auditData))
console.log("JS Type:" + typeof auditData)
// convert it to base64 and then Buffer
var bufferAuditData = Buffer.from(auditData.toBase64(),'base64')
console.log("Buffered Audit Data" + bufferAuditData)
}
```
## Step 6: uploading audit data into the S3 bucket
Until now, in our Atlas function, we retrieved the audit logs based on the given interval, and now we’ll upload this data into the S3 bucket as a zip file.
Firstly, we import **aws-sdk** NodeJS library and then configure the credentials for AWS S3. We have already retrieved the AWS credentials from App Services Values and App Services Secrets and assigned those into function variables.
After that, we configure S3-related parameters, bucket name, key (folder and filename), and body (actual payload that is our audit zip file stored in a Buffer Javascript data type). And finally, we run our upload command (`S3.putObject()`).
Here you can find the entire function code:
```
exports = async function(){
const atlasAdminAPIPublicKey = context.values.get("AtlasAdminAPIPublicKey");
const atlasAdminAPIPrivateKey = context.values.get("AtlasAdminAPIPrivateKeyLinkToSecret");
const awsAccessKeyID = context.values.get("AWSAccessKeyID")
const awsSecretAccessKey = context.values.get("AWSSecretAccessKeyLinkToSecret")
console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)
console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)
//////////////////////////////////////////////////////////////////////////////////////////////////
// Atlas Cluster information
const groupId = '5ca48430014b76f34448bbcf';
const host = "exchangedata-shard-00-01.5tka5.mongodb.net";
const logType = "mongodb-audit-log"; // the other option is "mongodb" -> that allows you to download database logs
// defining startDate and endDate of Audit Logs
const endDate = new Date();
const durationInMinutes = 20;
const durationInMilliSeconds = durationInMinutes * 60 * 1000
const startDate = new Date(endDate.getTime()-durationInMilliSeconds)
const auditLogsArguments = {
scheme: 'https',
host: 'cloud.mongodb.com',
path: `api/atlas/v1.0/groups/${groupId}/clusters/${host}/logs/${logType}.gz`,
username: atlasAdminAPIPublicKey,
password: atlasAdminAPIPrivateKey,
headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['application/gzip']},
digestAuth:true,
query: {
"startDate": [Math.round(startDate / 1000).toString()],
"endDate": [Math.round(endDate / 1000).toString()]
}
};
console.log(`Arguments:${JSON.stringify(auditLogsArguments)}`)
const response = await context.http.get(auditLogsArguments)
auditData = response.body;
console.log("AuditData:"+(auditData))
console.log("JS Type:" + typeof auditData)
// convert it to base64 and then Buffer
var bufferAuditData = Buffer.from(auditData.toBase64(),'base64')
console.log("Buffered Audit Data" + bufferAuditData)
// uploading into S3
const AWS = require('aws-sdk');
// configure AWS credentials
const config = {
accessKeyId: awsAccessKeyID,
secretAccessKey: awsSecretAccessKey
};
// configure S3 parameters
const fileName= `auditlogs/auditlog-${new Date().getTime()}.gz`
const S3params = {
Bucket: "fuat-sungur-bucket",
Key: fileName,
Body: bufferAuditData
};
const S3 = new AWS.S3(config);
// create the promise object
const s3Promise = S3.putObject(S3params).promise();
s3Promise.then(function(data) {
console.log('Put Object Success');
return { success: true }
}).catch(function(err) {
console.log(err);
return { success: false, failure: err }
});
};
```
After we run the Atlas function, we can check out the S3 bucket and verify that the compressed audit file has been uploaded.
![A folder in an S3 bucket where we store the audit logs
You can find the entire code of the Atlas functions in the dedicated Github repository.
## Step 7: visualizing audit data in MongoDB Charts
First, we need to add our Federated Database Instance that we created in Step 4 into our Charts application that we created in the prerequisites section as a data source. This Federated Database Instance allows us to run queries with the MongoDB aggregation framework on the data that is in the cloud object storage (that is S3, in this case).
Before doing anything with Atlas Charts, let’s connect to Federated Database Instance and query the audit logs to make sure we have already established the data pipeline correctly.
```bash
$ mongosh "mongodb://federateddatabaseinstance0-5tka5.a.query.mongodb.net/myFirstDatabase" --tls --authenticationDatabase admin --username main_user
Enter password: *********
Current Mongosh Log ID: 63066b8cef5a94f0eb34f561
Connecting to:
mongodb://@federateddatabaseinstance0-5tka5.a.query.mongodb.net/myFirstDatabase?directConnection=true&tls=true&authSource=admin&appName=mongosh+1.5.4
Using MongoDB: 5.2.0
Using Mongosh: 1.5.4
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
AtlasDataFederation myFirstDatabase> show dbs
auditlogs 0 B
AtlasDataFederation myFirstDatabase> use auditlogs
switched to db auditlogs
AtlasDataFederation auditlogs> show collections
auditlogscollection
```
Now, we can get a record from the **auditlogscollection**.
```bash
AtlasDataFederation auditlogs> db.auditlogscollection.findOne()
{
atype: 'authCheck',
ts: ISODate("2022-08-24T17:42:44.435Z"),
uuid: UUID("f5fb1c0a-399b-4308-b67e-732254828d17"),
local: { ip: '192.168.248.180', port: 27017 },
remote: { ip: '192.168.248.180', port: 44072 },
users: [ { user: 'mms-automation', db: 'admin' } ],
roles: [
{ role: 'backup', db: 'admin' },
{ role: 'clusterAdmin', db: 'admin' },
{ role: 'dbAdminAnyDatabase', db: 'admin' },
{ role: 'readWriteAnyDatabase', db: 'admin' },
{ role: 'restore', db: 'admin' },
{ role: 'userAdminAnyDatabase', db: 'admin' }
],
param: {
command: 'find',
ns: 'local.clustermanager',
args: {
find: 'clustermanager',
filter: {},
limit: Long("1"),
singleBatch: true,
sort: {},
lsid: { id: UUID("f49d243f-9c09-4a37-bd81-9ff5a2994f05") },
'$clusterTime': {
clusterTime: Timestamp({ t: 1661362964, i: 1 }),
signature: {
hash: Binary(Buffer.from("1168fff7240bc852e17c04e9b10ceb78c63cd398", "hex"), 0),
keyId: Long("7083075402444308485")
}
},
'$db': 'local',
'$readPreference': { mode: 'primaryPreferred' }
}
},
result: 0
}
AtlasDataFederation auditlogs>
```
Let’s check the audit log of an update operation.
```bash
AtlasDataFederation auditlogs> db.auditlogscollection.findOne({atype:"authCheck", "param.command":"update", "param.ns": "audit_test.orders"})
{
atype: 'authCheck',
ts: ISODate("2022-08-24T17:42:44.757Z"),
uuid: UUID("b7115a0a-c44c-4d6d-b007-a67d887eaea6"),
local: { ip: '192.168.248.180', port: 27017 },
remote: { ip: '91.75.0.56', port: 22787 },
users: [ { user: 'main_user', db: 'admin' } ],
roles: [
{ role: 'atlasAdmin', db: 'admin' },
{ role: 'backup', db: 'admin' },
{ role: 'clusterMonitor', db: 'admin' },
{ role: 'dbAdminAnyDatabase', db: 'admin' },
{ role: 'enableSharding', db: 'admin' },
{ role: 'readWriteAnyDatabase', db: 'admin' }
],
param: {
command: 'update',
ns: 'audit_test.orders',
args: {
update: 'orders',
updates: [
{ q: { _id: 3757 }, u: { '$set': { location: '7186a' } } }
],
ordered: true,
writeConcern: { w: 'majority' },
lsid: { id: UUID("a3ace80b-5907-4bf4-a917-be5944ec5a83") },
txnNumber: Long("509"),
'$clusterTime': {
clusterTime: Timestamp({ t: 1661362964, i: 2 }),
signature: {
hash: Binary(Buffer.from("1168fff7240bc852e17c04e9b10ceb78c63cd398", "hex"), 0),
keyId: Long("7083075402444308485")
}
},
'$db': 'audit_test'
}
},
result: 0
}
```
If we are able to see some records, that’s great. Now we can build our [dashboard in Atlas Charts.
You can import this dashboard into your Charts application. You might need to configure the data source name for the Charts for your convenience. In the given dashboard, the datasource was a collection with the name **auditlogscollection** in the database **auditlogs** in the Atlas Federated Database Instance with the name **FederatedDatabaseInstance0**, as shown below.
## Caveats
The following topics can be considered for more effective and efficient audit log analysis.
* You could retrieve logs from all the hosts rather than one node.
* Therefore you can track the data modifications even in the case of primary node failures.
* You might consider tracking only the relevant activities rather than tracking all the activities in the database. Tracking all the activities in the database might impact the performance.
* You can schedule your triggers (see the sketch after this list).
* The Atlas function in the example runs once manually, but it can be scheduled via Atlas scheduled triggers.
* Then, date intervals (start and end time) for the audit logs for each execution need to be calculated properly.
* You could improve read efficiency.
* You might consider partitioning in the S3 bucket by most frequently filtered fields. For further information, please check the docs on optimizing query performance.
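Here is a minimal sketch of how a scheduled trigger's function could derive the audit-log window so that consecutive runs line up without gaps or overlap. The `TRIGGER_INTERVAL_MINUTES` constant is an assumption and must match the schedule you actually configure for the trigger.
```javascript
exports = async function() {
  // Assumption: the trigger is scheduled to run every 20 minutes.
  const TRIGGER_INTERVAL_MINUTES = 20;
  const endDate = new Date();
  const startDate = new Date(endDate.getTime() - TRIGGER_INTERVAL_MINUTES * 60 * 1000);
  // The Atlas Admin API expects epoch seconds (as strings) for startDate/endDate.
  const query = {
    "startDate": [Math.round(startDate.getTime() / 1000).toString()],
    "endDate": [Math.round(endDate.getTime() / 1000).toString()]
  };
  // ...plug `query` into the auditLogsArguments object used in RetrieveAndUploadAuditLogs.
  return query;
};
```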
## Summary
MongoDB Atlas is not only a database as a service platform to run your MongoDB workloads. It also provides a wide variety of components to build end-to-end solutions. In this blog post, we explored some of the capabilities of Atlas such as App Services, Atlas Charts, and Atlas Data Federation, and observed how we utilized them to build a real-world scenario.
Questions on this tutorial? Thoughts, comments? Join the conversation over at the MongoDB Community Forums! | md | {
"tags": [
"Atlas"
],
"pageDescription": "In this blog post, I’ll walk you through how you can visualize MongoDB Atlas Database Audit Logs with MongoDB Atlas Charts.",
"contentType": "Tutorial"
} | Visualize MongoDB Atlas Database Audit Logs | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-bloated-documents | created | # Bloated Documents
Welcome (or welcome back!) to the MongoDB Schema Anti-Patterns series! We're halfway through the series. So far, we've discussed three anti-patterns: massive arrays, massive number of collections, and unnecessary indexes.
Today, let's discuss document size. MongoDB has a 16 MB document size limit. But should you use all 16 MBs? Probably not. Let's find out why.
>
>
>:youtube]{vid=mHeP5IbozDU start=389}
>If your brain feels bloated from too much reading, sit back, relax, and watch this video.
>
>
## Bloated Documents
Chances are pretty good that you want your queries to be blazing fast. MongoDB wants your queries to be blazing fast too.
To keep your queries running as quickly as possible, [WiredTiger (the default storage engine for MongoDB) keeps all of the indexes plus the documents that are accessed the most frequently in memory. We refer to these frequently accessed documents and index pages as the working set. When the working set fits in the RAM allotment, MongoDB can query from memory instead of from disk. Queries from memory are faster, so the goal is to keep your most popular documents small enough to fit in the RAM allotment.
The working set's RAM allotment is the larger of:
- 50% of (RAM - 1 GB)
- 256 MB.
For more information on the storage specifics, see Memory Use. If you're using MongoDB Atlas to host your database, see Atlas Sizing and Tier Selection: Memory.
One of the rules of thumb you'll hear frequently when discussing MongoDB schema design is *data that is accessed together should be stored together*. Note that it doesn't say *data that is related to each other should be stored together*.
Sometimes data that is related to each other isn't actually accessed together. You might have large, bloated documents that contain information that is related but not actually accessed together frequently. In that case, separate the information into smaller documents in separate collections and use references to connect those documents together.
The opposite of the Bloated Documents Anti-Pattern is the Subset Pattern. The Subset Pattern encourages the use of smaller documents that contain the most frequently accessed data. Check out this post on the Subset Pattern to learn more about how to successfully leverage this pattern.
## Example
Let's revisit Leslie's website for inspirational women that we discussed in the previous post. Leslie updates the home page to display a list of the names of 100 randomly selected inspirational women. When a user clicks on the name of an inspirational woman, they will be taken to a new page with all of the detailed biographical information about the woman they selected. Leslie fills the website with 4,704 inspirational women—including herself.
Initially, Leslie decides to create one collection named InspirationalWomen, and creates a document for each inspirational woman. The document contains all of the information for that woman. Below is a document she creates for Sally Ride.
``` none
// InspirationalWomen collection
{
"_id": {
"$oid": "5ec81cc5b3443e0e72314946"
},
"first_name": "Sally",
"last_name": "Ride",
"birthday": 1951-05-26T00:00:00.000Z,
"occupation": "Astronaut",
"quote": "I would like to be remembered as someone who was not afraid to do
what she wanted to do, and as someone who took risks along the
way in order to achieve her goals.",
"hobbies":
"Tennis",
"Writing children's books"
],
"bio": "Sally Ride is an inspirational figure who... ",
...
}
```
Leslie notices that her home page is lagging. The home page is the most visited page on her site, and, if the page doesn't load quickly enough, visitors will abandon her site completely.
Leslie is hosting her database on [MongoDB Atlas and is using an M10 dedicated cluster. With an M10, she gets 2 GB of RAM. She does some quick calculations and discovers that her working set needs to fit in 0.5 GB. (Remember that her working set can be up to 50% of (2 GB RAM - 1 GB) = 0.5 GB or 256 MB, whichever is larger).
Leslie isn't sure if her working set will currently fit in 0.5 GB of RAM, so she navigates to the Atlas Data Explorer. She can see that her InspirationalWomen collection is 580.29 MB and her index size is 196 KB. When she adds those two together, she can see that she has exceeded her 0.5 GB allotment.
Leslie has two choices: she can restructure her data according to the Subset Pattern to remove the bloated documents, or she can move up to a M20 dedicated cluster, which has 4 GB of RAM. Leslie considers her options and decides that having the home page and the most popular inspirational women's documents load quickly is most important. She decides that having the less frequently viewed women's pages take slightly longer to load is fine.
She begins determining how to restructure her data to optimize for performance. The query on Leslie's homepage only needs to retrieve each woman's first name and last name. Having this information in the working set is crucial. The other information about each woman (including a lengthy bio) doesn't necessarily need to be in the working set.
To ensure her home page loads at a blazing fast pace, she decides to break up the information in her `InspirationalWomen` collection into two collections: `InspirationalWomen_Summary` and `InspirationalWomen_Details`. She creates a manual reference between the matching documents in the collections. Below are her new documents for Sally Ride.
``` none
// InspirationalWomen_Summary collection
{
"_id": {
"$oid": "5ee3b2a779448b306938af0f"
},
"inspirationalwomen_id": {
"$oid": "5ec81cc5b3443e0e72314946"
},
"first_name": "Sally",
"last_name": "Ride"
}
```
``` none
// InspirationalWomen_Details collection
{
"_id": {
"$oid": "5ec81cc5b3443e0e72314946"
},
"first_name": "Sally",
"last_name": "Ride",
"birthday": 1951-05-26T00:00:00.000Z,
"occupation": "Astronaut",
"quote": "I would like to be remembered as someone who was not afraid to do
what she wanted to do, and as someone who took risks along the
way in order to achieve her goals.",
"hobbies":
"Tennis",
"Writing children's books"
],
"bio": "Sally Ride is an inspirational figure who... ",
...
}
```
Leslie updates her query on the home page that retrieves each woman's first name and last name to use the `InspirationalWomen_Summary` collection. When a user selects a woman to learn more about, Leslie's website code will query for a document in the `InspirationalWomen_Details` collection using the id stored in the `inspirationalwomen_id` field.
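In mongosh, those two access patterns might look roughly like the following; the collection and field names match Leslie's documents above, while the exact driver calls in her website code would differ.
```javascript
// Home page: fetch only the lightweight summary documents.
db.InspirationalWomen_Summary.find(
  {},
  { first_name: 1, last_name: 1, inspirationalwomen_id: 1 }
);
// Detail page: follow the manual reference to load the full biography.
const summary = db.InspirationalWomen_Summary.findOne({ last_name: "Ride" });
db.InspirationalWomen_Details.findOne({ _id: summary.inspirationalwomen_id });
```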
Leslie returns to Atlas and inspects the size of her databases and collections. She can see that the total index size for both collections is 276 KB (180 KB + 96 KB). She can also see that the size of her `InspirationalWomen_Summary` collection is about 455 KB. The sum of the indexes and this collection is about 731 KB, which is significantly less than her working set's RAM allocation of 0.5 GB. Because of this, many of the most popular documents from the `InspirationalWomen_Details` collection will also fit in the working set.
![The Atlas Data Explorer shows the total index size for the entire database is 276 KB and the size of the InspirationalWomen_Summary collection is 454.78 KB.
In the example above, Leslie is duplicating all of the data from the `InspirationalWomen_Summary` collection in the `InspirationalWomen_Details` collection. You might be cringing at the idea of data duplication. Historically, data duplication has been frowned upon due to space constraints as well as the challenges of keeping the data updated in both collections. Storage is relatively cheap, so we don't necessarily need to worry about that here. Additionally, the data that is duplicated is unlikely to change very often.
In most cases, you won't need to duplicate all of the information in more than one collection; you'll be able to store some of the information in one collection and the rest of the information in the other. It all depends on your use case and how you are using the data.
## Summary
Be sure that the indexes and the most frequently used documents fit in the RAM allocation for your database in order to get blazing fast queries. If your working set is exceeding the RAM allocation, check if your documents are bloated with extra information that you don't actually need in the working set. Separate frequently used data from infrequently used data in different collections to optimize your performance.
Check back soon for the next post in this schema design anti-patterns series!
## Related Links
Check out the following resources for more information:
- MongoDB Docs: Reduce the Size of Large Documents
- MongoDB Docs: 16 MB Document Size Limit
- MongoDB Docs: Atlas Sizing and Tier Selection
- MongoDB Docs: Model One-to-Many Relationships with Document References
- MongoDB University M320: Data Modeling
- Blog Series: Building with Patterns
- Blog: The Subset Pattern
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Bloated Documents",
"contentType": "Article"
} | Bloated Documents | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/swift/update-on-monogodb-and-swift | created | # An Update on MongoDB's Ongoing Commitment to Swift
Recently, Rachelle Palmer, Senior Product Manager for MongoDB Drivers sat down with Kaitlin Mahar, Lead Engineer for the `Swift` and `Rust` drivers to discuss some of the exciting developments in the `Swift` space. This article details that conversation.
Swift is a well documented, easy to use and convenient language focused on iOS app development. As one of the top ten languages, it's more popular than Ruby, Go, or Rust, but keeps a fairly low profile - it's the underestimated backbone of millions of applications, from Airbnb to LinkedIn. With its simple syntax and strong performance profile, Swift is versatile enough to be used for many use cases and applications, and we've watched with great interest as the number of customers using Swift with MongoDB has grown.
Swift can also be used for more than mobile, and we've seen a growing number of developers worldwide use Swift for backend development - software engineers can easily extend their skills with this concise, open source language. Kaitlin Mahar and I decided we'd like to share more about MongoDB's commitment and involvement with the Swift community and how that influences some of the initiatives on our Swift driver roadmap.
**Rachelle (RP):** I want to get right to the big announcement! Congratulations on joining the Swift Server Working Group (SSWG). What is the SSWG and what are some of the things that the group is thinking about right now?
**Kaitlin (KM):** The SSWG is a steering team focused on promoting the use of Swift on the server. Joining the SSWG is an honor and a privilege for me personally - through my work on the driver and attendance at conferences like Serverside.swift, I've become increasingly involved in the community over the last couple of years and excited about the huge potential I see for Swift on the server, and being a part of the group is a great opportunity to get more deeply involved in this area. There are representatives in the group from Apple, Vapor (a popular Swift web framework), and Amazon. The group right now is primarily focused on guiding the development of a robust ecosystem of libraries and tools for server-side Swift. We run an incubation process for such projects, focused on providing overall technical direction, ensuring compatibility between libraries, and promoting best practices.
To that end, one thing we're thinking about right now is connection pooling. The ability to pool connections is very important for a number of server-side use cases, and right now developers who need a pool have to implement one from scratch. A generalized library would make it far easier to, for example, write a new database driver in Swift. Many SSWG members as well as the community at large are interested in such a project and I'm very excited to see where it goes.
A number of other foundational libraries and tools are being worked on by the community as well, and we've been spending a lot of time thinking about and discussing those: for example, standardized APIs to support tracing, and a new library called Swift Service Lifecycle which helps server applications manage their startup and shutdown sequences.
**RP:** When we talk with customers about using Swift for backend development, asking how they made that choice, it seems like the answers are fairly straightforward: with limited time and limited resources, it was the fastest way to get a web app running with a team of iOS developers. Do you feel like Swift is compelling to learn if you aren't an iOS developer though? Like, as a first language instead of Python?
**KM:** Absolutely! My first language was Python, and I see a lot of things I love about Python in Swift: it's succinct and expressive, and it's easy to quickly pick up on the basics. At the same time, Swift has a really powerful and strict type system similar to what you might have used in compiled languages like Java before, which makes it far harder to introduce bugs in your code, and forces you to address edge cases (for example, null values) up front. People often say that Swift borrows the best parts of a number of other languages, and I agree with that. I think it is a great choice whether it is your first language or fifth language, regardless of if you're interested in iOS development or not.
**RP:** Unquestionably, I think there's a great match here - we have MongoDB which is really easy and quick to get started with, and you have Swift which is a major win for developer productivity.
**RP:** What's one of your favorite Swift features?
**KM:** Enums with associated values are definitely up there for me. We use these in the driver a lot. They provide a very succinct way to express that particular values are present under certain conditions. For example, MongoDB allows users to specify either a string or a document as a "hint" about what index to use when executing a query. Our API clearly communicates these choices to users by defining our `IndexHint` type like this:
``` swift
public enum IndexHint {
/// Specifies an index to use by its name.
case indexName(String)
/// Specifies an index to use by a specification `BSONDocument` containing the index key(s).
case indexSpec(BSONDocument)
}
```
This requires the user to explicitly specify which version of a hint they want to use, and requires that they provide a value of the correct corresponding type along with it.
**RP:** I'd just like to say that mine is the `MemoryLayout` type. Being able to see the memory footprint of a class that you've defined is really neat. We're also excited to announce that our top priority for the next 6-9 months is rewriting our driver to be purely in Swift. For everyone who is wondering, why wasn't our official Swift driver "all Swift" initially? And why change now?
**KM:** We initially chose to wrap libmongoc as it provided a solid, reliable core and allowed us to deliver a great experience at the API level to the community sooner. The downside of that was, of course, that for every feature we wanted to add, the C driver had to implement it first, and sometimes this slowed down our release cadence. We also feel that writing driver internals in pure Swift will enhance performance and give better memory safety - for example, we won't have to spend as much time thinking about properly freeing memory when we're done using it.
If you're interested in learning more about Swift, and how to use Swift for your development projects with MongoDB, here are some resources to check out:
- Introduction to Server-Side Swift and Building a Command Line Executable
- The Swift driver GitHub
Kaitlin will also be on an upcoming MongoDB Podcast episode to talk more about working with Swift so make sure you subscribe and stay tuned!
If you have questions about the Swift Driver, or just want to interact with other developers using this and other drivers, visit us in the MongoDB Community and be sure to introduce yourself and say hello! | md | {
"tags": [
"Swift",
"MongoDB"
],
"pageDescription": "An update on MongoDB's ongoing commitment to Swift",
"contentType": "Article"
} | An Update on MongoDB's Ongoing Commitment to Swift | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-database-and-frozen-objects | created | # Realm Core Database 6.0: A New Architecture and Frozen Objects
## TL;DR
Realm is an easy-to-use, offline-first database that lets mobile developers build better apps faster.
Since MongoDB's acquisition of Realm in May 2019, MongoDB has continued investing in building an updated version of our mobile database, culminating in the Realm Core Database 6.0.
We're thrilled to announce that it's now out of beta and released; we look forward to seeing what apps you build with Realm in production. The Realm Core Database 6.0 is now included in the 10.0 versions of each SDK: Kotlin/Java, Swift/Obj-C, JavaScript on Node.js & React Native, as well as .NET support for a variety of UWP platforms and Xamarin. Take a look at the docs here.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## A New Architecture
This effort lays a new foundation that further increases the stability of the Realm Database and allows us to quickly release new features in the future.
We've also increased performance with further optimizations still to come. We're most excited that:
- The new architecture makes it faster to look up objects based on a primary key
- iOS benchmarks show faster insertions, twice as fast sorting, and ten times faster deletions
- Code simplifications yielded a ten percent reduction in total lines of code and a smaller library
- Realm files are now much smaller when storing big blobs or large transactions
## Frozen Objects
With this release, we're also thrilled to announce that Realm now supports Frozen Objects, making it easier to use Realm with reactive frameworks.
Since our initial release of the Realm database, our concept of live, thread-confined objects has been key to reducing the code that mobile developers need to write. Objects are the data, so when the local database is updated for a particular thread, all objects are automatically updated too. This design ensures you have a consistent view of your data and makes it extremely easy to hook the local database up to the UI. But it historically came at a cost for developers using reactive frameworks.
Now, Frozen Objects allows you to work with immutable data without needing to extract it from the database. Frozen Objects act like immutable objects, meaning they won't change. They allow you to freeze elements of your data and hand it over to other threads and operations without throwing an exception - so it's simple to use Realm when working with platforms like RxJava & LiveData, RxSwift & Combine, and React.
### Using Frozen Objects
Freeze any 'Realm', 'RealmList', or 'RealmObject' and it will not be possible to modify them in any way. These Frozen Objects have none of the threading restrictions that live objects have; meaning they can be read and queried across all threads.
As an example, consider what it would look like if you were listening to changes on a live Realm using Kotlin or .NET, and then wanted to freeze query results before sending them on for further processing. If you're an iOS developer please check out our blog post on RealmSwift integration with Combine.
The Realm team is proud to say that we've heard you, and we hope that you give this feature a try to simplify your code and improve your development experience.
::::tabs
:::tab[]{tabid=".NET"}
``` csharp
var realm = Realm.GetInstance();
var frozenResults = realm.All<Person>()
.Where(p => p.Name.StartsWith("Jane"))
.Freeze();
Assert.IsTrue(frozenResults.IsFrozen());
Task.Run(() =>
{
// it is now possible to read objects on another thread
var person = frozenResults.First();
Console.WriteLine($"Person from a background thread: {person.Name}");
});
```
:::
:::tab[]{tabid="Kotlin"}
``` Kotlin
val realm: Realm = Realm.getDefaultInstance();
val results: RealmResults<Person> = realm.where<Person>().beginsWith("name", "Jane").findAllAsync()
results.addChangeListener { liveResults ->
val frozenResults: RealmResults<Person> = liveResults.freeze()
val t = Thread(Runnable {
assertTrue(frozenResults.isFrozen())
// It is now possible to read objects on another thread
val person: Person = frozenResults.first()
person.name
})
t.start()
t.join()
}
```
:::
::::
Since Java needs immutable objects, we also updated our Java support so all Realm Observables and Flowables now emit frozen objects by default. This means that it should be possible to use all operators available in RxJava without either using `Realm.copyFromRealm()` or running into an `IllegalStateException:`
``` Java
val realm = Realm.getDefaultInstance()
val stream: Disposable = realm.where<Person>().beginsWith("name", "Jane").findAllAsync().asFlowable()
.flatMap { frozenPersons ->
Flowable.fromIterable(frozenPersons)
.filter { person -> person.age > 18 }
.map { person -> PersonViewModel(person.name, person.age) }
.toList()
.toFlowable()
}
.subscribeOn(Schedulers.computation())
 .observeOn(AndroidSchedulers.mainThread())
 .subscribe { updateUI(it) }
```
If you have feedback please post it in Github and the Realm team will check it out!
- [RealmJS
- RealmSwift
- RealmJava
- RealmDotNet
## A Strong Foundation for the Future
The Realm Core Database 6.0 is now released with Frozen Objects, and we're now focused on adding new features, such as new types, new SDKs, and unlocking new use cases for our developers.
Want to Ask a Question? Visit our Forums.
Want to make a feature request? Visit our Feedback Portal.
Want to be notified of upcoming Realm events such as our iOS Hackathon in November 2020? Visit our Global Community Page.
>Safe Harbor
The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. | md | {
"tags": [
"Realm"
],
"pageDescription": "Explaining Realm Core Database 6.0 and Frozen Objects",
"contentType": "News & Announcements"
} | Realm Core Database 6.0: A New Architecture and Frozen Objects | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/build-ci-cd-pipelines-realm-apps-github-actions | created | # How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions
> As of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas – Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs.
Building Continuous Integration/Continuous Deployment (CI/CD) pipelines can be challenging. You have to map your team's ideal pipeline, identify and fix any gaps in your team's test automation, and then actually build the pipeline. Once you put in the work to craft a pipeline, you'll reap a variety of benefits like...
* Faster releases, which means you can get value to your end users quicker)
* Smaller releases, which can you help you find bugs faster
* Fewer manual tasks, which can reduce manual errors in things like testing and deployment.
As Tom Haverford from the incredible TV show Parks and Recreation wisely said, "Sometimes you gotta **work a little**, so you can **ball a lot**." (View the entire scene here. But don't get too sucked into the silliness that you forget to return to this article 😉.)
In this article, I'll walk you through how I crafted a CI/CD pipeline for a mobile app built with MongoDB Realm. I'll provide strategies as well as code you can reuse and modify, so you can put in just **a little bit of work** to craft a pipeline for your app and **ball a lot**.
This article covers the following topics:
- All About the Inventory App
- What the App Does
- The System Architecture
- All About the Pipeline
- Pipeline Implementation Using GitHub Actions
- MongoDB Atlas Project Configuration
- What Happens in Each Stage of the Pipeline
- Building Your Pipeline
- Map Your Pipeline
- Implement Your Pipeline
- Summary
> More of a video person? No worries. Check out the recording below of a talk I gave at MongoDB.live 2021 that covers the exact same content this article does. :youtube]{vid=-JcEa1snwVQ}
## All About the Inventory App
I recently created a CI/CD pipeline for an iOS app that manages stores' inventories. In this section, I'll walk you through what the app does and how it was architected. This information will help you understand why I built my CI/CD pipeline the way that I did.
### What the App Does
The Inventory App is a fairly simple iOS app that allows users to manage the online record of their physical stores' inventories. The app allows users to take the following actions:
* Create an account
* Login and logout
* Add an item to the inventory
* Adjust item quantities and prices
If you'd like to try the app for yourself, you can get a copy of the code in the GitHub repo: [mongodb-developer/realm-demos.
### The System Architecture
The system has three major components:
* **The Inventory App** is the iOS app that will be installed on the mobile device. The local Realm database is embedded in the Inventory App and stores a local copy of the inventory data.
* **The Realm App** is the central MongoDB Realm backend instance of the mobile application. In this case, the Realm App utilizes Realm features like authentication, rules, schema, GraphQL API, and Sync. The Inventory App is connected to the Realm App. **Note**: The Inventory App and the Realm App are NOT the same thing; they have two different code bases.
* **The Atlas Database** stores the inventory data. Atlas is MongoDB's fully managed Database-as-a-Service. Realm Sync handles keeping the data synced between Atlas and the mobile apps.
As you're building a CI/CD pipeline for a mobile app with an associated Realm App and Atlas database, you'll need to take into consideration how you're going to build and deploy both the mobile app and the Realm App. You'll also need to figure out how you're going to indicate which database the Realm App should be syncing to. Don't worry, I'll share strategies for how to do all of this in the sections below.
Okay, that's enough boring stuff. Let's get to my favorite part: the CI/CD pipeline!
## All About the Pipeline
Now that you know what the Inventory App does and how it was architected, let's dive into the details of the CI/CD pipeline for this app. You can use this pipeline as a basis for your pipeline and tweak it to fit your team's process.
My pipeline has three main stages:
* **Development**: In the Development Stage, developers do their development work like creating new features and fixing bugs.
* **Staging**: In the Staging Stage, the team simulates the production environment to make sure everything works together as intended. The Staging Stage could also be known as QA (Quality Assurance), Testing, or Pre-Production.
* **Production**: The Production Stage is the final stage where the end users have access to your apps.
### Pipeline Implementation Using GitHub Actions
A variety of tools exist to help teams implement CI/CD pipelines. I chose to use GitHub Actions, because it works well with GitHub (which is where my code is already) and it has a free plan for public repositories (and I like free things!). GitHub Actions allows you to automate workflows. As you'll see in later sections, I implemented my CI/CD pipeline using a workflow. Each workflow can contain one or more jobs, and each job contains one or more steps.
The complete workflow is available in build.yml in the Inventory App's GitHub repository.
### MongoDB Atlas Project Configuration
Throughout the pipeline, the workflow will deploy to new or existing Realm Apps that are associated with new or existing databases based on the pipeline stage. I decided to create four Atlas projects to support my pipeline:
* **Inventory Demo - Feature Development.** This project contains the Realm Apps associated with every new feature. Each Realm App syncs with a database that has a custom name based on the feature (for example, a feature branch named `beta6-improvements` would have a database named `InventoryDemo-beta6-improvements`). All of the databases for feature branches are stored in this project's Atlas cluster. The Realm Apps and databases for feature branches are deleted after the feature work is completed.
* **Inventory Demo - Pull Requests.** This project contains the Realm Apps that are created for every pull request. Each Realm App syncs with a database that has a custom name based on the time the workflow runs (for example, `InventoryDemo-2021-06-07_1623089424`). All of the databases associated with pull requests are stored in this project's Atlas cluster.
As part of my pipeline, I chose to delete the Realm App and associated database at the end of the workflow that was triggered by the pull request. Another option would be to skip deleting the Realm App and associated database when the tests in the workflow fail, so that a developer could manually investigate the source of the failure.
* **Inventory Demo - Staging.** This project contains the Realm App for Staging. The Realm App syncs with a database used only for Staging. The Staging database is the only database in this project's cluster. The Realm App and database are never deleted, so the team can always look in the same consistent locations for the Staging app and its data.
* **Inventory Demo - Production.** This project contains the Realm App for Production. The Realm App syncs with a database used only for Production. The Production database is the only database in this project's cluster. The Realm App and database are never deleted.
> This app requires only a single database. If your app uses more than one database, the principles described above would still hold true.
### What Happens in Each Stage of the Pipeline
I've been assigned a ticket to change the color of the **Log In** button in the iOS app from blue to pink. In the following sections, I'll walk you through what happens in each stage of the pipeline and how my code change is moved from one stage to the next.
All of the stages and transitions below use the same GitHub Actions workflow. The workflow has conditions that modify which steps are taken. I'll walk you through what steps are run in each workflow execution in the sections below. The workflow uses environment variables and secrets to store values. Visit the realm-demos GitHub repo to see the complete workflow source code.
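As a rough illustration of how those conditions and secrets appear in a workflow, a step can be limited to a particular branch with an `if:` expression, and secrets are referenced through the `secrets` context. The snippet below is a sketch assembled from values that appear later in this article, not a copy of the real workflow.

```yaml
# Sketch: a branch-conditional step plus a step that uses repository secrets
- name: Is this a push to the Staging branch?
  if: github.ref == 'refs/heads/staging'
  run: echo "REALM_APP_ID=inventorydemo-staging-zahjj" >> $GITHUB_ENV

- name: Install the App Services CLI and authenticate
  run: |
    npm install -g atlas-app-services-cli
    appservices login --api-key="${{ secrets.REALM_API_PUBLIC_KEY }}" --private-api-key="${{ secrets.REALM_API_PRIVATE_KEY }}"
```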
Development
-----------
The Development stage is where I'll do my work to update the button color. In the subsections below, I'll walk you through how I do my work and trigger a workflow.
Updating the Inventory App
--------------------------
Since I want to update my iOS app code, I'll begin by opening a copy of my app's code in Xcode. I'll change the color of the **Log In** button there. I'm a good developer 😉, so I'll run the automated tests to make sure I didn't break anything. The Inventory App has automated unit and UI tests that were implemented using XCTest. I'll also kick off a simulator, so I can manually test that the new button color looks fabulous.
Updating the Realm App
----------------------
If I wanted to make an update to the Realm App code, I could either:
* work in the cloud in the Realm web interface or
* work locally in a code editor like Visual Studio Code.
If I choose to work in the Realm web interface, I can make changes and deploy them. The Realm web interface was recently updated to allow developers to commit changes they make there to their GitHub repositories. This means changes made in the web interface won't get lost when changes are deployed through other methods (like through the Realm Command Line Interface or automated GitHub deployments).
If I choose to work with my Realm App code locally, I could make my code changes and then run unit tests. If I want to run integration tests or do some manual testing, I need to deploy the Realm App. One option is to use the App Services Command Line Interface (App Services CLI) to deploy with a command like `appservices push`. Another option is to automate the deployment using a GitHub Actions workflow.
I've chosen to automate the deployment using a GitHub Actions workflow, which I'll describe in the following section.
Kicking Off the Workflow
------------------------
As I am working locally to make changes to both the Inventory App and the Realm App, I can commit the changes to a new feature branch in my GitHub repository.
When I am ready to deploy my Realm App and run all of my automated tests, I will push the commits to my repository. The push will trigger the workflow.
The workflow runs the `build` job, which runs the following steps:
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.
3. **Store current time in variable.** Stores the current time in an environment variable named `CURRENT_TIME`. This variable is used later in the workflow.
```
echo "CURRENT_TIME=$(date +'%Y-%m-%d_%s')" >> $GITHUB_ENV
```
4. **Is this a push to a feature branch?** If this is a push to a feature branch (which it is), do the following:
* Create a new environment variable to store the name of the feature branch.
```
ref=$(echo ${{ github.ref }})
branch=$(echo "${ref##*/}")
echo "FEATURE_BRANCH=$branch" >> $GITHUB_ENV
```
* Check the `GitHubActionsMetadata` Atlas database to see if a Realm App already exists for this feature branch. If a Realm App exists, store the Realm App ID in an environment variable. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.
```
output=$(mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.metadata.findOne({'branch': '$branch'})")
if [[ $output == *null ]]; then
echo "No Realm App found for this branch. A new app will be pushed later in this workflow"
else
echo "A Realm App was found for this branch. Updates will be pushed to the existing app later in this workflow"
app_id=$(echo $output | sed 's/^.*realm_app_id" : "\([^"]*\).*/\1/')
echo "REALM_APP_ID=$app_id" >> $GITHUB_ENV
fi
```
* Update the `databaseName` in the `development.json` environment file. Set the database name to contain the branch name to ensure it's unique.
```
cd inventory/export/sync/environments
printf '{\n "values": {"databaseName": "InventoryDemo-%s"}\n}' "$branch" > development.json
```
* Indicate that the Realm App should use the `development` environment by updating `realm_config.json`.
```
cd ..
sed -i txt 's/{/{ "environment": "development",/' realm_config.json
```
5. **Install the App Services CLI and authenticate.** This step installs the App Services CLI and authenticates using the API keys that are stored as GitHub secrets.
```bash
npm install -g atlas-app-services-cli
appservices login --api-key="${{ secrets.REALM_API_PUBLIC_KEY }}" --private-api-key="${{ secrets.REALM_API_PRIVATE_KEY }}" --realm-url https://realm.mongodb.com --atlas-url https://cloud.mongodb.com
```
6. **Create a new Realm App for feature branches where the Realm App does not yet exist.** This step has three primary pieces:
* Push the Realm App to the Atlas project specifically for feature branches.
```
cd inventory/export/sync
appservices push -y --project 609ea554944fe545460529a1
```
* Retrieve and store the Realm App ID from the output of `appservices app describe`.
```
output=$(appservices app describe)
app_id=$(echo $output | sed 's/^.*client_app_id": "\([^"]*\).*/\1/')
echo "REALM_APP_ID=$app_id" >> $GITHUB_ENV
```
* Store the Realm App ID in the `GitHubActionsMetadata` database. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.
```
mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.metadata.insertOne({'branch': '${{ env.FEATURE_BRANCH}}', 'realm_app_id': '$app_id'})"
```
7. **Create `realm-app-id.txt` that stores the Realm App ID.** This file will be stored in the mobile app code. The sole purpose of this file is to tell the mobile app to which Realm App it should connect.
```
echo "${{ env.REALM_APP_ID }}" > $PWD/inventory/clients/ios-swiftui/InventoryDemo/realm-app-id.txt
```
8. **Build mobile app and run tests.** This step builds the mobile app for testing and then runs the tests using a variety of simulators. If you have integration tests, you could also choose to check out previous releases of the mobile app and run the integration tests against the current version of the Realm App to ensure backwards compatibility.
* Navigate to the mobile app's directory.
```
cd inventory/clients/ios-swiftui/InventoryDemo
```
* Build the mobile app for testing.
```
xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone 12 Pro Max,OS=14.4' -derivedDataPath './output' build-for-testing
```
* Define the simulators that will be used for testing.
```
iPhone12Pro='platform=iOS Simulator,name=iPhone 12 Pro Max,OS=14.4'
iPhone12='platform=iOS Simulator,name=iPhone 12,OS=14.4'
iPadPro4='platform=iOS Simulator,name=iPad Pro (12.9-inch) (4th generation)'
```
* Run the tests on a variety of simulators. Optionally, you could put these in separate jobs to run in parallel, as in the sketch after this code block.
```
xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination "$iPhone12Pro" -derivedDataPath './output' test-without-building
xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination "$iPhone12" -derivedDataPath './output' test-without-building
xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination "$iPadPro4" -derivedDataPath './output' test-without-building
```
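Here is a rough sketch of what that parallelization could look like using a matrix strategy. This is an assumption for illustration, not part of the article's workflow; each parallel job also needs access to the test build, so this sketch simply rebuilds it per job (you could instead share it as an artifact).

```yaml
# Sketch: a job (under the workflow's jobs: key) that runs each simulator destination in parallel
test:
  runs-on: macos-latest
  strategy:
    matrix:
      destination:
        - "platform=iOS Simulator,name=iPhone 12 Pro Max,OS=14.4"
        - "platform=iOS Simulator,name=iPhone 12,OS=14.4"
        - "platform=iOS Simulator,name=iPad Pro (12.9-inch) (4th generation)"
  steps:
    - uses: actions/checkout@v2
    - name: Run tests on ${{ matrix.destination }}
      run: |
        cd inventory/clients/ios-swiftui/InventoryDemo
        # build for testing, then run the tests against this job's simulator
        xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination "${{ matrix.destination }}" -derivedDataPath './output' build-for-testing
        xcodebuild -project InventoryDemo.xcodeproj -scheme "ci" -sdk iphonesimulator -destination "${{ matrix.destination }}" -derivedDataPath './output' test-without-building
```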
9. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.
10. **Complete job.** This step is created by GitHub Actions to complete the workflow.
The nice thing here is that simply by pushing my code changes to my feature branch, my Realm App is deployed and the tests are run. When I am finished making updates to the code, I can feel confident that a Staging build will be successful.
Moving from Development to Staging
----------------------------------
Now that I'm done working on my code changes, I'm ready to move to Staging. I can kick off this process by creating a GitHub pull request. In the pull request, I'll request to merge my code from my feature branch to the `staging` branch. When I submit the pull request, GitHub will automatically kick off another workflow for me.
The workflow runs the following steps.
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.
3. **Store current time in variable.** See the section above for more information on this step.
4. **Set environment variables for all other runs.** This step sets the necessary environment variables for pull requests where a new Realm App and database will be created for *each* pull request. This step has three primary pieces.
* Create a new environment variable named `IS_DYNAMICALLY_GENERATED_APP` to indicate this is a dynamically generated app that should be deleted later in this workflow.
```
echo "IS_DYNAMICALLY_GENERATED_APP=true" >> $GITHUB_ENV
```
* Update the `databaseName` in the `testing.json` environment file. Set the database name to contain the current time to ensure it's unique.
```
cd inventory/export/sync/environments
printf '{\n "values": {"databaseName": "InventoryDemo-%s"}\n}' "${{ env.CURRENT_TIME }}" > testing.json
```
* Indicate that the Realm App should use the `testing` environment by updating `realm_config.json`.
```
cd ..
sed -i txt 's/{/{ "environment": "testing",/' realm_config.json
```
5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.
6. **Create a new Realm App for pull requests.** Since this is a pull request, the workflow creates a new Realm App just for this workflow. The Realm App will be deleted at the end of the workflow.
* Push to the Atlas project specifically for pull requests.
```
cd inventory/export/sync
appservices push -y --project 609ea554944fe545460529a1
```
* Retrieve and store the Realm App ID from the output of `appservices app describe`.
```
output=$(appservices app describe)
app_id=$(echo $output | sed 's/^.*client_app_id": "\([^"]*\).*/\1/')
echo "REALM_APP_ID=$app_id" >> $GITHUB_ENV
```
* Store the Realm App ID in the `GitHubActionsMetadata` database.
> Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.
```
mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.metadata.insertOne({'branch': '${{ env.FEATURE_BRANCH}}', 'realm_app_id': '$app_id'})"
```
7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.
8. **Build mobile app and run tests.** See the section above for more information on this step.
9. **Delete dynamically generated Realm App.** The workflow created a Realm App just for this pull request in an earlier step. This step deletes that Realm App.
```
appservices app delete --app ${{ env.REALM_APP_ID }}
```
10. **Delete dynamically generated database.** The workflow also created a database just for this pull request in an earlier step. This step deletes that database.
```
mongo "mongodb+srv://${{ secrets.ATLAS_URI_PULL_REQUESTS }}/InventoryDemo-${{ env.CURRENT_TIME }}" --username ${{ secrets.ATLAS_USERNAME_PULL_REQUESTS }} --password ${{ secrets.ATLAS_PASSWORD_PULL_REQUESTS }} --eval "db.dropDatabase()"
```
11. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.
12. **Complete job.** This step is created by GitHub Actions to complete the workflow.
The results of the workflow are included in the pull request.
My teammate will review the pull request. They will likely review the code and double check that the workflow passed. We might go back and forth with suggestions and updates until we both agree the code is ready to be merged into the `staging` branch.
When the code is ready, my teammate will approve the pull request and then click the button to squash and merge the commits. My teammate may also choose to delete the branch as it is no longer needed.
Deleting the branch triggers the `delete-feature-branch-artifacts` workflow. This workflow is different from all of the workflows I will discuss in this article. This workflow's job is to delete the artifacts that were associated with the branch.
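For reference, a workflow can listen for branch deletions with GitHub Actions' `delete` trigger. The sketch below is only an illustration of that trigger (the job and step details are assumptions), reusing the branch-name logic shown later in this section.

```yaml
# Sketch: a cleanup workflow triggered when a branch (or tag) is deleted
name: delete-feature-branch-artifacts
on: delete
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Store the name of the branch
        run: |
          # github.event.ref holds the name of the deleted branch
          ref=$(echo ${{ github.event.ref }})
          echo "FEATURE_BRANCH=${ref##*/}" >> $GITHUB_ENV
      # ...delete the Realm App and database associated with the branch...
```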
The `delete-feature-branch-artifacts` workflow runs the following steps.
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Install the App Services CLI and authenticate.** See the section above for more information on this step.
3. **Store the name of the branch.** This step retrieves the name of the branch that was just deleted and stores it in an environment variable named `FEATURE_BRANCH`.
```
ref=$(echo ${{ github.event.ref }})
branch=$(echo "${ref##*/}")
echo "FEATURE_BRANCH=$branch" >> $GITHUB_ENV
```
4. **Delete the Realm App associated with the branch.** This step queries the `GitHubActionsMetadata` database for the ID of the Realm App associated with this branch. Then it deletes the Realm App, and deletes the information in the `GitHubActionsMetadata` database. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.
```
# Get the Realm App associated with this branch
output=$(mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.metadata.findOne({'branch': '${{ env.FEATURE_BRANCH }}'})")
if [[ $output == *null ]]; then
echo "No Realm App found for this branch"
else
# Parse the output to retrieve the realm_app_id
app_id=$(echo $output | sed 's/^.*realm_app_id" : "\([^"]*\).*/\1/')
# Delete the Realm App
echo "A Realm App was found for this branch: $app_id. It will now be deleted"
appservices app delete --app $app_id
# Delete the record in the GitHubActionsMetadata database
output=$(mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.metadata.deleteOne({'branch': '${{ env.FEATURE_BRANCH }}'})")
fi
```
5. **Delete the database associated with the branch.** This step deletes the database associated with the branch that was just deleted.
```
mongo "mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/InventoryDemo-${{ env.FEATURE_BRANCH }}" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval "db.dropDatabase()"
```
6. **Complete job.** This step is created by GitHub Actions to complete the workflow.
Staging
-------
As part of the pull request process, my teammate merged my code change into the `staging` branch. I call this stage "Staging," but teams have a variety of names for this stage. They might call it "QA (Quality Assurance)," "Testing," "Pre-Production," or something else entirely. This is the stage where teams simulate the production environment and make sure everything works together as intended.
When my teammate merged my code change into the `staging` branch, GitHub kicked off another workflow. The purpose of this workflow is to deploy the code changes to the Staging environment and ensure everything continues to work as expected.
*Screenshot: the GitHub Actions web interface after a push to the `staging` branch triggers a workflow.*
The workflow runs the following steps.
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.
3. **Store current time in variable.** See the section above for more information on this step.
4. **Is this a push to the Staging branch?** This step checks if the workflow was triggered by a push to the `staging` branch. If so, it stores the ID of the Staging Realm App in the `REALM_APP_ID` environment variable.
```
echo "REALM_APP_ID=inventorydemo-staging-zahjj" >> $GITHUB_ENV
```
5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.
6. **Push updated copy of the Realm App for existing apps (Main, Staging, or Feature branches).** This step pushes an updated copy of the Realm App (stored in `inventory/export/sync`) for cases when the Realm App already exists.
```
cd inventory/export/sync
appservices push --remote="${{ env.REALM_APP_ID }}" -y
```
7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.
8. **Build mobile app and run tests.** See the section above for more information on this step.
9. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.
10. **Complete job.** This step is created by GitHub Actions to complete the workflow.
Realm has a new feature releasing soon that will allow you to roll back deployments. When this feature releases, I plan to add a step to the workflow above to automatically roll back the deployment to the previous one in the event of test failures.
Moving from Staging to Production
---------------------------------
At this point, some teams may choose to have their pipeline automation stop before automatically moving to production. They may want to run manual tests. Or they may want to intentionally limit their number of releases.
I've chosen to move forward with continuous deployment in my pipeline. So, if the tests in Staging pass, the workflow above continues on to the `pushToMainBranch` job that automatically pushes the latest commits to the `main` branch. The job runs the following steps:
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out all branches in the repository, so the workflow can access both the `main` and `staging` branches.
3. **Push to the Main branch.** Merges the code from `staging` into `main`.
```
git merge origin/staging
git push
```
4. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.
5. **Complete job.** This step is created by GitHub Actions to complete the workflow.
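Putting that together, a follow-on job like `pushToMainBranch` can be gated on the Staging run with `needs` and an `if` condition. The sketch below is an assumption about how such a job could be wired up, not the article's exact workflow.

```yaml
# Sketch: a job (under jobs:) that only pushes to main when the Staging build succeeded
pushToMainBranch:
  needs: build                              # wait for, and require success of, the build job
  if: github.ref == 'refs/heads/staging'    # only for pushes to the staging branch
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
      with:
        fetch-depth: 0                      # fetch all branches so main and staging are available
    - name: Push to the Main branch
      run: |
        git checkout main
        git merge origin/staging
        git push
```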
Production
----------
Now my code is in the final stage: production. Production is where the end users get access to the application.
When the previous workflow merged the code changes from the `staging` branch into the `main` branch, another workflow began.
The workflow runs the following steps.
1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.
2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.
3. **Store current time in variable.** See the section above for more information on this step.
4. **Is this a push to the Main branch?** This step checks if the workflow was triggered by a push to the `main` branch. If so, it stores the ID of the Production Realm App in the `REALM_APP_ID` environment variable.
```
echo "REALM_APP_ID=inventorysync-ctnnu" >> $GITHUB_ENV
```
5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.
6. **Push updated copy of the Realm App for existing apps (Main, Staging, or Feature branches).** See the section above for more information on this step.
7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.
8. **Build mobile app and run tests.** See the section above for more information on this step.
9. **Install the Apple certificate and provisioning profile (so we can create the archive).** When the workflow is in the production stage, it does something that is unique to all of the other workflows: This workflow creates the mobile app archive file (the `.ipa` file). In order to create the archive file, the Apple certificate and provisioning profile need to be installed. For more information on how the Apple certificate and provisioning profile are installed, see the GitHub documentation.
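A condensed sketch of such a step, following the pattern in GitHub's documentation, is shown below. The secret names are assumptions; the certificate and profile would be stored as base64-encoded repository secrets.
```yaml
# Sketch: install a signing certificate and provisioning profile on the macOS runner
- name: Install the Apple certificate and provisioning profile
  run: |
    CERT_PATH=$RUNNER_TEMP/certificate.p12
    PROFILE_PATH=$RUNNER_TEMP/profile.mobileprovision
    KEYCHAIN_PATH=$RUNNER_TEMP/app-signing.keychain-db
    # decode the base64-encoded secrets to files
    echo -n "${{ secrets.P12_BASE64 }}" | base64 --decode > $CERT_PATH
    echo -n "${{ secrets.PROVISIONING_PROFILE_BASE64 }}" | base64 --decode > $PROFILE_PATH
    # create a temporary keychain and import the signing certificate into it
    security create-keychain -p "${{ secrets.KEYCHAIN_PASSWORD }}" $KEYCHAIN_PATH
    security unlock-keychain -p "${{ secrets.KEYCHAIN_PASSWORD }}" $KEYCHAIN_PATH
    security import $CERT_PATH -P "${{ secrets.P12_PASSWORD }}" -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
    security list-keychain -d user -s $KEYCHAIN_PATH
    # copy the provisioning profile to the location Xcode expects
    mkdir -p ~/Library/MobileDevice/Provisioning\ Profiles
    cp $PROFILE_PATH ~/Library/MobileDevice/Provisioning\ Profiles
```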
10. **Archive the mobile app.** This step creates the mobile app archive file (the `.ipa` file).
```
cd inventory/clients/ios-swiftui/InventoryDemo
xcodebuild -workspace InventoryDemo.xcodeproj/project.xcworkspace/ -scheme ci archive -archivePath $PWD/build/ci.xcarchive -allowProvisioningUpdates
xcodebuild -exportArchive -archivePath $PWD/build/ci.xcarchive -exportPath $PWD/build -exportOptionsPlist $PWD/build/ci.xcarchive/Info.plist
```
11. **Store the Archive in a GitHub Release.** This step uses the gh-release action to store the mobile app archive in a GitHub Release as shown in the screenshot below.
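Assuming the commonly used `softprops/action-gh-release` action, that step could look roughly like this (the tag name and file path below are assumptions):
```yaml
# Sketch: attach the .ipa produced in the previous step to a GitHub Release
- name: Store the Archive in a GitHub Release
  uses: softprops/action-gh-release@v1
  with:
    tag_name: release-${{ env.CURRENT_TIME }}
    files: inventory/clients/ios-swiftui/InventoryDemo/build/*.ipa
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```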
12. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.
13. **Complete job.** This step is created by GitHub Actions to complete the workflow.
As I described above, my pipeline creates a GitHub release and stores the `.ipa` file in the release. Another option would be to push the `.ipa` file to TestFlight so you could send it to your users for beta testing. Or you could automatically upload the `.ipa` to the App Store for Apple to review and approve for publication. You have the ability to customize your workflow based on your team's process.
The nice thing about automating the deployment to production is that no one has to build the mobile app archive locally. You don't have to worry about that one person who knows how to build the archive going on vacation or leaving the company—everything is automated, so you can keep delivering new features to your users without the panic of what to do if a key person is out of the office.
## Building Your Pipeline
As I wrap up this article, I want to help you get started building your pipeline.
### Map Your Pipeline
I encourage you to begin by working with key stakeholders to map your ideal pipeline. Ask questions like the following:
* **What stages will be in the pipeline?** Do you have more stages than just Development, Staging, and Production?
* **What automated tests should be run in the various stages of your pipeline?** Consider if you need to create more automated tests so that you feel confident in your releases.
* **What should be the final output of your pipeline?** Is the result a fully automated pipeline that pushes changes automatically to the App Store? Or do you want to do some steps manually?
### Implement Your Pipeline
Once you've mapped out your pipeline and figured out what your steps should be, it's time to start implementing your pipeline. Starting from scratch can be challenging... but you don't have to start from scratch. Here are some resources you can use:
1. The **mongodb-developer/realm-demos GitHub repo** contains the code I discussed today.
* The repo has example mobile app and sync code, so you can see how the app itself was implemented. Check out the ios-swiftui directory.
* The repo also has automated tests in it, so you can take a peek at those and see how my team wrote them. Check out the InventoryDemoTests and the InventoryDemoUITests directories.
* The part I'm most excited about is the GitHub Actions Workflow: build.yml. This is where you can find all of the code for my pipeline automation. Even if you're not going to use GitHub Actions to implement your pipeline, this file can be helpful in showing how to execute the various steps from the command line. You can take those commands and use them in other CI/CD tools.
* The delete-feature-branch-artifacts.yml workflow shows how to clean up artifacts whenever a feature branch is deleted.
2. The **MongoDB Realm documentation** has a ton of great information and is really helpful in figuring out what you can do with the App Services CLI.
3. The **MongoDB Community** is the best place to ask questions as you are implementing your pipeline. If you want to show off your pipeline and share your knowledge, we'd love to hear that as well. I hope to see you there!
## Summary
You've learned a lot about how to craft your own CI/CD pipeline in this article. Creating a CI/CD pipeline can seem like a daunting task, but with the resources I've given you here, you can create a pipeline that is customized to your team's process.
As Tom Haverford wisely said, "Sometimes you gotta work a little so you can ball a lot." Once you put in the work of building a pipeline that works for you and your team, your app development can really fly, and you can feel confident in your releases. And that's a really big deal.
| md | {
"tags": [
"Realm",
"GitHub Actions"
],
"pageDescription": "Learn how to build CI/CD pipelines in GitHub Actions for apps built using MongoDB Realm.",
"contentType": "Tutorial"
} | How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/scaling-gaming-mongodb-square-enix-gaspard-petit | created | # Scaling the Gaming Industry with Gaspard Petit of Square Enix
Square Enix is one of the most popular gaming brands in the world. They're known for such franchise games as Tomb Raider, Final Fantasy, Dragon Quest, and more. In this article, we provide a transcript of the MongoDB Podcast episode in which Michael and Nic sit down with Gaspard Petit, software architect at Square Enix, to talk about how they're leveraging MongoDB, and his own personal experience with MongoDB as a data platform.
You can learn more about Square Enix on their website. You can find Gaspard on LinkedIn.
Join us in the forums to chat about this episode, about gaming, or about anything related to MongoDB and Software Development.
Gaspard Petit (00:00):
Hi everybody, this is Gaspard Petit. I'm from Square Enix. Welcome to this MongoDB Podcast.
Gaspard Petit (00:09):
MongoDB was perfect for our processes: there weren't any columns predefined, any schema; we could just add fields. And why this is important is that the designers don't know ahead of time what the final game will look like. This is something that evolves: we do a prototype of it, you like it, you don't like it, you undo something, you redo something, you go back to something you did previously, and it keeps changing as the game evolves. It's very rare that I've seen a game production go straight from point A to Z without twirling a little bit and going back and forth. So that back and forth process is cumbersome for the back-end. You're asked to implement something before the requirements are set in stone; you have to deliver it so the game team can experience it, and then they'll iterate on it. And if you're set in stone on your database, and each time you change something you have to migrate your data, you're wasting an awful lot of time.
Michael Lynn (00:50):
Welcome to the show. On today's episode, we're talking with Gaspard Petit of the Square Enix, maker of some of the best-known, best-loved games in the gaming industry. Today, we're talking about how they're leveraging MongoDB and a little bit about Gaspard's journey as a software architect. Hope you enjoy this episode.
Automated (01:07):
You're listening to the MongoDB podcast, exploring the world of software development, data, and all things MongoDB. And now your hosts, Michael Lynn and Nic Raboy.
Michael Lynn (01:26):
Hey, Nic. How you doing today?
Nic Raboy (01:27):
I'm doing great, Mike. I'm really looking forward to this episode. I've been looking forward to it for what is it? More than a month now because it's really one of the things that hits home to me, and that's gaming. It's one of the reasons why I got into software development. So this is going to be awesome stuff. What do you think, Mike?
Michael Lynn (01:43):
Fantastic. I'm looking forward to it as well. And we have a special guest, Gaspard Petit, from Square Enix. Welcome to the podcast, it's great to have you on the show.
Gaspard Petit (01:51):
Hi, it's good to be here.
Michael Lynn (01:52):
Fantastic. Maybe if you could introduce yourself to the folks and let folks know what you do at Square Enix.
Gaspard Petit (01:58):
Sure. So I'm software online architect at Square Enix. I've been into gaming pretty much my whole life. When I was a kid, I was drawing game levels on pieces of paper with my friends, went to university as a software engineer, worked in a few companies, some were gaming, some were around gaming. For example, with Autodesk or Softimage. And then got into gaming, first game was a multiplayer game. And it led me slowly into multiplayer games. First company was at Behaviour and then to Eidos working on the reboot of Tomb Raider on the multiplayer side. Took a short break, went back into actually a company called Datamine, where I learned about how the back-end works. It wasn't on the Azure Cloud at the time. And I learned a lot about how to do these processes on the cloud, which turned out to be fascinating how you can converge a lot of requests, a lot of users into a distributed environment, and process this data efficiently.
Gaspard Petit (03:03):
And then came back to Square Enix as a lead at the time for the internally, we call it our team, the online suite, which is a team in charge of many of the Square Enix's game back-ends. And I've been there for a couple of years now. Six years, I think, and now became online architect. So my role is making sure we're developing in the right direction using the right services, that our solutions will scale, that they're appropriate for the needs of the game team. That we're giving them good online services basically, and that they're also reliable for the users.
Nic Raboy (03:44):
So the Tomb Raider reboot, was that your first big moment in the professional game industry, or did you have prior big moments before that?
Gaspard Petit (03:54):
I have to say it was probably one of the ones I'm most proud of. To be honest, I worked on a previous game, it was called Naughty Bear. It wasn't a great success from the public's point of view, the meta critics weren't great. But the team I worked on was an amazing team, and everyone on that team was dedicated. It was a small team, the challenges were huge. So from my point of view, that game was a huge success. It didn't make it, the public didn't see it that way. But the challenges, it was a multiplayer game. We had the requirements fairly last-minute to make this a multiplayer game. So we had to turn in single player into multiplayer, do the replication. A lot of complicated things in a short amount of time. But with the right team, with the right people motivated. To me, that was my first gaming achievement.
Michael Lynn (04:49):
You said the game is called Naughty Bear?
Gaspard Petit (04:51):
Naughty Bear, yes.
Michael Lynn (04:52):
What type of game is that? Because I'm not familiar with that.
Gaspard Petit (04:55):
No, not many people are. It's a game where you play a teddy bear waking up on an island. And you realize that there's a party and you're not invited to that party. So you just go postal and kill all the bears on the island pretty much. But there's AI involved, there's different ways of killing, there's different ways of interacting with those teddy bears. And of course, there's no blood, right? So it's not violence. It's just plain fun, right? So it's playing a little bit on that side, on the-
Michael Lynn (05:23):
Absolutely.
Gaspard Petit (05:26):
But it's on a small island, so it's very limited. But the fun is about the AI and playing with friends. So you can play as the bears that are trying to hide or as the bear that's trying to carnage the island.
Gaspard Petit (05:41):
This is pretty much what introduced me to leaderboards, multiplayer replication. We didn't have any saved game. It was over 10 years ago, so the cloud was just building up. But you'd still have add matchmaking features, these kind of features that brought me into the online environment.
Nic Raboy (05:59):
Awesome. In regards to your Naughty Bear game, before we get into the scoring and stuff, what did you use to develop it?
Gaspard Petit (06:05):
It was all C++, a little bit of Lua back then. Like I said, on the back-end side, there wasn't much to do. We used the first party API's which were C++ connected to their server. The rest was a black box. To me at the time, I didn't know how matchmaking worked or how all these leaderboards worked, I just remember that it felt a bit frustrating that I remember posting scores, for example, to leaderboards. And sometimes it would take a couple of seconds for the rank to be updated. And I remember feeling frustration about that. Why isn't this updated right away? I've just posted my score and can take a minute or two before my rank is updated. And now that I'm working back-end, I totally get it. I understand the volume of scores getting posted, the ranking, the sorting out, all the challenges on the back-end. But to me back then it was still a black box.
Michael Lynn (06:57):
So was that game leveraging MongoDB as part of the back-end?
Gaspard Petit (07:01):
No, no, no. Like I said, it wasn't really on the cloud. It was just first party API. I couldn't tell you what Microsoft, Sony is using. But from our point of view, we were not using any in-house database. So that was a different company, it was at Behaviour.
Michael Lynn (07:19):
And I'm curious as an early developer in your career, what things did you learn about game development that you still take with you today?
Gaspard Petit (07:28):
I think a lot of people are interested in game development for the same reasons I am. It is very left and right brain, you have a lot of creativity, you have to find ways to make things work. Sometimes you're early on in a project and you get a chance to do things right. So you architect things, you do the proper design, you even sometimes draw UML and organize your objects so that it's all clean, and you feel like you're doing theoretical and academic almost work, and then the project evolves. And as you get closer to the release date, this is not something that will live forever, it's not a product that you will recycle, and needs to be maintained for the next 10 years. This is something you're going to ship and it has to work on ideally on the day you ship it.
Gaspard Petit (08:13):
So you start shifting your focus saying, "This has to work no matter what. I have to find a solution. There's something here that doesn't work." And I don't have time to find a proper design to refactor this, I just have to make it work. And you shift your way of working completely into ship it, make it work, find a solution. And you get into a different kind of creativity as a programmer. Which I love, which is also scary sometimes because you put this duct tape in your code and it works. And you're wondering, "Should I feel right about shipping this?" And actually, nobody's going to notice and it's going to hold and the game will be fun. And it doesn't matter that you have this duct tape somewhere. I think this is part of the fun of shaping the game, making it work at the end no matter what. And it doesn't have to be perfectly clean, it has to be fun at the end.
Gaspard Petit (09:08):
This is definitely one aspect of it. The other aspect is the real-time, you want to hit 30fps or 60fps or more. I'm sure PC people are now demanding more. But you want this frame rate, and at the same time you want the AI, and you want the audio, and you want the physics and you want everything in that FPS. And you somehow have to make it all work. And you have to find whatever trick you can. If you can pre-process things on their hard drive assets, you do it. Whatever needs you can optimize, you get a chance to optimize it.
Gaspard Petit (09:37):
And there's very few places in the industry where you still get that chance to optimize things and say, "If I can remove this one millisecond somewhere, it will have actually an impact on something." Back-end has that in a way. MongoDB, I'm sure if you can remove one second in one place, you get that feeling of I can now perform this amount of more queries per second. But the game also has this aspect of, I'll be able to process a little bit more, I'll be able to load more assets, more triangles, render more things or hit more bounding boxes. So the performance is definitely an interesting aspect of the game.
Nic Raboy (10:12):
You spent a lot of time doing the actual game development being the creative side, being the performance engineer, things like that. How was the transition to becoming an online architect? I assume, at least you're no longer actually making what people see, but what people experience in the back-end, right? What's that like?
Gaspard Petit (10:34):
That's right. It wasn't an easy transition. And I was the lead on the team for a couple of years. So I got that from a few candidates joining the team, you could tell they wish they were doing gameplay or graphics, and they got into the back-end team. And it feels like you're, "Okay, I'll do that for a couple of years and then I'll see." But it ended up that I really loved it. You get a global view of the players what they're doing, not just on a single console, you also get to experience the game as it is live, which I didn't get to experience when I was programming the game, you program the game, it goes to a disk or a digital format, it's shipped and this is where Julian, you take your vacation after when a game has shipped.
Gaspard Petit (11:20):
The exhilaration of living the moment where the game is out, monitoring it, seeing the player while something disconnect, or having some problems, monitoring the metrics, seeing that the game is performing as expected or not. And then you get into other interesting things you can do on the back-end, which I couldn't do on the game is fixing the game after it has shipped. So for example, you discovered that the balancing is off. Something on the game doesn't work as expected. But you have a way of somehow figuring out from the back-end how you can fix it.
Gaspard Petit (11:54):
Of course, ideally, you would fix in the game. But nowadays, it's not always easy to repackage the game on each platform and deliver it on time. It can take a couple of weeks to fix it to fix the game from the code. So whatever we can fix from the back-end, we do. So we need to have the proper tools for monitoring this humongous amount of data coming our way. And then we have this creativity kicking in saying, "Okay, I've got this data, how can I act on it to make the game better?" So I still get those feelings from the back-end.
Michael Lynn (12:25):
And I feel like the line between back-end and front-end is really blurring lately. Anytime I get online to play a game, I'm forced to go through the update process for many of the games that I play. To what degree do you have flexibility? I'll ask the question this way. How frequently Are you making changes to games that have already shipped?
Gaspard Petit (12:46):
It's not that frequent. It's not rare, either. It's somewhere in between. Ideally, we would not have to make any changes after the game is out. But in practice, the games are becoming so complex, they no longer fit on a small 32 megabyte cartridge. So there's a lot of things going on in the game. They're they're huge. It's almost impossible to get them perfectly right, and deliver them within a couple of years.
Gaspard Petit (13:16):
And there's also a limitation to what you can test internally. Even with a huge team of QA, you will discover things only when players are experiencing the game. Like I said the flow of fixing the game is long. You hear about the report on Reddit or on Twitter, and then you try to reproduce it internally right there. It might take a couple of days to get the same bug the player has reported. And then after that, you have to figure out in the code how you can fix it, make sure you don't break anything else. So it can take literally weeks before you fix something very trivial.
Gaspard Petit (13:55):
On the back-end, if we can try it out, we can segment a specific fix for a single player, make sure for that player it works. Do some blue-green introduction of that test or do it only on staging first, making sure it works, doing it on production. And within a couple of sometimes I would say, a fix has come out in a couple of hours in some case where we noticed it on production, went to staging and to production within the same day with something that would fix the game.
Gaspard Petit (14:25):
So ideally, you would put as much as you can on the back-end because you have so much agility from the back-end. I know players are something called about this idea of using back-ends for game because they see it as a threat. I don't think they realize how much they can benefit from fixes we do on the back-end.
Nic Raboy (14:45):
So in regards to the back-end that you're heavily a part of, what typically goes in to the back-end? I assume that you're using quite a few tools, frameworks, programming languages, maybe you could shed some light onto that.
Gaspard Petit (14:57):
Oh yes, sure. So typically, in almost every project, there is some telemetry that is useful for us to monitor that the game is working like I said, as expected. We want to know if the game is crashing, we want to know if players are stuck on the level and they can't go past through it. If there's an achievement that doesn't lock or something that shouldn't be happening and doesn't happen. So we want to make sure that we're monitoring these things.
Gaspard Petit (15:23):
There's, depending on the project, we have community features. For example, comparing what you did in the life experience series to what the community did, and sometime it will be engagements or creating challenges that will change on a weekly basis. In some cases recently for outriders for example, we have the whole save game saved online, which means two things, right? We can get an idea of the state of each player, but we can also fix things. So it really depends on the project. It goes from simple telemetry, just so we know that things are going okay, or we can act on it to adding some game logic on the back-end getting executed on the back-end.
Michael Lynn (16:09):
And what are the frameworks and development tools that you leverage?
Gaspard Petit (16:12):
Yes, sorry. So the back-ends we write are written in Java. We have different tools we use outside of the back-end. We deploy on Kubernetes. Almost everything is Docker images at this point. We use MongoDB as the main storage. Redis as ephemeral storage. We also use Kafka for the telemetry pipeline to make sure we don't lose them and can process them asynchronously. Jenkins for building. So this is pretty much our environment.
Gaspard Petit (16:45):
We also work on the game integration; this is in C++ and C#. So our team provides and actually does some C++ development where we try to make an HTTP client, C++ clients, that is cross-platform and as efficient as possible, so that it impacts the frame rate as little as possible, even if that sometimes means downloading things a little bit slower or not ticking as often. We customize our HTTP client to make sure that the online impact is minimal on the gameplay. So our team is in charge of both this client integration into the game and the back-end development.
Michael Lynn (17:24):
So those HTTP clients, are those custom SDKs that you're providing your own internal developers for using?
Gaspard Petit (17:31):
Exactly, so it's our own library that we maintain. It makes sure that what we provide can authenticate correctly with the back-end as a right way to communicate with it, the right retries, the right queuing. So we don't have to enforce through policies to each game themes, how to connect to the back-end. We can bundle these policies within the SDK that we provide to them.
Michael Lynn (17:57):
So what advice would you have for someone that's just getting into developing games? Maybe some advice for where to focus on their journey as a game developer?
Gaspard Petit (18:08):
That's a great question. The advice I would give is, it starts of course, being passionate about it. You have to because there's a lot of work in the gaming, it's true that we do a lot of hours. If we did not enjoy the work that we did, we would probably go somewhere else. But it is fun. If you're passionate about it, you won't mind as much because the success and the feeling you get on each release compensates the effort that you put into those projects. So first, you need to be passionate about it, you need to be wanting to get those projects and be proud of them.
Gaspard Petit (18:46):
And then I would say not to focus too much on one aspect of gaming because at first, I did several things, right? My studies were on the image processing, I wanted to do 3D rendering. At first, that was my initial goal as a teenager. And this is definitely not what I ended up doing. I did almost everything. I did a little bit of rendering, but almost none. I ended up in the back-end. And I learned that almost every aspect of the game development has something interesting and challenging.
Gaspard Petit (19:18):
So I would say not too much to focus on doing the physics or the rendering, sometime you might end up doing the audio and that is still something fascinating. How you can place your audio within the scene and make it sound like it comes from one place, and hit the walls. And then in each aspect, you can dig and do something interesting. And the games now at least within Square Enix they're too big for one person to do it all. So it's generally, you will be part of a team anyway. And within that team, there will be something challenging to do.
Gaspard Petit (19:49):
And even the back-end, I know not so many people consider back-end as their first choice. But I think that's something that's actually a mistake. There is a lot of interesting things to do with the back-end, especially now that there is some gameplay happening on back-ends, and increasingly more logic happening on the back-end. I don't want to say that one is better than the other, of course, but I would personally not go back, and I never expected to love it so much. So be open-minded and be passionate. I think that's my general advice.
Michael Lynn (20:26):
So speaking of back-end, can we talk a little bit about how Square Enix is leveraging MongoDB today?
Gaspard Petit (20:32):
So we've been using MongoDB for quite some time. When I joined the team, it was already being used. We were on, I think version 2.4. MongoDB had just implemented authentication on collections, I think. So quite a while ago, and I saw it evolve over time. If I can share this, I remember my first day on the team hitting MongoDB. And I was coming from a SQL-like world, and I was thinking, "What is this? What is this query language and JSON?" And of course, I couldn't query anything at first because the syntax all seemed completely strange to me. And I didn't understand anything about sharding, anything about chunking, anything about how the database works. So it actually took me a couple of months, I would say before I started appreciating what Mongo did, and why it had been picked.
Gaspard Petit (21:27):
So it has been recommended, if I remember, I don't want to say incorrect things. But I think it had been recommended before my time. It was a consulting team that had recommended MongoDB for the gaming. I wouldn't be able to tell you exactly why. So over time, what I realized is that MongoDB was perfect for our processes because there weren't any columns predefined, any schema, we could just add fields.
Gaspard Petit (22:03):
And why this is important is because the game team generally doesn't know. I don't want to say the game team actually, the designers or the producer, they don't know ahead of time, what the final game will look like, this is something that evolves. You play, you do a prototype of it, you like it, you don't like it, you undo something, you redo something, you go back to something you did previously, and it keeps changing as the game evolves. It's very rare that I've seen a game production go straight from point A to Z without twirling a little bit and going back and forth.
Gaspard Petit (22:30):
So that back and forth process is cumbersome for the back-end. You're asked to implement something before the requirements are set in stone, you have to deliver it so the game team can experience it and then we'll iterate on it. And if you're set in stone on your database, and each time that you change something, you have to migrate your data, you're wasting an awful lot of time. And after, like I said, after a couple of months that become obvious that MongoDB was a perfect fit for that because the game team would ask us, "Hey, I need now to store this thing, or can you change this type for that type?" And it was seamless, we would change a string for an integer or a string, we would add a field to a document and that was it. No migration. If we needed, the back-end would catch the cases where a default value was missing. But that was it.
Gaspard Petit (23:19):
And we were able to progress with the game team as they evolved their design, we were able to follow them quite rapidly with our non-schema database. So now I wouldn't switch back. I've got used to the JSON query language, I think human being get used to anything. And once you're familiar with something, you don't want to learn something else. And I ended up learning the SQL Mongo syntax, and now I'm actually very comfortable with it. I do aggregation on the command line, these kinds of things. So it's just something you have to be patient off if you haven't used MongoDB before. At first, it looks a little bit weird, but it quickly becomes quite obvious why it is designed in a way. It's actually very intuitive to use.
Nic Raboy (24:07):
In regards to game development in general, who is determining what the data should look like? Is that the people actually creating the local installable copy of the game? Or is that the back-end team deciding what the model looks like in general?
Gaspard Petit (24:23):
It's a mix of both. Our team acts as an expert team, so we don't dictate where the back-end should be. But since we've been on multiple projects, we have some experience on the good and bad patterns. And in MongoDB it's not always easy, right? We've been hit pretty hard with anti-patterns in the past. So we would now jump right away if the game team asks us to store something in a way that we knew would not perform well when scaling up. So we're cautious about it, but in general, the requirements come from the game team, and we translate that into a database schema. That said, in a few cases, the game team knows exactly what they want. And in those cases, we generally just store their data as a raw string on MongoDB. And then we can process it back, whether it's JSON or whatever other format they want. We give them a field saying, "This belongs to you, and use whatever schema you want inside of it."
Gaspard Petit (25:28):
But of course, then they won't be able to insert any query into that data. It's more of a storage than anything else. If they need to perform operations, and we're definitely involved because we want to make sure that they will be hitting the right indexes, that the sharding will be done properly. So it's a combination of both sides.
Michael Lynn (25:47):
Okay, so we've got MongoDB in the stack. And I'm imagining that as a developer, I'm going to get a development environment. And tell me about the way that as a developer, I'm interacting with MongoDB. And then how does that transition into the production environment?
Gaspard Petit (26:04):
Sure. So every developer has a local MongoDB, we use that for development. So we have our own. Right now it's a docker-compose image. And it has a full virtual environment. It has all the other components I mentioned earlier, it has Kafka, it even has LDAP, it has a bunch of things running virtually including MongoDB. And it is even configured as a sharded cluster. So we have a local sharded cluster on each of our machines to make sure that our queries will work fine on the actual sharded cluster. So it's actually very close to production, even though it's on our local PC. And we start with that, we develop in Java and write our unit tests to make sure we cover what we write and don't have regressions. And those unit tests will run against a local MongoDB instance.
Gaspard Petit (26:54):
At some point, we are about to release something on production especially when there's a lot of changes, we want to make sure we do load testing. For our load testing, we have something else and I am not sure that that's a very well known feature from MongoDB, but it's extremely useful for us. It's the MongoDB Operator, which is an operator within Kubernetes. And it allows spinning up clusters based on the simple YAML. So you can say, "I want a sharded cluster with three deep, five shards," and it will spin it up for you, it will take a couple of seconds a couple of minutes depending on what you have in your YAML. And then you have it. You have your cluster configured in your Kubernetes cluster. And then we run our tests on this. It's a new cluster, fresh. Run the full test, simulate millions of requests of users, destroy it. And then if we're wondering you know what? Does our back-end scale with the number of shards? And then we just spin up a new shard cluster with twice the number of shards, expect twice the performance, run the same test. Again, if we don't have one. Generally, we won't get that exactly twice the performance, right? But it will get an idea of, this operation would scale with the number of shards, and this one wouldn't.
Gaspard Petit (28:13):
So that Operator is very useful for us because it'll allow us to simulate these scenarios very easily. There's very little work involved in spinning up these Kubernetes clusters.
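As a rough illustration of the kind of YAML Gaspard is describing, a sharded-cluster resource for the MongoDB Enterprise Kubernetes Operator might look something like the sketch below. The names and counts are illustrative, and the exact fields depend on the operator version you're running:
```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: load-test-cluster
spec:
  type: ShardedCluster
  version: "5.0.2"
  shardCount: 5            # double this to test whether operations scale with shards
  mongodsPerShardCount: 3
  mongosCount: 2
  configServerCount: 3
  # Project and credentials references for Ops Manager / Cloud Manager
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
```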
Gaspard Petit (28:23):
And then when we're satisfied with that, we go to Atlas, which provides us the deployment of the cloud-ready clusters. So it's not me personally who does it; we have an ops team who handles this, but they will prepare for us, through Atlas, the final database that we want to use. We work together to find the number of shards and the type of instance we want to deploy. And then Atlas takes care of it. We benefit from disk auto-scaling on Atlas. We generally start with a lower instance type to set up the database, and when the big day approaches for the game release, we scale up the instance type again through Atlas.
Gaspard Petit (29:10):
In some cases, we've realized that the number of shards was insufficient after testing, and Atlas allows us to make these changes quite close to the launch date. So what that means is that we can have a good estimate, a couple of weeks before the launch, of our requirements in terms of infrastructure. But if we're wrong, it doesn't take that long to adjust and say, "Okay, you know what? We don't need five shards, we need 10 shards." And especially if you're before the launch, you don't have that much data. It just takes a couple of minutes to a couple of hours for Atlas to redeploy these things and get the database ready for us. So it goes in those three stages: going local for unit testing with our own image of Mongo, a Kubernetes cluster for load testing which uses the Mongo Operator, and then we use Atlas in the end for the actual cloud deployment.
Gaspard Petit (30:08):
We actually go one step further when the game is getting old and the load on it is predictable. When it's not as high as it used to be, we move this database in-house. So we have our own data centers. And we will actually share Mongo instances for multiple games. So we co-host multiple games on a single cluster, not a single database, of course, but a single Mongo cluster. And that becomes very, very cost effective. We get to see, for example, if there's a sale on one game while the other games are less active, it takes a bit more load. But next week, something else is on sale, and they kind of average out on that cluster. So older games, I'm talking like four- or five-year-old games, tend to be moved back to on-premises for cost effectiveness.
Nic Raboy (31:00):
So it's great to know that you can have that choice to bring games back in when they become old, and you need to scale them down. Maybe you can talk about some of the other benefits that come with that.
Gaspard Petit (31:12):
Yes. And it also ties in to the other aspects I mentioned. We don't feel locked in with MongoDB; we have options. So we have the Atlas option, which is extremely useful when we launch a game. And it's high risk, right? If an incident happens in the first week of a game launch, you want all hands on deck and as much support as you can get. After a couple of years, we know the kind of errors we can get, we know what can go wrong with the back-end. And generally the volume is not as high, so we don't necessarily need that kind of support anymore. And there's also a lot of overhead to running things on the cloud if you're at a small volume. There's not just Mongo itself, there are the pods themselves that need to run on a compute environment, there's the traffic that adds up.
Gaspard Petit (32:05):
So we have that data center. We actually have multiple data centers, we're lucky to be big enough to have those. But it gives us this extra option of saying, "We're not locked to the cloud, it's an option to be on the cloud with MongoDB." We can run it locally on a Docker, we can run it on the cloud, where we can control where we go. And this has been a key element in the architecture of our back-ends from the start actually, making sure that every component we use can be virtualized, brought back on-premises so that we can control locally. For example, we can run tests and have everything controlled, not depending on the cloud. But we also get the opportunity of getting an external team looking at the project with us on the critical moments. So I think we're quite happy to have those options of running it wherever we want.
Michael Lynn (32:56):
Yeah, that's clearly a benefit. Talk to me a little bit about the scale. I know you probably can't mention numbers and transactions per second and things like that. But this is clearly one of the challenges in the gaming space, you're going to face massive scale. Do you want to talk a little bit about some of the challenges that you're facing, with the level of scale that you're achieving today?
Gaspard Petit (33:17):
Yes, sure. That's actually one of the challenging aspects of the back-end, making sure that you won't hit a ceiling at some point or an unexpected ceiling. And there's always one, you just don't always know which one it is. When we prepare for a game launch, regardless of its success, we have to prepare for the worst, the best success. I don't know how to phrase that. But the best success might be the worst case for us. But we want to make sure that we will support whatever number of players comes our way. And we have to be prepared for that.
Gaspard Petit (33:48):
And depending on the scenarios, it can be extremely costly to be prepared for the worst/best. Because it might be that you have to over scale right away, and make sure that your ceiling is very high. Ideally, you want to hit something somewhere in the middle where you're comfortable that if you were to go beyond that, you would be able to adjust quickly. So you sort of compromise between the cost of your launch with the risk and getting to a point where you feel comfortable saying, "If I were to hit that and it took 30 minutes to recover, that would be fine." Nobody would mind because it's such a success that everyone would understand at that point. That ceiling has to be pretty high in the gaming industry. We're talking millions of concurrent users that are connecting within the same minute, are making queries at the same time on their data. It's a huge number. It's difficult, I think, even for the human mind to comprehend these numbers when we're talking millions.
Gaspard Petit (34:50):
It is a lot of requests per second. So it has to be distributed in a way that will scale, and that was also one of the things that I realized Mongo did very well with the mongos and mongod split in a sharded cluster, where you pretty much have as many databases as you want. You can split the workload onto as many databases as you want, with the mongos routing it to the right place. So if you're hitting your ceiling with two shards, and you add two more shards, in theory, you can get twice the volume of queries. For that to work, you have to be careful, you have to shard appropriately. So this is where you want to have some experience, and you want to make sure that your shard key is well picked. This is something we've tuned over the years; we've had different experiences with different shard keys.
Gaspard Petit (35:41):
For us, I don't know if everyone in gaming is doing it this way, but what seems to be the most intuitive and most convenient shard key is the user ID, and we hash it. This way, every user profile goes to a random shard, and we can scale Mongo with pretty much the number of users we have, which is generally what tends to go up and down in our case.
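As an illustration of the hashed shard key approach Gaspard describes, sharding a player-profile collection on a hashed user ID would look something like this in mongosh. The database, collection, and field names here are made up for the example:
```javascript
// Enable sharding for the database, then shard the collection
// on a hashed user ID so profiles spread evenly across shards.
sh.enableSharding("game")
sh.shardCollection("game.playerProfiles", { userId: "hashed" })
```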
Gaspard Petit (36:05):
So we've had a couple of projects, we've had smaller clusters on one, two. We pretty much never have one shard, but two shards, three shards. And we've been up to 30 plus shards in some cases, and it's never really been an issue. The size, Mongo wise, I would say. There's been issues, but it wasn't really with the architecture itself, it was more of the query pattern, or in some cases, we would pull too much data in the cache. And the cache wasn't used efficiently. But there was always a workaround. And it was never really a limitation on the database. So the sharding model works very well for us.
Michael Lynn (36:45):
So I'm curious how you test in that type of scale. I imagine you can duplicate the load patterns, but the number of transactions per second must be difficult to approximate in a development environment. Are you leveraging Atlas for your production load testing?
Gaspard Petit (37:04):
No. Well, yes and no. The initial tests are done on Kubernetes using the Mongo Operator. So this is where we will simulate. For one operation, we will test: will it scale with instance type? So adding more CPU, more RAM. Will it scale with the number of shards? So we do this grid on each operation that the players might be using, ahead of time. At some point, we're comfortable that everything looks right. But testing each operation individually doesn't mean that they will all work fine, that they will all play fine when they're mixed together. So the final mix goes through either the production database, if it's not being used yet, or a copy of something that would look like the production database in Atlas.
Gaspard Petit (37:52):
So we spin up an Atlas database, similar to the one we expect to use in production. And we run the final load test on that one, just to get clear numbers with the real components, to see what it will look like. So it's not necessarily the final cluster we will use; sometimes it's a copy of it. It depends on whether it's available; sometimes there's already certification ongoing, or QA is already testing on production. So we can't hit the production database for that, so we just spin up a different instance of it.
Nic Raboy (38:22):
So this episode has been fantastic so far, I wanted to leave it open for you giving us or the listeners I should say, any kind of last minute words of wisdom or any anything that we might have missed that you think would be valuable for them to walk away with.
Gaspard Petit (38:38):
Sure. So maybe I can share something about why I think we're efficient at what we do and why we're still enjoying the work we're doing. And it has to do a little bit with how we're organized within Square Enix with the different teams. I mentioned earlier that our interaction with the game team was not so much to dictate how the back-end should be for them, but rather to act as experts. And this is something I think we're lucky to have within Square Enix, where our operations team and our development team are not necessarily acting purely as service providers. And this touches Mongo as well. The way we integrate Mongo in our ecosystem is, in part, "Please give us databases, please make sure they're healthy and working, and give us support when we need it." But it's also about tapping into different teams as experts.
Gaspard Petit (39:31):
So Mongo for us is a source of experts where, if we need recommendations about shards, query patterns, even how to use the Java driver, we get a chance to ask MongoDB experts and get accurate feedback on how we should be doing things. And this translates to every level of our processes. We have the ops team that will of course be monitoring and making sure things are healthy, but they're also acting as experts to tell us how the development should be going, or what the best practices are.
Gaspard Petit (40:03):
The back-end dev team does the same thing with the game dev team, where we will bring them our recommendations of how the game should use and consume the services of the back-end, even how they should design some features so that they will scale efficiently, or tell them, "This won't work because the back-end won't scale." We act as experts, and I think that's been key to our success: making sure that each team is not just a service provider, but is also bringing expertise to the table so that every other team can be guided in the right direction.
Gaspard Petit (40:37):
So that's definitely one of the things that I've appreciated over the years. And it's been pushed down from management to every developer, where we have this mentality of acting as experts to others. So we have that embedded-engineers model, where we have some of the folks within our team dedicated to the game teams. And same thing with the ops team: they have dedicated embedded engineers from their team dedicated to our team, making sure that we're not in silos. So that's definitely a recommendation I would give to anyone in this industry: making sure that the silos are broken and that each team is teaching other teams about their best practices.
Michael Lynn (41:21):
Fantastic. And we love that customers are willing to partner in that way and leverage the teams that have those best practices. So Gaspard, I want to thank you for spending so much time with us. It's been wonderful to chat with you and to learn more about how Square Enix is using MongoDB and everything in the game space.
Gaspard Petit (41:40):
Well, thank you very much. It was a pleasure.
Automated (41:44):
Thanks for listening. If you enjoyed this episode, please like and subscribe. Have a question or a suggestion for the show? Visit us in the MongoDB community forums at community.mongodb.com. | md | {
"tags": [
"MongoDB",
"Java",
"Kubernetes",
"Docker"
],
"pageDescription": "Join Michael Lynn and Nic Raboy as they chat with Gaspard Petit of Square Enix to learn how one of the largest and best-loved gaming brands in the world is using MongoDB to scale and grow.",
"contentType": "Podcast"
} | Scaling the Gaming Industry with Gaspard Petit of Square Enix | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/manage-data-at-scale-with-online-archive | created | # How to Manage Data at Scale With MongoDB Atlas Online Archive
Let's face it: Your data can get stale and old quickly. But just because
the data isn't being used as often as it once was doesn't mean that it's
not still valuable or that it won't be valuable again in the future. I
think this is especially true for data sets like internet of things
(IoT) data or user-generated content like comments or posts. (When was
the last time you looked at your tweets from 10 years ago?) This is a
real-time view of my IoT time series data aging.
When managing systems that have massive amounts of data, or systems that
are growing, you may find that paying to save this data becomes
increasingly more costly every single day. Wouldn't it be nice if there
was a way to manage this data in a way that still allows it to be
useable by being easy to query, as well as saving you money and time?
Well, today is your lucky day because with MongoDB Atlas Online
Archive,
you can do all this and more!
With the Online Archive feature in MongoDB
Atlas, you can create a rule to
automatically move infrequently accessed data from your live Atlas
cluster to MongoDB-managed, read-only cloud object storage. Once your
data is archived, you will have a unified view of your Atlas cluster and
your Online Archive using a single endpoint.
>
>
>Note: You can't write to the Online Archive as it is read-only.
>
>
For this demonstration, we will be setting up an Online Archive to
automatically archive comments from the `sample_mflix.comments` sample
dataset that are older than 10 years. We will then connect to our
dataset using a single endpoint and run a query to be sure that we can
still access all of our data, whether it's archived or not.
## Prerequisites
- The Online Archive feature is available on
M10 and greater
clusters that run MongoDB 3.6 or later. So, for this demo, you will
need to create an M10
cluster in MongoDB
Atlas. Click here for information on setting up a new MongoDB Atlas
cluster.
- Ensure that each database has been seeded by loading sample data
into our Atlas
cluster. I will be
using the `sample_mflix.comments` dataset for this demo.
>
>
>If you haven't yet set up your free cluster on MongoDB
>Atlas, now is a great time to do so. You
>have all the instructions in this blog post.
>
>
## Configure Online Archive
Atlas archives data based on the criteria you specify in an archiving
rule. The criteria can be one of the following:
- **A combination of a date and number of days.** Atlas archives data
when the current date exceeds the date plus the number of days
specified in the archiving rule.
- **A custom query.** Atlas runs the query specified in the archiving
rule to select the documents to archive (see the example below).
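As an example of the second option, a custom query rule might select documents that your application has already flagged as stale. The field name here is hypothetical and would depend on your own schema:
``` javascript
{ "status": "inactive" }
```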
In order to configure our Online Archive, first navigate to the Cluster
page for your project, click on the name of the cluster you want to
configure Online Archive for, and click on the **Online Archive** tab.
Next, click the Configure Online Archive button the first time and the
Add Archive button subsequently to start configuring Online Archive for
your collection. Then, you will need to create an Archiving Rule by
specifying the collection namespace, which will be
`sample_mflix.comments` for this demo. You will also need to specify the
criteria for archiving documents. You can either use a custom query or a
date match. For our demo, we will be using a date match and
auto-archiving comments that are older than 10 years (365 days \* 10
years = 3650 days) old. It should look like this when you are done.
Optionally, you can enter up to two most commonly queried fields from
the collection in the Second most commonly queried field and Third most
commonly queried field respectively. These will create an index on your
archived data so that the performance of your online archive queries is
improved. For this demo, we will leave this as is, but if you are using
production data, be sure to analyze which queries you will be performing
most often on your Online Archive.
Before enabling the Online Archive, it's a good idea to run a test to
ensure that you are archiving the data that you intended to archive.
Atlas provides a query for you to test on the confirmation screen. I am
going to connect to my cluster using MongoDB
Compass to test this
query out, but feel free to connect and run the query using any method
you are most comfortable with. The query we are testing here is this.
``` javascript
db.comments.find({
date: { $lte: new Date(ISODate().getTime() - 1000 * 3600 * 24 * 3650)}
})
.sort({ date: 1 })
```
When we run this query against the `sample_mflix.comments` collection,
we find that there is a total of 50.3k documents in this collection, and
after running our query to find all of the comments that are older than
10 years old, we find that 43,451 documents would be archived using this
rule. It's a good idea to scan through the documents to check that these
comments are in fact older than 10 years old.
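If you'd rather not eyeball the results, you can also have the database
count the matching documents for you. A quick check in mongosh (or the
Compass shell) might look like this:
``` javascript
db.comments.countDocuments({
date: { $lte: new Date(ISODate().getTime() - 1000 * 3600 * 24 * 3650) }
})
// should match the 43,451 documents reported above
```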
So, now that we have confirmed that this is in fact correct and that we
do want to enable this Online Archive rule, head back to the *Configure
an Online Archive* page and click **Begin Archiving**.
Lastly, verify and confirm your archiving rule, and then your collection
should begin archiving your data!
>
>
>Note: Once your document is queued for archiving, you can no longer edit
>the document.
>
>
## How to Access Your Archived Data
Okay, now that your data has been archived, we still want to be able to
use this data, right? So, let's connect to our Online Archive and test
that our data is still there and that we are still able to query our
archived data, as well as our active data.
First, navigate to the *Clusters* page for your project on Atlas, and
click the **Connect** button for the cluster you have Online Archive
configured for. Choose your connection method. I will be using
Compass for this
example. Select **Connect to Cluster and Online Archive** to get the
connection string that allows you to federate queries across your
cluster and Online Archive.
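If you prefer the shell to Compass, the same federated connection string
works with mongosh. A quick sanity check might look like this, substituting
the connection string you copied from the Connect dialog:
``` javascript
// mongosh "<your cluster + Online Archive connection string>"
db.getSiblingDB("sample_mflix").comments.countDocuments()
// should report all 50.3k documents, archived and active combined
```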
After navigating to the `sample_mflix.comments` collection, we can see
that we have access to all 50.3k documents in this collection, even
after archiving our old data! This means that from a development point
of view, there are no changes to how we query our data, since we can
access archived data and active data all from one single endpoint! How
cool is that?
## Wrap-Up
There you have it! In this post, we explored how to manage your MongoDB
data at scale using MongoDB Atlas Online Archive. We set up an Online
Archive so that Atlas automatically archived comments from the
`sample_mflix.comments` dataset that were older than 10 years. We then
connected to our dataset and made a query in order to be sure that we
were still able to access and query all of our data from a unified
endpoint, regardless of it being archived or not. This technique of
archiving stale data can be a powerful feature for dealing with datasets
that are massive and/or growing quickly in order to save you time,
money, and development costs as your data demands grow.
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
## Additional resources:
- Archive Cluster Data
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to efficiently manage your data at scale by leveraging MongoDB Atlas Online Archive.",
"contentType": "Tutorial"
} | How to Manage Data at Scale With MongoDB Atlas Online Archive | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/rust/serde-improvements | created | # Structuring Data With Serde in Rust
## Introduction
This post details new upgrades in the Rust MongoDB Driver and BSON library to improve our integration with Serde. In the Rust Quick Start blog post, we discussed the trickiness of working with BSON, which has a dynamic schema, in Rust, which uses a static type system. The MongoDB Rust driver and BSON library use Serde to make the conversion between BSON and Rust structs and enums easier. In the 1.2.0 releases of these two libraries, we've included new Serde integration to make working directly with your own Rust data types more seamless and user-friendly.
## Prerequisites
This post assumes that you have a recent version of the Rust toolchain installed (v1.44+), and that you're comfortable with Rust syntax. It also assumes you're familiar with the Rust Serde library.
## Driver Changes
The 1.2.0 Rust driver release introduces a generic type parameter to the Collection type. The generic parameter represents the type of data you want to insert into and find from your MongoDB collection. Any Rust data type that derives/implements the Serde Serialize and Deserialize traits can be used as a type parameter for a Collection.
For example, I'm working with the following struct that defines the schema of the data in my `students` collection:
``` rust
#[derive(Serialize, Deserialize)]
struct Student {
name: String,
grade: u32,
test_scores: Vec<u32>,
}
```
I can create a generic `Collection` by using the `Database::collection_with_type` method and specifying `Student` as the data type I'm working with.
``` rust
let students: Collection<Student> = db.collection_with_type("students");
```
Prior to the introduction of the generic `Collection`, the various CRUD `Collection` methods accepted and returned the Document type. This meant I would need to serialize my `Student` structs to `Document`s before inserting them into the students collection. Now, I can insert a `Student` directly into my collection:
``` rust
let student = Student {
name: "Emily".to_string(),
grade: 10,
test_scores: vec![],
};
students.insert_one(student, None).await?;
```
## BSON Changes
Serde provides `serialize_with` and `deserialize_with` attributes that allow you to specify functions to use for serialization and deserialization on specific fields and variants.
The BSON library now includes a set of functions that implement common strategies for custom serialization and deserialization when working with BSON. You can use these functions by importing them from the `serde_helpers` module in the `bson-rust` crate and using the `serialize_with` and `deserialize_with` attributes. A few of these functions are detailed below.
Some users prefer to represent the object ID field in their data with a hexadecimal string rather than the BSON library ObjectId type:
``` rust
#[derive(Serialize, Deserialize)]
struct Item {
oid: String,
// rest of fields
}
```
We've introduced a method for serializing a hex string into an `ObjectId` in the `serde_helpers` module called `serialize_hex_string_as_object_id`. I can annotate my `oid` field with this function using `serialize_with`:
``` rust
#[derive(Serialize, Deserialize)]
struct Item {
#[serde(serialize_with = "serialize_hex_string_as_object_id")]
oid: String,
// rest of fields
}
```
Now, if I serialize an instance of the `Item` struct into BSON, the `oid` field will be represented by an `ObjectId` rather than a `string`.
We've also introduced modules that take care of both serialization and deserialization. For instance, I might want to represent binary data using the `Uuid` type in the Rust uuid crate:
``` rust
#[derive(Serialize, Deserialize)]
struct Item {
uuid: Uuid,
// rest of fields
}
```
Since BSON doesn't have a specific UUID type, I'll need to convert this data into binary if I want to serialize into BSON. I'll also want to convert back to Uuid when deserializing from BSON. The `uuid_as_binary` module in the `serde_helpers` module can take care of both of these conversions. I'll add the following attribute to use this module:
``` rust
#[derive(Serialize, Deserialize)]
struct Item {
#[serde(with = "uuid_as_binary")]
uuid: Uuid,
// rest of fields
}
```
Now, I can work directly with the Uuid type without needing to worry about how to convert it to and from BSON!
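As a quick sanity check, here's a minimal sketch of that round trip. It assumes the `bson` crate's UUID support is enabled (the exact feature flags and module paths can vary between `bson` versions), and it trims the struct down to just the `uuid` field:
``` rust
use bson::serde_helpers::uuid_as_binary;
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Serialize, Deserialize)]
struct Item {
    #[serde(with = "uuid_as_binary")]
    uuid: Uuid,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let item = Item { uuid: Uuid::new_v4() };

    // Serializing stores the UUID as BSON binary data...
    let as_bson = bson::to_bson(&item)?;

    // ...and deserializing converts it back into a `Uuid`.
    let round_tripped: Item = bson::from_bson(as_bson)?;
    assert_eq!(item.uuid, round_tripped.uuid);
    Ok(())
}
```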
The `serde_helpers` module introduces functions for several other common strategies; you can check out the documentation here.
### Unsigned Integers
The BSON specification defines two integer types: a signed 32-bit integer and a signed 64-bit integer. This can present challenges when you attempt to insert data with unsigned integers into your collections.
My `Student` struct from the previous example contains unsigned integers in the `grade` and `test_scores` fields. Previous versions of the BSON library would return an error if I attempted to serialize an instance of this struct into a `Document`, since there isn't always a clear mapping between unsigned and signed integer types. However, many unsigned integers can fit into signed types! For example, I might want to create the following student:
``` rust
let student = Student {
name: "Alyson".to_string(),
grade: 11,
test_scores: vec![],
};
```
Since each of these values fits into a signed BSON integer type, the 1.2.0 BSON library will now serialize this struct successfully rather than returning an error.
## Conclusion
For more details on working with MongoDB in Rust, you can check out the documentation for the Rust driver and BSON library. We also happily accept contributions in the form of GitHub pull requests - please see the section in our README for info on how to run our tests.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Rust",
"MongoDB"
],
"pageDescription": "New upgrades in the Rust MongoDB driver and BSON library improve integration with Serde.",
"contentType": "Article"
} | Structuring Data With Serde in Rust | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/data-api-postman | created | # Accessing Atlas Data in Postman with the Data API
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
MongoDB's new Data API is a great way to access your MongoDB Atlas data using a REST-like interface. When enabled, the API creates a series of serverless endpoints that can be accessed without using native drivers. This API can be helpful when you need to access your data from an application that doesn't use those drivers, such as a bash script, a Google Sheet document, or even another database.
To explore the new MongoDB Data API, you can use the public Postman workspace provided by the MongoDB developer relations team.
In this article, we will show you how to use Postman to read and write to your MongoDB Atlas cluster.
## Getting started
You will need to set up your Atlas cluster and fork the Postman collection to start using it. Here are the detailed instructions if you need them.
### Set up your MongoDB Atlas cluster
The first step to using the Data API is to create your own MongoDB Atlas cluster. If you don't have a cluster available already, you can get one for free. Follow the instructions from the documentation for the detailed directions on setting up your MongoDB Atlas instance.
### Enable the Data API
Enabling the Data API on your MongoDB Atlas data collections is done with a few clicks. Once you have a cluster up and running, you can enable the Data API by following these instructions.
### Fork the Postman collection
You can use the button below to open the fork in your Postman workspace or follow the instructions provided in this section.
![Run in Postman](https://god.gw.postman.com/run-collection/17898583-25682080-e247-4d25-8e5c-1798461c7db4?action=collection%2Ffork&collection-url=entityId%3D17898583-25682080-e247-4d25-8e5c-1798461c7db4%26entityType%3Dcollection%26workspaceId%3D8355a86e-dec2-425c-9db0-cb5e0c3cec02)
From the public MongoDB workspace on Postman, you will find two collections. The second one from the list, the _MongoDB Data API_, is the one you are interested in. Click on the three dots next to the collection name and select _Create a fork_ from the popup menu.
Then follow the instructions on the screen to add this collection to your workspace. By forking this collection, you will be able to pull the changes from the official collection as the API evolves.
### Fill in the required variables
You will now need to configure your Postman collections to be ready to use your MongoDB collection. Start by opening the _Variables_ tab in the Postman collection.
You will need to fill in the values for each variable. If you don't want the variables to be saved in your collection, use the _Current value_ column. If you're going to reuse those same values next time you log in, use the _Initial value_ column.
For the `URL_ENDPOINT` variable, go to the Data API screen on your Atlas cluster. The URL endpoint should be right there at the top. Click on the _Copy_ button and paste the value in Postman.
Next, for the `API_KEY`, click on _Create API Key_. This will open up a modal window. Give your key a unique name and click on _Generate Key_. Again, click on the _Copy_ button and paste it into Postman.
Now fill in the `CLUSTER_NAME` with the name of your cluster. If you've used the default values when creating the cluster, it should be *Cluster0*. For `DATABASE` and `COLLECTION`, you can use an existing database if you have one ready. If the database and collection you specify do not exist, they will be created upon inserting the first document.
Once you've filled in those variables, click on _Save_ to persist your data in Postman.
## Using the Data API
You are now ready to use the Data API from your Postman collection.
### Insert a document
Start with the first request in the collection, the one called "Insert Document."
Start by selecting the request from the left menu. If you click on the _Body_ tab, you will see what will be sent to the Data API.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"document": {
"name": "John Sample",
"age": 42
}
}
```
Here, you can see that we are using the workspace variables for the cluster, database, and collection names. The `document` property contains the document we want to insert into the collection.
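Postman is simply issuing an HTTP POST on your behalf. To see roughly what that looks like outside of Postman, the same request could be made with curl, using the endpoint and API key you stored in the collection variables (shown here as shell variables, with placeholder database and collection names):
```bash
curl -s "$URL_ENDPOINT/action/insertOne" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "api-key: $API_KEY" \
  -d '{
    "dataSource": "Cluster0",
    "database": "mydatabase",
    "collection": "mycollection",
    "document": { "name": "John Sample", "age": 42 }
  }'
```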
Now hit the blue _Send_ button to trigger the request. In the bottom part of the screen, you will see the response from the server. You should see something similar to:
```json
{"insertedId":"61e07acf63093e54f3c6098c"}
```
This `insertedId` is the _id_ value of the newly created document. If you go to the Atlas UI, you will see the newly created document in the collection in the data explorer. Since you already have access to the Data API, why not use the API to see the inserted value?
### Find a document
Select the following request in the list, the one called "Find Document." Again, you can look at the body of the request by selecting the matching tab. In addition to the cluster, database, and collection names, you will see a `filter` property.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"filter": { "name": "John Sample" }
}
```
The filter is the criteria that will be used for the query. In this case, you are searching for a person named "John Sample."
Click the Send button again to trigger the request. This time, you should see the document itself.
```json
{"document":{"_id":"61e07acf63093e54f3c6098c","name":"John Sample","age":42}}
```
You can use any MongoDB query operators to filter the records you want. For example, if you wanted the first document for a person older than 40, you could use the $gt operator.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"filter": { "age": {"$gt": 40} }
}
```
This last query should return you the same document again.
### Update a document
Say you made a typo when you entered John's information. He is not 42 years old, but rather 24. You can use the Data API to perform an update. Select the "Update Document" request from the list on the left, and click on the _Body_ tab. You will see the body for an update request.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"filter": { "name": "John Sample" },
"update": { "$set": { "age": 24 } }
}
```
In this case, you can see a `filter` to find a document for a person with the name John Sample. The `update` field specifies what to update. You can use any update operator here. We've used `$set` for this specific example to change the value of the age field to `24`. Running this query should give you the following result.
```json
{"matchedCount":1,"modifiedCount":1}
```
This response tells us that the operation succeeded and that one document has been modified. If you go back to the "Find Document" request and run it for a person older than 40 again, this time, you should get the following response.
```json
{"document":null}
```
The `null` value is returned because no items match the criteria passed in the `filter` field.
### Delete a document
The process to delete a document is very similar. Select the "Delete Document" request from the left navigation bar, and click on the _Body_ tab to see the request's body.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"filter": { "name": "John Sample" }
}
```
Just as in the "Find Document" endpoint, there is a filter field to select the document to delete. If you click on Send, this request will delete the person with the name "John Sample" from the collection. The response from the server is:
```json
{"deletedCount":1}
```
So you can see how many matching records were deleted from the database.
### Operations on multiple documents
So far, we have done each operation on single documents. The endpoints `/insertOne`, `/findOne`, `/updateOne`, and `/deleteOne` were used for that purpose. Each endpoint has a matching endpoint to perform operations on multiple documents in your collection.
You can find examples, along with the usage instructions for each endpoint, in the Postman collection.
Some of those endpoints can be very helpful. The `/find` endpoint can return all the documents in a collection, which can be helpful for importing data into another database. You can also use the `/insertMany` endpoint to import large chunks of data into your collections.
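As an illustration, the body of an `/insertMany` request follows the same shape as `/insertOne`, except that it takes a `documents` array instead of a single `document`:
```json
{
  "dataSource": "{{CLUSTER_NAME}}",
  "database": "{{DATABASE}}",
  "collection": "{{COLLECTION}}",
  "documents": [
    { "name": "Jane Sample", "age": 31 },
    { "name": "Joe Sample", "age": 56 }
  ]
}
```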
However, use extreme care with `/updateMany` and `/deleteMany` since a small error could potentially destroy all the data in your collection.
### Aggregation Pipelines
One of the most powerful features of MongoDB is the ability to create aggregation pipelines. These pipelines let you create complex queries using an array of JSON objects. You can also perform those queries on your collection with the Data API.
In the left menu, pick the "Run Aggregation Pipeline" item. You can use this request for running those pipelines. In the _Body_ tab, you should see the following JSON object.
```json
{
"dataSource": "{{CLUSTER_NAME}}",
"database": "{{DATABASE}}",
"collection": "{{COLLECTION}}",
"pipeline":
[
{
"$sort": { "age": 1 }
},
{
"$limit": 1
}
]
}
```
Here, we have a pipeline that will take all of the objects in the collection, sort them by ascending age using the `$sort` stage, and only return the first result using the `$limit` stage. This pipeline will return the youngest person in the collection.
If you want to test it out, you can first run the "Insert Multiple Documents" request to populate the collection with multiple records.
## Summary
There you have it! A fast and easy way to test out the Data API or explore your MongoDB Atlas data using Postman. If you want to learn more about the Data API, check out the Atlas Data API Introduction blog post. If you are more interested in automating operations on your Atlas cluster, there is another API called the Management API. You can learn more about the latter on the Automate Automation on MongoDB Atlas blog post.
| md | {
"tags": [
"Atlas",
"JavaScript",
"Postman API"
],
"pageDescription": "MongoDB's new Data API is a great way to access your MongoDB Atlas data using a REST-like interface. In this article, we will show you how to use Postman to read and write to your MongoDB Atlas cluster.",
"contentType": "Tutorial"
} | Accessing Atlas Data in Postman with the Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/window-functions-and-time-series | created | # Window Functions & Time Series Collections
Window functions and time series collections are both features that were added to MongoDB 5.0. Window functions allow you to run a window across a sorted set of documents, producing calculations over each step of the window, like rolling average or correlation scores. Time-series collections *dramatically* reduce the storage cost and increase the performance of MongoDB when working with time-series data. Window functions can be run on *any* type of collection in MongoDB, not just time-series collections, but the two go together like ... two things that go together really well. I'm not a fan of peanut butter and jelly sandwiches, but you get the idea!
In this article, I'll help you get set up with a data project I've created, and then I'll show you how to run some window functions across the data. These kinds of operations were possible in earlier versions of MongoDB, but window functions, with the `$setWindowFields` stage, make these operations relatively straightforward.
# Prerequisites
This post assumes you already know the fundamentals of time series collections, and it may also be helpful to understand how to optimize your time series collections.
You'll also need the following software installed on your development machine to follow along with the code in the sample project:
* Just
* Mongo Shell (mongosh)
* Mongoimport
Once you have your time series collections correctly set up, and you're filling them with lots of time series data, you'll be ready to start analyzing the data you're collecting. Because Time Series collections are all about, well, time, you're probably going to run *temporal* operations on the collection to get the most recent or oldest data in the collection. You will also probably want to run calculations across measurements taken over time. That's where MongoDB's new window functions are especially useful.
Temporal operators and window functions can be used with *any* type of collection, but they're especially useful with time series data, and time series collections will be increasingly optimized for use with these kinds of operations.
# Getting The Sample Data
I found some stock exchange data on Kaggle, and I thought it might be fun to analyse it. I used version 2 of the dataset.
I've written some scripts to automate the process of creating a time series collection and importing the data into the collection. I've also automated running some of the operations described below on the data, so you can see the results. You can find the scripts on GitHub, along with information on how to run them if you want to do that while you're following along with this blog post.
# Getting Set Up With The Sample Project
At the time of writing, time series collections have only just been released with the release of MongoDB 5.0. As such, integration with the Aggregation tab of the Atlas Data Explorer interface isn't complete, and neither is integration with MongoDB Charts.
In order to see the results of running window functions and temporal operations on a time series collection, I've created some sample JavaScript code for running aggregations on a collection, and exported them to a new collection using $merge. This is the technique for creating materialized views in MongoDB.
I've glued all the scripts together using a task runner called Just. It's a bit like Make, if you've used that, but easier to install and use. You don't have to use it, but it has some neat features like reading config from a dotenv file automatically. I highly recommend you try it out!
First create a file called ".env", and add a configuration variable called `MDB_URI`, like this:
```
MDB_URI="mongodb+srv://USERNAME:[email protected]/DATABASE?retryWrites=true&w=majority"
```
Your URI and the credentials in it will be different, and you can get it from the Atlas user interface, by logging in to Atlas and clicking on the "Connect" button next to your cluster details. Make sure you've spun up a MongoDB 5.0 cluster, or higher.
Once you've saved the .env file, open your command-line to the correct directory and run `just connect` to test the configuration - it'll instruct `mongosh` to open up an interactive shell connected to your cluster.
You can run `db.ping()` just to check that everything's okay, and then type exit followed by the "Enter" key to quit mongosh.
# Create Your Time Series Collection
You can run `just init` to create the collection, but if you're not using Just, then the command to run inside mongosh to create your collection is:
```
// From init_database.js
db.createCollection("stock_exchange_data", {
timeseries: {
timeField: "ts",
metaField: "source",
granularity: "hours"
}
});
```
This will create a time-series collection called "stock\_exchange\_data", with a time field of "ts", a metaField of "source" (specifying the stock exchange each set of measurements is relevant to), and because there is one record *per source* per day, I've chosen the closest granularity, which is "hours".
# Import The Sample Dataset
If you run `just import` it'll import the data into the collection you just created, via the following CLI command:
```
mongoimport --uri $MDB_URI indexProcessed.json --collection stock_exchange_data
```
> **Note:** When you're importing data into a time-series collection, it's very important that your data is in chronological order, otherwise the import will be very slow!
A single sample document looks like this:
```
{
"source": {
"Region": "Hong Kong",
"Exchange": "Hong Kong Stock Exchange",
"Index": "HSI",
"Currency": "HKD"
},
"ts": {
"$date": "1986-12-31T00:00:00+00:00"
},
"open": {
"$numberDecimal": "2568.300049"
},
"high": {
"$numberDecimal": "2568.300049"
},
"low": {
"$numberDecimal": "2568.300049"
},
"close": {
"$numberDecimal": "2568.300049"
},
"adjustedClose": {
"$numberDecimal": "2568.300049"
},
"volume": {
"$numberDecimal": "0.0"
},
"closeUSD": {
"$numberDecimal": "333.87900637"
}
}
```
In a way that matches the collection's time-series parameters, "ts" contains the timestamp for the measurements in the document, and "source" contains metadata describing the source of the measurements - in this case, the Hong Kong Stock Exchange.
You can read about the meaning of each of the measurements in the documentation for the dataset. I'll mainly be working with "closeUSD", which is the closing value for the exchange, in dollars at the end of the specified day.
# Window Functions
Window functions allow you to apply a calculation to values in a series of ordered documents, either over a specified window of time, or a specified number of documents.
I want to visualise the results of these operations in Atlas Charts. You can attach an Aggregation Pipeline to a Charts data source, so you can use `$setWindowFields` directly in data source aggregations. In this case, though, I'll show you how to run the window functions with a `$merge` stage, writing to a new collection, and then the new collection can be used as a Charts data source. This technique of writing pre-calculated results to a new collection is often referred to as a *materialized view*, or colloquially with time-series data, a *rollup*.
First, I charted the "stock\_exchange\_data" in MongoDB Charts, with "ts" (the timestamp) on the x-axis, and "closeUSD" on the y axis, separated into series by "source.exchange." I've specifically filtered the data to the year of 2008, so I could investigate the stock market values during the credit crunch at the end of the year.
You'll notice that the data above is quite spiky. A common way to smooth out spiky data is by running a rolling average on the data, where each day's data is averaged with the previous 5 days, for example.
The following aggregation pipeline will create a smoothed chart:
```
[{
$setWindowFields: {
partitionBy: "$source",
sortBy: { ts: 1 },
output: {
"window.rollingCloseUSD": {
$avg: "$closeUSD",
window: {
documents: [-5, 0]
}
}
}
}
},
{
$merge: {
into: "stock_exchange_data_processed",
whenMatched: "replace"
}
}]
```
The first step applies the $avg window function to the closeUSD value. The data is partitioned by "$source" because the different stock exchanges are discrete series, and should be averaged separately. I've chosen to create a window over 6 documents at a time, rather than 6 days, because there are no values over the weekend, and this means each value will be created as an average of an equal number of documents, whereas otherwise the first day of each week would only include values from the last 3 days from the previous week.
The second $merge stage stores the result of the aggregation in the "stock\_exchange\_data\_processed" collection. Each document will be identical to the equivalent document in the "stock\_exchange\_data" collection, but with an extra field, "window.rollingCloseUSD".
Plotting this data shows a much smoother chart, and the drop in various exchanges in September can more clearly be seen.
It's possible to run more than one window function over the same collection in a single $setWindowFields stage, providing they all operate on the same sequence of documents (although the window specification can be different).
The file window\_functions.js contains the following stage, that executes two window functions on the collection:
```
{
$setWindowFields: {
partitionBy: "$source",
sortBy: { ts: 1 },
output: {
"window.rollingCloseUSD": {
$avg: "$closeUSD",
window: {
documents: [-5, 0]
}
},
"window.dailyDifference": {
$derivative: {
input: "$closeUSD",
unit: "day"
},
window: {
documents: [-1, 0]
}
},
}
}
}
```
Notice that although the sort order of the collection must be shared across both window functions, they can specify the window individually - the $avg function operates on a window of 6 documents, whereas the $derivative executes over pairs of documents.
The derivative plot, filtered for just the New York Stock Exchange, is below:
This shows the daily difference in the market value at the end of each day. I'm going to admit that I've cheated slightly here, to demonstrate the `$derivative` window function. It would probably have been more appropriate to just subtract `$first` from `$last`. But that's a blog post for a different day.
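If you're curious, a sketch of that alternative might look like the pipeline fragment below, using the `$first` window function to pick up the previous day's close and then subtracting it in a follow-up stage. The intermediate field name is my own choice:
```
{
  $setWindowFields: {
    partitionBy: "$source",
    sortBy: { ts: 1 },
    output: {
      "window.previousClose": {
        $first: "$closeUSD",
        window: { documents: [-1, 0] }
      }
    }
  }
},
{
  $addFields: {
    "window.dailyDifference": {
      $subtract: ["$closeUSD", "$window.previousClose"]
    }
  }
}
```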
The chart above is quite spiky, so I added another window function in the next stage, to average out the values over 10 days:
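That smoothing stage isn't shown above, but it follows exactly the same pattern as the earlier rolling average. A sketch of what it might look like, with an output field name of my own choosing:
```
{
  $setWindowFields: {
    partitionBy: "$source",
    sortBy: { ts: 1 },
    output: {
      "window.smoothedDailyDifference": {
        $avg: "$window.dailyDifference",
        window: { documents: [-9, 0] }
      }
    }
  }
}
```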
Those two big troughs at the end of the year really highlight when the credit crunch properly started to hit. Remember that just because you've calculated a value with a window function in one stage, there's nothing to stop you feeding that value into a later `$setWindowFields` stage, like I have here.
# Conclusion
Window functions are a super-powerful new feature of MongoDB, and I know I'm going to be using them with lots of different types of data - but especially with time-series data. I hope you found this article useful!
For more on time-series, our official documentation is comprehensive and very readable. For more on window functions, there's a good post by Guy Harrison about analyzing covid data, and as always, Paul Done's book Practical MongoDB Aggregations has some great content on these topics.
If you're interested in learning more about how time-series data is stored under-the-hood, check out my colleague John's very accessible blog post.
And if you have any questions, or you just want to show us a cool thing you built, definitely check out MongoDB Community!
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Let's load some data into a time series collection and then run some window functions over it, to calculate things like moving average, derivatives, and others.",
"contentType": "Article"
} | Window Functions & Time Series Collections | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/manage-game-user-profiles-mongodb-phaser-javascript | created | md | {
"tags": [
"JavaScript",
"Atlas"
],
"pageDescription": "Learn how to work with user profiles in a Phaser game with JavaScript and MongoDB.",
"contentType": "Tutorial"
} | Manage Game User Profiles with MongoDB, Phaser, and JavaScript | 2024-05-20T17:32:23.501Z |
|
devcenter | https://www.mongodb.com/developer/products/realm/realm-ios15-swiftui | created | # Most Useful iOS 15 SwiftUI Features
## Introduction
I'm all-in on using SwiftUI to build iOS apps. I find it so much simpler than wrangling with storyboards and UIKit. Unfortunately, there are still occasions when SwiftUI doesn't let you do what you need—forcing you to break out into UIKit.
That's why I always focus on Apple's SwiftUI enhancements at each year's WWDC. And, each year I'm rewarded with a few more enhancements that make SwiftUI more powerful and easy to work with. For example, iOS14 made it much easier to work with Apple Maps.
WWDC 2021 was no exception, introducing a raft of SwiftUI enhancements that were coming in iOS 15/ SwiftUI 3 / Xcode 13. As iOS 15 has now been released, it feels like a good time to cover the features that I've found the most useful.
I've revisited some of my existing iOS apps to see how I could exploit the new iOS 15 SwiftUI features to improve the user experience and/or simplify my code base. This article steps through the features I found most interesting/useful, and how I tested them out on my apps. These are the apps/branches that I worked with:
- RCurrency
- RChat
- LiveTutorial2021
- task-tracker-swiftui
## Prerequisites
- Xcode 13
- iOS 15
- Realm-Cocoa (varies by app, but 10.13.0+ is safe for them all)
## Lists
SwiftUI `List`s are pretty critical to data-based apps. I use `List`s in almost every iOS app I build, typically to represent objects stored in Realm. That's why I always go there first when seeing what's new.
### Custom Swipe Options
We've all used mobile apps where you swipe an item to the left for one action, and to the right for another. SwiftUI had a glaring omission—the only supported action was to swipe left to delete an item.
This was a massive pain.
This limitation meant that my task-tracker-swiftui app had a cumbersome UI. You had to click on a task to expose a sheet that let you click on your preferred action.
With iOS 15, I can replace that popup sheet with swipe actions:
The swipe actions are implemented in `TasksView`:
```swift
List {
ForEach(tasks) { task in
TaskView(task: task)
.swipeActions(edge: .leading) {
if task.statusEnum == .Open || task.statusEnum == .InProgress {
CompleteButton(task: task)
}
if task.statusEnum == .Open || task.statusEnum == .Complete {
InProgressButton(task: task)
}
if task.statusEnum == .InProgress || task.statusEnum == .Complete {
NotStartedButton(task: task)
}
}
.swipeActions(edge: .trailing) {
Button(role: .destructive, action: { $tasks.remove(task) }) {
Label("Delete", systemImage: "trash")
}
}
}
}
```
The role of the delete button is set to `.destructive` which automatically sets the color to red.
For the other actions, I created custom buttons. For example, this is the code for `CompleteButton`:
```swift
struct CompleteButton: View {
@ObservedRealmObject var task: Task
var body: some View {
Button(action: { $task.statusEnum.wrappedValue = .Complete }) {
Label("Complete", systemImage: "checkmark")
}
.tint(.green)
}
}
```
### Searchable Lists
When you're presented with a long list of options, it helps the user if you offer a way to filter the results.
RCurrency lets the user choose between 150 different currencies. Forcing the user to scroll through the whole list wouldn't make for a good experience. A search bar lets them quickly jump to the items they care about:
The selection of the currency is implemented in the `SymbolPickerView` view.
The view includes a state variable to store the `searchText` (the characters that the user has typed) and a `searchResults` computed value that uses it to filter the full list of symbols:
```swift
struct SymbolPickerView: View {
...
@State private var searchText = ""
...
var searchResults: Dictionary<String, String> {
if searchText.isEmpty {
return Symbols.data.symbols
} else {
return Symbols.data.symbols.filter {
$0.key.contains(searchText.uppercased()) || $0.value.contains(searchText)}
}
}
}
```
The `List` then loops over those `searchResults`. We add the `.searchable` modifier to add the search bar, and bind it to the `searchText` state variable:
```swift
List {
ForEach(searchResults.sorted(by: <), id: \.key) { symbol in
...
}
}
.searchable(text: $searchText)
```
This is the full view:
```swift
struct SymbolPickerView: View {
@Environment(\.presentationMode) var presentationMode
var action: (String) -> Void
let existingSymbols: [String]
@State private var searchText = ""
var body: some View {
List {
ForEach(searchResults.sorted(by: <), id: \.key) { symbol in
Button(action: {
pickedSymbol(symbol.key)
}) {
HStack {
Image(symbol.key.lowercased())
Text("\(symbol.key): \(symbol.value)")
}
.foregroundColor(existingSymbols.contains(symbol.key) ? .secondary : .primary)
}
.disabled(existingSymbols.contains(symbol.key))
}
}
.searchable(text: $searchText)
.navigationBarTitle("Pick Currency", displayMode: .inline)
}
private func pickedSymbol(_ symbol: String) {
action(symbol)
presentationMode.wrappedValue.dismiss()
}
var searchResults: Dictionary<String, String> {
if searchText.isEmpty {
return Symbols.data.symbols
} else {
return Symbols.data.symbols.filter {
$0.key.contains(searchText.uppercased()) || $0.value.contains(searchText)}
}
}
}
```
## Pull to Refresh
We've all used this feature in iOS apps. You're impatiently waiting on an important email, and so you drag your thumb down the page to get the app to check the server.
This feature isn't always helpful for apps that use Realm and Atlas Device Sync. When the Atlas cloud data changes, the local realm is updated, and your SwiftUI view automatically refreshes to show the new data.
However, the feature **is** useful for the RCurrency app. I can use it to refresh all of the locally-stored exchange rates with fresh data from the API:
*Animation showing currencies being refreshed when the screen is dragged down*
We allow the user to trigger the refresh by adding a `.refreshable` modifier and action (`refreshAll`) to the list of currencies in `CurrencyListContainerView`:
```swift
List {
ForEach(userSymbols.symbols, id: \.self) { symbol in
CurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,
baseAmount: $baseAmount,
symbol: symbol,
refreshNeeded: refreshNeeded)
.listRowSeparator(.hidden)
}
.onDelete(perform: deleteSymbol)
}
.refreshable{ refreshAll() }
```
In that code snippet, you can see that I added the `.listRowSeparator(.hidden)` modifier to the `List`. This is another iOS 15 feature that hides the line that would otherwise be displayed between each `List` item. Not a big feature, but every little bit helps in letting us use native SwiftUI to get the exact design we want.
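If you want to try `.refreshable` in isolation before wiring it into Realm-backed data, a minimal standalone sketch looks like this. The view, state, and `reload` function here are placeholders rather than RCurrency's real code:
```swift
struct RatesView: View {
    @State private var rates: [String: Double] = [:]

    var body: some View {
        List(rates.sorted(by: { $0.key < $1.key }), id: \.key) { rate in
            Text("\(rate.key): \(rate.value)")
        }
        .refreshable {
            // `.refreshable` accepts an async closure, so you can await a network call here.
            await reload()
        }
    }

    private func reload() async {
        // Placeholder: fetch fresh rates from your API and update state.
        rates["USD"] = 1.0
    }
}
```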
## Text
### Markdown
I'm a big fan of Markdown. Markdown lets you write formatted text (including tables, links, and images) without taking your hands off the keyboard. I added this post to our CMS in markdown.
iOS 15 allows you to render markdown text within a `Text` view. If you pass a string literal to a `Text` view, then the markdown is automatically rendered correctly:
```swift
struct MarkDownTest: View {
var body: some View {
Text("Let's see some **bold**, *italics* and some ***bold italic text***. ~~Strike that~~. We can even include a link.")
}
}
```
But, it doesn't work out of the box for string constants or variables (e.g., data read from Realm):
```swift
struct MarkDownTest: View {
let myString = "Let's see some **bold**, *italics* and some ***bold italic text***. ~~Strike that~~. We can even include a link."
var body: some View {
Text(myString)
}
}
```
The issue is that the version of `Text` that renders markdown expects to be passed an `AttributedString`. I created this simple `Markdown` view to handle this for us:
```swift
struct MarkDown: View {
let text: String
@State private var formattedText: AttributedString?
var body: some View {
Group {
if let formattedText = formattedText {
Text(formattedText)
} else {
Text(text)
}
}
.onAppear(perform: formatText)
}
private func formatText() {
do {
try formattedText = AttributedString(markdown: text)
} catch {
print("Couldn't convert this from markdown: \(text)")
}
}
}
```
I updated the `ChatBubbleView` in RChat to use the `Markdown` view:
```swift
if chatMessage.text != "" {
MarkDown(text: chatMessage.text)
.padding(Dimensions.padding)
}
```
RChat now supports markdown in user messages:
### Dates
We all know that working with dates can be a pain. At least in iOS 15 we get some nice new functionality to control how we display dates and times. We use the new `Date.formatted` syntax.
In RChat, I want the date/time information included in a chat bubble to depend on how recently the message was sent. If a message was sent less than a minute ago, then I care about the time to the nearest second. If it were sent a day ago, then I want to see the day of the week plus the hour and minutes. And so on.
I created a `TextDate` view to perform this conditional formatting:
```swift
struct TextDate: View {
let date: Date
private var isLessThanOneMinute: Bool { date.timeIntervalSinceNow > -60 }
private var isLessThanOneDay: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 }
private var isLessThanOneWeek: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 * 7}
private var isLessThanOneYear: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 * 365}
var body: some View {
if isLessThanOneMinute {
Text(date.formatted(.dateTime.hour().minute().second()))
} else {
if isLessThanOneDay {
Text(date.formatted(.dateTime.hour().minute()))
} else {
if isLessThanOneWeek {
Text(date.formatted(.dateTime.weekday(.wide).hour().minute()))
} else {
if isLessThanOneYear {
Text(date.formatted(.dateTime.month().day()))
} else {
Text(date.formatted(.dateTime.year().month().day()))
}
}
}
}
}
}
```
This preview code lets me check that it's working in the Xcode Canvas preview:
```swift
struct TextDate_Previews: PreviewProvider {
static var previews: some View {
VStack {
TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24 * 365)) // 1 year ago
TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24 * 7)) // 1 week ago
TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24)) // 1 day ago
TextDate(date: Date(timeIntervalSinceNow: -60 * 60)) // 1 hour ago
TextDate(date: Date(timeIntervalSinceNow: -60)) // 1 minute ago
TextDate(date: Date()) // Now
}
}
}
```
We can then use `TextDate` in RChat's `ChatBubbleView` to add context-sensitive date and time information:
```swift
TextDate(date: chatMessage.timestamp)
.font(.caption)
```
## Keyboards
Customizing keyboards and form input was a real pain in the early days of SwiftUI—take a look at the work we did for the WildAid O-FISH app if you don't believe me. Thankfully, iOS 15 has shown some love in this area. There are a couple of features that I could see an immediate use for...
### Submit Labels
It's now trivial to rename the on-screen keyboard's "return" key. It sounds trivial, but it can give the user a big hint about what will happen if they press it.
To rename the return key, add a `.submitLabel` modifier to the input field. You pass the modifier one of these values:
- `done`
- `go`
- `send`
- `join`
- `route`
- `search`
- `return`
- `next`
- `continue`
I decided to use these labels to improve the login flow for the LiveTutorial2021 app. In `LoginView`, I added a `submitLabel` to both the "email address" and "password" `TextFields`:
```swift
TextField("email address", text: $email)
.submitLabel(.next)
SecureField("password", text: $password)
.onSubmit(userAction)
.submitLabel(.go)
```
Note the `.onSubmit(userAction)` modifier on the password field. If the user taps "go" (or hits return on an external keyboard), then the `userAction` function is called. `userAction` either registers or logs in the user, depending on whether "Register new user” is checked.
### Focus
It can be tedious to have to click between different fields on a form. iOS 15 makes it simple to shift focus between fields automatically.
Sticking with LiveTutorial2021, I want the "email address" field to be selected when the view opens. When the user types their address and hits ~~"return"~~ "next", focus should move to the "password" field. When the user taps "go," the app logs them in.
You can use the new `FocusState` SwiftUI property wrapper to create variables to represent the placement of focus in the view. It can be a boolean to flag whether the associated field is in focus. In our login view, we have two fields that we need to switch focus between and so we use the `enum` option instead.
In `LoginView`, I define the `Field` enumeration type to represent whether the username (email address) or password is in focus. I then create the `focussedField` `@FocusState` variable to store the value using the `Field` type:
```swift
enum Field: Hashable {
case username
case password
}
@FocusState private var focussedField: Field?
```
I use the `.focused` modifier to bind `focussedField` to the two fields:
```swift
TextField("email address", text: $email)
.focused($focussedField, equals: .username)
...
SecureField("password", text: $password)
.focused($focussedField, equals: .password)
...
```
It's a two-way binding. If the user selects the email field, then `focussedField` is set to `.username`. If the code sets `focussedField` to `.password`, then focus switches to the password field.
This next step feels like a hack, but I've not found a better solution yet. When the view is loaded, the code waits half a second before setting focus to the username field. Without the delay, the focus isn't set:
```swift
VStack(spacing: 16) {
...
}
.onAppear {
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
focussedField = .username
...
}
}
```
The final step is to shift focus to the password field when the user hits the "next" key in the username field:
```swift
TextField("email address", text: $email)
.onSubmit { focussedField = .password }
...
```
This is the complete body from `LoginView`:
```swift
var body: some View {
VStack(spacing: 16) {
Spacer()
TextField("email address", text: $email)
.focused($focussedField, equals: .username)
.submitLabel(.next)
.onSubmit { focussedField = .password }
SecureField("password", text: $password)
.focused($focussedField, equals: .password)
.onSubmit(userAction)
.submitLabel(.go)
Button(action: { newUser.toggle() }) {
HStack {
Image(systemName: newUser ? "checkmark.square" : "square")
Text("Register new user")
Spacer()
}
}
Button(action: userAction) {
Text(newUser ? "Register new user" : "Log in")
}
.buttonStyle(.borderedProminent)
.controlSize(.large)
Spacer()
}
.onAppear {
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
focussedField = .username
}
}
.padding()
}
```
## Buttons
### Formatting
Previously, I've created custom SwiftUI views to make buttons look like… buttons.
Things get simpler in iOS 15.
In `LoginView`, I added two new modifiers to my register/login button:
```swift
Button(action: userAction) {
Text(newUser ? "Register new user" : "Log in")
}
.buttonStyle(.borderedProminent)
.controlSize(.large)
```
Before making this change, I experimented with other button styles:
### Confirmation
It's very easy to accidentally tap the "Logout" button, and so I wanted to add this confirmation dialog:
Again, iOS 15 makes this simple.
This is the modified version of the `LogoutButton` view:
```swift
struct LogoutButton: View {
...
@State private var isConfirming = false
var body: some View {
Button("Logout") { isConfirming = true }
.confirmationDialog("Are you sure want to logout",
isPresented: $isConfirming) {
Button(action: logout) {
Text("Confirm Logout")
}
Button("Cancel", role: .cancel) {}
}
}
...
}
```
These are the changes I made:
- Added a new state variable (`isConfirming`)
- Changed the logout button's action from calling the `logout` function to setting `isConfirming` to `true`
- Added the `confirmationDialog` modifier to the button, providing three things:
- The dialog title (I didn't override the `titleVisibility` option and so the system decides whether this should be shown)
- A binding to `isConfirming` that controls whether the dialog is shown or not
- A view containing the contents of the dialog:
- A button to logout the user
- A cancel button
## Material
I'm no designer, and this is _blurring_ the edges of what changes I consider worth adding.
The RChat app may have to wait a moment while the backend MongoDB Atlas App Services application confirms that the user has been authenticated and logged in. I superimpose a progress view while that's happening:
To make it look a bit more professional, I can update `OpaqueProgressView` to use Material to blur the content that's behind the overlay. To get this effect, I update the background modifier for the `VStack`:
```swift
var body: some View {
VStack {
if let message = message {
ProgressView(message)
} else {
ProgressView()
}
}
.padding(Dimensions.padding)
.background(.ultraThinMaterial,
in: RoundedRectangle(cornerRadius: Dimensions.cornerRadius))
}
```
The result looks like this:
## Developer Tools
Finally, there are a couple of enhancements that are helpful during your development phase.
### Landscape Previews
I'm a big fan of Xcode's "Canvas" previews. Previews let you see what your view will look like, and they update in more or less real time as you make code changes. You can even display multiple previews at once. For example:
- For different devices: `.previewDevice(PreviewDevice(rawValue: "iPhone 12 Pro Max"))`
- For dark mode: `.preferredColorScheme(.dark)`
A glaring omission was that there was no way to preview landscape mode. That's fixed in iOS 15 with the addition of the `.previewInterfaceOrientation` modifier.
For example, this code will show two devices in the preview. The first will be in portrait mode. The second will be in landscape and dark mode:
```swift
struct CurrencyRow_Previews: PreviewProvider {
static var previews: some View {
Group {
List {
CurrencyRowView(value: 3.23, symbol: "USD", baseValue: .constant(1.0))
CurrencyRowView(value: 1.0, symbol: "GBP", baseValue: .constant(10.0))
}
List {
CurrencyRowView(value: 3.23, symbol: "USD", baseValue: .constant(1.0))
CurrencyRowView(value: 1.0, symbol: "GBP", baseValue: .constant(10.0))
}
.preferredColorScheme(.dark)
.previewInterfaceOrientation(.landscapeLeft)
}
}
}
```
### Self._printChanges
SwiftUI is very smart at automatically refreshing views when associated state changes. But sometimes, it can be hard to figure out exactly why a view is or isn't being updated.
iOS 15 adds a way to print out what pieces of state data have triggered each refresh for a view. Simply call `Self._printChanges()` from the body of your view. For example, I updated `ContentView` for the LiveChat app:
```swift
struct ContentView: View {
@State private var username = ""
var body: some View {
print(Self._printChanges())
return NavigationView {
Group {
if app.currentUser == nil {
LoginView(username: $username)
} else {
ChatRoomsView(username: username)
}
}
.navigationBarTitle(username, displayMode: .inline)
.navigationBarItems(trailing: app.currentUser != nil ? LogoutButton(username: $username) : nil) }
}
}
```
If I log in and check the Xcode console, I can see that it's the update to `username` that triggered the refresh (rather than `app.currentUser`):
```text
ContentView: _username changed.
```
There can be a lot of these messages, and so remember to turn them off before going into production.
## Conclusion
SwiftUI is developing at pace. With each iOS release, there is less and less reason not to use it for all, or at least part, of your mobile app.
This post describes how to use some of the iOS 15 SwiftUI features that caught my attention. I focussed on the features that I could see would instantly benefit my most recent mobile apps. In this article, I've shown how those apps could be updated to use these features.
There are lots of features that I didn't include here. A couple of notable omissions are:
- `AsyncImage` is going to make it far easier to work with images that are stored in the cloud. I didn't need it for any of my current apps, but I've no doubt that I'll be using it in a project soon.
- The `task` view modifier is going to have a significant effect on how people run asynchronous code when a view is loaded. I plan to cover this in a future article that takes a more general look at how to handle concurrency with Realm.
- Adding a toolbar to your keyboards (e.g., to let the user switch between input fields).
If you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. To keep up with the latest Realm news, follow @realm on Twitter.
| md | {
"tags": [
"Realm",
"Swift",
"iOS"
],
"pageDescription": "See how to use some of the most useful new iOS 15 SwiftUI features in your mobile apps",
"contentType": "Tutorial"
} | Most Useful iOS 15 SwiftUI Features | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/rust/getting-started-deno-mongodb | created | # Getting Started with Deno & MongoDB
Deno is a “modern” runtime for JavaScript and TypeScript that is built in Rust. This makes it very fast!
If you are familiar with Node.js, then you will be right at home with Deno. It is very similar but has some improvements over Node.js. In fact, the creator of Deno, Ryan Dahl, also created Node, and Deno is meant to be the successor to Node.js.
> 💡 Fun Fact: Deno is an anagram. Rearrange the letters in Node to spell Deno.
Deno has no package manager, uses ES modules, has first-class `await`, has built-in testing, and is somewhat browser-compatible by providing built-in `fetch` and the global `window` object.
Aside from that, it’s also very secure. It’s completely locked down by default and requires you to enable each access method specifically.
This makes Deno pair nicely with MongoDB since it is also super secure by default.
### Video Version
Here is a video version of this article if you prefer to watch.
:youtube]{vid=xOgicDUXnrE}
## Prerequisites
- TypeScript — Deno uses TypeScript by default, so some TypeScript knowledge is needed.
- Basic MongoDB knowledge
- Understanding of RESTful APIs
## Getting Deno set up
You’ll need to install Deno to get started.
- macOS and Linux Shell:
`curl -fsSL https://deno.land/install.sh | sh`
- Windows PowerShell:
`iwr https://deno.land/install.ps1 -useb | iex`
For more options, here are the Deno installation instructions.
## Setting up middleware and routing
For this tutorial, we’re going to use Oak, a middleware framework for Deno. This will provide routing for our various app endpoints to perform CRUD operations.
We’ll start by creating a `server.ts` file and import the Oak `Application` method.
```jsx
import { Application } from "https://deno.land/x/oak/mod.ts";
const app = new Application();
```
> 💡 If you are familiar with Node.js, you’ll notice that Deno does things a bit differently. Instead of using a `package.json` file and downloading all of the packages into the project directory, Deno uses file paths or URLs to reference module imports. Modules do get downloaded and cached locally, but this is done globally and not per project. This eliminates a lot of the bloat that is inherent from Node.js and its `node_modules` folder.
Next, let’s start up the server.
```jsx
const PORT = 3000;
app.listen({ port: PORT });
console.log(`Server listening on port ${PORT}`);
```
We’re going to create our routes in a new file named `routes.ts`. This time, we’ll import the `Router` method from Oak. Then, create a new instance of `Router()` and export it.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
const router = new Router(); // Create router
export default router;
```
Now, let’s bring our `router` into our `server.ts` file.
```jsx
import { Application } from "https://deno.land/x/oak/mod.ts";
import router from "./routes.ts"; // Import our router
const PORT = 3000;
const app = new Application();
app.use(router.routes()); // Implement our router
app.use(router.allowedMethods()); // Allow router HTTP methods
console.log(`Server listening on port ${PORT}`);
await app.listen({ port: PORT });
```
## Setting up the MongoDB Data API
In most tutorials, you’ll find that they use the mongo third-party Deno module. For this tutorial, we’ll use the brand new MongoDB Atlas Data API to interact with our MongoDB Atlas database in Deno. The Data API doesn’t require any drivers!
Let’s set up our MongoDB Atlas Data API. You’ll need a MongoDB Atlas account. If you already have one, sign in, or register now.
From the MongoDB Atlas Dashboard, click on the Data API option. By default, all MongoDB Atlas clusters have the Data API turned off. Let’s enable it for our cluster. You can turn it back off at any time.
After enabling the Data API, we’ll need to create an API Key. You can name your key anything you want. I’ll name mine `data-api-test`. Be sure to copy your API key secret at this point. You won’t see it again!
Also, take note of your App ID. It can be found in your URL Endpoint for the Data API.
Example: `https://data.mongodb-api.com/app/{APP_ID}/endpoint/data/beta`
## Configuring each route
At this point, we need to set up each function for each route. These will be responsible for Creating, Reading, Updating, and Deleting (CRUD) documents in our MongoDB database.
Let’s create a new folder called `controllers` and a file within it called `todos.ts`.
Next, we’ll set up our environmental variables to keep our secrets safe. For this, we’ll use a module called dotenv.
```jsx
import { config } from "https://deno.land/x/dotenv/mod.ts";
const { DATA_API_KEY, APP_ID } = config();
```
Here, we are importing the `config` method from that module and then using it to get our `DATA_API_KEY` and `APP_ID` environmental variables. Those will be pulled from another file that we’ll create in the root of our project called `.env`. Just the extension and no file name.
```
DATA_API_KEY=your_key_here
APP_ID=your_app_id_here
```
This is a plain text file that allows you to store secrets that you don’t want to be uploaded to GitHub or shown in your code. To ensure that these don’t get uploaded to GitHub, we’ll create another file in the root of our project called `.gitignore`. Again, just the extension with no name.
```
.env
```
In this file, we’ll simply enter `.env`. This lets git know to ignore this file so that it’s not tracked.
Now, back to the `todos.ts` file. We’ll configure some variables that will be used throughout each function.
```jsx
const BASE_URI = `https://data.mongodb-api.com/app/${APP_ID}/endpoint/data/beta/action`;
const DATA_SOURCE = "Cluster0";
const DATABASE = "todo_db";
const COLLECTION = "todos";
const options = {
method: "POST",
headers: {
"Content-Type": "application/json",
"api-key": DATA_API_KEY
},
body: ""
};
```
We’ll set up our base URI to our MongoDB Data API endpoint. This will utilize our App ID. Then we need to define our data source, database, and collection. These would be specific to your use case. And lastly, we will define our fetch options, passing in our Data API key.
## Create route
Now we can finally start creating our first route function. We’ll call this function `addTodo`. This function will add a new todo item to our database collection.
```jsx
const addTodo = async ({
request,
response,
}: {
request: any;
response: any;
}) => {
try {
if (!request.hasBody) {
response.status = 400;
response.body = {
success: false,
msg: "No Data",
};
} else {
const body = await request.body();
const todo = await body.value;
const URI = `${BASE_URI}/insertOne`;
const query = {
collection: COLLECTION,
database: DATABASE,
dataSource: DATA_SOURCE,
document: todo
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const { insertedId } = await dataResponse.json();
response.status = 201;
response.body = {
success: true,
data: todo,
insertedId
};
}
} catch (err) {
response.body = {
success: false,
msg: err.toString(),
};
}
};
export { addTodo };
```
This function will accept a `request` and `response`. If the `request` doesn’t have a `body` it will return an error. Otherwise, we’ll get the `todo` from the `body` of the `request` and use the `insertOne` Data API endpoint to insert a new document into our database.
We do this by creating a `query` that defines our `dataSource`, `database`, `collection`, and the `document` we are adding. This gets stringified and sent using `fetch`. `fetch` happens to be built into Deno as well; no need for another module like in Node.js.
We also wrap the entire function contents with a `try..catch` to let us know if there are any errors.
As long as everything goes smoothly, we’ll return a status of `201` and a `response.body`.
Lastly, we’ll export this function to be used in our `routes.ts` file. So, let’s do that next.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import { addTodo } from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router.post("/api/todos", addTodo); // Add a todo
export default router;
```
### Testing the create route
Let’s test our create route. To start the server, we’ll run the following command:
`deno run --allow-net --allow-read server.ts`
> 💡 Like mentioned before, Deno is locked down by default. We have to specify that network access is okay by using the `--allow-net` flag, and that read access to our project directory is okay to read our environmental variables using the `--allow-read` flag.
Now our server should be listening on port 3000. To test, we can use Postman, Insomnia, or my favorite, the Thunder Client extension in VS Code.
We’ll make a `POST` request to `localhost:3000/api/todos` and include in the `body` of our request the json document that we want to add.
```json
{
"title": "Todo 1",
"complete": false,
"todoId": 1
}
```
> 💡 Normally, I would not create an ID manually. I would rely on the MongoDB generated ObjectID, `_id`. That would require adding another Deno module to this project to convert the BSON ObjectId. I wanted to keep this tutorial as simple as possible.
If all goes well, we should receive a successful response.
## Read all documents route
Now let’s move on to the read routes. We’ll start with a route that gets all of our todos called `getTodos`.
```jsx
const getTodos = async ({ response }: { response: any }) => {
try {
const URI = `${BASE_URI}/find`;
const query = {
collection: COLLECTION,
database: DATABASE,
dataSource: DATA_SOURCE
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const allTodos = await dataResponse.json();
if (allTodos) {
response.status = 200;
response.body = {
success: true,
data: allTodos,
};
} else {
response.status = 500;
response.body = {
success: false,
msg: "Internal Server Error",
};
}
} catch (err) {
response.body = {
success: false,
msg: err.toString(),
};
}
};
```
This one will be directed to the `find` Data API endpoint. We will not pass anything other than the `dataSource`, `database`, and `collection` into our `query`. This will return all documents from the specified collection.
Next, we’ll need to add this function into our exports at the bottom of the file.
```jsx
export { addTodo, getTodos }
```
Then we’ll add this function and route into our `routes.ts` file as well.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import { addTodo, getTodos } from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router
.post("/api/todos", addTodo) // Add a todo
.get("/api/todos", getTodos); // Get all todos
export default router;
```
### Testing the read all documents route
Since we’ve made changes, we’ll need to restart our server using the same command as before:
`deno run --allow-net --allow-read server.ts`
To test this route, we’ll send a `GET` request to `localhost:3000/api/todos` this time, with nothing in our request `body`.
This time, we should see the first document that we inserted in our response.
## Read a single document route
Next, we’ll set up our function to read a single document. We’ll call this one `getTodo`.
```jsx
const getTodo = async ({
params,
response,
}: {
params: { id: string };
response: any;
}) => {
const URI = `${BASE_URI}/findOne`;
const query = {
collection: COLLECTION,
database: DATABASE,
dataSource: DATA_SOURCE,
filter: { todoId: parseInt(params.id) }
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const todo = await dataResponse.json();
if (todo) {
response.status = 200;
response.body = {
success: true,
data: todo,
};
} else {
response.status = 404;
response.body = {
success: false,
msg: "No todo found",
};
}
};
```
This function will utilize the `findOne` Data API endpoint and we’ll pass a `filter` this time into our `query`.
We’re going to use query `params` from our URL to get the ID of the document we will filter for.
Next, we need to export this function as well.
```jsx
export { addTodo, getTodos, getTodo }
```
And, we’ll import the function and set up our route in the `routes.ts` file.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import {
addTodo,
getTodos,
getTodo
} from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router
.post("/api/todos", addTodo) // Add a todo
.get("/api/todos", getTodos) // Get all todos
.get("/api/todos/:id", getTodo); // Get one todo
export default router;
```
### Testing the read single document route
Remember to restart the server. This route is very similar to the “read all documents” route. This time, we will need to add an ID to our URL. Let’s use: `localhost:3000/api/todos/1`.
We should see the document with the `todoId` of 1.
> 💡 To further test, try adding more test documents using the `POST` method and then run the two `GET` methods again to see the results.
## Update route
Now that we have documents, let’s set up our update route to allow us to make changes to existing documents. We’ll call this function `updateTodo`.
```jsx
const updateTodo = async ({
params,
request,
response,
}: {
params: { id: string };
request: any;
response: any;
}) => {
try {
const body = await request.body();
const { title, complete } = await body.value;
const URI = `${BASE_URI}/updateOne`;
const query = {
collection: COLLECTION,
database: DATABASE,
dataSource: DATA_SOURCE,
filter: { todoId: parseInt(params.id) },
update: { $set: { title, complete } }
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const todoUpdated = await dataResponse.json();
response.status = 200;
response.body = {
success: true,
todoUpdated
};
} catch (err) {
response.body = {
success: false,
msg: err.toString(),
};
}
};
```
This route will accept three arguments: `params`, `request`, and `response`. The `params` will tell us which document to update, and the `request` will tell us what to update.
We’ll use the `updateOne` Data API endpoint and set a `filter` and `update` in our `query`.
The `filter` will indicate which document we are updating and the `update` will use the `$set` operator to update the document fields.
The updated data will come from our `request.body`.
Let’s export this function at the bottom of the file.
```jsx
export { addTodo, getTodos, getTodo, updateTodo }
```
And, we’ll import the function and set up our route in the `routes.ts` file.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import {
addTodo,
getTodos,
getTodo,
updateTodo
} from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router
.post("/api/todos", addTodo) // Add a todo
.get("/api/todos", getTodos) // Get all todos
.get("/api/todos/:id", getTodo); // Get one todo
.put("/api/todos/:id", updateTodo) // Update a todo
export default router;
```
This route will use the `PUT` method.
### Testing the update route
Remember to restart the server. To test this route, we’ll use a combination of the previous tests.
Our method will be `PUT`. Our URL will be `localhost:3000/api/todos/1`. And we’ll include a json document in our `body` with the updated fields.
```json
{
"title": "Todo 1",
"complete": true
}
```
Our response this time will indicate if a document was found, or matched, and if a modification was made. Here we see that both are true.
If we run a `GET` request on that same URL we’ll see that the document was updated!
## Delete route
Next, we'll set up our delete route. We’ll call this one `deleteTodo`.
```jsx
const deleteTodo = async ({
params,
response,
}: {
params: { id: string };
response: any;
}) => {
try {
const URI = `${BASE_URI}/deleteOne`;
const query = {
collection: COLLECTION,
database: DATABASE,
dataSource: DATA_SOURCE,
filter: { todoId: parseInt(params.id) }
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const todoDeleted = await dataResponse.json();
response.status = 201;
response.body = {
todoDeleted
};
} catch (err) {
response.body = {
success: false,
msg: err.toString(),
};
}
};
```
This route will use the `deleteOne` Data API endpoint and will `filter` using the URL `params`.
Let’s export this function.
```jsx
export { addTodo, getTodos, getTodo, updateTodo, deleteTodo };
```
And we’ll import it and set up its route in the `routes.ts` file.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import {
addTodo,
getTodos,
getTodo,
updateTodo,
deleteTodo
} from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router
.post("/api/todos", addTodo) // Add a todo
.get("/api/todos", getTodos) // Get all todos
.get("/api/todos/:id", getTodo); // Get one todo
.put("/api/todos/:id", updateTodo) // Update a todo
.delete("/api/todos/:id", deleteTodo); // Delete a todo
export default router;
```
### Testing the delete route
Remember to restart the server. This test will use the `DELETE` method. We’ll delete the first todo using this URL: `localhost:3000/api/todos/1`.
Our response will indicate how many documents were deleted. In this case, we should see that one was deleted.
## Bonus: Aggregation route
We're going to create one more bonus route. This one will demonstrate a basic aggregation pipeline using the MongoDB Atlas Data API. We'll call this one `getIncompleteTodos`.
```jsx
const getIncompleteTodos = async ({ response }: { response: any }) => {
const URI = `${BASE_URI}/aggregate`;
const pipeline = [
{
$match: {
complete: false
}
},
{
$count: 'incomplete'
}
];
const query = {
dataSource: DATA_SOURCE,
database: DATABASE,
collection: COLLECTION,
pipeline
};
options.body = JSON.stringify(query);
const dataResponse = await fetch(URI, options);
const incompleteCount = await dataResponse.json();
if (incompleteCount) {
response.status = 200;
response.body = {
success: true,
incompleteCount,
};
} else {
response.status = 404;
response.body = {
success: false,
msg: "No incomplete todos found",
};
}
};
```
For this route, we'll use the `aggregate` Data API endpoint. This endpoint will accept a `pipeline`.
We can pass any aggregation pipeline through this endpoint. Our example will be basic. The result will be a count of the incomplete todos.
Let’s export this final function.
```jsx
export { addTodo, getTodos, getTodo, updateTodo, deleteTodo, getIncompleteTodos };
```
And we’ll import it and set up our final route in the `routes.ts` file.
```jsx
import { Router } from "https://deno.land/x/oak/mod.ts";
import {
addTodo,
getTodos,
getTodo,
updateTodo,
deleteTodo,
getIncompleteTodos
} from "./controllers/todos.ts"; // Import controller methods
const router = new Router();
// Implement routes
router
.post("/api/todos", addTodo) // Add a todo
.get("/api/todos", getTodos) // Get all todos
.get("/api/todos/:id", getTodo); // Get one todo
.get("/api/todos/incomplete/count", getIncompleteTodos) // Get incomplete todo count
.put("/api/todos/:id", updateTodo) // Update a todo
.delete("/api/todos/:id", deleteTodo); // Delete a todo
export default router;
```
### Testing the aggregation route
Remember to restart the server. This test will use the `GET` method and this URL: `localhost:3000/api/todos/incomplete/count`.
Add a few test todos to the database and mark some as complete and some as incomplete.
Our response shows the count of incomplete todos.
## Conclusion
We created a Deno server that uses the MongoDB Atlas Data API to Create, Read, Update, and Delete (CRUD) documents in our MongoDB database. We added a bonus route to demonstrate using an aggregation pipeline with the MongoDB Atlas Data API. What next?
If you would like to see the completed code, you can find it *here*. You should be able to use this as a starter for your next project and modify it to meet your needs.
I’d love to hear your feedback or questions. Let’s chat in the MongoDB Community. | md | {
"tags": [
"Rust",
"Atlas",
"TypeScript"
],
"pageDescription": "Deno is a “modern” runtime for JavaScript and TypeScript that is built in Rust. This makes it very fast!\nIf you are familiar with Node.js, then you will be right at home with Deno. It is very similar but has some improvements over Node.js. In fact, the creator of Deno also created Node and Deno is meant to be the successor to Node.js.\nDeno pairs nicely with MongoDB.",
"contentType": "Quickstart"
} | Getting Started with Deno & MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/create-restful-api-dotnet-core-mongodb | created | # Create a RESTful API with .NET Core and MongoDB
If you've been keeping up with my development content, you'll remember that I recently wrote Build Your First .NET Core Application with MongoDB Atlas, which focused on building a console application that integrated with MongoDB. While there is a fit for MongoDB in console applications, many developers are going to find it more valuable in web applications.
In this tutorial, we're going to expand upon the previous and create a RESTful API with endpoints that perform basic create, read, update, and delete (CRUD) operations against MongoDB Atlas.
## The Requirements
To be successful with this tutorial, you'll need to have a few things taken care of first:
- A deployed and configured MongoDB Atlas cluster, M0 or higher
- .NET Core 6+
We won't go through the steps of deploying a MongoDB Atlas cluster or configuring it with user and network rules. If this is something you need help with, check out a previous tutorial that was published on the topic.
We'll be using .NET Core 6.0 in this tutorial, but other versions may still work. Just take the version into consideration before continuing.
## Create a Web API Project with the .NET Core CLI
To kick things off, we're going to create a fresh .NET Core project using the web application template that Microsoft offers. To do this, execute the following commands from the CLI:
```bash
dotnet new webapi -o MongoExample
cd MongoExample
dotnet add package MongoDB.Driver
```
The above commands will create a new web application project for .NET Core and install the latest MongoDB driver. We'll be left with some boilerplate files as part of the template, but we can remove them.
Inside the project, delete any file related to `WeatherForecast` and similar.
## Designing a Document Model and Database Service within .NET Core
Before we start designing each of the RESTful API endpoints with .NET Core, we need to create and configure our MongoDB service and define the data model for our API.
We'll start by working on our MongoDB service, which will be responsible for establishing our connection and directly working with documents within MongoDB. Within the project, create "Models/MongoDBSettings.cs" and add the following C# code:
```csharp
namespace MongoExample.Models;
public class MongoDBSettings {
public string ConnectionURI { get; set; } = null!;
public string DatabaseName { get; set; } = null!;
public string CollectionName { get; set; } = null!;
}
```
The above `MongoDBSettings` class will hold information about our connection, the database name, and the collection name. The data we plan to store in these class fields will be found in the project's "appsettings.json" file. Open it and add the following:
```json
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*",
"MongoDB": {
"ConnectionURI": "ATLAS_URI_HERE",
"DatabaseName": "sample_mflix",
"CollectionName": "playlist"
}
}
```
Specifically take note of the `MongoDB` field. Just like with the previous example project, we'll be using the "sample_mflix" database and the "playlist" collection. You'll need to grab the `ConnectionURI` string from your MongoDB Atlas Dashboard.
With the settings in place, we can move onto creating the service.
Create "Services/MongoDBService.cs" within your project and add the following:
```csharp
using MongoExample.Models;
using Microsoft.Extensions.Options;
using MongoDB.Driver;
using MongoDB.Bson;
namespace MongoExample.Services;
public class MongoDBService {
private readonly IMongoCollection<Playlist> _playlistCollection;
public MongoDBService(IOptions<MongoDBSettings> mongoDBSettings) {
MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI);
IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName);
_playlistCollection = database.GetCollection<Playlist>(mongoDBSettings.Value.CollectionName);
}
public async Task<List<Playlist>> GetAsync() { }
public async Task CreateAsync(Playlist playlist) { }
public async Task AddToPlaylistAsync(string id, string movieId) {}
public async Task DeleteAsync(string id) { }
}
```
In the above code, each of the asynchronous functions were left blank on purpose. We'll be populating those functions as we create our endpoints. Instead, make note of the constructor method and how we're taking the passed settings that we saw in our "appsettings.json" file and setting them to variables. In the end, the only variable we'll ever interact with for this example is the `_playlistCollection` variable.
With the service available, we need to connect it to the application. Open the project's "Program.cs" file and add the following at the top:
```csharp
using MongoExample.Models;
using MongoExample.Services;
var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<MongoDBSettings>(builder.Configuration.GetSection("MongoDB"));
builder.Services.AddSingleton<MongoDBService>();
```
You'll likely already have the `builder` variable in your code because it was part of the boilerplate project, so don't add it twice. What you'll need to add near the top is an import to your custom models and services as well as configuring the service.
Remember the `MongoDB` field in the "appsettings.json" file? That is the section that the `GetSection` function is pulling from. That information is passed into the singleton service that we created.
With the service created and working, with the exception of the incomplete asynchronous functions, we can focus on creating a data model for our collection.
Create "Models/Playlist.cs" and add the following C# code:
```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
using System.Text.Json.Serialization;
namespace MongoExample.Models;
public class Playlist {
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string? Id { get; set; }
public string username { get; set; } = null!;
[BsonElement("items")]
[JsonPropertyName("items")]
public List<string> movieIds { get; set; } = null!;
}
```
There are a few things happening in the above class that take it from a standard C# class to something that can integrate seamlessly into a MongoDB document.
First, you might notice the following:
```csharp
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string? Id { get; set; }
```
We're saying that the `Id` field is to be represented as an ObjectId in BSON and the `_id` field within MongoDB. However, when we work with it locally in our application, it will be a string.
The next thing you'll notice is the following:
```csharp
[BsonElement("items")]
[JsonPropertyName("items")]
public List<string> movieIds { get; set; } = null!;
```
Even though we plan to work with `movieIds` within our C# application, in MongoDB, the field will be known as `items` and when sending or receiving JSON, the field will also be known as `items` instead of `movieIds`.
You don't need to define custom mappings if you plan to have your local class field match the document field directly. Take the `username` field in our example. It has no custom mappings, so it will be `username` in C#, `username` in JSON, and `username` in MongoDB.
Just like that, we have a MongoDB service and document model for our collection to work with for .NET Core.
## Building CRUD Endpoints that Interact with MongoDB Using .NET Core
When building CRUD endpoints for this project, we'll need to bounce between two different locations within our project. We'll need to define the endpoint within a controller and do the work within our service.
Create "Controllers/PlaylistController.cs" and add the following code:
```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using MongoExample.Services;
using MongoExample.Models;
namespace MongoExample.Controllers;
[Controller]
[Route("api/[controller]")]
public class PlaylistController: Controller {
private readonly MongoDBService _mongoDBService;
public PlaylistController(MongoDBService mongoDBService) {
_mongoDBService = mongoDBService;
}
[HttpGet]
public async Task<List<Playlist>> Get() {}
[HttpPost]
public async Task<IActionResult> Post([FromBody] Playlist playlist) {}
[HttpPut("{id}")]
public async Task<IActionResult> AddToPlaylist(string id, [FromBody] string movieId) {}
[HttpDelete("{id}")]
public async Task<IActionResult> Delete(string id) {}
}
```
In the above `PlaylistController` class, we have a constructor method that gains access to our singleton service class. Then we have a series of endpoints for this particular controller. We could add far more endpoints than this to our controller, but it's not necessary for this example.
Let's start with creating data through the POST endpoint. To do this, it's best to start in the "Services/MongoDBService.cs" file:
```csharp
public async Task CreateAsync(Playlist playlist) {
await _playlistCollection.InsertOneAsync(playlist);
return;
}
```
We had set the `_playlistCollection` in the constructor method of the service, so we can now use the `InsertOneAsync` method, taking a passed `Playlist` variable and inserting it. Jumping back into the "Controllers/PlaylistController.cs," we can add the following:
```csharp
[HttpPost]
public async Task<IActionResult> Post([FromBody] Playlist playlist) {
await _mongoDBService.CreateAsync(playlist);
return CreatedAtAction(nameof(Get), new { id = playlist.Id }, playlist);
}
```
What we're saying is that when the endpoint is executed, we take the `Playlist` object from the request, something that .NET Core parses for us, and pass it to the `CreateAsync` function that we saw in the service. After the insert, we return some information about the interaction.
It's important to note that in this example project, we won't be validating any data flowing from HTTP requests.
Let's jump to the read operations.
Head back into the "Services/MongoDBService.cs" file and add the following function:
```csharp
public async Task<List<Playlist>> GetAsync() {
return await _playlistCollection.Find(new BsonDocument()).ToListAsync();
}
```
The above `Find` operation will return all documents that exist in the collection. If you wanted to, you could make use of the `FindOne` or provide filter criteria to return only the data that you want. We'll explore filters shortly.
With the service function ready, add the following endpoint to the "Controllers/PlaylistController.cs" file:
```csharp
[HttpGet]
public async Task<List<Playlist>> Get() {
return await _mongoDBService.GetAsync();
}
```
Not so bad, right? We'll be doing the same thing for the other endpoints, more or less.
The next CRUD stage to take care of is the updating of data. Within the "Services/MongoDBService.cs" file, add the following function:
```csharp
public async Task AddToPlaylistAsync(string id, string movieId) {
FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("Id", id);
UpdateDefinition<Playlist> update = Builders<Playlist>.Update.AddToSet("movieIds", movieId);
await _playlistCollection.UpdateOneAsync(filter, update);
return;
}
```
Rather than making changes to the entire document, we're planning on adding an item to our playlist and nothing more. To do this, we set up a match filter to determine which document or documents should receive the update. In this case, we're matching on the id which is going to be unique. Next, we're defining the update criteria, which is an `AddToSet` operation that will only add an item to the array if it doesn't already exist in the array.
The `UpdateOneAsync` method will only update one document even if the match filter returned more than one match.
In the "Controllers/PlaylistController.cs" file, add the following endpoint to pair with the `AddToPlayListAsync` function:
```csharp
[HttpPut("{id}")]
public async Task<IActionResult> AddToPlaylist(string id, [FromBody] string movieId) {
await _mongoDBService.AddToPlaylistAsync(id, movieId);
return NoContent();
}
```
In the above PUT endpoint, we are taking the `id` from the route parameters and the `movieId` from the request body and using them with the `AddToPlaylistAsync` function.
This brings us to our final part of the CRUD spectrum. We're going to handle deleting of data.
In the "Services/MongoDBService.cs" file, add the following function:
```csharp
public async Task DeleteAsync(string id) {
FilterDefinition<Playlist> filter = Builders<Playlist>.Filter.Eq("Id", id);
await _playlistCollection.DeleteOneAsync(filter);
return;
}
```
The above function will delete a single document based on the filter criteria. The filter criteria, in this circumstance, is a match on the id which is always going to be unique. Your filters could be more extravagant if you wanted.
To bring it to an end, the endpoint for this function would look like the following in the "Controllers/PlaylistController.cs" file:
```csharp
[HttpDelete("{id}")]
public async Task<IActionResult> Delete(string id) {
await _mongoDBService.DeleteAsync(id);
return NoContent();
}
```
We only created four endpoints, but you could take everything we did and create 100 more if you wanted to. They would all use a similar strategy and can leverage everything that MongoDB has to offer.
## Conclusion
You just saw how to create a simple four endpoint RESTful API using .NET Core and MongoDB. This was an expansion to the previous tutorial, which went over the same usage of MongoDB, but in a console application format rather than web application.
Like I mentioned, you can take the same strategy used here and apply it towards more endpoints, each doing something critical for your web application.
Got a question about the driver for .NET? Swing by the MongoDB Community Forums! | md | {
"tags": [
"C#",
"MongoDB",
".NET"
],
"pageDescription": "Learn how to create a RESTful web API with .NET Core that interacts with MongoDB through each of its endpoints.",
"contentType": "Tutorial"
} | Create a RESTful API with .NET Core and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/everything-you-know-is-wrong | created | # Everything You Know About MongoDB is Wrong!
I joined MongoDB less than a year ago, and I've learned a lot in the time since. Until I started working towards my interviews at the company, I'd never actually *used* MongoDB, although I had seen some talks about it and been impressed by how simple it seemed to use.
But like many other people, I'd also heard the scary stories. "*It doesn't do relationships!*" people would say. "*It's fine if you want to store documents, but what if you want to do aggregation later? You'll be trapped in the wrong database! And anyway! Transactions! It doesn't have transactions!*"
It wasn't until I started to go looking for the sources of this information that I started to realise two things: First, most of those posts are from a decade ago, so they referred to a three-year-old product, rather than the mature, battle-tested version we have today. Second, almost everything they say is no longer true - and in some cases *has never been true*.
So I decided to give a talk (and now write this blog post) about the misinformation that's available online, and counter each myth, one by one.
## Myth 0: MongoDB is Web Scale
There's a YouTube video with a couple of dogs in it (dogs? I think they're dogs). You've probably seen it - one of them is that kind of blind follower of new technology who's totally bought into MongoDB, without really understanding what they've bought into. The other dog is more rational and gets frustrated by the first dog's refusal to come down to Earth.
I was sent a link to this video by a friend of mine on my first day at MongoDB, just in case I hadn't seen it. (I had seen it.) Check out the date at the bottom! This video's been circulating for over a decade. It was really funny at the time, but these days? Almost everything that's in there is outdated.
We're not upset. In fact, many people at MongoDB have the character on a T-shirt or a sticker on their laptop. He's kind of an unofficial mascot at MongoDB. Just don't watch the video looking for facts. And stop sending us links to the video - we've all seen it!
## What Exactly *is* MongoDB?
Before launching into some things that MongoDB *isn't*, let's just summarize what MongoDB actually *is.*
MongoDB is a distributed document database. Clusters (we call them replica sets) are mostly self-managing - once you've told each of the machines which other servers are in the cluster, then they'll handle it if one of the nodes goes down or there are problems with the network. If one of the machines gets shut off or crashes, the others will take over. You need a minimum of 3 nodes in a cluster, to achieve quorum. Each server in the cluster holds a complete copy of all of the data in the database.
Clusters are for redundancy, not scalability. All the clients are generally connected to only one server - the elected primary, which is responsible for executing queries and updates, and transmitting data changes to the secondary machines, which are there in case of server failure.
There *are* some interesting things you can do by connecting directly to the secondaries, like running analytics queries, because the machines are under less read load. But in general, forcing a connection to a secondary means you could be working with slightly stale data, so you shouldn't connect to a secondary node unless you're prepared to make some compromises.
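To make that concrete, here's a minimal sketch of opting a single collection handle into secondary reads with PyMongo. The connection string, database, and collection names are placeholders, not anything MongoDB prescribes:

``` python
from pymongo import MongoClient, ReadPreference

# Placeholder URI listing the three members of an example replica set.
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0"
)

# Reads through this handle prefer a secondary node, which keeps analytics
# load off the primary, at the cost of possibly reading slightly stale data.
analytics = client.sample_mflix.get_collection(
    "movies", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(analytics.count_documents({"year": 1893}))
```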
So I've covered "distributed." What do I mean by "document database?"
The thing that makes MongoDB different from traditional relational databases is that instead of being able to store atoms of data in flat rows, stored in tables in the database, MongoDB allows you to store hierarchical structured data in a *document* - which is (mostly) analogous to a JSON object. Documents are stored in a collection, which is really just a bucket of documents. Each document can have a different structure, or *schema*, from all the other documents in the collection. You can (and should!) also index documents in collections, based on the kind of queries you're going to be running and the data that you're storing. And if you want validation to ensure that all the documents in a collection *do* follow a set structure, you can apply a JSON
schema to the collection as a validator.
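As a quick aside, attaching that kind of validator from PyMongo looks roughly like the sketch below. The collection name and the schema itself are made up for illustration, and the sketch assumes the collection doesn't already exist:

``` python
from pymongo import MongoClient
from pymongo.errors import WriteError

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client.sample_mflix

# Every document in this collection must have a string "title" and an
# integer "year"; any other fields remain free-form.
db.create_collection(
    "validated_movies",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["title", "year"],
            "properties": {
                "title": {"bsonType": "string"},
                "year": {"bsonType": "int"},
            },
        }
    },
)

try:
    db.validated_movies.insert_one({"title": "Blacksmith Scene"})  # no "year"
except WriteError as error:
    print("Rejected by the validator:", error)
```

Back to the documents themselves. Here's an example of the kind of document you might store: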
``` python
{
'_id': ObjectId('573a1390f29313caabcd4135'),
'title': 'Blacksmith Scene',
'fullplot': 'A stationary camera looks at a large anvil with a
blacksmith behind it and one on either side.',
'cast': ['Charles Kayser', 'John Ott'],
'countries': ['USA'],
'directors': ['William K.L. Dickson'],
'genres': ['Short'],
'imdb': {'id': 5, 'rating': 6.2, 'votes': 1189},
'released': datetime.datetime(1893, 5, 9, 0, 0),
'runtime': 1,
'year': 1893
}
```
The above document is an example, showing a movie from 1893! This document was retrieved using the [PyMongo driver.
Note that some of the values are arrays, like 'countries' and 'cast'. Some of the values are objects (we call them subdocuments). This demonstrates the hierarchical nature of MongoDB documents - they're not flat like a table row in a relational database.
Note *also* that it contains a native Python datetime type for the 'released' value, and a special *ObjectId* type for the first value. Maybe these aren't actually JSON documents? I'll come back to that later...
## Myth 1: MongoDB is on v3.2
If you install MongoDB on Debian Stretch with `apt-get install mongodb`, it will install version 3.2. Unfortunately, this version is five years old! There have been five major annual releases since then, containing a whole host of new features, as well as security, performance, and scalability improvements.
The current version of MongoDB is v4.4 (as of late 2020). If you want to install it, you should install MongoDB Community Server, but first make sure you've read about MongoDB Atlas, our hosted database-as-a-service product!
## Myth 2: MongoDB is a JSON Database
You'll almost certainly have heard that MongoDB is a JSON database, especially if you've read the MongoDB.com homepage recently!
As I implied before, though, MongoDB *isn't* a JSON database. It supports extra data types, such as ObjectIds, native date objects, more numeric types, geographic primitives, and an efficient binary type, among others!
This is because **MongoDB is a BSON database**.
This may seem like a trivial distinction, but it's important. BSON is more efficient to store, transfer, and traverse than a text-based format for structured data, it supports more data types than JSON, and it's also *everywhere* in MongoDB.
- MongoDB stores BSON documents.
- Queries to look up documents are BSON documents.
- Results are provided as BSON documents.
- BSON is even used for the wire protocol used by MongoDB!
If you're used to working with JSON when doing web development, it's a useful shortcut to think of MongoDB as a JSON database. That's why we sometimes describe it that way! But once you've been working with MongoDB for a little while, you'll come to appreciate the advantages that BSON has to offer.
## Myth 3: MongoDB Doesn't Support Transactions
When reading third-party descriptions of MongoDB, you may come across blog posts describing it as a BASE database. BASE is an acronym for "Basic Availability; Soft-state; Eventual consistency."
But this is not true, and never has been! MongoDB has never been "eventually consistent." Reads and writes to the primary are guaranteed to be strongly consistent, and updates to a single document are always atomic. Soft-state apparently describes the need to continually update data or it will expire, which is also not the case.
And finally, MongoDB *will* go into a read-only state (reducing availability) if so many nodes are unavailable that a quorum cannot be achieved. This is by design. It ensures that consistency is maintained when everything else goes wrong.
**MongoDB is an ACID database**. It supports atomicity, consistency, isolation, and durability.
Updates to multiple parts of individual documents have always been atomic; but since v4.0, MongoDB has supported transactions across multiple documents and collections. Since v4.2, this is even supported across shards in a sharded cluster.
Despite *supporting* transactions, they should be used with care. They have a performance cost, and because MongoDB supports rich, hierarchical documents, if your schema is designed correctly, you should not often have to update across multiple documents.
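For the occasions when you genuinely need one, a multi-document transaction looks something like this in PyMongo. This is only a sketch: it assumes a replica set is reachable at the placeholder URI, and the `accounts` collection is invented for the example.

``` python
from pymongo import MongoClient

# Transactions require a replica set (or sharded cluster); placeholder URI.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client.bank.accounts

with client.start_session() as session:
    with session.start_transaction():
        # Either both updates commit together, or neither does.
        accounts.update_one(
            {"_id": "alice"}, {"$inc": {"balance": -100}}, session=session
        )
        accounts.update_one(
            {"_id": "bob"}, {"$inc": {"balance": 100}}, session=session
        )
```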
## Myth 4: MongoDB Doesn't Support Relationships
Another out-of-date myth about MongoDB is that you can't have relationships between collections or documents. You *can* do joins with queries that we call aggregation pipelines. They're super-powerful, allowing you to query and transform your data from multiple collections using an intuitive query model that consists of a series of pipeline stages applied to data moving through the pipeline.
**MongoDB has supported lookups (joins) since v3.2.**
For example, after a query joining an *orders* collection and an *inventory* collection, a returned order document can contain the related inventory documents, embedded in an array.
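Here is a sketch of what such a join can look like with the `$lookup` aggregation stage. The collection and field names are illustrative, not a prescribed schema.

```javascript
// Illustrative $lookup join: embed matching inventory documents in each order.
db.orders.aggregate([
  {
    $lookup: {
      from: "inventory",      // collection to join against
      localField: "item",     // field in the orders documents
      foreignField: "sku",    // field in the inventory documents
      as: "inventory_docs"    // related documents land here, as an array
    }
  }
]);

// A returned order might then look like:
// {
//   _id: 1,
//   item: "almonds",
//   quantity: 2,
//   inventory_docs: [ { sku: "almonds", warehouse: "A", instock: 120 } ]
// }
```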
My opinion is that being able to embed related documents within the primary documents being returned is more intuitive than duplicating rows for every relationship found in a relational join.
## Myth 5: MongoDB is All About Sharding
You may hear people talk about sharding as a cool feature of MongoDB. And it is - it's definitely a cool, and core, feature of MongoDB.
Sharding is when you divide your data and put each piece in a different replica set or cluster. It's a technique for dealing with huge data sets. MongoDB supports automatically ensuring data and requests are sent to the correct replica sets, and merging results from multiple shards.
But there's a fundamental issue with sharding.
I mentioned earlier in this post that the minimum number of nodes in a replica set is three, to allow quorum. As soon as you need sharding, you have at least two replica sets, so that's a minimum of six servers. On top of that, you need to run multiple instances of a server called *mongos*. Mongos is a proxy for the sharded cluster which handles the routing of requests and responses. For high availability, you need at least two instances of mongos.
So, this means a minimum sharded cluster is eight servers, and it goes up by at least three servers, with each shard added.
Sharded clusters also make your data harder to manage, and they add some limitations to the types of queries you can conduct. **Sharding is useful if you need it, but it's often cheaper and easier to simply upgrade your hardware!**
Scaling data is mostly about RAM, so if you can, buy more RAM. If CPU is your bottleneck, upgrade your CPU; if disk is your issue, buy bigger or faster disks.
MongoDB's sharding features are still there for you once you scale beyond the amount of RAM that can be put into a single computer. You can also do some neat things with shards, like geo-pinning, where you can store user data geographically closer to the user's location, to reduce latency.
If you're attempting to scale by sharding, you should at least consider whether hardware upgrades would be a more efficient alternative, first.
And before you consider *that*, you should look at MongoDB Atlas, MongoDB's hosted database-as-a-service product. (Yes, I know I've already mentioned it!) As well as hosting your database for you, on the cloud (or clouds) of your choice, MongoDB Atlas will also scale your database up and down as required, keeping you available, while keeping costs low. It'll handle backups and redundancy, and also includes extra features, such as charts, text search, serverless functions, and more.
## Myth 6: MongoDB is Insecure
A rather persistent myth about MongoDB is that it's fundamentally insecure. My personal feeling is that this is one of the more unfair myths about MongoDB, but it can't be denied that there are many insecure instances of MongoDB available on the Internet, and there have been several high-profile data breaches involving MongoDB.
This is historically due to the way MongoDB has been distributed. Some Linux distributions used to ship MongoDB with authentication disabled, and with networking enabled.
So, if you didn't have a firewall, or if you opened up the MongoDB port on your firewall so that it could be accessed by your web server... then your data would be stolen. Nowadays, it's just as likely that a bot will find your data, encrypt it within your database, and then add a document telling you where to send Bitcoin to get the key to decrypt it again.
*I* would argue that if you put an unprotected database server on the internet, then that's *your* fault - but it's definitely the case that this has happened many times, and there were ways to make it more difficult to mess this up.
We fixed the defaults in MongoDB 3.6. **MongoDB will not connect to the network unless authentication is enabled** *or* you provide a specific flag to the server to override this behaviour. So, you can still *be* insecure, but now you have to at least read the manual first!
Other than this, **MongoDB uses industry standards for security**, such as TLS to encrypt the data in-transit, and SCRAM-SHA-256 to authenticate users securely.
MongoDB also features client-side field-level encryption (FLE), which allows you to store data in MongoDB so that it is encrypted both in-transit and at-rest. This means that if a third-party was to gain access to your database server, they would be unable to read the encrypted data without also gaining access to the client.
## Myth 7: MongoDB Loses Data
This myth is a classic Hacker News trope. Someone posts an example of how they successfully built something with MongoDB, and there's an immediate comment saying, "I know this guy who once lost all his data in MongoDB. It just threw it away. Avoid."
If you follow up asking these users to get in touch and file a ticket describing the incident, they never turn up!
MongoDB is used in a range of industries that care deeply about keeping their data. These range from banks such as Morgan Stanley, Barclays, and HSBC to massive publishing brands, like Forbes. We've never had a report of large-scale data loss. If you *do* have a first-hand story to tell of data loss, please file a ticket. We'll take it seriously whether you're a paying enterprise customer or an open-source user.
## Myth 8: MongoDB is Just a Toy
If you've read up until this point, you can already see that this one's a myth!
MongoDB is a general-purpose database for storing documents that can be updated securely and atomically, with joins to other documents and a rich, powerful, and intuitive query language for finding and aggregating those documents in the form that you need. When your data gets too big for a single machine, it supports sharding out of the box. It also supports advanced features such as client-side field level encryption for securing sensitive data, and change streams, which allow your applications to respond immediately to changes to your data, using whatever language, framework, and set of libraries you prefer to develop with.
If you want to protect yourself from myths in the future, your best bet is to...
## Become a MongoDB Expert
MongoDB is a database that is easy to get started with, but to build production applications requires that you master the complexities of interacting with a distributed database. MongoDB Atlas simplifies many of those challenges, but you will get the most out of MongoDB if you invest time in learning things like the aggregation framework, read concerns, and write concerns. Nothing hard is easy, but the hard stuff is easier with MongoDB. You're not going to become an expert overnight. The good news is that there are lots of resources for learning MongoDB, and it's fun!
The MongoDB documentation is thorough and readable, and there are many free courses at MongoDB University.
On the MongoDB Developer Blog, we have detailed some MongoDB Patterns for schema design and development, and my awesome colleague Lauren Schaefer has been producing a series of posts describing MongoDB Anti-Patterns to help you recognise when you may not be doing things optimally.
MongoDB has an active Community Forum where you can ask questions or show off your projects.
So, **MongoDB is big and powerful, and there's a lot to learn**. I hope this article has gone some way to explaining what MongoDB is, what it isn't, and how you might go about learning to use it effectively. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "There are a bunch of myths floating around about MongoDB. Here's where I bust them.",
"contentType": "Article"
} | Everything You Know About MongoDB is Wrong! | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/get-hyped-synonyms-atlas-search | created | # Get Hyped: Synonyms in Atlas Search
Sometimes, the word you’re looking for is on the tip of your tongue, but you can’t quite grasp it. For example, when you’re trying to find a really funny tweet you saw last night to show your friends. If you’re sitting there reading this and thinking, "Wow, Anaiya and Nic, you’re so right. I wish there was a fix for this," strap on in! We have just the solution for those days when your precise linguistic abilities fail you, but you have an idea of what you’re looking for: **Synonyms in Atlas Search**.
In this tutorial, we are going to be showing you how to index a MongoDB collection to capture searches for words that mean similar things. For the specifics, we’re going to search through content written with Generation Z (Gen-Z) slang. The slang will be mapped to common words with synonyms and as a result, you’ll get a quick Gen-Z lesson without having to ever open TikTok.
If you’re in the mood to learn a few new words, alongside how effortlessly synonym mappings can be integrated into Atlas Search, this is the tutorial for you.
## Requirements
There are a few requirements that must be met to be successful with this tutorial:
- MongoDB Atlas M0 (or higher) cluster running MongoDB version 4.4 (or higher)
- Node.js
- A Twitter developer account
We’ll be using Node.js to load our Twitter data, but a Twitter developer account is required for accessing the APIs that contain Tweets.
## Load Twitter Data into a MongoDB Collection
Before starting this section of the tutorial, you’re going to need to have your Twitter API Key and API Secret handy. These can both be generated from the Twitter Developer Portal.
The idea is that we want to store a bunch of tweets in MongoDB that contain Gen-Z slang that we can later make sense of using Atlas Search and properly defined synonyms. Each tweet will be stored as a single document within MongoDB and will look something like this:
```json
{
"_id": 1420091624621629400,
"created_at": "Tue Jul 27 18:40:01 +0000 2021",
"id": 1420091624621629400,
"id_str": "1420091624621629443",
"full_text": "Don't settle for a cheugy database, choose MongoDB instead 💪",
"truncated": false,
"entities": {
"hashtags": ],
"symbols": [],
"user_mentions": [],
"urls": []
},
"metadata": {
"iso_language_code": "en",
"result_type": "recent"
},
"source": "Twitter Web App",
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": 1400935623238643700,
"id_str": "1400935623238643716",
"name": "Anaiya Raisinghani",
"screen_name": "anaiyaraisin",
"location": "",
"description": "Developer Advocacy Intern @MongoDB. Opinions are my own!",
"url": null,
"entities": {
"description": {
"urls": []
}
},
"protected": false,
"followers_count": 11,
"friends_count": 29,
"listed_count": 1,
"created_at": "Fri Jun 04 22:01:07 +0000 2021",
"favourites_count": 8,
"utc_offset": null,
"time_zone": null,
"geo_enabled": false,
"verified": false,
"statuses_count": 7,
"lang": null,
"contributors_enabled": false,
"is_translator": false,
"is_translation_enabled": false,
"profile_background_color": "F5F8FA",
"profile_background_image_url": null,
"profile_background_image_url_https": null,
"profile_background_tile": false,
"profile_image_url": "http://pbs.twimg.com/profile_images/1400935746593202176/-pgS_IUo_normal.jpg",
"profile_image_url_https": "https://pbs.twimg.com/profile_images/1400935746593202176/-pgS_IUo_normal.jpg",
"profile_banner_url": "https://pbs.twimg.com/profile_banners/1400935623238643716/1622845231",
"profile_link_color": "1DA1F2",
"profile_sidebar_border_color": "C0DEED",
"profile_sidebar_fill_color": "DDEEF6",
"profile_text_color": "333333",
"profile_use_background_image": true,
"has_extended_profile": true,
"default_profile": true,
"default_profile_image": false,
"following": null,
"follow_request_sent": null,
"notifications": null,
"translator_type": "none",
"withheld_in_countries": []
},
"geo": null,
"coordinates": null,
"place": null,
"contributors": null,
"is_quote_status": false,
"retweet_count": 0,
"favorite_count": 1,
"favorited": false,
"retweeted": false,
"lang": "en"
}
```
The above document model is more extravagant than we need. In reality, we’re only going to be paying attention to the `full_text` field, but it’s still useful to know what exists for any given tweet.
Now that we know what the document model is going to look like, we just need to consume it from Twitter.
We’re going to use two different Twitter APIs with our API Key and API Secret. The first API is the authentication API and it will give us our access token. With the access token we can get tweet data based on a Twitter query.
Since we’re using Node.js, we need to install our dependencies. Within a new directory on your computer, execute the following commands from the command line:
```bash
npm init -y
npm install mongodb axios dotenv --save
```
The above commands will create a new **package.json** file and install the MongoDB Node.js driver, Axios for making HTTP requests, and dotenv for loading environment variables.
Take a look at the following Node.js code which can be added to a **main.js** file within your project:
```javascript
const { MongoClient } = require("mongodb");
const axios = require("axios");
require("dotenv").config();
const mongoClient = new MongoClient(process.env.MONGODB_URI);
(async () => {
try {
await mongoClient.connect();
const tokenResponse = await axios({
"method": "POST",
"url": "https://api.twitter.com/oauth2/token",
"headers": {
"Authorization": "Basic " + Buffer.from(`${process.env.API_KEY}:${process.env.API_SECRET}`).toString("base64"),
"Content-Type": "application/x-www-form-urlencoded"
},
"data": "grant_type=client_credentials"
});
const tweetResponse = await axios({
"method": "GET",
"url": "https://api.twitter.com/1.1/search/tweets.json",
"headers": {
"Authorization": "Bearer " + tokenResponse.data.access_token
},
"params": {
"q": "mongodb -filter:retweets filter:safe (from:codeSTACKr OR from:nraboy OR from:kukicado OR from:judy2k OR from:adriennetacke OR from:anaiyaraisin OR from:lauren_schaefer)",
"lang": "en",
"count": 100,
"tweet_mode": "extended"
}
});
console.log(`Next Results: ${tweetResponse.data.search_metadata.next_results}`)
const collection = mongoClient.db(process.env.MONGODB_DATABASE).collection(process.env.MONGODB_COLLECTION);
tweetResponse.data.statuses = tweetResponse.data.statuses.map(status => {
status._id = status.id;
return status;
});
const result = await collection.insertMany(tweetResponse.data.statuses);
console.log(result);
} finally {
await mongoClient.close();
}
})();
```
There’s quite a bit happening in the above code so we’re going to break it down. However, before we break it down, it's important to note that we’re using environment variables for a lot of the sensitive information like tokens, usernames, and passwords. For security reasons, you really shouldn’t hard-code these values.
Inside the asynchronous function, we attempt to establish a connection to MongoDB. If successful, no error is thrown, and we make our first HTTP request.
```javascript
const tokenResponse = await axios({
"method": "POST",
"url": "https://api.twitter.com/oauth2/token",
"headers": {
"Authorization": "Basic " + Buffer.from(`${process.env.API_KEY}:${process.env.API_SECRET}`).toString("base64"),
"Content-Type": "application/x-www-form-urlencoded"
},
"data": "grant_type=client_credentials"
});
```
Once again, in this first HTTP request, we are exchanging our API Key and API Secret with an access token to be used in future requests.
Using the access token from the response, we can make our second request to the tweets API endpoint:
```javascript
const tweetResponse = await axios({
"method": "GET",
"url": "https://api.twitter.com/1.1/search/tweets.json",
"headers": {
"Authorization": "Bearer " + tokenResponse.data.access_token
},
"params": {
"q": "mongodb -filter:retweets filter:safe",
"lang": "en",
"count": 100,
"tweet_mode": "extended"
}
});
```
The tweets API endpoint expects a Twitter-specific query and some other optional parameters like the language of the tweets or the expected result count. You can check the query language in the Twitter documentation.
At this point, we have an array of tweets to work with.
The next step is to pick the database and collection we plan to use and insert the array of tweets as documents. We can use a simple `insertMany` operation like this:
```javascript
const result = await collection.insertMany(tweetResponse.data.statuses);
```
The `insertMany` takes an array of objects, which we already have. We have an array of tweets, so each tweet will be inserted as a new document within the database.
If you have the MongoDB shell handy, you can validate the data that was inserted by executing the following:
```javascript
use("synonyms");
db.tweets.find({ });
```
Now that there’s data to work with, we can start to search it using slang synonyms.
## Creating Synonym Mappings in MongoDB
While we’re using a `tweets` collection for our actual searchable data, the synonym information needs to exist in a separate source collection in the same database.
You have two options for how you want your synonyms to be mapped–explicit or equivalent. You are not stuck with choosing just one type. You can have a combination of both explicit and equivalent as synonym documents in your collection. Choose the explicit format for when you need a set of terms to show up as a result of your inputted term, and choose equivalent if you want all terms to show up bidirectionally regardless of your queried term.
For example, the word "basic" means "regular" or "boring." If we decide on an explicit (one-way) mapping for "basic," we are telling Atlas Search that if someone searches for "basic," we want to return all documents that include the words "basic," "regular," and "boring." But! If we query the word "regular," we would not get any documents that include "basic" because "regular" is not explicitly mapped to "basic."
If we decide to map "basic" equivalently to "regular" and "boring," whenever we query any of these words, all the documents containing "basic," "regular," **and** "boring" will show up regardless of the initial queried word.
To learn more about explicit vs. equivalent synonym mappings, check out the official documentation.
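To make the difference concrete, here is a sketch of how the "basic" example above would look as an explicit mapping. The extra `input` array lists the only query terms that trigger the synonyms.

```javascript
db.slang.insertOne({
  "mappingType": "explicit",
  "input": ["basic"],
  "synonyms": ["basic", "regular", "boring"]
});
```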
For our demo, we decided to make all of our synonyms equivalent and formatted our synonym data like this:
```json
[
{
"mappingType": "equivalent",
"synonyms": ["basic", "regular", "boring"]
},
{
"mappingType": "equivalent",
"synonyms": ["bet", "agree", "concur"]
},
{
"mappingType": "equivalent",
"synonyms": ["yikes", "embarrassing", "bad", "awkward"]
},
{
"mappingType": "equivalent",
"synonyms": ["fam", "family", "friends"]
}
]
```
Each object in the above array will exist as a separate document within MongoDB. Each of these documents contains information for a particular set of synonyms.
To insert your synonym documents into your MongoDB collection, you can use the `insertMany()` function to put all your documents into the collection of your choice.
```javascript
use("synonyms");
db.slang.insertMany([
{
"mappingType": "equivalent",
"synonyms": ["basic", "regular", "boring"]
},
{
"mappingType": "equivalent",
"synonyms": ["bet", "agree", "concur"]
}
]);
```
The `use("synonyms");` line is to ensure you’re in the correct database before inserting your documents. We’re using the `slang` collection to store our synonyms and it doesn’t need to exist in our database prior to running our query.
## Create an Atlas Search Index that Leverages Synonyms
Once you have your collection of synonyms handy and uploaded, it's time to create your search index! A search index is crucial because it allows you to use full-text search to find the inputted queries in that collection.
We have included screenshots below of what your MongoDB Atlas Search user interface will look like so you can follow along:
The first step is to click on the "Search" tab, located on your cluster page in between the "Collections" and "Profiler" tabs.
*Find the Atlas Search tab*
The second step is to click on the "Create Index" button in the upper right hand corner, or if this is your first Index, it will be located in the middle of the page.
Once you reach this page, go ahead and click "Next" and continue on to the page where you will name your Index and set it all up!
Click "Next" and you’ll be able to create your very own search index!
Once you create your search index, you can go back into it and then edit your index definition using the JSON editor to include what you need. The index we wrote for this tutorial is below:
```json
{
"mappings": {
"dynamic": true
},
"synonyms":
{
"analyzer": "lucene.standard",
"name": "slang",
"source": {
"collection": "slang"
}
}
]
}
```
Let’s run through this!
```json
{
"mappings": {
"dynamic": true
},
```
You have the option of choosing between dynamic and static for your search index, and this can be up to your discretion. To find more information on the difference between dynamic and static mappings, check out the documentation.
```json
"synonyms":
{
"analyzer": "lucene.standard",
"name": "slang",
"source": {
"collection": "slang"
}
}
]
```
This section refers to the synonyms associated with the search index. In this example, we’re giving this synonym mapping a name of "slang," and we’re using the default index analyzer on the synonym data, which can be found in the slang collection.
## Searching with Synonyms with the MongoDB Aggregation Pipeline
Our next step is to put together the search query that will actually filter through your tweet collection and find the tweets you want using synonyms!
The code we used for this part is below:
```javascript
use("synonyms");
db.tweets.aggregate([
{
"$search": {
"index": "synsearch",
"text": {
"query": "throw",
"path": "full_text",
"synonyms": "slang"
}
}
}
]);
```
We want to search through our tweets and find the documents containing synonyms for our query "throw." This is the synonym document for "throw":
```json
{
"mappingType": "equivalent",
"synonyms": ["yeet", "throw", "agree"]
}
```
Remember to include the name of your search index from earlier (synsearch). Then, the query we’re specifying is "throw." This means we want to see tweets that include "yeet," "throw," and "agree" once we run this script.
The `path` represents the field we want to search within, and in this case, we are searching for "throw" only within the `full_text` field of the documents and no other field. Last but not least, we want to use synonyms found in the collection we have named "slang."
Based on this query, any matches found will include the entire document in the result-set. To better streamline this, we can use a `$project` aggregation stage to specify the fields we’re interested in. This transforms our query into the following aggregation pipeline:
```javascript
db.tweets.aggregate([
{
"$search": {
"index": "synsearch",
"text": {
"query": "throw",
"path": "full_text",
"synonyms": "slang"
}
}
},
{
"$project": {
"_id": 1,
"full_text": 1,
"username": "$user.screen_name"
}
}
]);
```
And these are our results!
```json
[
{
"_id": 1420084484922347500,
"full_text": "not to throw shade on SQL databases, but MongoDB SLAPS",
"username": "codeSTACKr"
},
{
"_id": 1420088203499884500,
"full_text": "Yeet all your data into a MongoDB collection and watch the magic happen! No cap, we are efficient 💪",
"username": "nraboy"
}
]
```
Just as we wanted, we have tweets that include the word "throw" and the word "yeet!"
## Conclusion
We’ve accomplished a **ton** in this tutorial, and we hope you’ve enjoyed following along. Now, you are set with the knowledge to load in data from external sources, create your list of explicit or equivalent synonyms and insert it into a collection, and write your own index search script. Synonyms can be useful in a multitude of ways, not just isolated to Gen-Z slang. From figuring out regional variations (e.g., soda = pop), to finding typos that cannot be easily caught with autocomplete, incorporating synonyms will help save you time and a thesaurus.
Using synonyms in Atlas Search will improve your app’s search functionality and will allow you to find the data you’re looking for, even when you can’t quite put your finger on it.
If you want to take a look at the code, queries, and indexes used in this blog post, check out the project on GitHub. If you want to learn more about synonyms in Atlas Search, check out the documentation.
If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to define your own custom synonyms for use with MongoDB Atlas Search in this example with features searching within slang found in Twitter messages.",
"contentType": "Tutorial"
} | Get Hyped: Synonyms in Atlas Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/subset-pattern | created | # Building with Patterns: The Subset Pattern
Some years ago, the first PCs had a whopping 256KB of RAM and dual 5.25"
floppy drives. No hard drives as they were incredibly expensive at the
time. These limitations resulted in having to physically swap floppy
disks due to a lack of memory when working with large (for the time)
amounts of data. If only there was a way back then to only bring into
memory the data I frequently used, as in a subset of the overall data.
Modern applications aren't immune from exhausting resources. MongoDB
keeps frequently accessed data, referred to as the working set,
in RAM. When the working set of data and indexes grows beyond the
physical RAM allotted, performance is reduced as disk accesses start to
occur and data rolls out of RAM.
How can we solve this? First, we could add more RAM to the server. That
only scales so much though. We can look at
sharding
our collection, but that comes with additional costs and complexities
that our application may not be ready for. Another option is to reduce
the size of our working set. This is where we can leverage the Subset
Pattern.
## The Subset Pattern
This pattern addresses the issues associated with a working set that
exceeds RAM, resulting in information being removed from memory. This is
frequently caused by large documents which have a lot of data that isn't
actually used by the application. What do I mean by that exactly?
Imagine an e-commerce site that has a list of reviews for a product.
When accessing that product's data it's quite possible that we'd only
need the most recent ten or so reviews. Pulling in the entirety of the
product data with **all** of the reviews could easily cause the working
set to expand.
Instead of storing all the reviews with the product, we can split the
collection into two collections. One collection would have the most
frequently used data, e.g. current reviews and the other collection
would have less frequently used data, e.g. old reviews, product history,
etc. We duplicate the part of a 1-N or N-N relationship that is needed
by the most frequently used side of the relationship.
In the **Product** collection, we'll only keep the ten most recent
reviews. This allows the working set to be reduced by only bringing in a
portion, or subset, of the overall data. The additional information,
reviews in this example, are stored in a separate **Reviews** collection
that can be accessed if the user wants to see additional reviews. When
considering where to split your data, the most used part of the document
should go into the "main" collection and the less frequently used data
into another. For our reviews, that split might be the number of reviews
visible on the product page.
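As a sketch of what that split might look like, a product document keeps
only its most recent reviews embedded, while every review also lives as
its own document in a separate collection. The field names below are
illustrative, not a prescribed schema.

```javascript
// products collection: only the most recent reviews are embedded
{
  _id: 123,
  name: "Super Widget",
  price: 9.99,
  reviews: [
    {
      review_id: 786,
      review_author: "Kristina",
      review_text: "This is indeed an amazing widget.",
      published_date: ISODate("2019-02-18")
    }
    // ...up to the ten most recent reviews
  ]
}

// reviews collection: one document per review, including the older ones
{
  review_id: 786,
  product_id: 123,
  review_author: "Kristina",
  review_text: "This is indeed an amazing widget.",
  published_date: ISODate("2019-02-18")
}
```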
## Sample Use Case
The Subset Pattern is very useful when we have a large portion of data
inside a document that is rarely needed. Product reviews, article
comments, actors in a movie are all examples of use cases for this
pattern. Whenever the document size is putting pressure on the size of
the working set and causing the working set to exceed the computer's RAM
capacities, the Subset Pattern is an option to consider.
## Conclusion
By using smaller documents with more frequently accessed data, we reduce
the overall size of the working set. This allows for shorter disk access
times for the most frequently used information that an application
needs. One tradeoff that we must make when using the Subset Pattern is
that we must manage the subset, and if we need to pull in older reviews
or all of the information, it requires additional trips to the database
to do so.
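For instance, pulling the next page of older reviews would be a separate
query against the **Reviews** collection. The sketch below follows the
field names used in the example above.

```javascript
// Fetch older reviews for a product, newest first, skipping the ten
// reviews that are already embedded in the product document.
db.reviews.find({ product_id: 123 })
  .sort({ published_date: -1 })
  .skip(10)
  .limit(10);
```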
The next post in this series will look at the features and benefits of
the Extended Reference Pattern.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Tutorial"
} | Building with Patterns: The Subset Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/lessons-learned-building-game-mongodb-unity | created | # Lessons Learned from Building a Game with MongoDB and Unity
Back in September 2020, my colleagues Nic
Raboy, Karen
Huaulme, and I decided to learn how to
build a game. At the time, Fall Guys was what
our team enjoyed playing together, so we set a goal for ourselves to
build a similar game. We called it Plummeting People! Every week, we'd
stream our process live on Twitch.
As you can imagine, building a game is not an easy task; add to that
fact that we were all mostly new to game development and you have a
whirlwind of lessons learned while learning in public. After spending
the last four months of 2020 building and streaming, I've compiled the
lessons learned while going through this process, starting from the
first stream.
>📺️ Watch the full series here (YouTube playlist)! And the repo is available too.
## Stream 1: Designing a Strategy to Develop a Game with Unity and MongoDB
As with most things in life, ambitious endeavors always start out with
some excitement, energy, and overall enthusiasm for the new thing ahead.
That's exactly how our game started! Nic, Karen, and I had a wonderful
stream setting the foundation for our game. What were we going to build?
What tools would we use? What were our short-term and long-term goals?
We laid it all out on a nice Jamboard. We even incorporated our chat's
ideas and suggestions!
>:youtube[]{vid=XAvy2BouZ1Q}
>
>Watch us plan our strategy for Plummeting People here!
### Lessons Learned
- It's always good to have a plan; it's even better to have a flexible
plan.
- Though we separated our ideas into logical sections on our Jamboard,
it would have been more helpful to have rough deadlines and a
solidified understanding of what our minimum viable product (MVP)
was going to be.
## Stream 2: Create a User Profile Store for a Game with MongoDB, Part 1
## Stream 3: Create a User Profile Store for a Game with MongoDB, Part 2
### Lessons Learned
- Sometimes, things will spill into an additional stream, as seen
here. In order to fully show how a user profile store could work, we
pushed the remaining portions into another stream, and that's OK!
## Stream 4: 2D Objects and 2D Physics
## Stream 5: Using Unity's Tilemap Creator
### Lessons Learned
- Teammates get sick every once in a while! As you saw in the last two
streams, I was out, and then Karen was out. Having an awesome team
to cover you makes it much easier to be consistent when it comes to
streaming.
- Tilemap editors are a pretty neat and non-intimidating way to begin
creating custom levels in Unity!
## Stream 6: Adding Obstacles and Other Physics to a 2D Game
### Lessons Learned
- As you may have noticed, we changed our streaming schedule from
weekly to every other week, and that helped immensely. With all of
the work we do as Developer Advocates, setting the ambitious goal of
streaming every week left us no time to focus and learn more about
game development!
- Sometimes, reworking the schedule to make sure there's **more**
breathing room for you is what's needed.
## Stream 7: Making Web Requests from Unity
## Stream 8: Programmatically Generating GameObjects in Unity
### Lessons Learned
- As you become comfortable with a new technology, it's OK to go back
and improve what you've built upon! In this case, we started out
with a bunch of game objects that were copied and pasted, but found
that the proper way was to 1) create prefabs and 2) programmatically
generate those game objects.
- Learning in public and showing these moments make for a more
realistic display of how we learn!
## Stream 9: Talking Some MongoDB, Playing Some Fall Guys
### Lessons Learned
- Sometimes, you gotta play video games for research! That's exactly
what we did while also taking a much needed break.
- It's also always fun to see the human side of things, and that part
of us plays a lot of video games!
## Stream 10: A Recap on What We've Built
### Lessons Learned
- After season one finished, it was rewarding to see what we had
accomplished so far! It sometimes takes a reflective episode like
this one to see that consistent habits do lead to something.
- Though we didn't get to everything we wanted to in our Jamboard, we
learned way more about game development than ever before.
- We also learned how to improve for our next season of game
development streams. One of those factors is focusing on one full
game a month! You can catch the first one
here, where Nic and I build an
infinite runner game in Unity.
## Summary
I hope this article has given you some insight into learning in public,
what it takes to stream your learning process, and how to continue
improving!
>📺️ Don't forget to watch the full season here (YouTube playlist)! And poke around in the code by cloning our repo.
If you're interested in learning more about game development, check out
the following resources:
- Creating a Multiplayer Drawing Game with Phaser and
MongoDB
- Build an Infinite Runner Game with Unity and the Realm Unity
SDK
- Developing a Side-Scrolling Platformer Game with Unity and MongoDB
Realm
If you have any questions about any of our episodes from this season, I
encourage you to join the MongoDB
Community. It's a great place to ask
questions! And if you tag me `@adriennetacke`, I'll be able to see your
questions.
Lastly, be sure to follow us on Twitch
so you never miss a stream! Nic and I will be doing our next game dev
stream on March 26, so see you there!
| md | {
"tags": [
"Realm",
"Unity"
],
"pageDescription": "After learning how to build a game in public, see what lessons Adrienne learned while building a game with MongoDB and Unity",
"contentType": "Article"
} | Lessons Learned from Building a Game with MongoDB and Unity | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/node-connect-mongodb | created | # Connect to a MongoDB Database Using Node.js
Use Node.js? Want to learn MongoDB? This is the blog series for you!
In this Quick Start series, I'll walk you through the basics of how to get started using MongoDB with Node.js. In today's post, we'll work through connecting to a MongoDB database from a Node.js script, retrieving a list of databases, and printing the results to your console.
>This post uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.
>
>Click here to see a previous version of this post that uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.

>Prefer to learn by video? I've got ya covered. Check out the video below that covers how to get connected as well as how to perform the CRUD operations.
>
>:youtube[]{vid=fbYExfeFsI0}
## Set Up
Before we begin, we need to ensure you've completed a few prerequisite steps.
### Install Node.js
First, make sure you have a supported version of Node.js installed. The current version of MongoDB Node.js Driver requires Node 4.x or greater. For these examples, I've used Node.js 14.15.4. See the MongoDB Compatibility docs for more information on which version of Node.js is required for each version of the Node.js driver.
### Install the MongoDB Node.js Driver
The MongoDB Node.js Driver allows you to easily interact with MongoDB databases from within Node.js applications. You'll need the driver in order to connect to your database and execute the queries described in this Quick Start series.
If you don't have the MongoDB Node.js Driver installed, you can install it with the following command.
``` bash
npm install mongodb
```
At the time of writing, this installed version 3.6.4 of the driver. Running `npm list mongodb` will display the currently installed driver version number. For more details on the driver and installation, see the official documentation.
### Create a Free MongoDB Atlas Cluster and Load the Sample Data
Next, you'll need a MongoDB database. The easiest way to get started with MongoDB is to use Atlas, MongoDB's fully-managed database-as-a-service.
Head over to Atlas and create a new cluster in the free tier. At a high level, a cluster is a set of nodes where copies of your database will be stored. Once your tier is created, load the sample data. If you're not familiar with how to create a new cluster and load the sample data, check out this video tutorial from MongoDB Developer Advocate Maxime Beugnet.
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
### Get Your Cluster's Connection Info
The final step is to prep your cluster for connection.
In Atlas, navigate to your cluster and click **CONNECT**. The Cluster Connection Wizard will appear.
The Wizard will prompt you to add your current IP address to the IP Access List and create a MongoDB user if you haven't already done so. Be sure to note the username and password you use for the new MongoDB user as you'll need them in a later step.
Next, the Wizard will prompt you to choose a connection method. Select **Connect Your Application**. When the Wizard prompts you to select your driver version, select **Node.js** and **3.6 or later**. Copy the provided connection string.
For more details on how to access the Connection Wizard and complete the steps described above, see the official documentation.
## Connect to Your Database from a Node.js Application
Now that everything is set up, it's time to code! Let's write a Node.js script that connects to your database and lists the databases in your cluster.
### Import MongoClient
The MongoDB module exports `MongoClient`, and that's what we'll use to connect to a MongoDB database. We can use an instance of MongoClient to connect to a cluster, access the database in that cluster, and close the connection to that cluster.
``` js
const { MongoClient } = require('mongodb');
```
### Create our Main Function
Let's create an asynchronous function named `main()` where we will connect to our MongoDB cluster, call functions that query our database, and disconnect from our cluster.
``` js
async function main() {
// we'll add code here soon
}
```
The first thing we need to do inside of `main()` is create a constant for our connection URI. The connection URI is the connection string you copied in Atlas in the previous section. When you paste the connection string, don't forget to update `<username>` and `<password>` to be the credentials for the user you created in the previous section. The connection string includes a `<dbname>` placeholder. For these examples, we'll be using the `sample_airbnb` database, so replace `<dbname>` with `sample_airbnb`.
**Note**: The username and password you provide in the connection string are NOT the same as your Atlas credentials.
``` js
/**
 * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
*/
const uri = "mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority";
```
Now that we have our URI, we can create an instance of MongoClient.
``` js
const client = new MongoClient(uri);
```
**Note**: When you run this code, you may see DeprecationWarnings around the URL string `parser` and the Server Discover and Monitoring engine. If you see these warnings, you can remove them by passing options to the MongoClient. For example, you could instantiate MongoClient by calling `new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true })`. See the Node.js MongoDB Driver API documentation for more information on these options.
Now we're ready to use MongoClient to connect to our cluster. `client.connect()` will return a promise. We will use the await keyword when we call `client.connect()` to indicate that we should block further execution until that operation has completed.
``` js
await client.connect();
```
We can now interact with our database. Let's build a function that prints the names of the databases in this cluster. It's often useful to contain this logic in well named functions in order to improve the readability of your codebase. Throughout this series, we'll create new functions similar to the function we're creating here as we learn how to write different types of queries. For now, let's call a function named `listDatabases()`.
``` js
await listDatabases(client);
```
Let's wrap our calls to functions that interact with the database in a `try/catch` statement so that we handle any unexpected errors.
``` js
try {
await client.connect();
await listDatabases(client);
} catch (e) {
console.error(e);
}
```
We want to be sure we close the connection to our cluster, so we'll end our `try/catch` with a finally statement.
``` js
finally {
await client.close();
}
```
Once we have our `main()` function written, we need to call it. Let's send the errors to the console.
``` js
main().catch(console.error);
```
Putting it all together, our `main()` function and our call to it will look something like the following.
``` js
async function main(){
/**
 * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
*/
const uri = "mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority";
const client = new MongoClient(uri);
try {
// Connect to the MongoDB cluster
await client.connect();
// Make the appropriate DB calls
await listDatabases(client);
} catch (e) {
console.error(e);
} finally {
await client.close();
}
}
main().catch(console.error);
```
### List the Databases in Our Cluster
In the previous section, we referenced the `listDatabases()` function. Let's implement it!
This function will retrieve a list of databases in our cluster and print the results in the console.
``` js
async function listDatabases(client){
    const databasesList = await client.db().admin().listDatabases();
console.log("Databases:");
databasesList.databases.forEach(db => console.log(` - ${db.name}`));
};
```
### Save Your File
You've been implementing a lot of code. Save your changes, and name your file something like `connection.js`. To see a copy of the complete file, visit the nodejs-quickstart GitHub repo.
### Execute Your Node.js Script
Now you're ready to test your code! Execute your script by running a command like the following in your terminal: `node connection.js`
You will see output like the following:
``` js
Databases:
- sample_airbnb
- sample_geospatial
- sample_mflix
- sample_supplies
- sample_training
- sample_weatherdata
- admin
- local
```
## What's Next?
Today, you were able to connect to a MongoDB database from a Node.js script, retrieve a list of databases in your cluster, and view the results in your console. Nice!
Now that you're connected to your database, continue on to the next post in this series where you'll learn to execute each of the CRUD (create, read, update, and delete) operations.
In the meantime, check out the following resources:
- Official MongoDB Documentation on the MongoDB Node.js Driver
- MongoDB University Free Courses: MongoDB for Javascript Developers
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
| md | {
"tags": [
"JavaScript",
"Node.js"
],
"pageDescription": "Node.js and MongoDB is a powerful pairing and in this Quick Start series we show you how.",
"contentType": "Quickstart"
} | Connect to a MongoDB Database Using Node.js | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/preparing-tsdata-with-densify-and-fill | created | # Preparing Time Series data for Analysis Tools with $densify and $fill
## Densification and Gap Filling of Time Series Data
Time series data refers to recordings of continuous values at specific points in time. This data is then examined, not as individual data points, but as how a value either changes over time or correlates with other values in the same time period.
Normally, data points would have a timestamp, one or more metadata values to identify what the source of the data is and one or more values known as measurements. For example, a stock ticker would have a time, stock symbol (metadata), and price (measurement), whereas aircraft tracking data might have time, tail number, and multiple measurement values such as speed, heading, altitude, and rate of climb. When we record this data in MongoDB, we may also include additional category metadata to assist in the analysis. For example, in the case of flight tracking, we may store the tail number as a unique identifier but also the aircraft type and owner as additional metadata, allowing us to analyze data based on broader categories.
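If you are storing this kind of data in MongoDB 5.0 or later, a natural home for it is a time series collection. The sketch below uses the flight-tracking example; the collection name, field names, and values are illustrative placeholders.

```javascript
// Create a time series collection keyed on a timestamp, with per-aircraft metadata.
db.createCollection("flights", {
  timeseries: {
    timeField: "timestamp",
    metaField: "metadata",
    granularity: "seconds"
  }
})

// One reading: a timestamp, identifying metadata, and several measurements.
db.flights.insertOne({
  timestamp: ISODate("2022-03-23T17:55:00Z"),
  metadata: { tailNumber: "N12345", aircraftType: "B737", owner: "Example Air" },
  speed: 451,
  heading: 270,
  altitude: 36000,
  rateOfClimb: 0
})
```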
Analysis of time series data is usually either to identify a previously unknown correlation between data points or to try and predict patterns and thus, future readings. There are many tools and techniques, including machine learning and Fourier analysis, applied to examine the changes in the stream of data and predict what future readings might be. In the world of high finance, entire industries and careers have been built around trying to say what a stock price will do next.
Some of these analytic techniques require the data to be in a specific form, having no missing readings and regularly spaced time periods, or both, but data is not always available in that form.
Some data is regularly spaced; an industrial process may record sensor readings at precise intervals. There are, however, reasons for data to be missing. This could be software failure or for external social reasons. Imagine we are examining the number of customers at stations on the London underground. We could have a daily count that we use to predict usage in the future, but a strike or other external event may cause that traffic to drop to zero on given days. From an analytical perspective, we want to replace that missing count or count of zero with a more typical value.
Some data sets are inherently irregular. When measuring things in the real world, it may not be possible to take readings on a regular cadence because they are discrete events that happen irregularly, like recording tornados or detected neutrinos. Other real-world data may be a continuous event but we are only able to observe it at random times. Imagine we are tracking a pod of whales across the ocean. They are somewhere at all times but we can only record them when we see them, or when a tracking device is within range of a receiver.
Having given these examples, it’s easier to explain the actual functionality available for densification and gap-filling using a generic concept of time and readings rather than specific examples, so we will do that below. These aggregation stages work on both time series and regular collections.
## Add missing data points with $densify
The aggregation stage `$densify` added in MongoDB 5.2 allows you to create missing documents in the series either by filling in a document where one is not present in a regularly spaced set or by inserting documents at regularly spaced intervals between the existing data points in an irregularly spaced set.
Imagine we have a data set where we get a reading once a minute, but sometimes, we are missing readings. We can create data like this in the **mongosh** shell spanning the previous 20 minutes using the following script. This starts with creating a record with the current time and then subtracts 60000 milliseconds from it until we have 20 documents. It also fails to insert any document where the iterator divides evenly by 7 to create missing records.
```
db=db.getSiblingDB('tsdemo')
db.data.drop()
let timestamp =new Date()
for(let reading=0;reading<20;reading++) {
timestamp = new Date(timestamp.getTime() - 60000)
if(reading%7) db.data.insertOne({reading,timestamp})
}
```
Whilst we can view these as text using db.data.find() , it’s better if we can visualize them. Ideally, we would use **MongoDB Charts** for this. However, these functions are not yet all available to us in Atlas and Charts with the free tier, so I’m using a local, pre-release installation of **MongoDB 5.3** and the mongosh shell writing out a graph in HTML. We can define a graphing function by pasting the following code into mongosh or saving it in a file and loading it with the `load()` command in mongosh. *Note that you need to modify the word **open** in the script below as per the comments to match the command your OS uses to open an HTML file in a browser.*
```
function graphTime(data)
{
let fs=require("fs")
let exec = require('child_process').exec;
let content = `
`
try {
let rval = fs.writeFileSync('graph.html', content)
//Linux use xdg-open not open
//Windows use start not open
//Mac uses open
rval = exec('open graph.html',null); //←---- ADJUST FOR OS
} catch (err) {
console.error(err)
}
}
```
Now we can view the sample data we added by running a query and passing it to the function.
```
let tsdata = db.data.find({},{_id:0,y:"$reading",x:"$timestamp"}).toArray()
graphTime(tsdata)
```
And we can see our data points plotted like so
In this graph, the thin vertical grid lines show minutes and the blue dots are our data points. Note that the blue dots are evenly spaced horizontally although they do not align with exact minutes. A reading that is taken every minute doesn’t require that it’s taken exactly at 0 seconds of that minute. We can see we’re missing a couple of points.
We can add these points when reading the data using `$densify`. Although we will not initially have a value for them, we can at least create placeholder documents with the correct timestamp.
To do this, we read the data using a two stage aggregation pipeline as below, specifying the field we need to add, the magnitude of the time between readings, and whether we wish to apply it to some or all of the data points. We can also have separate scales based on data categories adding missing data points for each distinct airplane or sensor, for example. In our case, we will apply to all the data as we are reading just one metric in our simple example.
```
let densify = { $densify : { field: "timestamp",
range: { step: 1, unit: "minute", bounds: "full" }}}
let projection = {$project: {_id:0, y: {$ifNull:["$reading",0]},x:"$timestamp"}}
let tsdata = db.data.aggregate([densify,projection]).toArray()
graphTime(tsdata)
```
This pipeline adds new documents with the required value of timestamp wherever one is missing. It doesn’t add any other fields to these documents, so they wouldn’t appear on our graph. The created documents look like this, with no *reading* or *_id* field.
```
{
timestamp : ISODate("2022-03-23T17:55:32.485Z")
}
```
To fix this, I have followed that up with a projection that sets the reading to 0 if it does not exist using `$ifNull`. This is called zero filling and gives output like so.
To be useful, we almost certainly need to get a better estimate than zero for these missing readings—we can do this using `$fill`.
## Using $fill to approximate missing readings
The aggregation stage `$fill` was added in MongoDB 5.3 and can replace null or missing readings in documents by estimating them based on the non null values either side (ignoring nulls allows it to account for multiple missing values in a row). We still need to use `$densify` to add the missing documents in the first place but once we have them, rather than add a zero reading using `$set` or `$project`, we can use `$fill` to calculate more meaningful values.
To use `$fill`, you need to be able to sort the data in a meaningful fashion, as missing readings will be derived from the readings that fall before and after them. In many cases, you will sort by time, although other interval data can be used.
We can compute missing values like so, specifying the field to order by, the field we want to add if it's missing, and the method—in this case, `locf`, which repeats the same value as the previous data point.
```
let densify = { $densify : { field: "timestamp",
range: { step: 1, unit: "minute", bounds : "full" }}}
let fill = { $fill : { sortBy: { timestamp:1},
output: { reading : { method: "locf"}}}}
let projection = {$project: {_id:0,y:"$reading" ,x:"$timestamp"}}
let tsdata = db.data.aggregate([densify,fill,projection]).toArray()
graphTime(tsdata)
```
This creates a set of values like this.
In this case, though, those added points look wrong. Simply choosing to repeat the prior reading isn't ideal here. What we can do instead is apply a linear interpolation, drawing an imaginary line between the points before and after the gap and taking the point where our timestamp intersects that line. For this, we change `locf` to `linear` in our `$fill`.
```
let densify = { $densify : { field: "timestamp",
range : { step: 1, unit: "minute", bounds : "full" }}}
let fill = { $fill : { sortBy: { timestamp:1},
output: { reading : { method: "linear"}}}}
let projection = {$project: {_id:0,y:"$reading" ,x:"$timestamp"}}
let tsdata = db.data.aggregate([densify,fill,projection]).toArray()
graphTime(tsdata)
```
Now we get the following graph, which, in this case, seems much more appropriate.
We can see how to add missing values in regularly spaced data but how do we convert irregularly spaced data to regularly spaced, if that is what our analysis requires?
## Converting uneven to evenly spaced data with $densify and $fill
Imagine we have a data set where we get a reading approximately once a minute, but unevenly spaced. Sometimes, the time between readings is 20 seconds, and sometimes it's 80 seconds. On average, it's once a minute, but the algorithm we want to apply to it needs evenly spaced data. This time, we will create aperiodic data like this in the **mongosh** shell spanning the previous 20 minutes, with some variation in the timing and a steadily decreasing reading.
```
db=db.getSiblingDB('tsdemo')
db.data.drop()
let timestamp =new Date()
let start = timestamp;
for(let i=0;i<20;i++) {
timestamp = new Date(timestamp.getTime() - Math.random()*60000 - 20000)
let reading = (start-timestamp)/60000
db.data.insertOne({reading,timestamp})
}
```
When we plot this, we can see that the points are no longer evenly spaced. We require periodic data for our downstream analysis work, though, so how can we fix that? We cannot simply quantise the times in the existing readings. We may not even have one for each minute, and the values would be inaccurate for the time.
```
let tsdata = db.data.find({},{_id:0,y:"$reading",x:"$timestamp"}).toArray()
graphTime(tsdata)
```
We can solve this by using $densify to add the points we require, $fill to compute their values based on the nearest value from our original set, and then remove the original records from the set. We need to add an extra field to the originals before densification to identify them. We can do that with $set. Note that this is all inside the aggregation pipeline. We aren’t editing records in the database, so there is no significant cost associated with this.
```
let flagOriginal = {$set: {original:true}}
let densify = { $densify: { field: "timestamp",
range: { step: 1, unit: "minute", bounds : "full" }}}
let fill = { $fill : { sortBy: { timestamp:1},
output: { reading : { method: "linear"} }}}
let projection = {$project: {_id:0,y:"$reading" ,x:"$timestamp"}}
let tsdata = db.data.aggregate([flagOriginal, densify,fill,projection]).toArray()
graphTime(tsdata)
```
We now have approximately double the number of data points, original and generated, but we can use $match to remove those we flagged as existing pre-densification.
```
let flagOriginal = {$set : {original:true}}
let densify = { $densify : { field: "timestamp",
range: { step: 1, unit: "minute", bounds : "full" }}}
let fill = { $fill : { sortBy: { timestamp:1},
output: { reading : { method: "linear"} }}}
let removeOriginal = { $match : { original : {$ne:true}}}
let projection = {$project: {_id:0,y:"$reading" ,x:"$timestamp"}}
let tsdata = db.data.aggregate([flagOriginal, densify, fill,
removeOriginal, projection]).toArray()
graphTime(tsdata)
```
Finally, we have evenly spaced data with values calculated based on the data points we did have. We would have filled in any missing values or large gaps in the process.
## Conclusions
The new stages `$densify` and `$fill` may not initially seem very exciting, but they are key tools for working with time series data. Without `$densify`, there is no way to meaningfully identify and add missing records in a time series. The `$fill` stage greatly simplifies computing missing values compared with using `$setWindowFields` and writing an expression that derives the value from the `$linearFill` and `$locf` window operators, or by computing a moving average.
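For comparison, a rough sketch of the `$setWindowFields` approach for the linear case might look like the following, assuming MongoDB 5.3 or later where the `$linearFill` window operator is available.
```
let densify = { $densify: { field: "timestamp",
range: { step: 1, unit: "minute", bounds: "full" }}}
// $setWindowFields computes the interpolated reading with the $linearFill window operator,
// which is roughly what { $fill: { output: { reading: { method: "linear" }}}} does for us.
let fillEquivalent = { $setWindowFields: { sortBy: { timestamp: 1 },
output: { reading: { $linearFill: "$reading" }}}}
let projection = {$project: {_id:0, y:"$reading", x:"$timestamp"}}
let tsdata = db.data.aggregate([densify, fillEquivalent, projection]).toArray()
```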
This then opens up the possibility of using a wide range of time series analysis algorithms in Python, R, Spark, and other analytic environments.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn through examples and graphs how the aggregation stages $densify and $fill allow you to fill gaps in time series data and convert irregular to regular time spacing. ",
"contentType": "Tutorial"
} | Preparing Time Series data for Analysis Tools with $densify and $fill | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/developing-side-scrolling-platformer-game-unity-mongodb-realm | created | # Developing a Side-Scrolling Platformer Game with Unity and MongoDB Realm
I've been a gamer since the 1990s, so 2D side-scrolling platformer games like Super Mario Bros. hold a certain place in my heart. Today, 2D games are still being created, but with the benefit of having connectivity to the internet, whether that be to store your player state information, to access new levels, or something else.
Every year, MongoDB holds an internal company-wide hackathon known as Skunkworks. During Skunkworks, teams are created and using our skills and imagination, we create something to make MongoDB better or something that uses MongoDB in a neat way. For Skunkworks 2020, I (Nic Raboy) teamed up with Barry O'Neill to create a side-scrolling platformer game with Unity that queries and sends data between MongoDB and the game. Internally, this project was known as The Untitled Leafy Game.
In this tutorial, we're going to see what went into creating a game like The Untitled Leafy Game using Unity as the game development framework and MongoDB Realm for data storage and back end.
To get a better idea of what we're going to accomplish, take a look at the following animated image:
The idea behind the game is that you are a MongoDB leaf character and you traverse through the worlds to obtain your trophy. As you traverse through the worlds, you can accumulate points by answering questions about MongoDB. These questions are obtained through a remote HTTP request and the answers are validated through another HTTP request.
## The Requirements
There are a few requirements that must be met prior to starting this tutorial:
- You must be using MongoDB Atlas and MongoDB Realm.
- You must be using Unity 2020.1.8f1 or more recent.
- At least some familiarity with Node.js (Realm) and C# (Unity).
- Your own game graphic assets.
For this tutorial, MongoDB Atlas will be used to store our data and MongoDB Realm will act as our back end that the game communicates with, rather than trying to access the data directly from Atlas.
Many of the assets in The Untitled Leafy Game were obtained through the Unity Asset Store. For this reason, I won't be able to share them raw in this tutorial. However, they are available for free with a Unity account.
You can follow along with this tutorial using the source material on GitHub. We won't be doing a step by step reproduction, but we'll be exploring important topics, all of which can be further seen in the project source on GitHub.
## Creating the Game Back End with MongoDB Atlas and MongoDB Realm
It might seem that MongoDB plays a significant role in this game, but the amount of code to make everything work is actually quite small. This is great because as a game developer, the last thing you want is to worry about fiddling with your back end and database.
It's important to understand the data model that will represent questions in the game. For this game, we're going to use the following model:
``` json
{
"_id": ObjectId("5f973c8c083f84fa6151ca54"),
"question_text": "MongoDB is Awesome!",
"problem_id": "abc123",
"subject_area": "general",
"answer": true
}
```
The `question_text` field will be displayed within the game. We can specify which question should be placed where in the game through the `problem_id` field because it will allow us to filter for the document we want. When the player selects an answer, it will be sent back to MongoDB Realm and used as a filter for the `answer` field. The `subject_area` field might be valuable when creating reports at a later date.
In MongoDB Atlas, the configuration might look like the following:
In the above example, documents with the proposed data model are stored in the `questions` collection of the `game` database. How you choose to name your collections or even the fields of your documents is up to you.
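If you would rather seed a sample question from the **mongosh** shell than through the Atlas UI, a quick sketch using the database and collection names above could look like this; the values mirror the earlier example document.
``` javascript
// Insert one example question into game.questions; _id is generated automatically.
const gameDb = db.getSiblingDB("game");
gameDb.questions.insertOne({
    question_text: "MongoDB is Awesome!",
    problem_id: "abc123",
    subject_area: "general",
    answer: true
});
```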
Because we'll be using MongoDB Realm rather than a self-hosted application, we need to create webhook functions to act as our back end. Create a Realm application that uses the MongoDB Atlas cluster with our data. The naming of the application does not really matter as long as it makes sense to you.
Within the MongoDB Realm dashboard, you're going to want to click on **3rd Party Services** to create new webhook functions.
Add a new **HTTP** service and give it a name of your choosing.
We'll have the option to create new webhooks and add associated function code to them. The idea is to create two webhooks, a `get_question` for retrieving question information based on an id value and a `checkanswer` for validating a sent answer with an id value.
The `get_question`, which should accept GET method requests, will have the following JavaScript code:
``` javascript
exports = async function (payload, response) {
const { problem_id } = payload.query;
const results = await context.services
.get("mongodb-atlas")
.db("game")
.collection("questions")
.findOne({ "problem_id": problem_id }, { problem_id : 1, question_text : 1 })
response.setBody(JSON.stringify(results));
}
```
In the above code, if the function is executed, the query parameters are stored. We are expecting a `problem_id` as a query parameter in any given request. Using that information, we can do a `findOne` with the `problem_id` as the filter. Next, we can specify that we only want the `problem_id` and the `question_text` fields returned for any matched document.
The `checkanswer` should accept POST requests and will have the following JavaScript code:
``` javascript
exports = async function (payload, response) {
const query = payload.body.text();
const filter = EJSON.parse(query);
const results = await context.services
.get("mongodb-atlas")
.db("game")
.collection("questions")
.findOne({ problem_id: filter.problem_id, answer: filter.answer }, { problem_id : 1, answer: 1 });
response.setBody(results ? JSON.stringify(results) : "{}");
}
```
The logic between the two functions is quite similar. The difference is that this time, we are expecting a payload to be used as the filter. We are also filtering on both the `problem_id` as well as the `answer` rather than just the `problem_id` field.
Assuming you have questions in your database and you've deployed your webhook functions, you should be able to send HTTP requests to them for testing. As we progress through the tutorial, interacting with the questions will be done through the Unity produced game.
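For example, a quick way to smoke-test the `get_question` webhook from Node.js 18+ might look like the following sketch; the URL below is a placeholder, so substitute the webhook URL that Realm generates for your app.
``` javascript
// Hypothetical smoke test for the get_question webhook; replace <your-app-id> with your own.
const url = "https://webhooks.mongodb-realm.com/api/client/v2.0/app/<your-app-id>/service/webhooks/incoming_webhook/get_question?problem_id=abc123";

fetch(url)
    .then(response => response.json())
    .then(question => console.log(question)) // should print the matching question document
    .catch(error => console.error(error));
```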
## Designing a Game Scene with Game Objects, Tile Pallets, and Sprites
With the back end in place, we can start focusing on the game itself. To set expectations, we're going to be using graphic assets from the Unity Asset Store, as previously mentioned in the tutorial. In particular, we're going to be using the Pixel Adventure 1 asset pack which can be obtained for free. This is in combination with some MongoDB custom graphics.
We're going to assume this is not your first time dabbling with Unity. This means that some of the topics around creating a scene won't be explored from a beginner perspective. This will save us some time and energy and let us get to the point.
An example of things that won't be explored include:
- Using the Palette Editor to create a world.
- Importing media and animating sprites.
If you want to catch up on some beginner Unity with MongoDB content, check out the series that I did with Adrienne Tacke.
The game will be composed of worlds, also referred to as levels. Each world will have a camera, a player, some question boxes, and a multi-layered tilemap. Take the following image for example:
Within any given world, we have a **GameController** game object. The role of this object is to orchestrate the changing of scenes, something we'll explore later in the tutorial. The **Camera** game object is responsible for following the player position to keep everything within view.
The **Grid** is the parent game object to each layer of the tilemap; our worlds will be composed of three layers. The **Ground** layer will have basic colliders to prevent the player from moving through them, likewise with the **Boundaries** layer. The **Traps** layer will allow for collision detection, but won't actually apply physics. We have separate layers because we want to know when the player interacts with any of them. These layers are composed of tiles from the **Pixel Adventure 1** set, and they are the graphical component of our worlds.
To show text on the screen, we'll need to use a **Canvas** parent game object with a child game object with the **Text** component. This child game object is represented by the **Score** game object. The **Canvas** comes in combination with the **EventSystem** which we will never directly engage with.
The **Trophy** game object is nothing more than a sprite with an image of a trophy. We will have collision related components attached, but more on that in a moment.
Finally, we have the **Questions** and **QuestionModal** game objects, both of which contain child game objects. The **Questions** group has any number of sprites to represent question boxes in the game. They have the appropriate collision components and when triggered, will interact with the game objects within the **QuestionModal** group. Think of it this way. The player interacts with the question box. A modal or popup displays with the text, possible answers, and a submit button. Each question box will have scripts where you can define which document in the database is associated with them.
In summary, any given world scene will look like this:
- GameController
- Camera
- Grid
- Ground
- Boundaries
- Traps
- Player
- QuestionModal
- ModalBackground
- QuestionText
- Dropdown
- SubmitButton
- Questions
- QuestionOne
- QuestionTwo
- Trophy
- Canvas
- Score
- EventSystem
The way you design your game may differ from the above, but it worked for the example that Barry and I did for the MongoDB Skunkworks project.
We know that every item in the project hierarchy is a game object. The components we add to them define what the game object actually does. Let's figure out what we need to add to make this game work.
The **Player** game object should have the following components:
- Sprite Renderer
- Rigidbody 2D
- Box Collider 2D
- Animator
- Script
The **Sprite Renderer** will show the graphic of our choosing for this particular game object. The **Rigidbody 2D** is the physics applied to the sprite, so how much gravity should be applied and similar. The **Box Collider 2D** represents the region around the image where collisions should be detected. The **Animator** represents the animations and flow that will be assigned to the game object. The **Script**, which in this example we'll call **Player**, will control how this sprite is interacted with. We'll get to the script later in the tutorial, but really what matters is the physics and colliders applied.
The **Trophy** game object and each of the question box game objects will have the same components, with a few exceptions: the question boxes' rigidbody will be static so it doesn't respond to gravity and similar physics events, and the **Trophy** won't have a rigidbody at all. They will also not be animated.
## Interacting with the Game Player and the Environment
At this point, you should have an understanding of the game objects and components that should be a part of your game world scenes. What we want to do is make the game interactive by adding to the script for the player.
The **Player** game object should have a script associated to it. Mine is **Player.cs**, but yours could be different. Within this script, add the following:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Player : MonoBehaviour {
private Rigidbody2D rb2d;
private Animator animator;
private bool isGrounded;
[Range(1, 10)]
public float speed;
[Range(1, 10)]
public float jumpVelocity;
[Range(1, 5)]
public float fallingMultiplier;
public Score score;
void Start() {
rb2d = GetComponent<Rigidbody2D>();
animator = GetComponent<Animator>();
isGrounded = true;
}
void FixedUpdate() {
float horizontalMovement = Input.GetAxis("Horizontal");
if(Input.GetKey(KeyCode.Space) && isGrounded == true) {
rb2d.velocity += Vector2.up * jumpVelocity;
isGrounded = false;
}
if (rb2d.velocity.y < 0) {
rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;
}
else if (rb2d.velocity.y > 0 && !Input.GetKey(KeyCode.Space)) {
rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;
}
rb2d.velocity = new Vector2(horizontalMovement * speed, rb2d.velocity.y);
if(rb2d.position.y < -10.0f) {
rb2d.position = new Vector2(0.0f, 1.0f);
score.Reset();
}
}
private void OnCollisionEnter2D(Collision2D collision) {
if (collision.collider.name == "Ground" || collision.collider.name == "Platforms") {
isGrounded = true;
}
if(collision.collider.name == "Traps") {
rb2d.position = new Vector2(0.0f, 1.0f);
score.Reset();
}
}
void OnTriggerEnter2D(Collider2D collider) {
if (collider.name == "Trophy") {
Destroy(collider.gameObject);
score.BankScore();
GameController.NextLevel();
}
}
}
```
The above code could be a lot to take in, so we're going to break it down starting with the variables.
``` csharp
private Rigidbody2D rb2d;
private Animator animator;
private bool isGrounded;
[Range(1, 10)]
public float speed;
[Range(1, 10)]
public float jumpVelocity;
[Range(1, 5)]
public float fallingMultiplier;
public Score score;
```
The `rb2d` variable will be used to obtain the currently added **Rigidbody 2D** component. Likewise, the `animator` variable will obtain the **Animator** component. We'll use `isGrounded` to let us know if the player is currently jumping so that way, we can't jump infinitely.
The public variables such as `speed`, `jumpVelocity`, and `fallingMultiplier` have to do with our physics. We want to define the movement speed, how fast a jump should happen, and how fast the player should fall when finishing a jump. Finally, the `score` variable will be used to link the **Score** game object to our player script. This will allow us to interact with the text in our script.
``` csharp
void Start() {
rb2d = GetComponent<Rigidbody2D>();
animator = GetComponent<Animator>();
isGrounded = true;
}
```
On the first rendered frame, we obtain each of the components and default our `isGrounded` variable.
During the `FixedUpdate` method, which happens continuously, we can check for keyboard interaction:
``` csharp
float horizontalMovement = Input.GetAxis("Horizontal");
if(Input.GetKey(KeyCode.Space) && isGrounded == true) {
rb2d.velocity += Vector2.up * jumpVelocity;
isGrounded = false;
}
```
In the above code, we are checking to see if the horizontal keys are pressed. These can be defined within Unity, but default as the **a** and **d** keys or the left and right arrow keys. If the space key is pressed and the player is currently on the ground, the `jumpVelocity` is applied to the rigidbody. This will cause the player to start moving up.
To remove the feeling of the player jumping on the moon, we can make use of the `fallingMultiplier` variable:
``` csharp
if (rb2d.velocity.y < 0) {
rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;
}
else if (rb2d.velocity.y > 0 && !Input.GetKey(KeyCode.Space)) {
rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;
}
```
We have an if / else if to handle both long jumps and short jumps. If the velocity is less than zero, you are falling and the multiplier should be used. If you're currently mid jump and continuing to jump, but you let go of the space key, then the fall should start to happen rather than continuing to jump until the velocity reverses.
Now if you happen to fall off the screen, we need a way to reset.
``` csharp
if(rb2d.position.y < -10.0f) {
rb2d.position = new Vector2(0.0f, 1.0f);
score.Reset();
}
```
If we fall off the screen, the `Reset` function on `score`, which we'll see shortly, will reset back to zero and the position of the player will be reset to the beginning of the level.
We can finish the movement of our player in the `FixedUpdate` method with the following:
``` csharp
rb2d.velocity = new Vector2(horizontalMovement * speed, rb2d.velocity.y);
```
The above line takes the movement direction based on the input key, multiplies it by our defined speed, and keeps the current velocity in the y-axis. We keep the current velocity so we can move horizontally if we are jumping or not jumping.
This brings us to the `OnCollisionEnter2D` and `OnTriggerEnter2D` methods.
We need to know when we've ended a jump and when we've stumbled upon a trap. We can't just say a jump is over when the y-position falls below a certain value because the player may have fallen off a cliff.
Take the `OnCollisionEnter2D` method:
``` csharp
private void OnCollisionEnter2D(Collision2D collision) {
if (collision.collider.name == "Ground" || collision.collider.name == "Platforms") {
isGrounded = true;
}
if(collision.collider.name == "Traps") {
rb2d.position = new Vector2(0.0f, 1.0f);
score.Reset();
}
}
```
If there was a collision, we can get the game object of what we collided with. The game object should be named so we should know immediately if we collided with a floor or platform or something else. If we collided with a floor or platform, reset the jump. If we collided with a trap, we can reset the position and the score.
The `OnTriggerEnter2D` method is a little different.
``` csharp
void OnTriggerEnter2D(Collider2D collider) {
if (collider.name == "Trophy") {
Destroy(collider.gameObject);
score.BankScore();
GameController.NextLevel();
}
}
```
Remember, the **Trophy** won't have a rigidbody so there will be no physics. However, we want to know when our player has overlapped with the trophy. In the above function, if triggered, we will destroy the trophy which will remove it from the screen. We will also make use of the `BankScore` function that we'll see soon as well as the `NextLevel` function that will change our world.
As long as the tilemap layers have the correct collider components, your player should be able to move around whatever world you've decided to create. This brings us to some of the other scripts that need to be created for interaction in the **Player.cs** script.
We used a few functions on the `score` variable within the **Player.cs** script. The `score` variable is a reference to our **Score** game object which should have its own script. We'll call this the **Score.cs** script. However, before we get to the **Score.cs** script, we need to create a static class to hold our locally persistent data.
Create a **GameData.cs** file with the following:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public static class GameData
{
public static int totalScore;
}
```
Using static classes and variables is the easiest way to pass data between scenes of a Unity game. We aren't assigning this script to any game object, but it will be accessible for as long as the game is open. The `totalScore` variable will represent our session score and it will be manipulated through the **Score.cs** file.
Within the **Score.cs** file, add the following:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class Score : MonoBehaviour
{
private Text scoreText;
private int score;
void Start()
{
scoreText = GetComponent<Text>();
this.Reset();
}
public void Reset() {
score = GameData.totalScore;
scoreText.text = "SCORE: " + GameData.totalScore;
}
public void AddPoints(int amount) {
score += amount;
scoreText.text = "SCORE: " + score;
}
public void BankScore() {
GameData.totalScore += score;
}
}
```
In the above script, we have two private variables. The `scoreText` will reference the **Text** component attached to our game object and the `score` will be the running total for the particular world.
The `Reset` function, which we've seen already, will set the visible text on the screen to the value in our static class. We're doing this because we don't want to necessarily zero out the score on a reset. For this particular game, rather than resetting the entire score when we fail, we reset the score for the particular world, not all the worlds. This makes more sense in the `BankScore` method. We'd typically call `BankScore` when we progress from one world to the next. We take the current score for the world, add it to the persisted score, and then when we want to reset, our persisted score holds while the world score resets. You can design this functionality however you want.
In the **Player.cs** script, we've also made use of a **GameController.cs** script. We do this to manage switching between scenes in the game. This **GameController.cs** script should be attached to the **GameController** game object within the scene. The code behind the script should look like the following:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
using System;
public class GameController : MonoBehaviour {
private static int currentLevelIndex;
private static string[] levels;
void Start() {
levels = new string[] {
"LevelOne",
"LevelTwo"
};
currentLevelIndex = Array.IndexOf(levels, SceneManager.GetActiveScene().name);
}
public static void NextLevel() {
if(currentLevelIndex < levels.Length - 1) {
SceneManager.LoadScene(levels[currentLevelIndex + 1]);
}
}
}
```
So why even create a script for switching scenes when it isn't particularly difficult? There are a few reasons:
1. We don't want to manage scene switching in the **Player.cs** script, to reduce cruft code.
2. We want to define world progression while being cautious that other scenes, such as menus, could exist.
With that said, when the first frame renders, we could define every scene that is a level or world. While we don't explore it here, we could also define every scene that is a menu or similar. When we want to progress to the next level, we can just iterate through the level array, all of which is managed by this scene manager.
Knowing what we know now, if we had set everything up correctly and tried to move our player around, we'd likely move off the screen. We need the camera to follow the player and this can be done in another script.
The **Camera.cs** script, which should be attached to the **Camera** game object, should have the following C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Camera : MonoBehaviour
{
public Transform player;
void Update() {
transform.position = new Vector3(player.position.x + 4, transform.position.y, transform.position.z);
}
}
```
The `player` variable should represent the **Player** game object, defined in the UI that Unity offers. It can really be any game object, but because we want to have the camera follow the player, it should probably be the **Player** game object that has the movement scripts. On every frame, the camera position is set to the player position with a small offset.
Everything we've seen up until now is responsible for player interaction. We can traverse a world, collide with the environment, and keep score.
## Making HTTP Requests from the Unity Game to MongoDB Realm
How the game interacts with the MongoDB Realm webhooks is where the fun really comes in! I explored a lot of this in a previous tutorial I wrote titled Sending and Requesting Data from MongoDB in a Unity Game, but it is worth exploring again for the context of The Untitled Leafy Game.
Before we get into the sending and receiving of data, we need to create a data model within Unity that roughly matches what we see in MongoDB. Create a **DatabaseModel.cs** script with the following C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class DatabaseModel {
public string _id;
public string question_text;
public string problem_id;
public string subject_area;
public bool answer;
public string Stringify() {
return JsonUtility.ToJson(this);
}
public static DatabaseModel Parse(string json) {
return JsonUtility.FromJson<DatabaseModel>(json);
}
}
```
The above script is not one that we plan to add to a game object. We'll be able to instantiate it from any script. Notice each of the public variables and how they are named based on the fields that we're using within MongoDB. Unity offers a JsonUtility class that allows us to take public variables and either convert them into a JSON string or parse a JSON string and load the data into our public variables. It's very convenient, but the public variables need to match to be effective.
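As a quick illustration (with hypothetical values), the round trip looks like this:
``` csharp
// Hypothetical example of the JsonUtility round trip exposed by DatabaseModel.
string json = "{\"problem_id\":\"abc123\",\"question_text\":\"MongoDB is Awesome!\",\"answer\":true}";
DatabaseModel question = DatabaseModel.Parse(json);
Debug.Log(question.question_text); // "MongoDB is Awesome!"
Debug.Log(question.Stringify());   // serialized back to JSON, ready to use as a POST body
```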
The process of game to MongoDB interaction is going to be as follows:
1. Player collides with question box
2. Question box, which has a `problem_id` associated, launches the modal
3. Question box sends an HTTP request to MongoDB Realm
4. Question box populates the fields in the modal based on the HTTP response
5. Question box sends an HTTP request with the player answer to MongoDB Realm
6. The modal closes and the game continues
With those chain of events in mind, we can start making this happen. Take a **Question.cs** script that would exist on any particular question box game object:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Networking;
using System.Text;
using UnityEngine.UI;
public class Question : MonoBehaviour {
private DatabaseModel question;
public string questionId;
public GameObject questionModal;
public Score score;
private Text questionText;
private Dropdown dropdownAnswer;
private Button submitButton;
void Start() {
GameObject questionTextGameObject = questionModal.transform.Find("QuestionText").gameObject;
questionText = questionTextGameObject.GetComponent<Text>();
GameObject submitButtonGameObject = questionModal.transform.Find("SubmitButton").gameObject;
submitButton = submitButtonGameObject.GetComponent<Button>();
GameObject dropdownAnswerGameObject = questionModal.transform.Find("Dropdown").gameObject;
dropdownAnswer = dropdownAnswerGameObject.GetComponent<Dropdown>();
}
private void OnCollisionEnter2D(Collision2D collision) {
if (collision.collider.name == "Player") {
questionModal.SetActive(true);
Time.timeScale = 0;
StartCoroutine(GetQuestion(questionId, result => {
questionText.text = result.question_text;
submitButton.onClick.AddListener(() =>{SubmitOnClick(result, dropdownAnswer);});
}));
}
}
void SubmitOnClick(DatabaseModel db, Dropdown dropdownAnswer) {
db.answer = dropdownAnswer.value == 0;
StartCoroutine(CheckAnswer(db.Stringify(), result => {
if(result == true) {
score.AddPoints(1);
}
questionModal.SetActive(false);
Time.timeScale = 1;
submitButton.onClick.RemoveAllListeners();
}));
}
IEnumerator GetQuestion(string id, System.Action<DatabaseModel> callback = null)
{
using (UnityWebRequest request = UnityWebRequest.Get("https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/get_question?problem_id=" + id))
{
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError) {
Debug.Log(request.error);
if(callback != null) {
callback.Invoke(null);
}
}
else {
if(callback != null) {
callback.Invoke(DatabaseModel.Parse(request.downloadHandler.text));
}
}
}
}
IEnumerator CheckAnswer(string data, System.Action<bool> callback = null) {
using (UnityWebRequest request = new UnityWebRequest("https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/checkanswer", "POST")) {
request.SetRequestHeader("Content-Type", "application/json");
byte[] bodyRaw = System.Text.Encoding.UTF8.GetBytes(data);
request.uploadHandler = (UploadHandler)new UploadHandlerRaw(bodyRaw);
request.downloadHandler = (DownloadHandler)new DownloadHandlerBuffer();
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError) {
Debug.Log(request.error);
if(callback != null) {
callback.Invoke(false);
}
} else {
if(callback != null) {
callback.Invoke(request.downloadHandler.text != "{}");
}
}
}
}
}
```
Of the scripts that exist in the project, this is probably the most complex. It isn't complex because of the MongoDB interaction. It is just complex based on how questions are integrated into the game.
Let's break it down starting with the variables:
``` csharp
private DatabaseModel question;
public string questionId;
public GameObject questionModal;
public Score score;
private Text questionText;
private Dropdown dropdownAnswer;
private Button submitButton;
```
The `questionId`, `questionModal`, and `score` variables are assigned through the UI inspector in Unity. This allows us to give each question box a unique id and give each question box the same modal to use and score widget. If we wanted, the modal and score items could be different, but it's best to recycle game objects for performance reasons.
The `questionText`, `dropdownAnswer`, and `submitButton` will be obtained from the attached `questionModal` game object.
To obtain each of the game objects and their components, we can look at the `Start` method:
``` csharp
void Start() {
GameObject questionTextGameObject = questionModal.transform.Find("QuestionText").gameObject;
questionText = questionTextGameObject.GetComponent<Text>();
GameObject submitButtonGameObject = questionModal.transform.Find("SubmitButton").gameObject;
submitButton = submitButtonGameObject.GetComponent<Button>();
GameObject dropdownAnswerGameObject = questionModal.transform.Find("Dropdown").gameObject;
dropdownAnswer = dropdownAnswerGameObject.GetComponent<Dropdown>();
}
```
Remember, game objects don't mean a whole lot to us. We need to get the components that exist on each game object. We have the attached `questionModal` so we can use Unity to find the child game objects that we need and their components.
Before we explore how the HTTP requests come together with the rest of the script, we should explore how these requests are made in general.
``` csharp
IEnumerator GetQuestion(string id, System.Action<DatabaseModel> callback = null)
{
using (UnityWebRequest request = UnityWebRequest.Get("https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/get_question?problem_id=" + id))
{
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError) {
Debug.Log(request.error);
if(callback != null) {
callback.Invoke(null);
}
}
else {
if(callback != null) {
callback.Invoke(DatabaseModel.Parse(request.downloadHandler.text));
}
}
}
}
```
In the above `GetQuestion` method, we expect an `id` which will be our `problem_id` that is attached to the question box. We also provide a `callback` which will be used when we get a response from the backend. With the UnityWebRequest, we can make a request to our MongoDB Realm webhook. Upon success, the `callback` variable is invoked and the parsed data is returned.
You can see this in action within the `OnCollisionEnter2D` method.
``` csharp
private void OnCollisionEnter2D(Collision2D collision) {
if (collision.collider.name == "Player") {
questionModal.SetActive(true);
Time.timeScale = 0;
StartCoroutine(GetQuestion(questionId, result => {
questionText.text = result.question_text;
submitButton.onClick.AddListener(() =>{SubmitOnClick(result, dropdownAnswer);});
}));
}
}
```
When a collision happens, we see if the **Player** game object is what collided. If true, then we set the modal to active so it displays, alter the time scale so the game pauses, and then execute the `GetQuestion` from within a Unity coroutine. When we get a result for that particular `problem_id`, we set the text within the modal and add a special click listener to the button. We want the button to use the correct information from this particular instance of the question box. Remember, the modal is shared for all questions in this example, so it is important that the correct listener is used.
So we displayed the question information in the modal. Now we need to submit it. The HTTP request is slightly different:
``` csharp
IEnumerator CheckAnswer(string data, System.Action<bool> callback = null) {
using (UnityWebRequest request = new UnityWebRequest("https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/checkanswer", "POST")) {
request.SetRequestHeader("Content-Type", "application/json");
byte[] bodyRaw = System.Text.Encoding.UTF8.GetBytes(data);
request.uploadHandler = (UploadHandler)new UploadHandlerRaw(bodyRaw);
request.downloadHandler = (DownloadHandler)new DownloadHandlerBuffer();
yield return request.SendWebRequest();
if (request.isNetworkError || request.isHttpError) {
Debug.Log(request.error);
if(callback != null) {
callback.Invoke(false);
}
} else {
if(callback != null) {
callback.Invoke(request.downloadHandler.text != "{}");
}
}
}
}
```
In the `CheckAnswer` method, we do another `UnityWebRequest`, this time a POST request. We encode the JSON string which is our data and we send it to our MongoDB Realm webhook. The result for the `callback` is either going to be a true or false depending on if the response is an empty object or not.
We can see this in action through the `SubmitOnClick` method:
``` csharp
void SubmitOnClick(DatabaseModel db, Dropdown dropdownAnswer) {
db.answer = dropdownAnswer.value == 0;
StartCoroutine(CheckAnswer(db.Stringify(), result => {
if(result == true) {
score.AddPoints(1);
}
questionModal.SetActive(false);
Time.timeScale = 1;
submitButton.onClick.RemoveAllListeners();
}));
}
```
Dropdowns in Unity are numeric, so we need to figure out if it is true or false. Once we have this information, we can execute the `CheckAnswer` through a coroutine, sending the document information with our user defined answer. If the response is true, we add to the score. Regardless, we hide the modal, reset the time scale, and remove the listener on the button.
## Conclusion
While we didn't see the step by step process towards reproducing a side-scrolling platformer game like the MongoDB Skunkworks project, The Untitled Leafy Game, we did walk through each of the components that went into it. These components consisted of designing a scene for a possible game world, adding player logic, score keeping logic, and HTTP request logic.
To play around with the project that took Barry O'Neill and myself (Nic Raboy) three days to complete, check it out on GitHub. After swapping the MongoDB Realm endpoints with your own, you'll be able to play the game.
If you're interested in getting more out of game development with MongoDB and Unity, check out a series that I'm doing with Adrienne Tacke, starting with Designing a Strategy to Develop a Game with Unity and MongoDB.
Questions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.
| md | {
"tags": [
"Realm",
"C#",
"Unity"
],
"pageDescription": "Learn how to create a 2D side-scrolling platformer game with MongoDB and Unity.",
"contentType": "Tutorial"
} | Developing a Side-Scrolling Platformer Game with Unity and MongoDB Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-meetup-kotlin-multiplatform | created | # Realm Meetup - Realm Kotlin Multiplatform for Modern Mobile Apps
Didn't get a chance to attend the Realm Kotlin Multiplatform for modern mobile apps Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
:youtube[]{vid=F1cEI9OKI-E}
In this meetup, Claus Rørbech, software engineer on the Realm Android team, will walk us through some of the constraints of the Realm Java SDK, the thought process that went into the decision to build a new SDK for Kotlin, the benefits developers will be able to leverage with the new APIs, and how the Realm Kotlin SDK will evolve.
In this 50-minute recording, Claus spends about 35 minutes presenting an overview of Realm Kotlin Multiplatform. After this, we have about 15 minutes of live Q&A with Ian, Nabil, and our community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
### Transcript
**Claus:**
Yeah, hello. I'm Claus Rorbech, welcome to today's talk on Realm Kotlin. I'm Claus Rorbech and a software engineer at MongoDB working in the Java team and today I'm going to talk about Realm Kotlin and why we decided to build a complete new SDK and I'll go over some of the concepts of this.
We'll do this with a highlight of what Realm is, what triggered this decision of writing a new SDK instead of trying to keep up with the Realm Java. Well go over some central concepts as it has some key significant changes compared to Realm Java. We'll look into status, where we are in the process. We'll look into where and how it can be used and of course peek a bit into the future.
Just to recap, what is Realm? Realm is an object database with the conventional ACID properties. It's implemented in a C++ storage engine and exposed to various language through language specific SDKs. It's easy to use, as you can define your data model directly with language constructs. It's also performant. It utilizes zero copying and lazy loading to keep the memory footprint small. Which is still key for mobile development.
Historically we have been offering live objects, which is a consistent view of data within each iteration of your run loop. And finally we are offering infrastructure for notifications and easy on decide encryption and easy synchronization with MongoDB Realm. So, Realm Java already works for Kotlin on Android, so why bother doing a new SDK? The goal of Realm is to simplify app development. Users want to build apps and not persistent layers, so we need to keep up providing a good developer experience around Realm with this ecosystem.
Why not just keep up with the Realm Java? To understand the challenge of keeping up with Realm Java, we have to have in mind that it has been around for almost a decade. Throughout that period, Android development has changed significantly. Not only has the language changed from Java to Kotlin, but there's also been multiple iterations of design guidelines. Now, finally official Android architectural guidelines and components. We have kept up over the years. We have constantly done API adjustments. Both based on new language features, but also from a lot of community feedback. What users would like us to do. We have tried to accommodate this major new design approach by adding support for the reactive frameworks.
Both RX Java and lately with coroutine flows. But keeping up has become increasingly harder. Not only just by the growing features of Realm itself, but also trying to constantly fit into this widening set of frameworks. We thought it was a good time to reassess some of these key concepts of Realm Java. The fundamentals of Realm Java is built around live objects. They provide a consistent updated view of your data within each run loop iterations. Live data is actually quite nice, but it's also thread confined.
This means that each thread needs its own instance of Realm. This Realm instance needs to be explicitly close to free up resources and all objects obtained from this Realm are also thread confined. These things have been major user obstacles because accessing objects on other threads will flow, and failing to close instances of Realm on background threads can potentially lead to growing file sizes because we cannot clean up this updated data.
So, in general it was awesome for early days Android app architectures that were quite simple, but it's less attractive for the current dominant designs that rely heavily on mutable data streams in frameworks and very flexible execution and threading models. So, besides this wish of trying to redo some of the things, there're other motivations for doing this now. Namely, being Kotlin first. Android has already moved there and almost all other users are also there.
We want to take advantage of being able to provide the cleaner APIs with nice language features like null safety and in line and extension functions and also, very importantly, co routines for asynchronous operations. Another key feature of Kotlin is the Kotlin Compiler plugin mechanism. This is a very powerful mechanism that can substitute our current pre processor approach and byte manipulation approach.
So, instead of generating code along the user classes, we can do in place code transformation. This reduces the amount of code we need to generate and simplifies our code weaving approach and therefore also our internal APIs. The Kotlin Compiler plugin is also faster than our current approach because we can eliminate the very slow KAPT plugin that acts as an annotation processor, but does it by requiring multiple passes to first generate stops and then the actual code for the implementation.
We can also target more platforms with the Compiler plugin because we eliminate the need for Google's Transform API that was only available to Android. Another key opportunity for doing this now is that we can target the Kotlin multi platform ecosystem. In fact, Realm's C++ storage engine is already multi platform. It's already running on the Kotlin multi platform targets with the exception of JavaScript for web. Secondly, we also find Kotlin's multi platform approach very appealing. It allows code sharing but also allows you to target specific portions of your app in native platform frameworks when needed.
For example, UI. We think Realm fits well into this Kotlin multi platform library suite. There's often no need for developer platform differentiation for persistence and there's actually already quite good coverage in this library suite. Together with the Kotlin serialization and Realm, you can actually do full blown apps as shared code and only supply a platform specific UI.
So, let's look into some of the key concepts of this new SDK. We'll do that by comparing it to Realm Java and we'll start by defining a data model. Throughout all these code snippets I've used the Kotlin syntax even for the Realm Java examples just to highlight the semantic changes instead of bothering with the syntactical differences. So, I just need some water...
So, as you see it looks more or less like Java. But there are some key things there. The compiler plugins enable us access the classes directly. This means we no longer need the classes to be open, because we are not internally inheriting from the user classes. We can also just add a marker interface that we fill out by the compiler plugin instead of requiring some specific base classes. And we can derive nullability directly from the types in Kotlin instead of requiring this required annotation.
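For reference, a model definition along the lines Claus describes might look like the following sketch. The exact package names and API surface have shifted between preview releases, so treat the details here as assumptions.
``` kotlin
// Sketch of a Realm Kotlin model: a plain, final class implementing a marker interface,
// with nullability taken directly from the Kotlin types (no @Required annotation needed).
import io.realm.kotlin.types.RealmObject // package location differs in early previews

class Person : RealmObject {
    var name: String = ""          // non-null field
    var nickname: String? = null   // nullable simply because the type is nullable
    var age: Int = 0
}
```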
Not all migration are as easy to cut down. For our Realm configurations, we're not opting in for pure Kotlin with the named parameters, but instead keeping the binder approach. This is because it's easier to discover from inside the ID and we also need to have in mind that we need to be interoperable with the Java. We only offer some specific constructors with named parameters for very common use cases. Another challenge from this new tooling is that the compiler plug-in has some constraints that complicates gathering default schemas.
We're not fully in place with the constraints for this yet, so for now, we're just required explicit schema definition in the configuration. For completion, I'll just also highlight the current API for perform inquiries on the realm. To get immediate full query capabilities with just exposed the string-based parcel of the underlying storage engine, this was a quick way to get full capabilities and we'll probably add a type safe query API later on when there's a bigger demand.
Actually, this string-based query parcel is also available from on Java recently, but users are probably more familiar with the type-based or type safe query system. All these changes are mostly syntactical but the most dominant change for realm Kotlin is the new object behavior. In Realm Kotlin, objects are no longer live, but frozen. Frozen objects are data objects tied to a specific version of the realm. They are immutable. You cannot update them and they don't change over time.
We still use the underlying zero-copying and lazy loading mechanism, so we still keep the memory footprints small. You can still use a frozen object to navigate the full object graph from this specific version. In Realm Kotlin, the queries also just returns frozen objects. Similarly, notifications also returns new instances of frozen objects, and with this, we have been able to lift the thread confinement constraint. This eases lifecycle management, because we can now only have a single global instance of the realm.
We can also pass these objects around between threads, which makes it way easier to use it in reactive frameworks. Again, let's look into some examples by comparing is to Realm Java. The shareable Realm instances eases this life cycle management. In Realm Java, we need to obtain an instance on each thread, and we also need to explicitly close this realm instance. On Realm Kotlin, we can now do this upfront with a global instance that can be passed around between the threads. We can finally close it later on this single instance. Of course, it has some consequences. With these shareable instances, changes are immediately available too on the threads.
In Realm Java, the live data implementation only updated our view of data with in between our run loop iterations. Same query in the same scope would always yield the same result. For Realm Kotlin with our frozen objects, so in Realm Kotlin, updates from different threads are immediately visible. This means that two consecutive queries might reveal different results. This also applies for local or blocking updates. Again, Realm Java with live results local updates were instantly visible, and didn't require refresh. For Realm Java, the original object is frozen and tied to a specific version of the Realm.
Which means that the update weren't reflected in our original object, and to actually inspect the updates, we would have to re-query the Realm again. In practice, we don't expect access to these different versions to be an issue. Because the primary way of reacting to changes in Realm Kotlin will be listening for changes.
In Realm Kotlin, updates are delivered as streams of immutable objects. It's implemented by coroutine flows of these frozen instances. In Realm Java, when you got notified about changes, you could basically throw away the notification object, because you could still access all data from your old existing live reference. With Realm Kotlin, your original instance is tied to a specific version of the Realm. It means that you for each update, you would need to access the notify or the new instance supplied by the coroutine flow.
But again, this updated instance, it gives you full access to the Realm version of that object. It gives you full access to the object graph from this new frozen instance. Here we start to see some of the advantage of coroutine based API. With Realm Java, this code snippet, it was run on a non-loop of thread. It would actually not even give you any notification because it was hard to tie the user code with our underlying notification mechanism. For Realm Kotlin, since we're using coroutines, we use the flexibilities of this. This gives the user flexibility to supply this loop like dispatching context and it's easy for us to hook our event delivery off with that.
Secondly, we can also now spread the operations on a flow over varying contexts. This means that we can apply all the usual flow operations, but we can also actually change the context as our objects are not tied to a specific thread. Further, we can also with the structural concurrency of codes routines, it's easier to align our subscription with the existing scopes. This means that you don't have to bother with closing the underlying Realm instance here.
With the shareable Realm instances of Realm Kotlin, updates must be done atomically. With Realm Java, since we had a thread confined instance of the Realm, we could just modify it. In Realm Kotlin, we have to be very explicit on when the state is changed. We therefore provide this right transaction on a separate mutable Realm within a managed scope. Inside the scope, it allows us to create an update objects just as in Realm Java. But the changes are only visible to other when once the scope is exited. We actually had a similiar construct in Realm Java, but this is now the only way to update the Realm.
Inside the transaction blocks, things almost works as in Realm Java. It's the underlying same starch engine principles. Transactions are still performed on single thread confined live Realm, which means that inside this transaction block, the objects and queries are actually live. The major key difference for the transaction block is how to update existing objects when they are now frozen. For both STKs, it applies that we can only do one transaction at a time. This transaction must be done on the latest version of the Realm. Therefore, when updating existing objects, we need to ensure that we have an instance that is tied to the latest version of the object.
For Realm Java, we could just pass in the live objects to our transaction block but since it could actually have ... we always had to check the object for its validity. A key issue here was that since object couldn't be passed around on arbitrary threads, we had to get a non-local object, we would have to query the Realm and find out a good filtering pattern to identify objects uniquely. For Realm, we've just provided API for obtaining the latest version of an object. This works for both primary key and non-primary key objects due to some internal identifiers.
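A sketch of that update pattern, resolving the latest version of a frozen object inside the write transaction:
``` kotlin
// `frozenPerson` was returned by an earlier query and may have been passed in from any thread.
// realm.write is a suspending call, so this runs inside a coroutine.
realm.write {
    findLatest(frozenPerson)?.apply {
        name = "Updated name" // visible to other readers once the transaction block exits
    }
}
```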
Since we can now pass objects between a thread, we can just pass our frozen objects in and look up the latest version. To complete the tour, we'll just close the Realm. As you've already seen, it's just easier to manage the life cycle of Realm when there's one single instance, so closing an instance to free up resources and perform exclusive operations on the Realm, it's just a matter of closing the shared global instance.
Interacting with any object instance after you closed the Realm will still flow, but again, the structural concurrency of coroutine flows should assist you in stopping accessing the objects following the use cases of your app. Besides the major shift to frozen objects, we're of course trying to improve the STKs in a lot of ways, and first of all, we're trying to be idiomatic Kotlin. We want to take advantage of all the new features of the language. We're also trying to reduce size both of our own library but also the generated code. This is possible with the new compiler plug-in as we've previously touched. We can just modify the user instance and not generate additional classes.
We're also trying to bundle up part of functionality and modularize it into support libraries. This is way easier with the extension methods. Now we should be able to avoid having everybody to ship apps with the JSON import and export functionality and stuff like that. This also makes it easier to target future frameworks by offering support libraries with extension functions. As we've already also seen, we are trying to ensure that our STK is as discoverable from the ID directly, and we're also trying to ensure that the API is backward compatible with the Java.
This might not be the most idiomatic Java, but at least we try to do it without causing major headaches. We also want to improve testability, so we're putting in places to inject dispatchers. But most notably, we're also supplying JBM support for off-device testing. Lastly, since we're redoing an STK, we of course have a lot of insight in the full feature set, so we also know what to target to make a more maintainable STK. You've already seen some of the compromises for these flights, but please feel free to provide feedback if we can improve something.
With this new multiplatform STK, where and how to use. We're providing our plug-in and library as a Kotlin multiplatform STK just to be absolutely clear for people not familiar with the multiplatform ecosystem. This still means that you can just apply your project or apply this library and plug-in on Android only projects. It just means that we can now also target iOS and especially multiplatform and later desktop JBM. And I said there's already libraries out there for sterilization and networking.
With Realm, we can already build full apps with the shared business logic, and only have to supply platform dependent UI. Thanks to the layout and metadata of our artifacts, there's actually no difference in how to apply the projects depending on which setting you are in. It integrates seamlessly with both Android and KMM projects. You just have to apply the plug-in. It's already available in plug-in portal, so you can just use this new plug-in syntax. Then you have to add the repository, but you most likely already have Maven Central as part of your setup, and then at our dependency. There's a small caveat for Android projects before Kotlin or using Kotlin, before 1.5, because the IR backend that triggers our compiler plug-in is not default before that.
You would have to enable this feature in the compiler also. Yeah. You've already seen a model definition that's a very tiny one. With this ability or this new ability to share our Realm instances, we can now supply one central instance and here have exemplified it by using a tiny coin module. We are able to share this instance throughout the app, and to show how it's all tied together, I have a very small view model example. These users are central Realm instance supplied by Kotlin.
It sets up some live data to feed the view. This live data is built up from our observable flows. You can apply the various flow operators on it, but most importantly you can also control this context for where it's executing. Lastly, you are handling this in the view model scope. It's just that subscription is also following the life cycle of your view model. Lastly, for completion, there's a tiny method to put some data in there.
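A very rough sketch of the view model being described, with the Koin-supplied Realm, the LiveData bridge, and the view model scope all treated as assumptions:
``` kotlin
// The shared Realm instance is injected (for example by Koin); query results are exposed
// as LiveData built from the observable flow, and writes run in viewModelScope.
class PersonViewModel(private val realm: Realm) : ViewModel() {

    val adults: LiveData<List<Person>> =
        realm.query<Person>("age > 18")
            .asFlow()
            .map { change -> change.list.toList() } // snapshot of the frozen results
            .asLiveData(viewModelScope.coroutineContext)

    fun addPerson(name: String, age: Int) {
        viewModelScope.launch {
            realm.write {
                copyToRealm(Person().apply { this.name = name; this.age = age })
            }
        }
    }
}
```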
As you might have read between the lines, this is not all in place yet, but I'll try to give a status. We're in the middle of maturing this prove of concept of the proposed frozen architecture. This is merged into our master branch bit by bit without exposing it to the public API. There's a lot of pieces that needs to fit together before we can trigger and migrate completely to this new architecture. But you can still try our library out. We have an initial developer preview in what we can version 0.1.0. Maybe the best label is Realm Kotlin Multiplatform bring-up.
It sort of qualifies the overall concept in this multiplatform setting, with our compiler plug-in working across platforms. Also a multiplatform [inaudible 00:30:09] with collecting all these native objects in the various [inaudible 00:30:14] management domains. As said, it doesn't include the full frozen architecture yet, so the Realm instances are still thread-confined, and objects are live. There's only limited type support, but we have primitive types, links to other Realm objects, and primary keys. You can also register for notifications.
We use the string-based queries I also briefly mentioned. It operates on Kotlin Multiplatform Mobile, which means that it's available for both Android and iOS, but only for 64-bit on both platforms. It's already available on Maven Central, so you can go and try it out, either by using our own KMM example in the repository or by building your own project following the readme in the repository.
What's next? Of course, in these weeks we're stabilizing the full frozen architecture. As for our upcoming milestones, first we want to target a release with the major Realm Java features. That's a lot of the other features of Realm, like lists, indexes, more detailed change-listener APIs, migrations for schema updates, dynamic realms, and also desktop JVM support. After that, we'll head off to build support for MongoDB Realm, to be able to sync this data remotely.
When this is in place, we'll target the full feature set of Realm Java. There are a lot of more exotic types, like embedded objects, and we also just introduced new types like sets, dictionaries, and [inaudible 00:32:27] types to Realm Java. These will come in a later version of Realm Kotlin. We're also following the evolution of Kotlin Multiplatform. It's still only alpha, so we have to keep track of what they're doing there. Most notably, we're following the memory management model of Kotlin Native, where there are constraints that once you pass objects around threads, Kotlin freezes them.
Right now, you cannot just pass Realm instances around because they have to be updatable. But these frozen objects can be passed around threads. Throughout this process, we'll do incremental releases, so please keep an eye open and provide feedback.
To keep up with our progress, follow us on GitHub. This is our main communication channel with the community. You can try out the sample, as said. There are also instructions on how to set up your own Kotlin Multiplatform project, and you can peek into our public design docs, which are also linked from our repository. If you're more interested in the details of building this multiplatform SDK, you can read a blog post on how we've addressed some of these challenges with the compiler plug-in and handling multiplatform C interop, memory management, and all of this.
Thank you for ... that's all.
**Ian:**
Thank you Claus, that was very enlightening. Now, we'll take some of your questions, so if you have any questions, please put them in the chat. We've answered some of them already, but the first one here is regarding the availability of the SDK. It is available now. You can go to GitHub, realm/realm-kotlin, and get our developer preview. We plan to have iterative releases over the next few quarters that will add more and more functionality.
The next one is regarding migration. I presume this user has ... or James, you have an application using Realm Java, and potentially you would be looking to migrate to Realm Kotlin. We don't plan to have an automatic feature that would scan your code and change the APIs, because the underlying semantics have changed so much. But it is something where we could look to provide a migration guide or something like that if more users are asking about it.
Really, the objects have changed from being live objects to now being frozen objects. We've also removed the threading constraint, and we now have a single shared Realm instance, whereas before, with every thread, you had to open up a new Realm instance in order to do work on that thread. The semantics have definitely changed, so you'll have to take developer care in order to migrate your applications over. Okay. Next question here, we'll go through some of these.
Does the Kotlin SDK support KMM just for syncing, or just local operations? I can answer this one. We do plan to offer our sync. If you're not familiar, Realm also offers synchronization from the local Realm file to MongoDB Atlas through Realm Sync, through the Realm Cloud. This is a way to bidirectionally sync any documents that you have stored on MongoDB Atlas down, transformed into Realm objects, and vice versa.
We don't have that today, but it is something that you can look forward to in the future. In the next quarters, we will be releasing sync support for the new Realm Kotlin SDK. Other questions here: are these transactions required to be scoped or suspended? I presume this is about using the Kotlin coroutines keywords, the suspend functions. Claus, do you have any thoughts on that one?
**Claus:**
Yeah. We are providing a default mechanism, but at least in our current prototype we already have a blocking write as well. You will be able to do it without suspending. Yeah.
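A minimal sketch of the two write paths being described, reusing the hypothetical model from the earlier sketch (the method names follow the current Realm Kotlin API and should be treated as assumptions for the preview discussed here):

```kotlin
// Default path: a suspending write, called from a coroutine.
suspend fun addPersonSuspending(realm: Realm, name: String) {
    realm.write { copyToRealm(Person().apply { this.name = name }) }
}

// Blocking variant: usable without suspending, e.g. in tests or non-coroutine code.
fun addPersonBlocking(realm: Realm, name: String) {
    realm.writeBlocking { copyToRealm(Person().apply { this.name = name }) }
}
```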
**Ian:**
Okay. Perfect. Also, in the same vein, when running a write transaction, do you get any success or failure result back in the code if the transaction was successful? I presume this is about having some sort of callback, or onSuccess or onFailure, depending on whether the write transaction succeeded or failed. Do we plan to add that to our API at all?
**Claus:**
Usually, we just have exceptions if things don't go right, and they will propagate through the normal suspend ... through coroutine mechanisms. Yeah.
**Ian:**
Yeah. It is a good thought though. We have had other users request this, so it's something that, if we continue to get more user feedback on it, we could potentially add in the future. Another question here: is it possible to specify a path from where all the entities are loaded, instead of declaring each one in a set method?
**Ian:**
Not sure I fully follow that.
**Claus:**
It's about defining the schema. Yeah. We have some options for gathering this schema information, but as I stated, we are not completely on top of which constraints we want to put into it. Right now, we are forced to actually define all the classes, but we have issues open for addressing this, and we are investigating various options. But some of these come with different constraints, and we have to judge them along the way to see which fits best, or maybe we'll hopefully find some other way around this.
**Nabil:**
Just to add on top of this, we could use listOf or another data structure. The only constraint at the compiler level is using class literals to specify the schema. Since there's no reflection in Kotlin Native, we don't have the mechanism we have in Java to infer, from your classpath, the annotated classes that will participate in your schema. The lack of reflection in Kotlin Native forces us to use class literals and let the compiler plug-in build the schema for you. That's the constraint.
**Ian:**
Yeah. Just to introduce everyone, this is Nabil. Nabil also works on the Realm Android team. He's one of the lead architects and designers of our new Realm Kotlin SDK, so thank you Nabil.
**Ian:**
Are there any known limitations with object relations, specifically running in a KMM project? My understanding is no, there shouldn't be any restrictions on object relations. Also, for our types, because Realm is designed to be a cross-platform SDK where you can use the same Realm file in iOS, JavaScript, and Android applications, the types are also cross-platform. My understanding is we shouldn't have any restrictions for object relations. I don't know, Nabil or Claus, if you can confirm or deny that.
**Nabil:**
Sorry. I lost you for a bit.
**Claus:**
I also lost the initial part. No, but we are supporting the full feature set, so I can't immediately come up with any constraints compared to Realm Java.
**Ian:**
Other questions here: will the Realm SDK support other platforms, not just mobile? We talked a little bit about desktop, so JVM is something that we think we can get out of the box with our implementation. I think support for desktop JVM applications is possible.
**Nabil:**
Internally, like I mentioned, we are already compiling for JVM and also for other targets, but we didn't expose it as a public API yet. We just wanted to get object support on iOS and Android first. The only issue for JVM, where we already have the toolchain compiling for desktop, is to add the Android-specific components, like the looper, which you don't have for desktops. We need to find a way either to integrate with Swing or to provide a hook for you to supply your own looper, so it can deliver notifications for you. That's the only constraint, since we're not using any other major Android-specific API besides the context. The next target that we'll try to support is JVM, but it's already supported internally, so it's not going to be a big issue.
**Ian:**
I guess in terms of web, we do have ... Realm Core, the core database, is written in C++. We do have some projects to explore what it would take to compile Realm Core into Wasm so that it could then run in a browser, and if that is successful, then we could potentially look to have web as a target. But currently, it's not a target right now.
Other questions here: will objects still be proxies, or will they now be the same object at runtime? For example, the generated Realm proxy objects.
**Nabil:**
There will be no proxy for Realm Kotlin. That's one of the benefits of using a compiler plug-in: we're modifying the object itself. It's similar to what Compose is doing with @Composable when you write a Compose UI; it's modifying your function by adding some behavior. We do the same thing, similar to what the Kotlin serialization compiler plug-in is doing, so we don't use proxy objects anymore.
**Ian:**
Okay. Perfect. And then I heard at the end that Realm instances can't be frozen on Native, just wanted to confirm that. Will that throw if a freeze happens? Also wondering if that's changing, or if you're waiting on the Kotlin/Native memory model changes.
**Nabil:**
There are two aspects to what we're observing and what we are doing. First, there's the garbage collector. They introduced an approach where you can have callbacks when objects are finalized, and we'll rely on this to free the native resources. By native, I mean the C++ pointers. The other aspect is the memory model itself of Kotlin Native, which is based on frozen objects, a similar concept to what we used to do in Realm Java, which is a thread-confinement model.
**Nabil:**
Actually, what we're doing is trying to freeze the object graph similarly to what Kotlin Native does, so you can pass objects between threads. The only issue sometimes is how you interface with multi-threaded coroutines on Kotlin Native. That part is not stable yet, I think. But in theory, our approach should overlap and work in a similar way.
**Claus:**
I guess we don't really expect this memory management scheme from Kotlin Native to be instantly solved, but we maybe have options of providing the Realm instance as one specific instance that doesn't need to be opened on each thread, interacting centrally with our native writer and notification threads. It might be possible in the future to define Realms on each thread and interact with the central mechanisms. But I wouldn't expect the memory management constraints on Native to just go away.
**Ian:**
Right. Other question here: does Realm plan on supporting the Android Paging 3 API?
**Nabil:**
You could ask the same question about Realm Java. The lazy loading capability of Realm doesn't require you to implement the paging library. The problem the paging library solves is how to efficiently load pages of data without loading the entire dataset. Since both Realm Java and Realm Kotlin use just native pointers to the data, as you traverse your list or collection it will only load the objects you're trying to access, and not 100 or 200 at a time, similar to what a cursor does in SQLite, for instance. There's no similar problem in Realm in general, so we didn't try to come up with a paging solution in the first place.
**Ian:**
That's just from our lazy loading and memory map architecture, right?
**Nabil:**
Correct.
**Ian:**
Yeah. Okay. Other question here: are there any plans to support polymorphic objects in the Kotlin SDK? I can answer this. I have just finished a product description of adding inheritance and polymorphism to not only the Kotlin SDK, but to all of our SDKs. This is targeted to be a medium-term task. It is an expensive task, but it is something that has been highly requested for a while now. Now we have the resources to implement it, so I'd expect we would get started on it in the medium term.
**Nabil:**
We also have released, in Realm Java, what we call the polymorphic type. You can store any of the supported Realm types in it, similar to JSON with dynamic types, et cetera. Go look at it. But it's a different kind of polymorphism from the one Ian was referring -
**Ian:**
It's a first step into polymorphism, I guess you could say. What it enables is kind of what Nabil described: you could have an owner field, and let's say that owner could be a business type, a person, an industrial or a commercial entity, with each of these being different classes. You now have the ability to store any of those types in that field. But it doesn't allow for true inheritance, which I think is what most people are looking for with polymorphism. That's something that will get underway after this is approved. Look forward to that. Other questions here? Any other questions? Anything I missed? I think we've gone through all of them. Thank you to the Android team. Christian, the lead of our Android team, is on as well. He's been answering a lot of questions. Thank you Christian. If there are any other questions, please reach out. Otherwise, we will close a little bit early.
Okay. Well, thank you so much everyone. Claus, thank you. I really appreciate it. Thank you for putting this together. This will be posted on YouTube. For any other questions, please go to realm/realm-kotlin on our GitHub. You can file an issue there. You can ask questions. There's also Forums.Realm.io.
We look forward to hearing from you. Okay. Thanks everyone. Bye.
**Nabil:**
See you online. Cheers. | md | {
"tags": [
"Realm",
"Kotlin",
"Android"
],
"pageDescription": "In this talk, Claus Rørbech, software engineer on the Realm Android team, will walk us through some of the constraints of the RealmJava SDK, the thought process that went into the decision to build a new SDK for Kotlin, the benefits developers will be able to leverage with the new APIs, and how the RealmKotlin SDK will evolve.",
"contentType": "Article"
} | Realm Meetup - Realm Kotlin Multiplatform for Modern Mobile Apps | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongoimport-guide | created | # How to Import Data into MongoDB with mongoimport
No matter what you're building with MongoDB, at some point you'll want to import some data. Whether it's the majority of your data, or just some reference data that you want to integrate with your main data set, you'll find yourself with a bunch of JSON or CSV files that you need to import into a collection. Fortunately, MongoDB provides a tool called mongoimport which is designed for this task. This guide will explain how to effectively use mongoimport to get your data into your MongoDB database.
>We also provide MongoImport Reference documentation, if you're looking for something comprehensive or you just need to look up a command-line option.
## Prerequisites
This guide assumes that you're reasonably comfortable with the command-line. Most of the guide will just be running commands, but towards the end I'll show how to pipe data through some command-line tools, such as `jq`.
>If you haven't had much experience on the command-line (also sometimes called the terminal, or shell, or bash), why not follow along with some of the examples? It's a great way to get started.
The examples shown were all written on MacOS, but should run on any unix-type system. If you're running on Windows, I recommend running the example commands inside the Windows Subsystem for Linux.
You'll need a temporary MongoDB database to test out these commands. If
you're just getting started, I recommend you sign up for a free MongoDB
Atlas account, and then we'll take care of the cluster for you!
And of course, you'll need a copy of `mongoimport`. If you have MongoDB
installed on your workstation then you may already have `mongoimport`
installed. If not, follow these instructions on the MongoDB website to install it.
I've created a GitHub repo of sample data, containing an extract from the New York Citibike dataset in different formats that should be useful for trying out the commands in this guide.
## Getting Started with `mongoimport`
`mongoimport` is a powerful command-line tool for importing data from JSON, CSV, and TSV files into MongoDB collections. It's super-fast and multi-threaded, so in many cases will be faster than any custom script you might write to do the same thing. `mongoimport` use can be combined with some other command-line tools, such as `jq` for JSON manipulation, or `csvkit` for CSV manipulation, or even `curl` for dynamically downloading data files from servers on the internet. As with many command-line tools, the options are endless!
## Choosing a Source Data Format
In many ways, having your source data in JSON files is better than CSV (and TSV). JSON is both a hierarchical data format, like MongoDB documents, and is also explicit about the types of data it encodes. On the other hand, source JSON data can be difficult to deal with - in many cases it is not in the structure you'd like, or it has numeric data encoded as strings, or perhaps the date formats are not in a form that `mongoimport` accepts.
CSV (and TSV) data is tabular, and each row will be imported into MongoDB as a separate document. This means that these formats cannot support hierarchical data in the same way as a MongoDB document can. When importing CSV data into MongoDB, `mongoimport` will attempt to make sensible choices when identifying the type of a specific field, such as `int32` or `string`. This behaviour can be overridden with the use of some flags, and you can specify types if you want to. On top of that, `mongoimport` supplies some facilities for parsing dates and other types in different formats.
In many cases, the choice of source data format won't be up to you - it'll be up to the organisation generating the data and providing it to you. I recommend if the source data is in CSV form then you shouldn't attempt to convert it to JSON first unless you plan to restructure it.
## Connect `mongoimport` to Your Database
This section assumes that you're connecting to a relatively straightforward setup - with a default authentication database and some authentication set up. (You should *always* create some users for authentication!)
If you don't provide any connection details to mongoimport, it will attempt to connect to MongoDB on your local machine, on port 27017 (which is MongoDB's default). This is the same as providing `--host=localhost:27017`.
## One URI to Rule Them All
There are several options that allow you to provide separate connection information to mongoimport, but I recommend you use the `--uri` option. If you're using Atlas you can get the appropriate connection URI from the Atlas interface, by clicking on your cluster's "Connect" button and selecting "Connect your Application". (Atlas is being continuously developed, so these instructions may be slightly out of date.) Set the URI as the value of your `--uri` option, and replace the username and password with the appropriate values:
``` bash
mongoimport --uri 'mongodb+srv://MYUSERNAME:[email protected]/test?retryWrites=true&w=majority'
```
**Be aware** that in this form the username and password must be URL-encoded. If you don't want to worry about this, then provide the username and password using the `--username` and `--password` options instead:
``` bash
mongoimport --uri 'mongodb+srv://mycluster-ABCDE.azure.mongodb.net/test?retryWrites=true&w=majority' \
--username='MYUSERNAME' \
--password='SECRETPASSWORD'
```
If you omit a password from the URI and do not provide a `--password` option, then `mongoimport` will prompt you for a password on the command-line. In all these cases, using single-quotes around values, as I've done, will save you problems in the long-run!
If you're *not* connecting to an Atlas database, then you'll have to generate your own URI. If you're connecting to a single server (i.e. you don't have a replicaset), then your URI will look like this: `mongodb://your.server.host.name:port/`. If you're running a replicaset (and you
should!) then you have more than one hostname to connect to, and you don't know in advance which is the primary. In this case, your URI will consist of a series of servers in your cluster (you don't need to provide all of your cluster's servers, providing one of them is available), and mongoimport will discover and connect to the primary automatically. A replicaset URI looks like this: `mongodb://username:password@host1:port,host2:port/?replicaSet=replicasetname`.
Full details of the supported URI formats can be found in our reference documentation.
There are also many other options available and these are documented in the mongoimport reference documentation.
Once you've determined the URI, then the fun begins. In the rest of this guide, I'll leave those flags out. You'll need to add them in when trying out the various other options.
## Import One JSON Document
The simplest way to import a single file into MongoDB is to use the `--file` option to specify a file. In my opinion, the very best situation is that you have a directory full of JSON files which need to be imported. Ideally each JSON file contains one document you wish to import into MongoDB, it's in the correct structure, and each of the values is of the correct type. Use this option when you wish to import a single file as a single document into a MongoDB collection.
You'll find data in this format in the 'file_per_document' directory in the sample data GitHub repo. Each document will look like this:
``` json
{
"tripduration": 602,
"starttime": "2019-12-01 00:00:05.5640",
"stoptime": "2019-12-01 00:10:07.8180",
"start station id": 3382,
"start station name": "Carroll St & Smith St",
"start station latitude": 40.680611,
"start station longitude": -73.99475825,
"end station id": 3304,
"end station name": "6 Ave & 9 St",
"end station latitude": 40.668127,
"end station longitude": -73.98377641,
"bikeid": 41932,
"usertype": "Subscriber",
"birth year": 1970,
"gender": "male"
}
```
``` bash
mongoimport --collection='mycollectionname' --file='file_per_document/ride_00001.json'
```
The command above will import the JSON file into a collection called
`mycollectionname`. You don't have to create the collection in advance.
If you use MongoDB Compass or another tool to connect to the collection you just created, you'll see that MongoDB also generated an `_id` value in each document for you. This is because MongoDB requires every document to have a unique `_id`, but you didn't provide one. I'll cover more on this shortly.
## Import Many JSON Documents
Mongoimport will only import one file at a time with the `--file` option, but you can get around this by piping multiple JSON documents into mongoimport from another tool, such as `cat`. This is faster than importing one file at a time by running mongoimport in a loop, as mongoimport itself is multithreaded for faster uploads of multiple documents. A directory full of JSON files, where each JSON file should be imported as a separate MongoDB document, can be imported by `cd`-ing to the directory that contains the JSON files and running:
``` bash
cat *.json | mongoimport --collection='mycollectionname'
```
As before, MongoDB creates a new `_id` for each document inserted into the MongoDB collection, because they're not contained in the source data.
## Import One Big JSON Array
Sometimes you will have multiple documents contained in a JSON array in a single document, a little like the following:
``` json
{ title: "Document 1", data: "document 1 value"},
{ title: "Document 2", data: "document 2 value"}
]
```
You can import data in this format using the `--file` option combined with the `--jsonArray` option:
``` bash
mongoimport --collection='from_array_file' --file='one_big_list.json' --jsonArray
```
If you forget to add the `--jsonArray` option, `mongoimport` will fail with the error "cannot decode array into a Document." This is because documents are equivalent to JSON objects, not arrays. You can store an array as a *value* on a document, but a document cannot be an array.
## Import MongoDB-specific Types with JSON
If you import some of the JSON data from the sample data github repo and then view the collection's schema in Compass, you may notice a couple of problems:
- The values of `starttime` and `stoptime` should be "date" types, not "string".
- MongoDB supports geographical points, but doesn't recognize the start and stop stations' latitudes and longitudes as such.
This stems from a fundamental difference between MongoDB documents and JSON documents. Although MongoDB documents often *look* like JSON data, they're not. MongoDB stores data as BSON. BSON has multiple advantages over JSON. It's more compact, it's faster to traverse, and it supports more types than JSON. Among those types are Dates, GeoJSON types, binary data, and decimal numbers. All the types are listed in the MongoDB documentation.
If you want MongoDB to recognise fields being imported from JSON as specific BSON types, those fields must be manipulated so that they follow a structure we call Extended JSON. This means that the following field:
``` json
"starttime": "2019-12-01 00:00:05.5640"
```
must be provided to MongoDB as:
``` json
"starttime": {
"$date": "2019-12-01T00:00:05.5640Z"
}
```
for it to be recognized as a Date type. Note that the format of the date string has changed slightly, with the 'T' separating the date and time, and the Z at the end, indicating UTC timezone.
Similarly, the latitude and longitude must be converted to a GeoJSON Point type if you wish to take advantage of MongoDB's ability to search location data. The two values:
``` json
"start station latitude": 40.680611,
"start station longitude": -73.99475825,
```
must be provided to `mongoimport` in the following GeoJSON Point form:
``` json
"start station location": {
"type": "Point",
"coordinates": -73.99475825, 40.680611 ]
}
```
**Note**: the pair of values are longitude *then* latitude, as this sometimes catches people out!
Once you have geospatial data in your collection, you can use MongoDB's geospatial queries to search for data by location.
If you need to transform your JSON data in this kind of way, see the section on JQ.
## Importing Data Into Non-Empty Collections
When importing data into a collection which already contains documents, your `_id` value is important. If your incoming documents don't contain `_id` values, then new values will be created and assigned to the new documents as they are added to the collection. If your incoming documents *do* contain `_id` values, then they will be checked against existing documents in the collection. The `_id` value must be unique within a collection. By default, if the incoming document has an `_id` value that already exists in the collection, then the document will be rejected and an error will be logged. This mode (the default) is called "insert mode". There are other modes, however, that behave differently when a matching document is imported using `mongoimport`.
### Update Existing Records
If you are periodically supplied with new data files you can use `mongoimport` to efficiently update the data in your collection. If your input data is supplied with a stable identifier, use that field as the `_id` field, and supply the option `--mode=upsert`. This mode will insert a new document if the `_id` value is not currently present in the collection. If the `_id` value already exists in a document, then that document will be overwritten by the new document data.
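For example, a command along these lines (the collection and file names are placeholders) would upsert documents based on their `_id`:

``` bash
mongoimport --collection='mycollectionname' \
    --file='updated_rides.json' \
    --mode=upsert
```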
If you're upserting records that don't have stable IDs, you can specify some fields to use to match against documents in the collection, with the `--upsertFields` option. If you're using more than one field name, separate these values with a comma:
``` bash
--upsertFields=name,address,height
```
Remember to index these fields, if you're using `--upsertFields`, otherwise it'll be slow!
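For example, you could create a compound index covering those fields from the mongo shell with something like this (the collection and field names are placeholders):

``` javascript
// Create a compound index supporting the --upsertFields lookup
db.mycollectionname.createIndex({ name: 1, address: 1, height: 1 })
```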
### Merge Data into Existing Records
If you are supplied with data files which *extend* your existing documents by adding new fields, or update certain fields, you can use `mongoimport` with "merge mode". If your input data is supplied with a stable identifier, use that field as the `_id` field, and supply the option `--mode=merge`. This mode will insert a new document if the `_id` value is not currently present in the collection. If the `_id` value already exists in a document, then the incoming document will be merged into the existing document, adding or updating fields without removing the others.
You can also use the `--upsertFields` option here as well as when you're doing upserts, to match the documents you want to update.
## Import CSV (or TSV) into a Collection
If you have CSV files (or TSV files - they're conceptually the same) to import, use the `--type=csv` or `--type=tsv` option to tell `mongoimport` what format to expect. Also important is to know whether your CSV file has a header row - where the first line doesn't contain data - instead it contains the name for each column. If you *do* have a header row, you should use the `--headerline` option to tell `mongoimport` that the first line should not be imported as a document.
With CSV data, you may have to do some extra work to annotate the data to get it to import correctly. The primary issues are:
- CSV data is "flat" - there is no good way to embed sub-documents in a row of a CSV file, so you may want to restructure the data to match the structure you wish to have in your MongoDB documents.
- CSV data does not include type information.
The first problem is probably the bigger issue. You have two options. One is to write a script to restructure the data *before* using `mongoimport` to import it. Another approach could be to import the data into MongoDB and then run an aggregation pipeline to transform it into your required structure.
Both of these approaches are out of the scope of this blog post. If it's something you'd like to see more explanation of, head over to the MongoDB Community Forums.
The fact that CSV files don't specify the type of data in each field can be solved by specifying the field types when calling `mongoimport`.
### Specify Field Types
If you don't have a header row, then you must tell `mongoimport` the name of each of your columns, so that `mongoimport` knows what to call each of the fields in each of the documents to be imported. There are two methods to do this: You can list the field names on the command-line with the `--fields` option, or you can put the field names in a file, and point to it with the `--fieldFile` option.
``` bash
mongoimport \
--collection='fields_option' \
--file=without_header_row.csv \
--type=csv \
--fields="tripduration","starttime","stoptime","start station id","start station name","start station latitude","start station longitude","end station id","end station name","end station latitude","end station longitude","bikeid","usertype","birth year","gender"
```
That's quite a long line! In cases where there are lots of columns it's a good idea to manage the field names in a field file.
### Use a Field File
A field file is a list of column names, with one name per line. So the equivalent of the `--fields` value from the call above looks like this:
``` none
tripduration
starttime
stoptime
start station id
start station name
start station latitude
start station longitude
end station id
end station name
end station latitude
end station longitude
bikeid
usertype
birth year
gender
```
If you put that content in a file called 'field_file.txt' and then run the following command, it will use these column names as field names in MongoDB:
``` bash
mongoimport \
--collection='fieldfile_option' \
--file=without_header_row.csv \
--type=csv \
--fieldFile=field_file.txt
```
If you open Compass and look at the schema for either 'fields_option' or 'fieldfile_option', you should see that `mongoimport` has automatically converted integer types to `int32` and kept the latitude and longitude values as `double` which is a real type, or floating-point number. In some cases, though, MongoDB may make an incorrect decision. In the screenshot above, you can see that the 'starttime' and 'stoptime' fields have been imported as strings. Ideally they would have been imported as a BSON date type, which is more efficient for storage and filtering.
In this case, you'll want to specify the type of some or all of your columns.
### Specify Types for CSV Columns
All of the types you can specify are listed in our reference documentation.
To tell `mongoimport` you wish to specify the type of some or all of your fields, you should use the `--columnsHaveTypes` option. As well as using the `--columnsHaveTypes` option, you will need to specify the types of your fields. If you're using the `--fields` option, you can add type information to that value, but I highly recommend adding type data to the field file. This way it should be more readable and maintainable, and that's what I'll demonstrate here.
I've created a file called `field_file_with_types.txt`, and entered the following:
``` none
tripduration.auto()
starttime.date(2006-01-02 15:04:05)
stoptime.date(2006-01-02 15:04:05)
start station id.auto()
start station name.auto()
start station latitude.auto()
start station longitude.auto()
end station id.auto()
end station name.auto()
end station latitude.auto()
end station longitude.auto()
bikeid.auto()
usertype.auto()
birth year.auto()
gender.auto()
```
Because `mongoimport` already did the right thing with most of the fields, I've set them to `auto()` - the type information comes after a period (`.`). The two time fields, `starttime` and `stoptime` were being incorrectly imported as strings, so in these cases I've specified that they should be treated as a `date` type. Many of the types take arguments inside the parentheses. In the case of the `date` type, it expects the argument to be *a date* formatted in the same way you expect the column's values to be formatted. See the reference documentation for more details.
Now, the data can be imported with the following call to `mongoimport`:
``` bash
mongoimport --collection='with_types' \
--file=without_header_row.csv \
--type=csv \
--columnsHaveTypes \
--fieldFile=field_file_with_types.txt
```
## And The Rest
Hopefully you now have a good idea of how to use `mongoimport` and of how flexible it is! I haven't covered nearly all of the options that can be provided to `mongoimport`, however, just the most important ones. Others I find useful frequently are:
| Option| Description |
| --- | --- |
| `--ignoreBlanks`| Ignore fields or columns with empty values. |
| `--drop` | Drop the collection before importing the new documents. This is particularly useful during development, but **will lose data** if you use it accidentally. |
| `--stopOnError` | Another option that is useful during development, this causes `mongoimport` to stop immediately when an error occurs. |
There are many more! Check out the mongoimport reference documentation for all the details.
## Useful Command-Line Tools
One of the major benefits of command-line programs is that they are designed to work with *other* command-line programs to provide more power. There are a couple of command-line programs that I *particularly* recommend you look at: `jq` a JSON manipulation tool, and `csvkit` a similar tool for working with CSV files.
### JQ
JQ is a processor for JSON data. It incorporates a powerful filtering and scripting language for filtering, manipulating, and even generating JSON data. A full tutorial on how to use JQ is out of scope for this guide, but to give you a brief taster:
If you create a JQ script called `fix_dates.jq` containing the following:
``` none
.starttime |= { "$date": (. | sub(" "; "T") + "Z") }
| .stoptime |= { "$date": (. | sub(" "; "T") + "Z") }
```
You can now pipe the sample JSON data through this script to modify the
`starttime` and `stoptime` fields so that they will be imported into MongoDB as `Date` types:
``` bash
echo '
{
"tripduration": 602,
"starttime": "2019-12-01 00:00:05.5640",
"stoptime": "2019-12-01 00:10:07.8180"
}' \
| jq -f fix_dates.jq
{
"tripduration": 602,
"starttime": {
"$date": "2019-12-01T00:00:05.5640Z"
},
"stoptime": {
"$date": "2019-12-01T00:10:07.8180Z"
}
}
```
This can be used in a multi-stage pipe, where data is piped into `mongoimport` via `jq`.
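For example, something like the following (the collection name is a placeholder) transforms the documents on the fly as they're imported; `jq`'s `-c` flag emits each document on a single line, which `mongoimport` reads as one document per line:

``` bash
cat file_per_document/*.json \
    | jq -c -f fix_dates.jq \
    | mongoimport --collection='with_dates'
```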
The `jq` tool can be a little fiddly to understand at first, but once you start to understand how the language works, it is very powerful, and very fast. I've provided a more complex JQ script example in the sample data GitHub repo, called `json_fixes.jq`. Check it out for more ideas, and the full documentation on the JQ website.
### CSVKit
In the same way that `jq` is a tool for filtering and manipulating JSON data, `csvkit` is a small collection of tools for filtering and manipulating CSV data. Some of the tools, while useful in their own right, are unlikely to be useful when combined with `mongoimport`. Tools like `csvgrep` which filters csv file rows based on expressions, and `csvcut` which can remove whole columns from CSV input, are useful tools for slicing and dicing your data before providing it to `mongoimport`.
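As a quick, illustrative example (the file, column, and collection names are placeholders), you might strip a CSV file down to a few columns before importing it:

``` bash
csvcut -c tripduration,starttime,stoptime rides.csv \
    | mongoimport --type=csv --headerline --collection='trimmed_rides'
```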
Check out the csvkit docs for more information on how to use this collection of tools.
### Other Tools
Are there other tools you know of which would work well with
`mongoimport`? Do you have a great example of using `awk` to handle tabular data before importing into MongoDB? Let us know on the community forums!
## Conclusion
It's a common mistake to write custom code to import data into MongoDB. I hope I've demonstrated how powerful `mongoimport` is as a tool for importing data into MongoDB quickly and efficiently. Combined with other simple command-line tools, it's both a fast and flexible way to import your data into MongoDB. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to import different types of data into MongoDB, quickly and efficiently, using mongoimport.",
"contentType": "Tutorial"
} | How to Import Data into MongoDB with mongoimport | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-sync-migration | created | # Migrating Your iOS App's Synced Realm Schema in Production
## Introduction
In the previous post in this series, we saw how to migrate your Realm data when you upgraded your iOS app with a new schema. But, that only handled the data in your local, standalone Realm database. What if you're using MongoDB Realm Sync to replicate your local Realm data with other instances of your mobile app and with MongoDB Atlas? That's what this article will focus on.
We'll start with the original RChat app. We'll then extend the iOS app and backend Realm schema to add a new feature that allows chat messages to be tagged as high priority. The next (and perhaps surprisingly more complicated from a Realm perspective) upgrade is to make the `author` attribute of the existing `ChatMessage` object non-optional.
You can find all of the code for this post in the RChat repo under these branches:
- Starting point
- Upgrade #1
- Upgrade #2
## Prerequisites
Realm Cocoa 10.13.0 or later (for versions of the app that you're upgrading **to**)
## Catch-Up — The RChat App
RChat is a basic chat app:
- Users can register and log in using their email address and a password.
- Users can create chat rooms and include other users in those rooms.
- Users can post messages to a chat room (optionally including their location and photos).
- All members of a chatroom can see messages sent to the room by themselves or other users.
:youtube[Existing RChat iOS app functionality]{vid=BlV9El_MJqk}
## Upgrade #1: Add a High-Priority Flag to Chat Messages
The first update is to allow a user to tag a message as being high-priority as they post it to the chat room:
(Screenshot: the option to tap a thermometer button to tag the message as urgent)
That message is then highlighted with bold text and a "hot" icon in the list of chat messages:
### Updating the Backend Realm Schema
Adding a new field is an additive change—meaning that you don't need to restart sync (which would require every deployed instance of the RChat mobile app to recognize the change and start sync from scratch, potentially losing local changes).
We add the new `isHighPriority` bool to our Realm schema through the Realm UI:
We also make `isHighPriority` a required (non-optional field).
The resulting schema looks like this:
```js
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"author": {
"bsonType": "string"
},
"image": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required":
"_id",
"date"
],
"title": "Photo"
},
"isHighPriority": {
"bsonType": "bool"
},
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
},
"partition": {
"bsonType": "string"
},
"text": {
"bsonType": "string"
},
"timestamp": {
"bsonType": "date"
}
},
"required": [
"_id",
"partition",
"text",
"timestamp",
"isHighPriority"
],
"title": "ChatMessage"
}
```
Note that existing versions of our iOS RChat app can continue to work with our updated backend Realm app, even though their local `ChatMessage` Realm objects don't include the new field.
### Updating the iOS RChat App
While existing versions of the iOS RChat app can continue to work with the updated Realm backend app, they can't use the new `isHighPriority` field as it isn't part of the `ChatMessage` object.
To add the new feature, we need to update the mobile app after deploying the updated Realm backend application.
The first change is to add the `isHighPriority` field to the `ChatMessage` class:
```swift
class ChatMessage: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id = UUID().uuidString
@Persisted var partition = "" // "conversation="
@Persisted var author: String? // username
@Persisted var text = ""
@Persisted var image: Photo?
    @Persisted var location = List<Double>()
@Persisted var timestamp = Date()
@Persisted var isHighPriority = false
...
}
```
As seen in the [previous post in this series, Realm can automatically update the local realm to include this new attribute and initialize it to `false`. Unlike with standalone realms, we **don't** need to signal to the Realm SDK that we've updated the schema by providing a schema version.
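As a rough sketch of how the updated app might set the new flag when saving a message (this is illustrative only, not taken from the RChat codebase, and the realm here is opened with the default configuration rather than RChat's synced setup):

```swift
import RealmSwift

// Illustrative sketch only - not the actual RChat code.
func sendHighPriorityMessage(text: String, author: String, partition: String) throws {
    let realm = try Realm()
    let message = ChatMessage()
    message.author = author
    message.text = text
    message.partition = partition
    message.isHighPriority = true   // the new flag; older messages default to false

    try realm.write {
        realm.add(message)
    }
}
```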
The new version of the app will happily exchange messages with instances of the original app on other devices (via our updated backend Realm app).
## Upgrade #2: Make `author` a Non-Optional Chat Message field
When the initial version of RChat was written, the `author` field of `ChatMessage` was declared as being optional. We've since realized that there are no scenarios where we wouldn't want the author included in a chat message. To make sure that no existing or future client apps neglect to include the author, we need to update our schema to make `author` a required field.
Unfortunately, changing a field from optional to required (or vice versa) is a destructive change, and so would break sync for any deployed instances of the RChat app.
Oops!
This means that there's extra work needed to make the upgrade seamless for the end users. We'll go through the process now.
### Updating the Backend Realm Schema
The change we need to make to the schema is destructive. This means that the new document schema is incompatible with the schema that's currently being used in our mobile app.
If RChat wasn't already deployed on the devices of hundreds of millions of users (we can dream!), then we could update the Realm schema for the `ChatMessage` collection and restart Realm Sync. During development, we can simply remove the original RChat mobile app and then install an updated version on our test devices.
To avoid that trauma for our end users, we leave the `ChatMessage` collection's schema as is and create a partner collection. The partner collection (`ChatMessageV2`) will contain the same data as `ChatMessage`, except that its schema makes `author` a required field.
These are the steps we'll go through to create the partner collection:
- Define a Realm schema for the `ChatMessageV2` collection.
- Run an aggregation to copy all of the documents from `ChatMessage` to `ChatMessageV2`. If `author` is missing from a `ChatMessage` document, then the aggregation will add it.
- Add a trigger to the `ChatMessage` collection to propagate any changes to `ChatMessageV2` (adding `author` if needed).
- Add a trigger to the `ChatMessageV2` collection to propagate any changes to `ChatMessage`.
#### Define the Schema for the Partner Collection
From the Realm UI, copy the schema from the `ChatMessage` collection.
Click the button to create a new schema:
Set the database and collection name before clicking "Add Collection":
Paste in the schema copied from `ChatMessage`, add `author` to the `required` section, change the `title` to `ChatMessageV2`, and the click the "SAVE" button:
This is the resulting schema:
```js
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"author": {
"bsonType": "string"
},
"image": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required":
"_id",
"date"
],
"title": "Photo"
},
"isHighPriority": {
"bsonType": "bool"
},
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
},
"partition": {
"bsonType": "string"
},
"text": {
"bsonType": "string"
},
"timestamp": {
"bsonType": "date"
}
},
"required": [
"_id",
"partition",
"text",
"timestamp",
"isHighPriority",
"author"
],
"title": "ChatMessageV2"
}
```
#### Copy Existing Data to the Partner Collection
We're going to use an [aggregation pipeline to copy and transform the existing data from the original collection (`ChatMessage`) to the partner collection (`ChatMessageV2`).
You may want to pause sync just before you run the aggregation, and then unpause it after you enable the trigger on the `ChatMessage` collection in the next step:
The end users can continue to create new messages while sync is paused, but those messages won't be published to other users until sync is resumed. By pausing sync, you can ensure that all new messages will make it into the partner collection (and so be visible to users running the new version of the mobile app).
If pausing sync is too much of an inconvenience, then you could create a temporary trigger on the `ChatMessage` collection that will copy and transform document inserts to the `ChatMessageV2` collection (it's a subset of the `ChatMessageProp` trigger we'll define in the next section).
From the Atlas UI, select "Collections" -> "ChatMessage", "New Pipeline From Text":
Paste in this aggregation pipeline and click the "Create New" button:
```js
[
{
'$addFields': {
'author': {
'$convert': {
'input': '$author',
'to': 'string',
'onError': 'unknown',
'onNull': 'unknown'
}
}
}
},
{
'$merge': {
into: "ChatMessageV2",
on: "_id",
whenMatched: "replace",
whenNotMatched: "insert"
}
}
]
```
This aggregation will take each `ChatMessage` document, set `author` to "unknown" if it's not already set, and then add it to the `ChatMessageV2` collection.
Click "MERGE DOCUMENTS":
![Clicking the "Merge Documents" button in the Realm UI
`ChatMessageV2` now contains a (possibly transformed) copy of every document from `ChatMessage`. But, changes to one collection won't be propagated to the other. To address that, we add a database trigger to each collection…
#### Add Database Triggers
We need to create two Realm Functions—one to copy/transfer documents to `ChatMessageV2`, and one to copy documents to `ChatMessage`.
From the "Functions" section of the Realm UI, click "Create New Function":
Name the function `copyToChatMessageV2`. Set the authentication method to "System"—this will circumvent any access permissions on the `ChatMessageV2` collection. Ensure that the "Private" switch is turned on—that means that the function can be called from a trigger, but not directly from a frontend app. Click "Save":
Paste this code into the function editor and save:
```js
exports = function (changeEvent) {
const db = context.services.get("mongodb-atlas").db("RChat");
if (changeEvent.operationType === "delete") {
return db.collection("ChatMessageV2").deleteOne({ _id: changeEvent.documentKey._id });
}
const author = changeEvent.fullDocument.author ? changeEvent.fullDocument.author : "Unknown";
const pipeline = [
{ $match: { _id: changeEvent.documentKey._id } },
{
$addFields: {
author: author,
}
},
{ $merge: "ChatMessageV2" }];
return db.collection("ChatMessage").aggregate(pipeline);
};
```
This function will receive a `ChatMessage` document from our trigger. If the operation that triggered the function is a delete, then this function deletes the matching document from `ChatMessageV2`. Otherwise, the function either copies `author` from the incoming document or sets it to "Unknown" before writing the transformed document to `ChatMessageV2`. We could initialize `author` to any string, but I've used "Unknown" to tell the user that we don't know who the author was.
Create the `copyToChatMessage` function in the same way:
```js
exports = function (changeEvent) {
const db = context.services.get("mongodb-atlas").db("RChat");
if (changeEvent.operationType === "delete") {
return db.collection("ChatMessage").deleteOne({ _id: changeEvent.documentKey._id })
}
const pipeline = [
{ $match: { _id: changeEvent.documentKey._id } },
{ $merge: "ChatMessage" }]
return db.collection("ChatMessageV2").aggregate(pipeline);
};
```
The final change needed to the backend Realm application is to add database triggers that invoke these functions.
From the "Triggers" section of the Realm UI, click "Add a Trigger":
![Click the "Add a Trigger" button in the Realm UI
Configure the `ChatMessageProp` trigger as shown:
Repeat for `ChatMessageV2Change`:
If you paused sync in the previous section, then you can now unpause it.
### Updating the iOS RChat App
We want to ensure that users still running the old version of the app can continue to exchange messages with users running the latest version.
Existing versions of RChat will continue to work. They will create `ChatMessage` objects which will get synced to the `ChatMessage` Atlas collection. The database triggers will then copy/transform the document to the `ChatMessageV2` collection.
We now need to create a new version of the app that works with documents from the `ChatMessageV2` collection. We'll cover that in this section.
Recall that we set `title` to `ChatMessageV2` in the partner collection's schema. That means that to sync with that collection, we need to rename the `ChatMessage` class to `ChatMessageV2` in the iOS app.
Changing the name of the class throughout the app is made trivial by Xcode.
Open `ChatMessage.swift` and right-click on the class name (`ChatMessage`), select "Refactor" and then "Rename…":
Override the class name with `ChatMessageV2` and click "Rename":
The final step is to make the author field mandatory. Remove the ? from the author attribute to make it non-optional:
```swift
class ChatMessageV2: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id = UUID().uuidString
@Persisted var partition = "" // "conversation="
@Persisted var author: String
...
}
```
## Conclusion
Modifying a Realm schema is a little more complicated when you're using Realm Sync for a deployed app. You'll have end users who are using older versions of the schema, and those apps need to continue to work.
Fortunately, the most common schema changes (adding or removing fields) are additive. They simply require updates to the back end and iOS schema, together.
Things get a little trickier for destructive changes, such as changing the type or optionality of an existing field. For these cases, you need to create and maintain a partner collection to avoid loss of data or service for your users.
This article has stepped through how to handle both additive and destructive schema changes, allowing you to add new features or fix issues in your apps without impacting users running older versions of your app.
Remember, you can find all of the code for this post in the RChat repo under these branches:
- Starting point
- Upgrade #1
- Upgrade #2
If you're looking to upgrade the Realm schema for an iOS app that **isn't** using Realm Sync, then refer to the previous post in this series.
If you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. To keep up with the latest Realm news, follow @realm on Twitter and join the Realm global community.
| md | {
"tags": [
"Realm",
"iOS",
"Mobile"
],
"pageDescription": "When you add features to your app, you may need to modify your Realm schema. Here, we step through how to migrate your synced schema and data.",
"contentType": "Tutorial"
} | Migrating Your iOS App's Synced Realm Schema in Production | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/live2020-keynote-summary | created | # MongoDB.Live 2020 Keynote In Less Than 10 Minutes
Didn't get a chance to attend the MongoDB.Live 2020 online conference
this year? Don't worry. We have compiled a quick recap of all the
highlights to get you caught up.
>
>
>MongoDB.Live 2020 - Everything from the Breakthroughs to the
>Milestones - in 10 Minutes!
>
>:youtube[]{vid=TB_EdovmBUo}
>
>
As you can see, we packed a lot of exciting news in this year's event.
MongoDB Realm:
- Bi-directional sync between mobile devices and data in an Atlas
cluster
- Easy integration with authentication, serverless functions and
triggers
- GraphQL support
MongoDB Server 4.4:
- Refinable shard keys
- Hedged reads
- New query language additions - union, custom aggregation expressions
MongoDB Atlas:
- Atlas Search GA
- Atlas Online Archive (Beta)
- Automated schema suggestions
- AWS IAM authentication
Analytics
- Atlas Data Lake
- Federated queries
- Charts embedding SDK
DevOps Tools and Integrations
- Community Kubernetes operator and containerized Ops Manager for
Enterprise
- New MongoDB shell
- New integration for VS Code and other JetBrains IDEs
MongoDB Learning & Community
- University Learning Paths
- Developer Hub
- Community Forum
Over 14,000 attendees attended our annual user conference online to
experience how we make data stunningly easy to work with. If you didn't
get the chance to attend, check out our upcoming MongoDB.live 2020
regional series. These virtual
conferences in every corner of the globe are the easiest way for you to
sharpen your skills during a full day of interactive, virtual sessions,
deep dives, and workshops from the comfort of your home and the
convenience of your time zone. You'll discover new ways MongoDB removes
the developer pain of working with data, allowing you to focus on your
vision and freeing your genius.
To learn more, ask questions, leave feedback or simply connect with
other MongoDB developers, visit our community
forums. Come to learn.
Stay to connect.
>
>
>Getting started with Atlas is easy. Sign up for a free MongoDB
>Atlas account to start working with
>all the exciting new features of MongoDB, including Realm and Charts,
>today!
>
>
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Missed the MongoDB .Live 2020 online conference? From the breakthroughs to the milestones, here's what you missed - in less than 10 minutes!",
"contentType": "Article"
} | MongoDB.Live 2020 Keynote In Less Than 10 Minutes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/outlier-pattern | created | # Building with Patterns: The Outlier Pattern
So far in this *Building with Patterns* series, we've looked at the
Polymorphic,
Attribute, and
Bucket patterns. While the document schema in
these patterns has slight variations, from an application and query
standpoint, the document structures are fairly consistent. What happens,
however, when this isn't the case? What happens when there is data that
falls outside the "normal" pattern? What if there's an outlier?
Imagine you are starting an e-commerce site that sells books. One of the
queries you might be interested in running is "who has purchased a
particular book". This could be useful for a recommendation system to
show your customers similar books of interest. You decide to store the
`user_id` of a customer in an array for each book. Simple enough, right?
Well, this may indeed work for 99.99% of the cases, but what happens
when J.K. Rowling releases a new Harry Potter book and sales spike in
the millions? The 16MB BSON document size limit could easily
be reached. Redesigning our entire application for this *outlier*
situation could result in reduced performance for the typical book, but
we do need to take it into consideration.
## The Outlier Pattern
With the Outlier Pattern, we are working to prevent a few queries or
documents from driving our solution toward one that would not be optimal for
the majority of our use cases. Not every book sold will sell millions of
copies.
A typical `book` document storing `user_id` information might look
something like:
``` javascript
{
    "_id": ObjectID("507f1f77bcf86cd799439011"),
"title": "A Genealogical Record of a Line of Alger",
"author": "Ken W. Alger",
    ...,
    "customers_purchased": ["user00", "user01", "user02"]
}
```
This would work well for the large majority of books that aren't likely to
reach the "best seller" lists. To account for outliers, though, when the
`customers_purchased` array expands beyond the 1,000-item limit we
have set, we'll add a new field to "flag" the book as an outlier.
``` javascript
{
"_id": ObjectID("507f191e810c19729de860ea"),
"title": "Harry Potter, the Next Chapter",
"author": "J.K. Rowling",
...,
"customers_purchased": ["user00", "user01", "user02", ..., "user999"],
"has_extras": "true"
}
```
We'd then move the overflow information into a separate document linked
with the book's `id`. Inside the application, we would be able to
determine if a document has a `has_extras` field with a value of `true`.
If that is the case, the application would retrieve the extra
information. This could be handled so that it is rather transparent for
most of the application code.
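As an illustrative sketch only — the overflow collection name (`book_extras`) and its field names are assumptions, not part of the original pattern description — an overflow document linked to the outlier book might look like this:

``` javascript
{
    "_id": ObjectID("507f191e810c19729de860eb"),
    // Links back to the outlier book document shown above
    "book_id": ObjectID("507f191e810c19729de860ea"),
    "customers_purchased_extra": ["user1000", "user1001", "user1002"]
}
```

When the application sees `has_extras: true` on a book, it would issue one additional query against the overflow collection by `book_id` to fetch the remaining purchasers.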
Many design decisions will be based on the application workload, so this
solution is intended to show an example of the Outlier Pattern. The
important concept to grasp here is that the outliers have a substantial
enough difference in their data that, if they were considered "normal",
changing the application design for them would degrade performance for
the more typical queries and documents.
## Sample Use Case
The Outlier Pattern is an advanced pattern, but one that can result in
large performance improvements. It is frequently used in situations when
popularity is a factor, such as in social network relationships, book
sales, movie reviews, etc. The Internet has transformed our world into a
much smaller place and when something becomes popular, it transforms the
way we need to model the data around the item.
One example is a customer that has a video conferencing product. The
list of authorized attendees in most video conferences can be kept in
the same document as the conference. However, there are a few events,
like a company's all hands, that have thousands of expected attendees.
For those outlier conferences, the customer implemented "overflow"
documents to record those long lists of attendees.
## Conclusion
The problem that the Outlier Pattern addresses is preventing a few
documents or queries from determining an application's design, especially
when that design would not be optimal for the majority of use cases.
We can leverage MongoDB's flexible data model to add a field to the
document "flagging" it as an outlier. Then, inside the application, we
handle the outliers slightly differently. By tailoring your schema for
the typical document or query, application performance will be optimized
for those normal use cases and the outliers will still be addressed.
One thing to consider with this pattern is that it often is tailored for
specific queries and situations. Therefore, ad hoc queries may result in
less than optimal performance. Additionally, as much of the work is done
within the application code itself, additional code maintenance may be
required over time.
In our next *Building with Patterns* post, we'll take a look at the
Computed Pattern and how to optimize schema for
applications that can result in unnecessary waste of resources.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Tutorial"
} | Building with Patterns: The Outlier Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/realm-swiftui-scrumdinger-migration | created | # Adapting Apple's Scrumdinger SwiftUI Tutorial App to Use Realm
Apple published a great tutorial to teach developers how to create iOS apps using SwiftUI. I particularly like it because it doesn't make any assumptions about existing UIKit experience, making it ideal for developers new to iOS. That tutorial is built around an app named "Scrumdinger," which is designed to facilitate daily scrum meetings.
Apple's Scrumdinger implementation saves the app data to a local file whenever the user minimizes the app, and loads it again when they open the app. It seemed an interesting exercise to modify Scrumdinger to use Realm rather than a flat file to persist the data. This article steps through what changes were required to rebase Scrumdinger onto Realm.
An immediate benefit of the move is that changes are now persisted immediately, so nothing is lost if the device or app crashes. It's beyond the scope of this article, but now that the app data is stored in Realm, it would be straightforward to add enhancements such as:
- Search meeting minutes for a string.
- Filter minutes by date or attendees.
- Sync data so that the same user can see all of their data on multiple iOS (and optionally, Android) devices.
- Use Realm Sync Partitions to share scrum data between team members.
- Sync the data to MongoDB Atlas so that it can be accessed by web apps or through a GraphQL API.
>
>
>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.
>
>
## Prerequisites
- Mac (sorry Windows and Linux users).
- Xcode 12.4+.
I strongly recommend that you at least scan Apple's tutorial. I don't explain any of the existing app structure or code in this article.
## Adding Realm to the Scrumdinger App
First of all, a couple of notes about the GitHub repo for this project:
- The main branch is the app as it appears in Apple's tutorial. This is the starting point for this article.
- The realm branch contains a modified version of the Scrumdinger app that persists the application data in Realm. This is the finishing point for this article.
- You can view the diff between the main and realm branches to see the changes needed to make the app run on Realm.
### Install and Run the Original Scrumdinger App
``` bash
git clone https://github.com/realm/Scrumdinger.git
cd Scrumdinger
open Scrumdinger.xcodeproj
```
From Xcode, select a simulator:
Build and run the app with `⌘r`:
Create a new daily scrum. Force close and restart the app with `⌘r`. Note that your new scrum has been lost 😢. Don't worry, that's automatically fixed once we've migrated to Realm.
### Add the Realm SDK to the Project
To use Realm, we need to add the Realm-Cocoa SDK to the Scrumdinger Xcode project using the Swift Package Manager. Select the "Scrumdinger" project and the "Swift Packages" tab, and then click the "+" button:
Paste in `https://github.com/realm/realm-cocoa` as the package repository URL:
Add the `RealmSwift` package to the `Scrumdinger` target:
We can then start using the Realm SDK with `import RealmSwift`.
### Update Model Classes to be Realm Objects
To store an object in Realm, its class must inherit from Realm's `Object` class. If the class contains sub-classes, those classes must conform to Realm's `EmbeddedObject` protocol.
#### Color
As with the original app's flat file, Realm can't natively persist the SwiftUI `Color` class, and so colors need to be stored as components. To that end, we need a `Components` class. It conforms to `EmbeddedObject` so that it can be embedded in a higher-level Realm `Object` class. Fields are flagged with the `@Persisted` annotation to indicate that they should be persisted in Realm:
``` swift
import RealmSwift
class Components: EmbeddedObject {
@Persisted var red: Double = 0
@Persisted var green: Double = 0
@Persisted var blue: Double = 0
@Persisted var alpha: Double = 0
convenience init(red: Double, green: Double, blue: Double, alpha: Double) {
self.init()
self.red = red
self.green = green
self.blue = blue
self.alpha = alpha
}
}
```
#### DailyScrum
`DailyScrum` is converted from a `struct` to an `Object` `class` so that it can be persisted in Realm. By conforming to `ObjectKeyIdentifiable`, lists of `DailyScrum` objects can be used within SwiftUI `ForEach` views, with Realm managing the `id` identifier for each instance.
We use the Realm `List` class to store arrays.
``` swift
import RealmSwift
class DailyScrum: Object, ObjectKeyIdentifiable {
@Persisted var title = ""
@Persisted var attendeeList = RealmSwift.List<String>()
@Persisted var lengthInMinutes = 0
@Persisted var colorComponents: Components?
@Persisted var historyList = RealmSwift.List<History>()
var color: Color { Color(colorComponents ?? Components()) }
var attendees: [String] { Array(attendeeList) }
var history: [History] { Array(historyList) }
convenience init(title: String, attendees: [String], lengthInMinutes: Int, color: Color, history: [History] = []) {
self.init()
self.title = title
attendeeList.append(objectsIn: attendees)
self.lengthInMinutes = lengthInMinutes
self.colorComponents = color.components
for entry in history {
self.historyList.insert(entry, at: 0)
}
}
}
extension DailyScrum {
struct Data {
var title: String = ""
var attendees: [String] = []
var lengthInMinutes: Double = 5.0
var color: Color = .random
}
var data: Data {
return Data(title: title, attendees: attendees, lengthInMinutes: Double(lengthInMinutes), color: color)
}
func update(from data: Data) {
title = data.title
for attendee in data.attendees {
if !attendees.contains(attendee) {
self.attendeeList.append(attendee)
}
}
lengthInMinutes = Int(data.lengthInMinutes)
colorComponents = data.color.components
}
}
```
#### History
The `History` struct is replaced with a Realm `Object` class:
``` swift
import RealmSwift
class History: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var date: Date?
@Persisted var attendeeList = List<String>()
@Persisted var lengthInMinutes: Int = 0
@Persisted var transcript: String?
var attendees: [String] { Array(attendeeList) }
convenience init(date: Date = Date(), attendees: [String], lengthInMinutes: Int, transcript: String? = nil) {
self.init()
self.date = date
attendeeList.append(objectsIn: attendees)
self.lengthInMinutes = lengthInMinutes
self.transcript = transcript
}
}
```
#### ScrumData
The `ScrumData` `ObservableObject` class was used to manage the copying of scrum data between the in-memory copy and a local iOS file (including serialization and deserialization). This is now handled automatically by Realm, and so this class can be deleted.
Nothing feels better than deleting boiler-plate code!
### Top-Level SwiftUI App
Once the data is being stored in Realm, there's no need for lifecycle code to load data when the app starts or save it when it's minimized, and so `ScrumdingerApp` becomes a simple wrapper for the top-level view (`ScrumsView`):
``` swift
import SwiftUI
@main
struct ScrumdingerApp: App {
var body: some Scene {
WindowGroup {
NavigationView {
ScrumsView()
}
}
}
}
```
### SwiftUI Views
#### ScrumsView
The move from a file to Realm simplifies the top-level view.
``` swift
import RealmSwift
struct ScrumsView: View {
@ObservedResults(DailyScrum.self) var scrums
@State private var isPresented = false
@State private var newScrumData = DailyScrum.Data()
@State private var currentScrum = DailyScrum()
var body: some View {
List {
if let scrums = scrums {
ForEach(scrums) { scrum in
NavigationLink(destination: DetailView(scrum: scrum)) {
CardView(scrum: scrum)
}
.listRowBackground(scrum.color)
}
}
}
.navigationTitle("Daily Scrums")
.navigationBarItems(trailing: Button(action: {
isPresented = true
}) {
Image(systemName: "plus")
})
.sheet(isPresented: $isPresented) {
NavigationView {
EditView(scrumData: $newScrumData)
.navigationBarItems(leading: Button("Dismiss") {
isPresented = false
}, trailing: Button("Add") {
let newScrum = DailyScrum(
title: newScrumData.title,
attendees: newScrumData.attendees,
lengthInMinutes: Int(newScrumData.lengthInMinutes),
color: newScrumData.color)
$scrums.append(newScrum)
isPresented = false
})
}
}
}
}
```
The `DailyScrum` objects are automatically loaded from the default Realm using the `@ObservedResults` annotation.
New scrums can be added to Realm by appending them to the `scrums` result set with `$scrums.append(newScrum)`. Note that there's no need to open a Realm transaction explicitly. That's now handled under the covers by the Realm SDK.
### DetailView
The main change to `DetailView` is that any edits to a scrum are persisted immediately. At the time of writing (Realm-Cocoa 10.7.2), the view must open a transaction to store the change:
``` swift
do {
try Realm().write() {
guard let thawedScrum = scrum.thaw() else {
print("Unable to thaw scrum")
return
}
thawedScrum.update(from: data)
}
} catch {
print("Failed to save scrum: \(error.localizedDescription)")
}
```
### MeetingView
As with `DetailView`, `MeetingView` is enhanced so that meeting notes are added as soon as they've been created (rather than being stored in volatile RAM until the app is minimized):
``` swift
do {
try Realm().write() {
guard let thawedScrum = scrum.thaw() else {
print("Unable to thaw scrum")
return
}
thawedScrum.historyList.insert(newHistory, at: 0)
}
} catch {
print("Failed to add meeting to scrum: \(error.localizedDescription)")
}
```
### CardView (+ Other Views)
There are no changes needed to the view that's responsible for displaying a summary for a scrum. The changes we made to the `DailyScrum` model in order to store it in Realm don't impact how it's used within the app.
Similarly, there are no significant changes needed to `EditView`, `HistoryView`, `MeetingTimerView`, `MeetingHeaderView`, or `MeetingFooterView`.
## Summary
I hope that this post has shown that moving an iOS app to Realm is a straightforward process. The Realm SDK abstracts away the complexity of serialization and persisting data to disk. This is especially true when developing with SwiftUI.
Now that Scrumdinger uses Realm, very little extra work is needed to add new features based on filtering, synchronizing, and sharing data. Let me know in the community forum if you try adding any of that functionality.
## Resources
- Apple's tutorial
- Pre-Realm Scrumdinger code
- Realm Scrumdinger code
- Diff - all changes required to migrate Scrumdinger to Realm
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Swift",
"Realm",
"iOS",
"React Native"
],
"pageDescription": "Learn how to add Realm to an iOS/SwiftUI app to add persistence and flexibility. Uses Apple's Scrumdinger tutorial app as the starting point.",
"contentType": "Code Example"
} | Adapting Apple's Scrumdinger SwiftUI Tutorial App to Use Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/update-array-elements-document-mql-positional-operators | created | # Update Array Elements in a Document with MQL Positional Operators
MongoDB offers a rich query language that's great for create, read, update, and delete operations as well as complex multi-stage aggregation pipelines. There are many ways to model your data within MongoDB and regardless of how it looks, the MongoDB Query Language (MQL) has you covered.
One of the lesser recognized but extremely valuable features of MQL is in the positional operators that you'd find in an update operation.
Let's say that you have a document and inside that document, you have an array of objects. You need to update one or more of those objects in the array, but you don't want to replace the array or append to it. This is where a positional operator might be valuable.
In this tutorial, we're going to look at a few examples that would benefit from a positional operator within MongoDB.
## Use the $ Operator to Update the First Match in an Array
Let's use the example that we have an array in each of our documents and we want to update only the first match within that array, even if there's a potential for numerous matches.
To do this, we'd probably want to use the `$` operator which acts as a placeholder to update the first element matched.
For this example, let's use an old-school Pokemon video game. Take look at the following MongoDB document:
``` json
{
"_id": "red",
"pokemon": [
{
"number": 6,
"name": "Charizard"
},
{
"number": 25,
"name": "Pikachu",
},
{
"number": 0,
"name": "MissingNo"
}
]
}
```
Let's assume that the above document represents the Pokemon information for the Pokemon Red video game. The document is not a true reflection and it is very much incomplete. However, if you're a fan of the game, you'll probably remember the glitch Pokemon named "MissingNo." To make up a fictional story, let's assume the developer, at some point in time, wanted to give that Pokemon an actual name, but forgot.
We can update that particular element in the array by doing something like the following:
``` javascript
db.pokemon_game.update(
{ "pokemon.name": "MissingNo" },
{
"$set": {
"pokemon.$.name": "Agumon"
}
}
);
```
In the above example, we are doing a filter for documents that have an array element with a `name` field set to `MissingNo`. With MongoDB, you don't need to specify the array index in your filter for the `update` operator. In the manipulation step, we are using the `$` positional operator to change the first occurrence of the match in the filter. Yes, in my example, I am renaming the "MissingNo" Pokemon to that of a Digimon, which is an entirely different brand.
The new document would look like this:
``` json
{
"_id": "red",
"pokemon": [
{
"number": 6,
"name": "Charizard"
},
{
"number": 25,
"name": "Pikachu",
},
{
"number": 0,
"name": "Agumon"
}
]
}
```
Had "MissingNo" appeared numerous times within the array, only the first occurrence would be updated. If "MissingNo" appeared numerous times, but the surrounding fields were different, you could match on multiple fields using the `$elemMatch` operator to narrow down which particular element should be updated.
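As an illustrative sketch of that approach — this exact query isn't part of the original example — the filter could use `$elemMatch` to require that a single array element satisfy several conditions before the `$` operator updates it:

``` javascript
db.pokemon_game.update(
    // Only the element matching BOTH conditions identifies the position for `$`
    { "pokemon": { "$elemMatch": { "name": "MissingNo", "number": 0 } } },
    {
        "$set": {
            "pokemon.$.name": "Agumon"
        }
    }
);
```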
More information on the `$` positional operator can be found in the documentation.
## Use the $[] Operator to Update All Array Elements Within a Document
Let's say that you have an array in your document and you need to update every element in that array using a single operation. To do this, we might want to take a look at the `$[]` operator which does exactly that.
Using the same Pokemon video game example, let's imagine that we have a team of Pokemon and we've just finished a battle in the game. The experience points gained from the battle need to be distributed to all the Pokemon on your team.
The document that represents our team might look like the following:
``` json
{
"_id": "red",
"team": [
{
"number": 1,
"name": "Bulbasaur",
"xp": 5
},
{
"number": 25,
"name": "Pikachu",
"xp": 32
}
]
}
```
At the end of the battle, we want to make sure every Pokemon on our team receives 10 XP. To do this with the `$[]` operator, we can construct an `update` operation that looks like the following:
``` javascript
db.pokemon_game.update(
{ "_id": "red" },
{
"$inc": {
"team.$[].xp": 10
}
}
);
```
In the above example, we use the `$inc` modifier to increase all `xp` fields within the `team` array by a constant number. To learn more about the `$inc` operator, check out the documentation.
Our new document would look like this:
``` json
{
"_id": "red",
"team": [
{
"number": 1,
"name": "Bulbasaur",
"xp": 15
},
{
"number": 25,
"name": "Pikachu",
"xp": 42
}
]
}
```
While useful for this example, we don't exactly get to provide criteria in case one of your Pokemon shouldn't receive experience points. If your Pokemon has fainted, maybe they shouldn't get the increase.
We'll learn about filters in the next part of the tutorial.
To learn more about the `$[]` operator, check out the documentation.
## Use the $[<identifier>] Operator to Update Elements that Match a Filter Condition
Let's use the example that we have several array elements that we want to update in a single operation and we don't want to worry about excessive client-side code paired with a replace operation.
To do this, we'd probably want to use the `$[<identifier>]` operator, which acts as a placeholder to update all elements that match an `arrayFilters` condition.
To put things into perspective, let's say that we're dealing with Pokemon trading cards, instead of video games, and tracking their values. Our documents might look like this:
``` javascript
db.pokemon_collection.insertMany(
[
{
_id: "nraboy",
cards: [
{
"name": "Charizard",
"set": "Base",
"variant": "1st Edition",
"value": 200000
},
{
"name": "Pikachu",
"set": "Base",
"variant": "Red Cheeks",
"value": 300
}
]
},
{
_id: "mraboy",
cards: [
{
"name": "Pikachu",
"set": "Base",
"variant": "Red Cheeks",
"value": 300
},
{
"name": "Pikachu",
"set": "McDonalds 25th Anniversary Promo",
"variant": "Holo",
"value": 10
}
]
}
]
);
```
Of course, the above snippet isn't a document, but an operation to insert two documents into some `pokemon_collection` collection within MongoDB. In the above scenario, each document represents a collection of cards for an individual. The `cards` array has information about the card in the collection as well as the current value.
In our example, we need to update prices of cards, but we don't want to do X number of update operations against the database. We only want to do a single operation to update the values of each of our cards.
Take the following query:
``` javascript
db.pokemon_collection.update(
{},
{
"$set": {
"cards.$[elemX].value": 350,
"cards.$[elemY].value": 500000
}
},
{
"arrayFilters": [
{
"elemX.name": "Pikachu",
"elemX.set": "Base",
"elemX.variant": "Red Cheeks"
},
{
"elemY.name": "Charizard",
"elemY.set": "Base",
"elemY.variant": "1st Edition"
}
],
"multi": true
}
);
```
The above `update` operation is like any other, but with an extra step for our positional operator. The first parameter, which is an empty object, represents our match criteria. Because it is empty, we'll be updating all documents within the collection.
The next parameter is the manipulation we want to do to our documents. Let's skip it for now and look at the `arrayFilters` in the third parameter.
Imagine that we want to update the price for two particular cards that might exist in any person's Pokemon collection. In this example, we want to update the price of the Pikachu and Charizard cards. If you're a Pokemon trading card fan, you'll know that there are many variations of the Pikachu and Charizard card, so we get specific in our `arrayFilters` array. For each object in the array, the fields of those objects represent an `and` condition. So, for `elemX`, which has no specific naming convention, all three fields must be satisfied.
In the above example, we are using `elemX` and `elemY` to represent two different filters.
Let's go back to the second parameter in the `update` operation. If the filter for `elemX` comes back as true because an array item in a document matched, then the `value` field for that object will be set to a new value. Likewise, the same thing could happen for the `elemY` filter. If a document has an array and one of the filters does not ever match an element in that array, it will be ignored.
If looking at our example, the documents would now look like the following:
``` json
[
{
"_id": "nraboy",
"cards": [
{
"name": "Charizard",
"set": "Base",
"variant": "1st Edition",
"value": 500000
},
{
"name": "Pikachu",
"set": "Base",
"variant": "Red Cheeks",
"value": 350
}
]
},
{
"_id": "mraboy",
"cards": [
{
"name": "Pikachu",
"set": "Base",
"variant": "Red Cheeks",
"value": 350
},
{
"name": "Pikachu",
"set": "McDonalds 25th Anniversary Promo",
"variant": "Holo",
"value": 10
}
]
}
]
```
If any particular array contained multiple matches for one of the `arrayFilter` criteria, all matches would have their price updated. This means that if I had, say, 100 matching Pikachu cards in my Pokemon collection, all 100 would now have new prices.
More information on the `$[<identifier>]` operator can be found in the documentation.
## Conclusion
You just saw how to use some of the positional operators within the MongoDB Query Language (MQL). These operators are useful when working with arrays because they prevent you from having to do full replaces on the array or extended client-side manipulation.
To learn more about MQL, check out my previous tutorial titled, Getting Started with Atlas and the MongoDB Query Language (MQL).
If you have any questions, take a moment to stop by the MongoDB Community Forums. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to work with the positional array operators within the MongoDB Query Language (MQL).",
"contentType": "Tutorial"
} | Update Array Elements in a Document with MQL Positional Operators | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/retail-search-mongodb-databricks | created | # Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks
In the rapidly evolving retail landscape, businesses are constantly seeking ways to optimize operations, improve customer experience, and stay ahead of the competition. One of the key strategies to achieve this is leveraging the opportunities that search experiences provide.
Imagine this: You walk into a department store filled with products, and you have something specific in mind. You want a seamless and fast shopping experience — this is where product displays play a pivotal role. In the digital world of e-commerce, the search functionality of your site is meant to be a facilitating tool to efficiently display what users are looking for.
Shockingly, statistics reveal that only about 50% of searches on retail websites yield the results customers seek. Think about it — half the time, customers with a strong buying intent are left without an answer to their queries.
The search component of your e-commerce site is not merely a feature; it's the bridge between customers and the products they desire. Enhancing your search engine logic with artificial intelligence is the best way to ensure that the bridge is sturdy.
In this article, we'll explore how MongoDB and Databricks can be integrated to provide robust solutions for the retail industry, with a particular focus on the MongoDB Apache Spark Streaming processor; orchestration with Databricks workflows; data transformation and featurization with MLFlow and the Spark User Defined Functions; and by building a product catalog index, sorting, ranking, and autocomplete with Atlas Search.
Let’s get to it!
### Solution overview
A modern e-commerce-backed system should be able to collate data from multiple sources in real-time, as well as batch loads, and be able to transform this data into a schema upon which a Lucene search index can be built. This enables discovery of the added inventory.
The solution should integrate website customer behavior events data in real-time to feed an “intelligence layer” that will create the criteria to display and order the most interesting products in terms of both relevance to the customer and relevance to the business.
These features are nicely captured in the above-referenced e-commerce architecture. We’ll divide it into four different stages or layers:
1. **Multi-tenant streaming ingestion:** With the help of the MongoDB Kafka connector, we are able to sync real-time data from multiple sources to MongoDB. For the sake of simplicity, in this tutorial, we will not focus on this stage.
2. **Stream processing:** With the help of the MongoDB Spark connector and Databricks jobs and notebooks, we are able to ingest data and transform it to create machine learning model features.
3. **AI/ML modeling:** All the generated streams of data are transformed and written into a unified view in a MongoDB collection called catalog, which is used to build search indexes and support querying and discovery of products.
4. **Building the search logic:** With the help of Atlas Search capabilities and robust aggregation pipelines, we can power features such as search/discoverability, hyper-personalization, and featured sort on mobile/web applications.
## Prerequisites
Before running the app, you'll need to have the following installed on your system:
* MongoDB Atlas cluster
* Databricks cluster
* python>=3.7
* pip3
* Node.js and npm
* Apache Kafka
* GitHub repository
## Streaming data into Databricks
In this tutorial, we’ll focus on explaining how to orchestrate different ETL pipelines in real time using Databricks Jobs. A Databricks job represents a single, standalone execution of a Databricks notebook, script, or task. It is used to run specific code or analyses at a scheduled time or in response to an event.
Our search solution is meant to respond to real-time events happening in an e-commerce storefront, so the search experience for a customer can be personalized and provide search results that fit two criteria:
1. **Relevant for the customer:** We will define a static score comprising behavioral data (click logs) and an Available to Promise status, so search results are products that we make sure are available and relevant based off of previous demand.
2. **Relevant for the business:** The results will be scored based on which products are more price sensitive, so higher price elasticity means they appear first on the product list page and as search results. We will also compute an optimal suggested price for the product.
So let’s check out how to configure these ETL processes over Databricks notebooks and orchestrate them using Databricks jobs to then fuel our MongoDB collections with the intelligence that we will use to build our search experience.
## Databricks jobs for product stream processing, static score, and pricing
We’ll start by explaining how to configure notebooks in Databricks. Notebooks are a key tool for data science and machine learning, allowing collaboration, real-time coauthoring, versioning, and built-in data visualization. You can also make them part of automated tasks, called jobs in Databricks. A series of jobs are called workflows. Your notebooks and workflows can be attached to computing resources that you can set up at your convenience, or they can be run via autoscale.
Learn more about how to configure jobs in Databricks using JSON configuration files.
You can find our first job JSON configuration files in our GitHub. In these JSON files, we specify the different parameters on how to run the various jobs in our Databricks cluster. We specify different parameters such as the user, email notifications, task details, cluster information, and notification settings for each task within the job. This configuration is used to automate and manage data processing and analysis tasks within a specified environment.
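As a heavily trimmed, illustrative sketch of such a job definition — the job name, notebook path, cluster spec, and email address below are placeholders, not values taken from the repository — the JSON could look something like this:

``` json
{
  "name": "catalog-indexing-workflow",
  "email_notifications": { "on_failure": ["[email protected]"] },
  "max_concurrent_runs": 1,
  "tasks": [
    {
      "task_key": "ingest_and_transform",
      "notebook_task": { "notebook_path": "/Repos/retail/catalog_indexing" },
      "new_cluster": {
        "spark_version": "12.2.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2
      }
    }
  ]
}
```

The full configuration files in the repository add further settings (schedules, libraries, and notification details), but the overall shape is the same: a list of tasks, each tied to a notebook and a cluster.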
Now, without further ado, let’s start with our first workflow, the “Catalog collection indexing workflow.”
## Catalog collection indexing workflow
The above diagram shows how our solution will run two different jobs closely related to each other in two separate notebooks. Let’s unpack this job with the code and its explanation:
The first part of your notebook script is where you’ll define and install different packages. In the code below, we have all the necessary packages, but the main ones — `pymongo` and `tqdm` — are explained below:
* PyMongo is commonly used in Python applications that need to store, retrieve, or analyze data stored in MongoDB, especially in web applications, data pipelines, and analytics projects.
* tqdm is often used in Python scripts or applications where there's a need to provide visual feedback to users about the progress of a task.
The rest of the packages are pandas, JSON, and PySpark. In this part of the snippet, we also define a variable for the MongoDB connection string to our cluster.
```
%pip install pymongo tqdm
import pandas as pd
import json
from collections import Counter
from tqdm import tqdm
from pymongo import MongoClient
from pyspark.sql import functions as F
from pyspark.sql import types as T
from pyspark.sql import Window
import pyspark
from pyspark import SparkContext
from pyspark.sql import SparkSession
conf = pyspark.SparkConf()
import copy
import numpy as np
tqdm.pandas()
MONGO_CONN = 'mongodb+srv://:@retail-demo.2wqno.mongodb.net/?retryWrites=true&w=majority'
```
## Data streaming from MongoDB
The script reads data streams from various MongoDB collections using the spark.readStream.format("mongodb") method.
For each collection, specific configurations are set, such as the MongoDB connection URI, database name, collection name, and other options related to change streams and aggregation pipelines.
The snippet below is the continuation of the code from above. It can be put in a different cell in the same notebook.
```
atp = spark.readStream.format("mongodb").\
    option('spark.mongodb.connection.uri', MONGO_CONN).\
    option('spark.mongodb.database', "search").\
    option('spark.mongodb.collection', "atp_status_myn").\
    option('spark.mongodb.change.stream.publish.full.document.only', 'true').\
    option('spark.mongodb.aggregation.pipeline', []).\
    option("forceDeleteTempCheckpointLocation", "true").load()
```
In this specific case, the code is reading from the atp_status collection. It specifies options for the MongoDB connection, including the URI, and enables the capture of the full document when changes occur in the MongoDB collection. The empty aggregation pipeline indicates that no specific transformations are applied at this stage.
Following with the next stage of the job for the atp_status collection, we can break down the code snippet into three different parts:
#### Data transformation and data writing to MongoDB
After reading the data streams, we drop the `_id` field. This is a special field that serves as the primary key for a document within a collection. Every document in a MongoDB collection must have a unique `_id` field, which distinguishes it from all other documents in the same collection. As we are going to create a new collection, we need to drop the previous `_id` field of the original documents; when we insert them into the new collection, a new `_id` field will be assigned.
```
atp = atp.drop("_id")
```
#### Data writing to MongoDB
The transformed data streams are written back to MongoDB using the **writeStream.format("mongodb")** method.
The data is written to the catalog_myn collection in the search database.
Specific configurations are set for each write operation, such as the MongoDB connection URI, database name, collection name, and other options related to upserts, checkpoints, and output modes.
The below code snippet is a continuation of the notebook from above.
```
atp.writeStream.format("mongodb").\
    option('spark.mongodb.connection.uri', MONGO_CONN).\
    option('spark.mongodb.database', "search").\
    option('spark.mongodb.collection', "catalog_myn").\
    option('spark.mongodb.operationType', "update").\
    option('spark.mongodb.upsertDocument', True).\
    option('spark.mongodb.idFieldList', "id").\
```
#### Checkpointing
Checkpoint locations are specified for each write operation. Checkpoints are used to maintain the state of streaming operations, allowing for recovery in case of failures. The checkpoints are stored in the /tmp/ directory with specific subdirectories for each collection.
Here is an example of checkpointing. It’s included in the script right after the code from above.
```
    option("forceDeleteTempCheckpointLocation", "true").\
    option("checkpointLocation", "/tmp/retail-atp-myn4/_checkpoint/").\
    outputMode("append").\
    start()
```
The full snippet of code performs different data transformations for the various collections we are ingesting into Databricks, but they all follow the same pattern of ingestion, transformation, and rewriting back to MongoDB. Make sure to check out the full first indexing job notebook.
For the second part of the indexing job, we will use a user-defined function (UDF) in our code to embed our product catalog data using a transformers model. This is useful to be able to build Vector Search features.
This is an example of how to define a user-defined function. You can define your functions early in your notebook so you can reuse them later for running your data transformations or analytics calculations. In this case, we are using it to embed text data from a document.
The **‘@F.udf()’** decorator is used to define a user-defined function in PySpark using the F object, which is an alias for the pyspark.sql.functions module. In this specific case, it is defining a UDF named ‘get_vec’ that takes a single argument text and returns the result of calling ‘model.encode(text)’.
The code from below is a continuation of the same notebook.
```
@F.udf()
def get_vec(text):
    return model.encode(text)
```
Our notebook code continues with similar snippets to previous examples. We'll use the MongoDB Connector for Spark to ingest data from the previously built catalog collection.
```
catalog_status = spark.readStream.format("mongodb").\
    option('spark.mongodb.connection.uri', MONGO_CONN).\
    option('spark.mongodb.database', "search").\
    option('spark.mongodb.collection', "catalog_myn").\
    option('spark.mongodb.change.stream.publish.full.document.only', 'true').\
    option('spark.mongodb.aggregation.pipeline', []).\
    option("forceDeleteTempCheckpointLocation", "true").load()
```
Then, it performs data transformations on the catalog_status DataFrame, including adding a new column, the atp_status that is now a boolean value, 1 for available, and 0 for unavailable. This is useful for us to be able to define the business logic of the search results showcasing only the products that are available.
We also calculate the discounted price based on data from another job we will explain further along.
The below snippet is a continuation of the notebook code from above:
```
catalog_status = catalog_status.withColumn("discountedPrice", F.col("price") * F.col("pred_price"))
catalog_status = catalog_status.withColumn("atp", (F.col("atp").cast("boolean") & F.lit(1).cast("boolean")).cast("integer"))
```
We vectorize the title of the product and we create a new field called “vec”. We then drop the "_id" field, indicating that this field will not be updated in the target MongoDB collection.
```
catalog_status = catalog_status.withColumn("vec", get_vec("title"))
catalog_status = catalog_status.drop("_id")
```
Finally, it sets up a structured streaming write operation to write the transformed data to a MongoDB collection named "catalog_final_myn" in the "search" database while managing query state and checkpointing.
```
catalog_status.writeStream.format("mongodb").\
    option('spark.mongodb.connection.uri', MONGO_CONN).\
    option('spark.mongodb.database', "search").\
    option('spark.mongodb.collection', "catalog_final_myn").\
    option('spark.mongodb.operationType', "update").\
    option('spark.mongodb.idFieldList', "id").\
    option("forceDeleteTempCheckpointLocation", "true").\
    option("checkpointLocation", "/tmp/retail-atp-myn5/_checkpoint/").\
    outputMode("append").\
    start()
```
Let’s see how to configure the second workflow to calculate a BI score for each product in the collection and introduce the result back into the same document so it’s reusable for search scoring.
## BI score computing logic workflow
Diagram overview of the BI score computing job logic: materialized views ingest data from a MongoDB collection, and user click logs are processed with an Empirical Bayes algorithm.
In this stage, we will explain the script to be run in our Databricks notebook as part of the BI score computing job. Please bear in mind that we will only explain what makes this code snippet different from the previous, so make sure to understand how the complete snippet works. Please feel free to clone our complete repository so you can get a full view on your local machine.
We start by setting up the configuration for Apache Spark using the SparkConf object and specify the necessary package dependency for our MongoDB Spark connector.
```
conf = pyspark.SparkConf()
conf.set("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.1.0")
```
Then, we initialize a Spark session for our Spark application named "test1" running in local mode. It also configures Spark with the MongoDB Spark connector package dependency, which is set up in the conf object defined earlier. This Spark session can be used to perform various data processing and analytics tasks using Apache Spark.
The below code is a continuation to the notebook snippet explained above:
```
spark = SparkSession.builder \
.master("local") \
.appName("test1") \
.config(conf = conf) \
.getOrCreate()
```
We’ll use MongoDB Aggregation Pipelines in our code snippet to get a set of documents, each representing a unique "product_id" along with the corresponding counts of total views, purchases, and cart events. We’ll use the transformed resulting data to feed an Empirical Bayes algorithm and calculate a value based on the cumulative distribution function (CDF) of a beta distribution.
Make sure to check out the entire .ipynb file in our repository.
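The exact pipeline lives in the repository notebook, but as a rough, illustrative sketch — the click-log field names used here (`product_id`, `action`) are assumptions, not taken from the repository — a grouping stage of that kind might look like this in Python:

```
# Illustrative only: field names are assumed; see the repository for the real pipeline.
clog_pipeline = [
    {"$group": {
        "_id": "$product_id",
        "total_views": {"$sum": {"$cond": [{"$eq": ["$action", "view"]}, 1, 0]}},
        "purchase": {"$sum": {"$cond": [{"$eq": ["$action", "purchase"]}, 1, 0]}},
        "cart": {"$sum": {"$cond": [{"$eq": ["$action", "cart"]}, 1, 0]}}
    }},
    {"$project": {"_id": 0, "product_id": "$_id", "total_views": 1, "purchase": 1, "cart": 1}}
]
```

Each resulting document — one per product, with its view, cart, and purchase counts — feeds the Empirical Bayes calculation below.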
This way, we can calculate the relevance of a product based on the behavioral data described before. We’ll also use window functions to calculate different statistics on each one of the products — like the average of purchases and the purchase beta (the difference between the average total clicks and average total purchases) — to use as input to create a BI relevance score. This is what is shown in the below code:
```
@F.udf(T.FloatType())
def beta_fn(pct,a,b):
    return float(100*beta.cdf(pct, a, b))

w = Window().partitionBy()
df = df.withColumn("purchase_alpha", F.avg('purchase').over(w))
df = df.withColumn("cart_alpha", F.avg('cart').over(w))
df = df.withColumn("total_views_mean", F.avg('total_views').over(w))
df = df.withColumn("purchase_beta", F.expr('total_views_mean - purchase_alpha'))
df = df.withColumn("cart_beta", F.expr('total_views_mean - cart_alpha'))
df = df.withColumn("purchase_pct", F.expr('(purchase+purchase_alpha)/(total_views+purchase_alpha+purchase_beta)'))
df = df.withColumn("cart_pct", F.expr('(purchase+cart_alpha)/(total_views+cart_alpha+cart_beta)'))
```
After calculating the BI score for our product, we want to use a machine learning algorithm to calculate the price elasticity of demand for the product and the optimal price.
## Calculating optimal price workflow
For calculating the optimal recommended price, first, we need to figure out a pipeline that will shape the data according to what we need. Get the pipeline definition in our repository.
We’ll first take in data from the MongoDB Atlas click logs (clog) collection that’s being ingested in the database in real-time, and create a DataFrame that will be used as input for a Random Forest regressor machine learning model. We’ll leverage the MLFlow library to be able to run MLOps stages, run tests, and register the best-performing model that will be used in the second job to calculate the price elasticity of demand, the suggested discount, and optimal price for each product. Let’s see what the code looks like!
```
model_name = "retail_competitive_pricing_model_1"
with mlflow.start_run(run_name=model_name):
# Create and fit a linear regression model
model = RandomForestRegressor(n_estimators=50, max_depth=3)
model.fit(X_train, y_train)
wrappedModel = CompPriceModelWrapper(model)
# Log model parameters and metrics
mlflow.log_params(model.get_params())
mlflow.log_metric("mse", np.mean((model.predict(X_test) - y_test) ** 2))
# Log the model with a signature that defines the schema of the model's inputs and outputs.
# When the model is deployed, this signature will be used to validate inputs.
signature = infer_signature(X_train, wrappedModel.predict(None,X_train))
# MLflow contains utilities to create a conda environment used to serve models.
# The necessary dependencies are added to a conda.yaml file which is logged along with the model.
conda_env = _mlflow_conda_env(
additional_conda_deps=None,
additional_pip_deps=["scikit-learn=={}".format(sklearn.__version__)],
additional_conda_channels=None,
)
mlflow.pyfunc.log_model(model_name, python_model=wrappedModel, conda_env=conda_env, signature=signature)
```
After we’ve done the test and train split required for fitting the model, we leverage the mlFlow model wrapping to be able to log model parameters, metrics, and dependencies.
For the next stage, we apply the previously trained and registered model to the sales data:
```
model_name = "retail_competitive_pricing_model_1"
apply_model_udf = mlflow.pyfunc.spark_udf(spark, f"models:/{model_name}/staging")
# Apply the model to the new data
columns = ['old_sales','total_sales','min_price','max_price','avg_price','old_avg_price']
udf_inputs = struct(*columns)
udf_inputs
```
Then, we just need to create the sales DataFrame with the resulting data. But first, we use the `.fillna` function to make sure all our null values are replaced with the float 0.0. We need to do this so our model receives clean input, as most machine learning models return an error if you pass them null values.
Now, we can calculate new columns to add to the sales DataFrame: the predicted optimal price, the price elasticity of demand per product, and a discount column which will be rounded up to the next nearest integer. The below code is a continuation of the code from above — they both reside in the same notebook:
```
sales = sales.fillna(0.0)
sales = sales.withColumn("pred_price",apply_model_udf(udf_inputs))
sales = sales.withColumn("price_elasticity", F.expr("((old_sales - total_sales)/(old_sales + total_sales))/(((old_avg_price - avg_price)+1)/(old_avg_price + avg_price))"))
sales = sales.withColumn("discount", F.ceil((F.lit(1) - F.col("pred_price"))*F.lit(100)))
```
Then, we push the data back using the MongoDB Connector for Spark into the proper MongoDB collection. These will be used together with the rest as the baseline on top of which we’ll build our application’s search business logic.
```
sales.select("id", "pred_price", "price_elasticity").write.format("mongodb").\
    option('spark.mongodb.connection.uri', MONGO_CONN).\
    option('spark.mongodb.database', "search").\
    option('spark.mongodb.collection', "price_myn").\
    option('spark.mongodb.idFieldList', 'id').\
    mode('overwrite').\
    save()
```
After these workflows are configured, you should be able to see the new collections and updated documents for your products.
## Building the search logic
To build the search logic, first, you’ll need to create an index. This is how we’ll make sure that our application runs smoothly as a search query, instead of having to look into all the documents in the collection. We will limit the scan by defining the criteria for those scans.
To understand more about indexing in MongoDB, you can check out the article from the documentation. But for the purposes of this tutorial, let’s dive into the two main parameters you’ll need to define for building our solution:
**Mappings:** This key dictates how fields in the index should be stored and how they should be treated when queries are made against them.
**Fields:** The fields describe the attributes or columns of the index. Each field can have specific data types and associated settings. We make the fields ‘pred_price’, ‘price_elasticity’, and ‘score’ sortable numbers so that our search results can be ordered by relevance.
The latter steps of building the solution come to defining the index mapping for the application. You can find the full mappings snippet in our GitHub repository.
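As an abridged, illustrative sketch only — not the full definition from the repository — an index that declares those sortable number fields might take this shape:

``` json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "pred_price": { "type": "number" },
      "price_elasticity": { "type": "number" },
      "score": { "type": "number" }
    }
  }
}
```

For the actual deployment, use the complete mappings snippet from the repository — that is the snippet referenced in the next step.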
To configure the index, you can insert the snippet in MongoDB Atlas by browsing your cluster splash page and clicking over the “Search” tab:
Next, you can click over “Create Index.” Make sure you select “JSON Editor”:
Paste the JSON snippet from above — make sure you select the correct database and collection! In our case, the collection name is **`catalog_final_myn`**.
## Autocomplete
To define autocomplete indexes, you can follow the same browsing instructions from the Building the search logic stage, but in the JSON editor, your code snippet may vary. Follow our tutorial to learn how to fully configure autocomplete in Atlas Search.
For our search solution, check out the code below. We define how the data should be treated and indexed for autocomplete features.
```
{
"mappings": {
"dynamic": false,
"fields": {
"query":
{
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 3,
"tokenization": "edgeGram",
"type": "autocomplete"
}
]
}
}
}
```
Let’s break down each of the parameters:
**foldDiacritics:** Setting this to false means diacritic marks on characters (like accents on letters) are treated distinctly. For instance, "résumé" and "resume" would be treated as different words.
**minGrams and maxGrams:** These specify the minimum and maximum lengths of the edge n-grams. In this case, it would index substrings (edgeGrams) with lengths ranging from 3 to 7.
**Tokenization:** The value edgeGram means the text is tokenized into substrings starting from the beginning of the string. For instance, for the word "example", with minGrams set to 3, the tokens would be "exa", "exam", "examp", etc. This is commonly used in autocomplete scenarios to match partial words.
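With an index like the one above in place, the application can run an autocomplete query through the `$search` aggregation stage. The sketch below is illustrative only: the index name (`autocomplete_index`) and target collection are assumptions, and the `path` refers to the `query` field defined in the mapping.

``` javascript
db.collection.aggregate([
    {
        "$search": {
            "index": "autocomplete_index",
            "autocomplete": {
                "query": "shir",   // partial text typed by the user
                "path": "query"    // the field indexed with the autocomplete type above
            }
        }
    },
    { "$limit": 10 }
])
```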
After all of this, you should have an AI-enhanced search functionality for your e-commerce storefront!
### Conclusion
In summary, we’ve covered how to integrate MongoDB Atlas and Databricks to build a performant and intelligent search feature for an e-commerce application.
By using the MongoDB Connector for Spark and Databricks, along with MLFlow for MLOps, we've created real-time pipelines for AI. Additionally, we've configured MongoDB Atlas Search indexes, utilizing features like Autocomplete, to build a cutting-edge search engine.
Grasping the complexities of e-commerce business models is complicated enough without also having to handle knotty integrations and operational overhead! Relying on the right tools for the job puts you several months ahead when it comes to out-innovating the competition.
Check out the GitHub repository or reach out over LinkedIn if you want to discuss search or any other retail functionality!
| md | {
"tags": [
"Atlas",
"Python",
"Node.js",
"Kafka"
],
"pageDescription": "Learn how to utilize MongoDB and Databricks to build ai-enhanced retail search solutions.",
"contentType": "Tutorial"
} | Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks | 2024-05-20T17:32:23.500Z |
devcenter | https://www.mongodb.com/developer/languages/rust/rust-mongodb-crud-tutorial | created | # Get Started with Rust and MongoDB
This Quick Start post will help you connect your Rust application to a MongoDB cluster. It will then show you how to do Create, Read, Update, and Delete (CRUD) operations on a collection. Finally, it'll cover how to use serde to map between MongoDB's BSON documents and Rust structs.
## Series Tools & Versions
This series assumes that you have a recent version of the Rust toolchain installed (v1.57+), and that you're comfortable with Rust syntax. It also assumes that you're reasonably comfortable using the command-line and your favourite code editor.
>Rust is a powerful systems programming language with high performance and low memory usage which is suitable for a wide variety of tasks. Although currently a niche language for working with data, its popularity is quickly rising.
If you use Rust and want to work with MongoDB, this blog series is the place to start! I'm going to show you how to do the following:
- Install the MongoDB Rust driver. The Rust driver is the mongodb crate which allows you to communicate with a MongoDB cluster.
- Connect to a MongoDB instance.
- Create, Read, Update & Delete (CRUD) documents in your database.
Later blog posts in the series will cover things like *Change Streams*, *Transactions* and the amazing *Aggregation Pipeline* feature which allows you to run advanced queries on your data.
## Prerequisites
I'm going to assume you have a working knowledge of Rust. I won't use any complex Rust code - this is a MongoDB tutorial, not a Rust tutorial - but you'll want to know the basics of error-handling and borrowing in Rust, at least! You may want to run `rustup update` if you haven't since January 2022 because I'll be working with a recent release.
You'll need the following:
- An up-to-date Rust toolchain, version 1.47+. I recommend you install it with Rustup if you haven't already.
- A code editor of your choice. I recommend either IntelliJ Rust or the free VS Code with the official Rust plugin
The MongoDB Rust driver uses Tokio by default - and this tutorial will do that too. If you're interested in running under async-std, or synchronously, the changes are straightforward. I'll cover them at the end.
## Creating your database
You'll use MongoDB Atlas to host a MongoDB cluster, so you don't need to worry about how to configure MongoDB itself.
> Get started with an M0 cluster on Atlas. It's free forever, and it's the easiest way to try out the steps in this blog series. You won't even need to provide payment details.
You'll need to create a new cluster and load it with sample data. My awesome colleague Maxime Beugnet has created a video tutorial to help you out, but I also explain the steps below:
- Click "Start free" on the MongoDB homepage.
- Enter your details, or just sign up with your Google account, if you have one.
- Accept the Terms of Service
- Create a *Starter* cluster.
- Select the same cloud provider you're used to, or just leave it as-is. Pick a region that makes sense for you.
- You can change the name of the cluster if you like. I've called mine "RustQuickstart".
It will take a couple of minutes for your cluster to be provisioned, so while you're waiting you can move on to the next step.
## Starting your project
In your terminal, change to the directory where you keep your coding projects and run the following command:
``` bash
cargo new --bin rust_quickstart
```
This will create a new directory called `rust_quickstart` containing a new, nearly-empty project. In the directory, open `Cargo.toml` and change the `[dependencies]` section so it looks like this:
``` toml
[dependencies]
mongodb = "2.1"
bson = { version = "2", features = ["chrono-0_4"] } # Needed for using chrono datetime in doc
tokio = "1"
chrono = "0.4" # Used for setting DateTimes
serde = "1" # Used in the Map Data into Structs section
```
Now you can download and build the dependencies by running:
``` bash
cargo run
```
You should see *lots* of dependencies downloaded and compiled. Don't worry, most of this only happens the first time you run it! At the end, if everything went well, it should print "Hello, World!" in your console.
## Set up your MongoDB instance
Your MongoDB cluster should have been set up and running for a little while now, so you can go ahead and get your database set up for the next steps.
In the Atlas web interface, you should see a green button at the bottom-left of the screen, saying "Get Started". If you click on it, it'll bring up a checklist of steps for getting your database set up. Click on each of the items in the list (including the optional "Load Sample Data" item), and it'll help you through the steps to get set up.
### Create a User
Following the "Get Started" steps, create a user with "Read and write access to any database". You can give it a username and password of your choice - take a note of them, you'll need them in a minute. Use the "autogenerate secure password" button to ensure you have a long random password which is also safe to paste into your connection string later.
### Allow an IP address
When deploying an app with sensitive data, you should only allow the IP address of the servers which need to connect to your database. Click the 'Add IP Address' button, then click 'Add Current IP Address' and finally, click 'Confirm'. You can also set a time-limit on an access list entry, for added security. Note that sometimes your IP address may change, so if you lose the ability to connect to your MongoDB cluster during this tutorial, go back and repeat these steps.
## Connecting to MongoDB
Now you've reached the point of this tutorial - connecting your Rust code to a MongoDB database! The last step of the "Get Started" checklist is "Connect to your Cluster". Select "Connect your application".
Usually, in the dialog that shows up, you'd select "Rust" in the "Driver" menu, but because the Rust driver has only just been released, it may not be in the list yet! If it's missing, select "Python" with a version of "3.6 or later" instead.
Ensure Step 2 has "Connection String only" highlighted, and press the "Copy" button to copy the URL to your pasteboard (just storing it temporarily in a text file is fine). Paste it to the same place you stored your username and password. Note that the URL has `<password>` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder including the '\<' and '>' characters.
Back in your Rust project, open `main.rs` and replace the contents with the following:
``` rust
use mongodb::{Client, options::{ClientOptions, ResolverConfig}};
use std::env;
use std::error::Error;
use tokio;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
// Load the MongoDB connection string from an environment variable:
let client_uri =
env::var("MONGODB_URI").expect("You must set the MONGODB_URI environment var!");
// A Client is needed to connect to MongoDB:
// An extra line of code to work around a DNS issue on Windows:
let options =
ClientOptions::parse_with_resolver_config(&client_uri, ResolverConfig::cloudflare())
.await?;
let client = Client::with_options(options)?;
// Print the databases in our MongoDB cluster:
println!("Databases:");
for name in client.list_database_names(None, None).await? {
println!("- {}", name);
}
Ok(())
}
```
In order to run this, you'll need to set the MONGODB_URI environment variable to the connection string you obtained above. Run one of the following in your terminal window, depending on your platform:
``` bash
# Unix (including macOS):
export MONGODB_URI='mongodb+srv://yourusername:yourpassword@yourcluster.mongodb.net/test?retryWrites=true&w=majority'
# Windows CMD shell (use double quotes so the '&' characters are not treated as command separators):
set "MONGODB_URI=mongodb+srv://yourusername:yourpassword@yourcluster.mongodb.net/test?retryWrites=true&w=majority"
# Powershell:
$Env:MONGODB_URI='mongodb+srv://yourusername:yourpassword@yourcluster.mongodb.net/test?retryWrites=true&w=majority'
```
Once you've done that, you can `cargo run` this code, and the result should look like this:
``` none
$ cargo run
Compiling rust_quickstart v0.0.1 (/Users/judy2k/development/rust_quickstart)
Finished dev [unoptimized + debuginfo] target(s) in 3.35s
Running `target/debug/rust_quickstart`
Databases:
- sample_airbnb
- sample_analytics
- sample_geospatial
- sample_mflix
- sample_supplies
- sample_training
- sample_weatherdata
- admin
- local
```
**Congratulations!** You just connected your Rust program to MongoDB and listed the databases in your cluster. If you don't see this list then you may not have successfully loaded sample data into your cluster - you'll want to go back a couple of steps until running this command shows the list above.
## BSON - How MongoDB understands data
Before you go ahead querying & updating your database, it's useful to have an overview of BSON and how it relates to MongoDB. BSON is the binary data format used by MongoDB to store all your data. BSON is also the format used by the MongoDB query language and aggregation pipelines (I'll get to these later).
It's analogous to JSON and handles all the same core types, such as numbers, strings, arrays, and objects (which are called Documents in BSON), but BSON supports more types than JSON. This includes things like dates & decimals, and it has a special ObjectId type usually used for identifying documents in a MongoDB collection. Because BSON is a binary format it's not human readable - usually when it's printed to the screen it'll be printed to look like JSON.
Because of the mismatch between BSON's dynamic schema and Rust's static type system, dealing with BSON in Rust can be tricky. Fortunately the `bson` crate provides some useful tools for dealing with BSON data, including the `doc!` macro for generating BSON documents, and it implements serde for the ability to serialize and deserialize between Rust structs and BSON data.
Creating a document structure using the `doc!` macro looks like this:
``` rust
use chrono::{TimeZone, Utc};
use mongodb::bson::doc;
let new_doc = doc! {
"title": "Parasite",
"year": 2020,
"plot": "A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.",
"released": Utc.ymd(2020, 2, 7).and_hms(0, 0, 0),
};
```
If you use `println!` to print the value of `new_doc` to the console, you should see something like this:
``` none
{ title: "Parasite", year: 2020, plot: "A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.", released: Date("2020-02-07 00:00:00 UTC") }
```
(Incidentally, Parasite is an absolutely amazing movie. It isn't in the database you'll be working with because it was released in 2020, and the dataset was last updated in 2015.)
Although the above output looks a bit like JSON, this is just the way the BSON library implements the `Display` trait. The data is still handled as binary data under the hood.
## Creating Documents
The following examples all use the sample_mflix dataset that you loaded into your Atlas cluster. It contains a fun collection called `movies`, with the details of a whole load of movies with releases dating back to 1903, from IMDB's database.
The Client type allows you to get the list of databases in your cluster, but not much else. In order to actually start working with data, you'll need to get a Database using either Client's `database` or `database_with_options` methods. You'll do this in the next section.
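For example, here's a minimal sketch of getting a `Database` handle and listing its collections, assuming the `client` value from the connection code above:
``` rust
// Get a handle to the 'sample_mflix' database:
let database = client.database("sample_mflix");

// List the collections it contains, to confirm the sample data loaded:
for collection_name in database.list_collection_names(None).await? {
    println!("- {}", collection_name);
}
```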
The code in the last section constructs a Document in memory, and now you're going to persist it in the movies database. The first step before doing anything with a MongoDB collection is to obtain a Collection object from your database. This is done as follows:
``` rust
// Get the 'movies' collection from the 'sample_mflix' database:
let movies = client.database("sample_mflix").collection("movies");
```
If you've browsed the movies collection with Compass or the "Collections" tab in Atlas, you'll see that most of the records have more fields than the document I built above using the `doc!` macro. Because MongoDB doesn't enforce a schema within a collection by default, this is perfectly fine, and I've just cut down the number of fields for readability. Once you have a reference to your MongoDB collection, you can use the `insert_one` method to insert a single document:
``` rust
let insert_result = movies.insert_one(new_doc.clone(), None).await?;
println!("New document ID: {}", insert_result.inserted_id);
```
The `insert_one` method returns a `Result<InsertOneResult>`, which can be used to identify any problems inserting the document and to find the id generated for the new document in MongoDB. If you add this code to your main function, when you run it, you should see something like the following:
``` none
New document ID: ObjectId("5e835f3000415b720028b0ad")
```
This code inserts a single `Document` into a collection. If you want to insert multiple Documents in bulk then it's more efficient to use `insert_many` which takes an `IntoIterator` of Documents which will be inserted into the collection.
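Here's a rough sketch of what that looks like; the documents below are made up purely for illustration and aren't part of the sample dataset:
``` rust
// Insert several documents in a single round trip:
let new_docs = vec![
    doc! { "title": "Parasite 2", "year": 2025 },
    doc! { "title": "Parasite 3", "year": 2028 },
];
let insert_many_result = movies.insert_many(new_docs, None).await?;
println!("Inserted {} documents", insert_many_result.inserted_ids.len());
```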
## Retrieve Data from a Collection
Because I know there are no other documents in the collection with the name Parasite, you can look it up by title using the following code, instead of the ID you retrieved when you inserted the record:
``` rust
// Look up one document:
let movie: Document = movies
.find_one(
doc! {
"title": "Parasite"
},
None,
).await?
.expect("Missing 'Parasite' document.");
println!("Movie: {}", movie);
```
This code should result in output like the following:
``` none
Movie: { _id: ObjectId("5e835f3000415b720028b0ad"), title: "Parasite", year: 2020, plot: "A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.", released: Date("2020-02-07 00:00:00 UTC") }
```
It's very similar to the output above, but when you inserted the record, the MongoDB driver generated a unique ObjectId for you to identify this document. Every document in a MongoDB collection has a unique `_id` value. You can provide a value yourself if you have a value that is guaranteed to be unique, or MongoDB will generate one for you, as it did in this case. It's usually good practice to explicitly set a value yourself.
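If you do want to supply your own `_id`, a sketch of that looks like the following (the field values here are only for illustration):
``` rust
use mongodb::bson::oid::ObjectId;

// Provide an explicit _id instead of letting one be generated:
let movie_with_id = doc! {
    "_id": ObjectId::new(),
    "title": "Parasite",
    "year": 2020,
};
let result = movies.insert_one(movie_with_id, None).await?;
println!("Inserted with ID: {}", result.inserted_id);
```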
The find_one method is useful to retrieve a single document from a collection, but often you will need to search for multiple records. In this case, you'll need the find method, which takes similar options to this call, but returns a `Result<Cursor<Document>>`. The `Cursor` is used to iterate through the list of returned documents.
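As a quick sketch, iterating over a `Cursor` looks like this; note that it assumes you've added the `futures` crate to your dependencies for the `TryStreamExt` trait:
``` rust
use futures::stream::TryStreamExt;

// Find all movies from 2019 and loop over the results:
let mut cursor = movies.find(doc! { "year": 2019 }, None).await?;
while let Some(result_doc) = cursor.try_next().await? {
    println!("- {}", result_doc.get_str("title").unwrap_or("<untitled>"));
}
```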
The find operations, along with their accompanying filter documents are very powerful, and you'll probably use them a lot. If you need more flexibility than `find` and `find_one` can provide, then I recommend you check out the documentation on Aggregation Pipelines which are super-powerful and, in my opinion, one of MongoDB's most powerful features. I'll write another blog post in this series just on that topic - I'm looking forward to it!
## Update Documents in a Collection
Once a document is stored in a collection, it can be updated in various ways. If you would like to completely replace a document with another document, you can use the find_one_and_replace method, but it's more common to update one or more parts of a document, using update_one or update_many. Each separate document update is atomic, which can be a useful feature to keep your data consistent within a document. Bear in mind though that `update_many` is not itself an atomic operation - for that you'll need to use multi-document ACID Transactions, available in MongoDB since version 4.0 (and available for sharded collections since 4.2). Version 2.x of the Rust driver supports transactions for replica sets.
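Before looking at `update_one` in detail below, here's a rough sketch of an `update_many` call; the `reviewed` field is made up purely for illustration:
``` rust
// Set a flag on every matching document in one call:
let many_result = movies.update_many(
    doc! { "year": 2019 },
    doc! { "$set": { "reviewed": true } },
    None,
).await?;
println!("Matched {} documents", many_result.matched_count);
```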
To update a single document in MongoDB, you need two BSON Documents: The first describes the query to find the document you'd like to update; The second Document describes the update operations you'd like to conduct on the document in the collection. Although the "release" date for Parasite was in 2020, I think this refers to the release in the USA. The *correct* year of release was 2019, so here's the code to update the record accordingly:
``` rust
// Update the document:
let update_result = movies.update_one(
doc! {
"_id": &movie.get("_id")
},
doc! {
"$set": { "year": 2019 }
},
None,
).await?;
println!("Updated {} document", update_result.modified_count);
```
When you run the above, it should print out "Updated 1 document". If it doesn't then something has happened to the movie document you inserted earlier. Maybe you've deleted it? Just to check that the update has updated the year value correctly, here's a `find_one` command you can add to your program to see what the updated document looks like:
``` rust
// Look up the document again to confirm it's been updated:
let movie = movies
.find_one(
doc! {
"_id": &movie.get("_id")
},
None,
).await?
.expect("Missing 'Parasite' document.");
println!("Updated Movie: {}", &movie);
```
When I ran these blocks of code, the result looked like the text below. See how it shows that the year is now 2019 instead of 2020.
``` none
Updated 1 document
Updated Movie: { _id: ObjectId("5e835f3000415b720028b0ad"), title: "Parasite", year: 2019, plot: "A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.", released: Date("2020-02-07 00:00:00 UTC") }
```
## Delete Documents from a Collection
In the above sections you learned how to create, read, and update documents in the collection. If you've run your program a few times, you've probably built up quite a few documents for the movie Parasite! It's now a good time to clear that up using the `delete_many` method. The MongoDB Rust driver provides three methods for deleting documents:
- `find_one_and_delete` will delete a single document from a collection and return the document that was deleted, if it existed.
- `delete_one` will find the documents matching a provided filter and will delete the first one found (if any).
- `delete_many`, as you might expect, will find the documents matching a provided filter, and will delete *all* of them.
In the code below, I've used `delete_many` because you may have created several records when testing the code above. The filter just searches for the movie by name, which will match and delete *all* the inserted documents, whereas if you searched by an `_id` value it would delete just one, because ids are unique.
If you're constantly filtering or sorting on a field, you should consider adding an index to that field to improve performance as your collection grows. Check out the MongoDB Manual for more details.
``` rust
// Delete all documents for movies called "Parasite":
let delete_result = movies.delete_many(
doc! {
"title": "Parasite"
},
None,
).await?;
println!("Deleted {} documents", delete_result.deleted_count);
```
You did it! Create, read, update and delete operations are the core operations you'll use again and again for accessing and managing the data in your MongoDB cluster. After the taster that this tutorial provides, it's definitely worth reading up in more detail on the following:
- Query Documents which are used for all read, update and delete operations.
- The MongoDB crate and docs which describe all of the operations the MongoDB driver provides for accessing and modifying your data.
- The bson crate and its accompanying docs describe how to create and map data for insertion or retrieval from MongoDB.
- The serde crate provides the framework for mapping between Rust data types and BSON with the bson crate, so it's important to learn how to take advantage of it.
## Using serde to Map Data into Structs
One of the features of the bson crate which may not be readily apparent is that it provides a BSON data format for the `serde` framework. This means you can take advantage of the serde crate to map between Rust datatypes and BSON types for persistence in MongoDB.
For an example of how this is useful, see the following example of how to access the `title` field of the `new_movie` document (*without* serde):
``` rust
use serde::{Deserialize, Serialize};
use mongodb::bson::{Bson, oid::ObjectId};
// Working with Document can be verbose:
if let Ok(title) = new_doc.get_str("title") {
println!("title: {}", title);
} else {
println!("no title found");
}
```
The first line of the code above retrieves the value of `title` and then attempts to retrieve it *as a string* (`Bson::as_str` returns `None` if the value is a different type). There's quite a lot of error-handling and conversion involved. The serde framework provides the ability to define a struct like the one below, with fields that match the document you're expecting to receive.
``` rust
// You use `serde` to create structs which can serialize & deserialize between BSON:
#[derive(Serialize, Deserialize, Debug)]
struct Movie {
#[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
id: Option<ObjectId>,
title: String,
year: i32,
plot: String,
#[serde(with = "bson::serde_helpers::chrono_datetime_as_bson_datetime")]
released: chrono::DateTime<chrono::Utc>,
}
```
Note the use of the `Serialize` and `Deserialize` macros which tell serde that this struct can be serialized and deserialized. The `serde` attribute is also used to tell serde that the `id` struct field should be serialized to BSON as `_id`, which is what MongoDB expects it to be called. The parameter `skip_serializing_if = "Option::is_none"` also tells serde that if the optional value of `id` is `None` then it should not be serialized at all. (If you provide `_id: None` BSON to MongoDB it will store the document with an id of `NULL`, whereas if you do not provide one, then an id will be generated for you, which is usually the behaviour you want.) Also, we need to use an attribute to point `serde` to the helper that it needs to serialize and deserialize timestamps as defined by `chrono`.
The code below creates an instance of the `Movie` struct for the Captain Marvel movie. (Wasn't that a great movie? I loved that movie!) After creating the struct, before you can save it to your collection, it needs to be converted to a BSON *document*. This is done in two steps: First it is converted to a Bson value with `bson::to_bson`, which returns a `Bson` instance; then it's converted specifically to a `Document` by calling `as_document` on it. It is safe to call `unwrap` on this result because I already know that serializing a struct to BSON creates a BSON document type.
Once your program has obtained a bson `Document` instance, you can call `insert_one` with it in exactly the same way as you did in the section above called Creating Documents.
``` rust
// Initialize struct to be inserted:
let captain_marvel = Movie {
id: None,
title: "Captain Marvel".to_owned(),
year: 2019,
// The Movie struct defined above also requires these fields:
plot: "Carol Danvers becomes one of the universe's most powerful heroes.".to_owned(),
released: Utc.ymd(2019, 3, 8).and_hms(0, 0, 0),
};
// Convert `captain_marvel` to a Bson instance:
let serialized_movie = bson::to_bson(&captain_marvel)?;
let document = serialized_movie.as_document().unwrap();
// Insert into the collection and extract the inserted_id value:
let insert_result = movies.insert_one(document.to_owned(), None).await?;
let captain_marvel_id = insert_result
.inserted_id
.as_object_id()
.expect("Retrieved _id should have been of type ObjectId");
println!("Captain Marvel document ID: {:?}", captain_marvel_id);
```
When I ran the code above, the output looked like this:
``` none
Captain Marvel document ID: ObjectId(5e835f30007760020028b0ae)
```
It's great to be able to create data using Rust's native datatypes, but I think it's even more valuable to be able to deserialize data into structs. This is what I'll show you next. In many ways, this is the same process as above, but in reverse.
The code below retrieves a single movie document, converts it into a `Bson::Document` value, and then calls `from_bson` on it, which will deserialize it from BSON into whatever type is on the left-hand side of the expression. This is why I've had to specify that `loaded_movie_struct` is of type `Movie` on the left-hand side, rather than just allowing the Rust compiler to derive that information for me. An alternative is to use the turbofish notation on the `from_bson` call, explicitly calling `from_bson::<Movie>(...)`. At the end of the day, as in many things Rust, it's your choice.
``` rust
// Retrieve Captain Marvel from the database, into a Movie struct:
// Read the document from the movies collection:
let loaded_movie = movies
.find_one(Some(doc! { "_id": captain_marvel_id.clone() }), None)
.await?
.expect("Document not found");
// Deserialize the document into a Movie instance
let loaded_movie_struct: Movie = bson::from_bson(Bson::Document(loaded_movie))?;
println!("Movie loaded from collection: {:?}", loaded_movie_struct);
```
And finally, here's what I got when I printed out the debug representation of the Movie struct (this is why I derived `Debug` on the struct definition above):
``` none
Movie loaded from collection: Movie { id: Some(ObjectId(5e835f30007760020028b0ae)), title: "Captain Marvel", year: 2019 }
```
You can check out the full Tokio code example on GitHub.
## When You Don't Want To Run Under Tokio
### Async-std
If you prefer to use `async-std` instead of `tokio`, you're in luck! The changes are trivial. First, you'll need to disable the default features and enable the `async-std-runtime` feature:
``` none
[dependencies]
async-std = "1"
mongodb = { version = "2.1", default-features = false, features = ["async-std-runtime"] }
```
The only changes you'll need to make to your Rust code are to add `use async_std;` to the imports and to tag your async main function with `#[async_std::main]`. All the rest of your code should be identical to the Tokio example.
``` rust
use async_std;
#[async_std::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Your code goes here.
}
```
You can check out the full async-std code example on GitHub.
### Synchronous Code
If you don't want to run under an async framework, you can enable the sync feature. In your `Cargo.toml` file, disable the default features and enable `sync`:
``` none
[dependencies]
mongodb = { version = "2.1", default-features = false, features = ["sync"] }
```
You won't need your enclosing function to be an `async fn` any more. You'll need to use a different `Client` interface, defined in `mongodb::sync` instead, and you don't need to await the result of any of the IO functions:
``` rust
use mongodb::sync::Client;
// Use mongodb::sync::Client, instead of mongodb::Client:
let client = Client::with_uri_str(client_uri.as_ref())?;
// .insert_one().await? becomes .insert_one()?
let insert_result = movies.insert_one(new_doc.clone(), None)?;
```
You can check out the full synchronous code example on GitHub.
## Further Reading
The documentation for the MongoDB Rust Driver is very good. Because the BSON crate is also leveraged quite heavily, it's worth having the docs for that on-hand too. I made lots of use of them writing this quick start.
- Rust Driver Crate
- Rust Driver Reference Docs
- Rust Driver GitHub Repository
- BSON Crate
- BSON Reference Docs
- BSON GitHub Repository
- The BSON Specification
- Serde Documentation
## Conclusion
Phew! That was a pretty big tutorial, wasn't it? The operations described here will be ones you use again and again, so it's good to get comfortable with them.
What *I* learned writing the code for this tutorial is how much value the `bson` crate provides to you and the mongodb driver - it's worth getting to know that at least as well as the `mongodb` crate, as you'll be using it for data generation and conversion *a lot* and it's a deceptively rich library.
There will be more Rust Quick Start posts on MongoDB Developer Hub, covering different parts of MongoDB and the MongoDB Rust Driver, so keep checking back!
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Rust"
],
"pageDescription": "Learn how to perform CRUD operations using Rust for MongoDB databases.",
"contentType": "Quickstart"
} | Get Started with Rust and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/querying-mongodb-browser-realm-react | created | # Querying MongoDB in the Browser with React and the Web SDK
When we think of connecting to a database, we think of serverside. Our application server connects to our database server using the applicable driver for our chosen language. But with the Atlas App Service's Realm Web SDK, we can run queries against our Atlas cluster from our web browser, no server required.
## Security and Authentication
One of the primary reasons database connections are traditionally server-to-server is so that we do not expose any admin credentials. Leaking credentials like this is not a concern with the Web SDK as it has APIs for user management, authentication, and access control. There are no admin credentials to expose as each user has a separate account. Then, using Rules, we control what data each user has permission to access.
The MongoDB service uses a strict rules system that prevents all operations unless they are specifically allowed. MongoDB Atlas App Services determines if each operation is allowed when it receives the request from the client, based on roles that you define. Roles are sets of document-level and field-level CRUD permissions and are chosen individually for each document associated with a query.
Rules have the added benefit of enforcing permissions at the data access level, so you don't need to include any permission checks in your application logic.
## Creating an Atlas Cluster and Realm App
You can find instructions for creating a free MongoDB Atlas cluster and App Services App in our documentation: Create a App (App Services UI).
We're going to be using one of the sample datasets in this tutorial, so after creating your free cluster, click on Collections and select the option to load a sample dataset. Once the data is loaded, you should see several new databases in your cluster. We're going to be using the `sample_mflix` database in our code later.
## Users and Authentication Providers
Atlas Application Services supports a multitude of different authentication providers, including Google, Apple, and Facebook. For this tutorial, we're going to stick with regular email and password authentication.
In your App, go to the Users section and enable the Email/Password provider. The user confirmation method should be "automatic", and the password reset method should be a reset function. You can use the provided stubbed reset function for now.
In a real-world application, we would have a registration flow so that users could create accounts. But for the sake of this tutorial, we're going to create a new user manually. While still in the "Users" section of your App, click on "Add New User" and enter the email and password you would like to use.
## Rules and Roles
Rules and Roles govern what operations a user can perform. If an operation has not been explicitly allowed, Atlas App Services will reject it. At the moment, we have no Rules or Roles, so our users can't access anything. We need to configure our first set of permissions.
Navigate to the "Rules" section and select the `sample_mflix` database and the `movies` collection. App Services has several "Permissions Template"s ready for you to use.
- Users can only read and write their own data.
- Users can read all data, but only write their own data.
- Users can only read all data.
These are just the most common types of permissions; you can create your own much more advanced rules to match your requirements.
- Configure a role that can only insert documents.
- Define field-level read or write permissions for a field in an embedded document.
- Determine field-level write permissions dynamically using a JSON expression.
- Invoke an Atlas Function to perform more involved checks, such as checking data from a different collection.
Read the documentation "Configure Advanced Rules" for more information.
We only want our users to be able to access their data and nothing else, so select the "Users can only read and write their own data" template.
App Services does not stipulate what field name you must use to store your user id; we must enter it when creating our configuration. Enter `authorId` as the field name in this example.
By now, you should have the Email/Password provider enabled, a new user created, and rules configured to allow users to access any data they own. Ensure you deploy all your changes, and we can move onto the code.
## Creating Our Web Application with React
Download the source for our demo application from GitHub.
Once you have the code downloaded, you will need to install a couple of dependencies.
``` shell
npm install
```
## The App Provider
As we're going to require access to our App client throughout our React component tree, we use a Context Provider. You can find the context providers for this project in the `providers` folder in the repo.
``` javascript
import * as RealmWeb from "realm-web"
import React, { useContext, useState } from "react"
const RealmAppContext = React.createContext(null)
const RealmApp = ({ children }) => {
const REALM_APP_ID = "realm-web-demo"
const app = new RealmWeb.App({ id: REALM_APP_ID })
const [user, setUser] = useState(null)
const logIn = async (email, password) => {
const credentials = RealmWeb.Credentials.emailPassword(email, password)
try {
await app.logIn(credentials)
setUser(app.currentUser)
return app.currentUser
} catch (e) {
setUser(null)
return null
}
}
const logOut = () => {
if (user !== null) {
app.currentUser.logOut()
setUser(null)
}
}
return (
<RealmAppContext.Provider value={{ logIn, logOut, user }}>
{children}
</RealmAppContext.Provider>
)
}
export const useRealmApp = () => {
const realmContext = useContext(RealmAppContext)
if (realmContext == null) {
throw new Error("useRealmApp() called outside of a RealmApp?")
}
return realmContext
}
export default RealmApp
```
This provider handles the creation of our Web App client, as well as providing methods for logging in and out. Let's look at these parts in more detail.
``` javascript
const RealmApp = ({ children }) => {
const REALM_APP_ID = "realm-web-demo"
const app = new RealmWeb.App({ id: REALM_APP_ID })
const [user, setUser] = useState(null)
```
The value for `REALM_APP_ID` is on your Atlas App Services dashboard. We instantiate a new Web App with the relevant ID. It is this App which allows us to access the different Atlas App Services services. You can find all required environment variables in the `.envrc.example` file.
You should ensure these variables are available in your environment in whatever manner you normally use. My personal preference is direnv.
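If you prefer not to hard-code the ID shown above, one option (assuming a `create-react-app` setup and an environment variable name of your choosing) is to read it from the environment:
``` javascript
// The variable name below is just an example; create-react-app only exposes
// variables prefixed with REACT_APP_ to the browser bundle.
const REALM_APP_ID = process.env.REACT_APP_REALM_APP_ID || "realm-web-demo"
const app = new RealmWeb.App({ id: REALM_APP_ID })
```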
``` javascript
const logIn = async (email, password) => {
const credentials = RealmWeb.Credentials.emailPassword(email, password)
try {
await app.logIn(credentials)
setUser(app.currentUser)
return app.currentUser
} catch (e) {
setUser(null)
return null
}
}
```
The `logIn` method accepts the email and password provided by the user and creates an App Services credentials object. We then use this to attempt to authenticate with our App. If successful, we store the authenticated user in our state.
## The MongoDB Provider
Just like the App context provider, we're going to be accessing the Atlas service throughout our component tree, so we create a second context provider for our database.
``` javascript
import React, { useContext, useEffect, useState } from "react"
import { useRealmApp } from "./realm"
const MongoDBContext = React.createContext(null)
const MongoDB = ({ children }) => {
const { user } = useRealmApp()
const [db, setDb] = useState(null)
useEffect(() => {
if (user !== null) {
const realmService = user.mongoClient("mongodb-atlas")
setDb(realmService.db("sample_mflix"))
}
}, [user])
return (
<MongoDBContext.Provider value={{ db }}>
{children}
</MongoDBContext.Provider>
)
}
export const useMongoDB = () => {
const mdbContext = useContext(MongoDBContext)
if (mdbContext == null) {
throw new Error("useMongoDB() called outside of a MongoDB?")
}
return mdbContext
}
export default MongoDB
```
The Web SDK provides us with access to some of the different Atlas App Services, as well as our custom functions. For this example, we are only interested in the `mongodb-atlas` service as it provides us with access to the linked MongoDB Atlas cluster.
``` javascript
useEffect(() => {
if (user !== null) {
const realmService = user.mongoClient("mongodb-atlas")
setDb(realmService.db("sample_mflix"))
}
}, [user])
```
In this React hook, whenever our user variable updates—and is not null, so we have an authenticated user—we set our db variable equal to the database service for the `sample_mflix` database.
Once the service is ready, we can begin to run queries against our MongoDB database in much the same way as we would with the Node.js driver.
However, only a subset of actions is available. The most notable absence is `collection.watch()`, which is being actively worked on and should be released soon, but the common CRUD actions will work.
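As a rough sketch of what's available once you have the `db` service handle (the filter values below are only illustrative, and `user` is the authenticated Realm user):
``` javascript
const movies = db.collection("movies")

// Read a single document:
const matrix = await movies.findOne({ title: "The Matrix" })

// Read several documents, with query options:
const recent = await movies.find({ year: { $gt: 2000 } }, { limit: 10 })

// Insert a document owned by the current user:
const { insertedId } = await movies.insertOne({
  title: "My Home Movie",
  year: 2021,
  authorId: user.id,
})
```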
## Wrap the App in Index.js
The default boilerplate generated by `create-react-app` places the DOM renderer in `index.js`, so this is a good place for us to ensure that we wrap the entire component tree within our `RealmApp` and `MongoDB` contexts.
``` javascript
ReactDOM.render(
<RealmApp>
<MongoDB>
<App />
</MongoDB>
</RealmApp>,
document.getElementById("root")
)
```
The order of these components is essential. We must create our Web App first before we attempt to access the `mongodb-atlas` service. So, you must ensure that `<RealmApp>` comes before `<MongoDB>`. Now that we have our `<App />` component nestled within our App and MongoDB contexts, we can query our Atlas cluster from within our React component!
## The Demo Application
Our demo has two main components: a login form and a table of movies, both of which are contained within the `App.js`. Which component we show depends upon whether the current user has authenticated or not.
``` javascript
function LogInForm(props) {
  return (
    <div>
      <h2>Log in</h2>
      <input
        type="email"
        onChange={(e) => props.setEmail(e.target.value)}
        value={props.email}
      />
      <input
        type="password"
        onChange={(e) => props.setPassword(e.target.value)}
        value={props.password}
      />
      <button onClick={props.handleLogIn}>Log in</button>
    </div>
  )
}
```
The login form consists of two controlled text inputs and a button to trigger the handleLogIn function.
``` javascript
function MovieList(props) {
  return (
    <div>
      <table>
        <thead>
          <tr>
            <th>Title</th>
            <th>Plot</th>
            <th>Rating</th>
            <th>Year</th>
          </tr>
        </thead>
        <tbody>
          {props.movies.map((movie) => (
            <tr key={movie._id}>
              <td>{movie.title}</td>
              <td>{movie.plot}</td>
              <td>{movie.rated}</td>
              <td>{movie.year}</td>
            </tr>
          ))}
        </tbody>
      </table>
      <button onClick={props.logOut}>Log Out</button>
    </div>
  )
}
```
The MovieList component renders an HTML table with a few details about each movie, and a button to allow the user to log out.
``` javascript
function App() {
const { logIn, logOut, user } = useRealmApp()
const { db } = useMongoDB()
const [email, setEmail] = useState("")
const [password, setPassword] = useState("")
const [movies, setMovies] = useState([])
useEffect(() => {
async function wrapMovieQuery() {
if (user && db) {
const authoredMovies = await db.collection("movies").find()
setMovies(authoredMovies)
}
}
wrapMovieQuery()
}, [user, db])
async function handleLogIn() {
await logIn(email, password)
}
return user && db && user.state === "active" ? (
<MovieList movies={movies} logOut={logOut} />
) : (
<LogInForm email={email} setEmail={setEmail} password={password} setPassword={setPassword} handleLogIn={handleLogIn} />
)
}
export default App
```
Here, we have our main `` component. Let's look at the different sections in order.
``` javascript
const { logIn, logOut, user } = useRealmApp()
const { db } = useMongoDB()
const [email, setEmail] = useState("")
const [password, setPassword] = useState("")
const [movies, setMovies] = useState([])
```
We're going to use the App and the MongoDB provider in this component: App for authentication, MongoDB to run our query. We also set up some state to store our email and password for logging in, and hopefully later, any movie data associated with our account.
``` javascript
useEffect(() => {
async function wrapMovieQuery() {
if (user && db) {
const authoredMovies = await db.collection("movies").find()
setMovies(authoredMovies)
}
}
wrapMovieQuery()
}, [user, db])
```
This React hook runs whenever our user or db updates, which occurs whenever we successfully log in or out. When the user logs in—i.e., we have a valid user and a reference to the `mongodb-atlas` service—then we run a find on the movies collection.
``` javascript
const authoredMovies = await db.collection("movies").find()
```
Notice we do not need to specify the user ID to filter by in this query. Because of the rules we configured earlier, only those documents owned by the current user will be returned, without any additional filtering on our part.
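In other words, the rule behaves roughly as if we had written an explicit filter like this sketch ourselves:
``` javascript
// Roughly what the server-side rule enforces for us:
const authoredMovies = await db.collection("movies").find({ authorId: user.id })
```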
## Taking Ownership of Documents
If you run the demo and log in now, the movie table will be empty. We're using the sample dataset, and none of the documents within it belongs to our current user. Before trying the demo, modify a few documents in the movies collection and add a new field, `authorId`, with a value equal to your user's ID. You can find their ID in the App Users section.
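One way to take ownership is with `mongosh` connected to your cluster (you can also edit the documents in the Atlas Data Explorer). The filter and the ID below are placeholders, so adjust them to your own data:
``` javascript
// Run in mongosh; replace YOUR_USER_ID with the ID from the App Users section.
const mflix = db.getSiblingDB("sample_mflix")
mflix.movies.updateMany(
  { year: 1999 },                         // any filter matching a few movies
  { $set: { authorId: "YOUR_USER_ID" } }
)
```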
Once you have given ownership of some documents to your current user, try running the demo application and logging in.
Congratulations! You have successfully queried your database from within your browser, no server required!
## Change the Rules
Try modifying the rules and roles you created to see how it impacts the demo application.
Ignore the warning and delete the configuration for the movies collection. Now, your App should die with a 403 error: "no rule exists for namespace 'sample_mflix.movies'."
Use the "Users can read all data, but only write their own data" template. I would suggest also modifying the `find()` or adding a `limit()` as otherwise, the demo will try to show every movie in your table!
Add field-level permissions. In this example, non-owners cannot write to any documents, but they can read the title and year fields for all documents.
``` json
{
"roles": [
{
"name": "owner",
"apply_when": {
"authorId": "%%user.id"
},
"insert": true,
"delete": true,
"search": true,
"read": true,
"write": true,
"fields": {
"title": {},
"year": {}
},
"additional_fields": {}
},
{
"name": "non-owner",
"apply_when": {},
"insert": false,
"delete": false,
"search": true,
"write": false,
"fields": {
"title": {
"read": true
},
"year": {
"read": true
}
},
"additional_fields": {}
}
],
"filters": [
{
"name": "filter 1",
"query": {},
"apply_when": {},
"projection": {}
}
],
"schema": {}
}
```
## Further Reading
For more information on MongoDB Atlas App Services and the Web SDK, I recommend reading our documentation:
- [Introduction to MongoDB Atlas App Services for Backend and Web Developers
- Users & Authentication
- Realm Web SDK
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Atlas",
"JavaScript",
"React"
],
"pageDescription": "Learn how to run MongoDB queries in the browser with the Web SDK and React",
"contentType": "Tutorial"
} | Querying MongoDB in the Browser with React and the Web SDK | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/introduction-indexes-mongodb-atlas-search | created | # An Introduction to Indexes for MongoDB Atlas Search
Imagine reading a long book like "A Song of Fire and Ice," "The Lord of
the Rings," or "Harry Potter." Now imagine that there was a specific
detail in one of those books that you needed to revisit. You wouldn't
want to search every page in those long books to find what you were
looking for. Instead, you'd want to use some sort of book index to help
you quickly locate what you were looking for. This same concept of
indexing content within a book can be carried to MongoDB Atlas
Search with search indexes.
Atlas Search makes it easy to build fast, relevant, full-text search on
top of your data in the cloud. It's fully integrated, fully managed, and
available with every MongoDB Atlas cluster running MongoDB version 4.2
or higher.
Correctly defining your indexes is important because they are
responsible for making sure that you're receiving relevant results when
using Atlas Search. There is no one-size-fits-all solution and different
indexes will bring you different benefits.
In this tutorial, we're going to get a gentle introduction to creating
indexes that will be valuable for various full-text search use cases.
Before we get too invested in this introduction, it's important to note
that Atlas Search uses Apache Lucene. This
means that search indexes are not unique to Atlas Search and if you're
already comfortable with Apache Lucene, your existing knowledge of
indexing will transfer. However, the tutorial could act as a solid
refresher regardless.
## Understanding the Data Model for the Documents in the Example
Before we start creating indexes, we should probably define what our
data model will be for the example. In an effort to cover various
indexing scenarios, the data model will be complex.
Take the following for example:
``` json
{
"_id": "cea29beb0b6f7b9187666cbed2f070b3",
"name": "Pikachu",
"pokedex_entry": {
"red": "When several of these Pokemon gather, their electricity could build and cause lightning storms.",
"yellow": "It keeps its tail raised to monitor its surroundings. If you yank its tail, it will try to bite you."
},
"moves":
{
"name": "Thunder Shock",
"description": "A move that may cause paralysis."
},
{
"name": "Thunder Wave",
"description": "An electrical attack that may paralyze the foe."
}
],
"location": {
"type": "Point",
"coordinates": [-127, 37]
}
}
```
The above example document is about Pokemon, but Atlas Search can be
used on whatever documents are part of your application.
Example documents like the one above allow us to use text search, geo
search, and potentially others. For each of these different search
scenarios, the index might change.
When we create an index for Atlas Search, it is created at the
collection level.
## Statically Mapping Fields in a Document or Dynamically Mapping Fields as the Schema Evolves
There are two ways to map fields within a document when creating an
index:
- Dynamic Mappings
- Static Mappings
If your document schema is still changing or your use case doesn't allow
for it to be rigidly defined, you might want to choose to dynamically
map your document fields. A dynamic mapping will automatically assign
fields when new data is inserted.
Take the following for example:
``` json
{
"mappings": {
"dynamic": true
}
}
```
The above JSON represents a valid index. When you add it to a
collection, you are essentially mapping every field that exists in the
documents and any field that might exist in the future.
We can do a simple search using this index like the following:
``` javascript
db.pokemon.aggregate([
{
"$search": {
"text": {
"query": "thunder",
"path": ["moves.name"]
}
}
}
]);
```
We didn't explicitly define the fields for this index, but attempting to
search for "thunder" within the `moves` array will give us matching
results based on our example data.
To be clear, dynamic mappings can be applied at the document level or
the field level. At the document level, a dynamic mapping automatically
indexes all common data types. At both levels, it automatically indexes
all new and existing data.
While convenient, having a dynamic mapping index on all fields of a
document comes at a cost. These indexes will take up more disk space and
may be less performant.
The alternative is to use a static mapping, in which case you specify
the fields to map and what type of fields they are. Take the following
for example:
``` json
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"type": "string"
}
}
}
}
```
In the above example, the only field within our document that is being
indexed is the `name` field.
The following search query would return results:
``` javascript
db.pokemon.aggregate([
{
"$search": {
"text": {
"query": "pikachu",
"path": ["name"]
}
}
}
]);
```
If we try to search on any other field within our document, we won't end
up with results because those fields are not statically mapped nor is
the document schema dynamically mapped.
There is, however, a way to get the best of both worlds if we need it.
Take the following which uses static and dynamic mappings:
``` json
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"type": "string"
},
"pokedex_entry": {
"type": "document",
"dynamic": true
}
}
}
}
```
In the above example, we are still using a static mapping for the `name`
field. However, we are using a dynamic mapping on the `pokedex_entry`
field. The `pokedex_entry` field is an object so any field within that
object will get the dynamic mapping treatment. This means all sub-fields
are automatically mapped, as well as any new fields that might exist in
the future. This could be useful if you want to specify what top level
fields to map, but map all fields within a particular object as well.
Take the following search query as an example:
``` javascript
db.pokemon.aggregate([
{
"$search": {
"text": {
"query": "pokemon",
"path": ["name", "pokedex_entry.red"]
}
}
}
]);
```
The above search will return results if "pokemon" appears in the `name`
field or the `red` field within the `pokedex_entry` object.
When using a static mapping, you need to specify a type for the field or
have `dynamic` set to true on the field. If you only specify a type,
`dynamic` defaults to false. If you only specify `dynamic` as true, then
Atlas Search can automatically default certain field types (e.g.,
string, date, number).
## Atlas Search Indexes for Complex Fields within a Document
With the basic dynamic versus static mapping discussion out of the way
for MongoDB Atlas Search indexes, now we can focus on more complicated
or specific scenarios.
Let's first take a look at what our fully mapped index would look like
for the document in our example:
``` json
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"type": "string"
},
"moves": {
"type": "document",
"fields": {
"name": {
"type": "string"
},
"description": {
"type": "string"
}
}
},
"pokedex_entry": {
"type": "document",
"fields": {
"red": {
"type": "string"
},
"yellow": {
"type": "string"
}
}
},
"location": {
"type": "geo"
}
}
}
}
```
In the above example, we are using a static mapping for every field
within our documents. An interesting thing to note is the `moves` array
and the `pokedex_entry` object in the example document. Even though one
is an array and the other is an object, the index is a `document` for
both. While writing searches isn't the focus of this tutorial, searching
an array and object would be similar using dot notation.
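As a quick illustration of that dot notation, a search against the nested fields mapped above might look like this (the query term is just an example):
``` javascript
db.pokemon.aggregate([
  {
    "$search": {
      "text": {
        "query": "paralyze",
        "path": ["moves.description", "pokedex_entry.yellow"]
      }
    }
  }
]);
```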
Had any of the fields been nested deeper within the document, the same
approach would be applied. For example, we could have something like
this:
``` json
{
"mappings": {
"dynamic": false,
"fields": {
"pokedex_entry": {
"type": "document",
"fields": {
"gameboy": {
"type": "document",
"fields": {
"red": {
"type": "string"
},
"yellow": {
"type": "string"
}
}
}
}
}
}
}
}
```
In the above example, the `pokedex_entry` field was changed slightly to
have another level of objects. Probably not a realistic way to model
data for this dataset, but it should get the point across about mapping
deeper nested fields.
## Changing the Options for Specific Mapped Fields
Up until now, each of the indexes have only had their types defined in
the mapping. The default options are currently being applied to every
field. Options are a way to refine the index further based on your data
to ultimately get more relevant search results. Let's play around with
some of the options within the mappings of our index.
Most of the fields in our example use the
string
data type, so there's much more we can do using options. Let's see
what some of those are.
``` json
{
"mappings": {
"dynamic": false,
"fields": {
"name": {
"type": "string",
"searchAnalyzer": "lucene.spanish",
"ignoreAbove": 3000
}
}
}
}
```
In the above example, we are specifying that we want to use a
language
analyzer on the `name` field instead of the default
standard
analyzer. We're also saying that the `name` field should not be indexed
if the field value is greater than 3000 characters.
The 3000 characters is just a random number for this example, but adding
a limit, depending on your use case, could improve performance or the
index size.
In a future tutorial, we're going to explore the finer details in
regards to what the search analyzers are and what they can accomplish.
These are just some of the available options for the string data type.
Each data type will have its own set of options. If you want to use the
default for any particular option, it does not need to be explicitly
added to the mapped field.
You can learn more about the data types and their indexing options in
the official
documentation.
## Conclusion
You just received what was hopefully a gentle introduction to creating
indexes to be used in Atlas Search. To use Atlas Search, you will need
at least one index on your collection, even if it is a default dynamic
index. However, if you know your schema and are able to create static
mappings, it is usually the better way to go to fine-tune relevancy and
performance.
To learn more about Atlas Search indexes and the various data types,
options, and analyzers available, check out the official
documentation.
To learn how to build more on Atlas Search, check out my other
tutorials: Building an Autocomplete Form Element with Atlas Search and
JavaScript
and Visually Showing Atlas Search Highlights with JavaScript and
HTML.
Have a question or feedback about this tutorial? Head to the MongoDB
Community Forums and let's chat!
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Get a gentle introduction for creating a variety of indexes to be used with MongoDB Atlas Search.",
"contentType": "Tutorial"
} | An Introduction to Indexes for MongoDB Atlas Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-data-types | created | # Realm Data Types
## Introduction
A key feature of Realm is you don’t have to think about converting data to/from JSON, or using ORMs. Just create your objects using the data types your language natively supports. We’re adding new supported types to all our SDKs, here is a refresher and a taste of the new supported types.
## Swift: Already supported types
The complete reference of supported data types for iOS can be found here.
| Type Name | Code Sample |
| --------- | ----------- |
| Bool
A value type whose instances are either true or false. | `// Declaring as Required`
`@objc dynamic var value = false`
`// Declaring as Optional`
`let value = RealmProperty<Bool?>()` |
| Int, Int8, Int16, Int32, Int64
A signed integer value type. | `// Declaring as Required`
`@objc dynamic var value = 0`
`// Declaring as Optional`
`let value = RealmProperty<Int?>()` |
| Float
A single-precision, floating-point value type. | `// Declaring as Required`
`@objc dynamic var value: Float = 0.0`
`// Declaring as Optional` `let value = RealmProperty<Float?>()` |
| Double
A double-precision, floating-point value type. | `// Declaring as Required`
`@objc dynamic var value: Double = 0.0`
`// Declaring as Optional`
`let value = RealmProperty<Double?>()` |
| String
A Unicode string value that is a collection of characters. | `// Declaring as Required`
`@objc dynamic var value = ""`
`// Declaring as Optional`
`@objc dynamic var value: String? = nil` |
| Data
A byte buffer in memory. | `// Declaring as Required`
`@objc dynamic var value = Data()`
`// Declaring as Optional`
`@objc dynamic var value: Data? = nil` |
| Date
A specific point in time, independent of any calendar or time zone. | `// Declaring as Required`
`@objc dynamic var value = Date()`
`// Declaring as Optional`
`@objc dynamic var value: Date? = nil` |
| Decimal128
A structure representing a base-10 number. | `// Declaring as Required`
`@objc dynamic var decimal: Decimal128 = 0`
`// Declaring as Optional`
`@objc dynamic var decimal: Decimal128? = nil` |
| List
List is the container type in Realm used to define to-many relationships. | `let value = List()` |
| ObjectId
A 12-byte (probably) unique object identifier. Compatible with the ObjectId type used in the MongoDB database. | `// Declaring as Required`
`@objc dynamic var objectId = ObjectId.generate()`
`// Declaring as Optional`
`@objc dynamic var objectId: ObjectId? = nil` |
| User-defined Object Your own classes. | `// Declaring as Optional`
`@objc dynamic var value: MyClass? = nil` |
## Swift: New Realm Supported Data Types
Starting with **Realm iOS 10.8.0**
| Type Name | Code Sample |
| --------- | ----------- |
| Maps
Store data in arbitrary key-value pairs. They’re used when a developer wants to add flexibility to data models that may evolve over time, or handle unstructured data from a remote endpoint. | `class Player: Object {`
`@objc dynamic var name: String?`
`@objc dynamic var email: String?`
`@objc dynamic var playerHandle: String?`
`let gameplayStats = Map<String, String>()`
`let competitionStats = Map<String, String>()`
`}`
`try! realm.write {`
`let player = Player()`
`player.name = "iDubs"`
`// get the RealmDictionary field from the object we just created and add stats`
`let statsDictionary = player.gameplayStats`
`statsDictioanry"mostCommonRole"] = "Medic"`
`statsDictioanry["clan"] = "Realmers"`
`statsDictioanry["favoriteMap"] = "Scorpian bay"`
`statsDictioanry["tagLine"] = "Always Be Healin"`
`statsDictioanry["nemesisHandle"] = "snakeCase4Life"`
`let competitionStats = player.comeptitionStats`
`competitionStats["EastCoastInvitational"] = "2nd Place"`
`competitionStats["TransAtlanticOpen"] = "4th Place"`
`}` |
| [MutableSet
MutableSet is the container type in Realm used to define to-many relationships with distinct values as objects. | `// MutableSet declaring as required`
`let value = MutableSet()`
`// Declaring as Optional`
`let value: MutableSet? = nil `|
| AnyRealmValue
AnyRealmValue is a Realm property type that can hold different data types. | `// Declaring as Required`
`let value = RealmProperty()`
`// Declaring as Optional`
`let value: RealmProperty? = nil` |
| UUID
UUID is a 16-byte globally-unique value. | `// Declaring as Required`
`@objc dynamic var uuid = UUID()`
`// Declaring as Optional`
`@objc dynamic var uuidOpt: UUID? = nil` |
## Android/Kotlin: Already supported types
You can use these types in your RealmObject subclasses. The complete reference of supported data types for Kotlin can be found here.
| Type Name | Code Sample |
| --------- | ----------- |
| Boolean or boolean
Represents boolean objects that can have two values: true and false. | `// Declaring as Required`
`var visited = false`
`// Declaring as Optional`
`var visited = false` |
| Integer or int
A 32-bit signed number. | `// Declaring as Required`
`var number: Int = 0`
`// Declaring as Optional`
`var number: Int? = 0` |
| Short or short
A 16-bit signed number. | `// Declaring as Required`
`var number: Short = 0`
`// Declaring as Optional`
`var number: Short? = 0` |
| Long or long
A 64-bit signed number. | `// Declaring as Required`
`var number: Long = 0`
`// Declaring as Optional`
`var number: Long? = 0` |
| Byte or byte
An 8-bit signed number. | `// Declaring as Required`
`var number: Byte = 0`
`// Declaring as Optional`
`var number: Byte? = 0` |
| Double or double
Floating point number(IEEE 754 double precision) | `// Declaring as Required`
`var number: Double = 0`
`// Declaring as Optional`
`var number: Double? = 0.0` |
| Float or float
Floating point number(IEEE 754 single precision) | `// Declaring as Required`
`var number: Float = 0`
`// Declaring as Optional`
`var number: Float? = 0.0` |
| String | `// Declaring as Required`
`var sdkName: String = "Realm"`
`// Declaring as Optional`
`var sdkName: String? = "Realm"` |
| Date | `// Declaring as Required`
`var visited: Date = Date()`
`// Declaring as Optional`
`var visited: Date? = null` |
| Decimal128 from org.bson.types
A binary integer decimal representation of a 128-bit decimal value | `var number: Decimal128 = Decimal128.POSITIVE_INFINITY` |
| ObjectId from org.bson.types
A globally unique identifier for objects. | `var oId = ObjectId()` |
| Any RealmObject subclass | `// Define an embedded object`
`@RealmClass(embedded = true)`
`open class Address(`
`var street: String? = null,`
`var city: String? = null,`
`var country: String? = null,`
`var postalCode: String? = null`
`): RealmObject() {}`
`// Define an object containing one embedded object`
`open class Contact(_name: String = "", _address: Address? = null) : RealmObject() {`
`@PrimaryKey var _id: ObjectId = ObjectId()`
`var name: String = _name`
`// Embed a single object.`
`// Embedded object properties must be marked optional`
`var address: Address? = _address`
`}` |
| RealmList
RealmList is used to model one-to-many relationships in a RealmObject. | `var favoriteColors : RealmList? = null` |
## Android/Kotlin: New Realm Supported Data Types
Starting with **Realm Android 10.6.0**
| Type Name | Code Sample |
| --------- | ----------- |
| RealmDictionary
Manages a collection of unique String keys paired with values. | `import io.realm.RealmDictionary`
`import io.realm.RealmObject`
`open class Frog: RealmObject() {`
`var name: String? = null`
`var nicknamesToFriends: RealmDictionary = RealmDictionary()`
`}` |
| RealmSet
You can use the RealmSet data type to manage a collection of unique keys. | `import io.realm.RealmObject`
`import io.realm.RealmSet`
`open class Frog: RealmObject() {`
`var name: String = ""`
`var favoriteSnacks: RealmSet = RealmSet();`
`}` |
| Mixed
RealmAny
You can use the RealmAny data type to create Realm object fields that can contain any of several underlying types. | `import io.realm.RealmAny`
`import io.realm.RealmObject`
`open class Frog(var bestFriend: RealmAny? = RealmAny.nullValue()) : RealmObject() {`
`var name: String? = null`
`open fun bestFriendToString(): String {`
`if (bestFriend == null) {`
`return "null"`
`}`
`return when (bestFriend!!.type) {`
`RealmAny.Type.NULL -> {`
`"no best friend"`
`}`
`RealmAny.Type.STRING -> {`
`bestFriend!!.asString()`
`}`
`RealmAny.Type.OBJECT -> {`
`if (bestFriend!!.valueClass == Person::class.java) {`
`val person = bestFriend!!.asRealmModel(Person::class.java)`
`person.name`
`}`
`"unknown type"`
`}`
`else -> {`
`"unknown type"`
`}`
`}`
`}`
`}` |
| UUID from java.util.UUID | `var id = UUID.randomUUID()` |
## JavaScript - React Native SDK: Already supported types
The complete reference of supported data types for JavaScript Node.js can be found here.
| Type Name | Code Sample |
| --------- | ----------- |
| `bool` maps to the JavaScript Boolean type | `var x = new Boolean(false);` |
| `int` maps to the JavaScript Number type. Internally, Realm Database stores int with 64 bits. | `Number('123')` |
| `float` maps to the JavaScript Number type. Internally, Realm Database stores float with 32 bits. | `Number('123.0')` |
| `double` maps to the JavaScript Number type. Internally, Realm Database stores double with 64 bits. | `Number('123.0')` |
| `string` maps to the JavaScript String type. | `const string1 = "A string primitive";` |
| `decimal128` for high precision numbers. | |
| `objectId` maps to BSON `ObjectId` type. | `ObjectId("507f1f77bcf86cd799439011")` |
| `data` maps to the JavaScript ArrayBuffer type. | `const buffer = new ArrayBuffer(8);` |
| `date` maps to the JavaScript Date type. | `new Date()` |
| `list` maps to the JavaScript Array type. You can also specify that a field contains a list of a primitive value type by appending `[]` to the type name. | `let fruits = ['Apple', 'Banana']` |
| `linkingObjects` is a special type used to define an inverse relationship. | |
## JavaScript - React Native SDK: New Realm supported types
Starting with __Realm JS 10.5.0__
| Type Name | Code Sample |
| --------- | ----------- |
| dictionary used to manage a collection of unique String keys paired with values. | `let johnDoe;`
`let janeSmith;`
`realm.write(() => {`
`johnDoe = realm.create("Person", {`
`name: "John Doe",`
`home: {`
`windows: 5,`
`doors: 3,`
`color: "red",`
`address: "Summerhill St.",`
`price: 400123,`
`},`
`});`
`janeSmith = realm.create("Person", {`
`name: "Jane Smith",`
`home: {`
`address: "100 northroad st.",`
`yearBuilt: 1990,`
`},`
`});`
`});` |
| set is based on the JavaScript Set type.
A Realm Set is a special object that allows you to store a collection of unique values. Realm Sets are based on JavaScript sets, but can only contain values of a single type and can only be modified within a write transaction. | `let characterOne, characterTwo;`
`realm.write(() => {`
`characterOne = realm.create("Character", {`
`_id: new BSON.ObjectId(),`
`name: "CharacterOne",`
`inventory: ["elixir", "compass", "glowing shield"],`
`levelsCompleted: [4, 9],`
`});`
`characterTwo = realm.create("Character", {`
`_id: new BSON.ObjectId(),`
`name: "CharacterTwo",`
`inventory: ["estus flask", "gloves", "rune"],`
`levelsCompleted: [1, 2, 5, 24],`
`});`
`});` |
| mixed is a property type that can hold different data types.
The mixed data type is a realm property type that can hold any valid Realm data type except a collection. You can create collections (lists, sets, and dictionaries) of type mixed, but a mixed itself cannot be a collection. Properties using the mixed data type can also hold null values. | `realm.write(() => {`
`// create a Dog with a birthDate value of type string`
`realm.create("Dog", { name: "Euler", birthDate: "December 25th, 2017" });`
`// create a Dog with a birthDate value of type date`
`realm.create("Dog", {`
`name: "Blaise",`
`birthDate: new Date("August 17, 2020"),`
`});`
`// create a Dog with a birthDate value of type int`
`realm.create("Dog", {`
`name: "Euclid",`
`birthDate: 10152021,`
`});`
`// create a Dog with a birthDate value of type null`
`realm.create("Dog", {`
`name: "Pythagoras",`
`birthDate: null,`
`});`
`});` |
| uuid is a universally unique identifier from Realm.BSON.
UUID (Universal Unique Identifier) is a 16-byte unique value. You can use UUID as an identifier for objects. UUID is indexable and you can use it as a primary key. | `const { UUID } = Realm.BSON;`
`const ProfileSchema = {`
`name: "Profile",`
`primaryKey: "_id",`
`properties: {`
`_id: "uuid",`
`name: "string",`
`},`
`};`
`const realm = await Realm.open({`
`schema: [ProfileSchema],`
`});`
`realm.write(() => {`
`realm.create("Profile", {`
`name: "John Doe.",`
`_id: new UUID(), // create a _id with a randomly generated UUID`
`});`
`realm.create("Profile", {`
`name: "Tim Doe.",`
`_id: new UUID("882dd631-bc6e-4e0e-a9e8-f07b685fec8c"), // create a _id with a specific UUID value`
`});`
`});` |
## .NET Field Types
The complete reference of supported data types for .Net/C# can be found here.
| Type Name | Code Sample |
| --- | --- |
| Realm Database supports the following .NET data types and their nullable counterparts:
bool
byte
short
int
long
float
double
decimal
char
string
byte[]
DateTimeOffset
Guid
IList<T>, where T is any of the supported data types | Regular C# code, nothing special to see here! |
| ObjectId maps to the BSON `ObjectId` type. | |
## .Net Field Types: New supported types
Starting with __.NET SDK 10.2.0__
| Type Name | Code Sample |
| --------- | ----------- |
| Dictionary
A Realm dictionary is an implementation of IDictionary<string, TValue> that has keys of type String and supports values of any Realm type except collections. To define a dictionary, use a getter-only IDictionary<string, TValue> property, where TValue is any of the supported types. | `public class Inventory : RealmObject`
`{`
`// The key must be of type string; the value can be`
`// of any Realm-supported type, including objects`
`// that inherit from RealmObject or EmbeddedObject`
`public IDictionary<string, Plant> PlantDict { get; }`
`public IDictionary<string, bool> BooleansDict { get; }`
`// Nullable types are supported in local-only`
`// Realms, but not with Sync`
`public IDictionary<string, int?> NullableIntDict { get; }`
`// For C# types that are implicitly nullable, you can`
`// use the [Required] attribute to prevent storing null values`
`[Required]`
`public IDictionary<string, string> RequiredStringsDict { get; }`
`}` |
| Sets
A Realm set, like the C# HashSet<>, is an implementation of ICollection<> and IEnumerable<>. It supports values of any Realm type except collections. To define a set, use a getter-only ISet<TValue> property, where TValue is any of the supported types. | `public class Inventory : RealmObject`
`{`
`// A Set can contain any Realm-supported type, including`
`// objects that inherit from RealmObject or EmbeddedObject`
`public ISet<Plant> PlantSet { get; }`
`public ISet<double> DoubleSet { get; }`
`// Nullable types are supported in local-only`
`// Realms, but not with Sync`
`public ISet<int?> NullableIntsSet { get; }`
`// For C# types that are implicitly nullable, you can`
`// use the [Required] attribute to prevent storing null values`
`[Required]`
`public ISet<string> RequiredStrings { get; }`
`}` |
| RealmValue
The RealmValue data type is a mixed data type, and can represent any other valid Realm data type except a collection. You can create collections (lists, sets and dictionaries) of type RealmValue, but a RealmValue itself cannot be a collection. | `public class MyRealmValueObject : RealmObject`
`{`
`[PrimaryKey]`
`[MapTo("_id")]`
`public Guid Id { get; set; }`
`public RealmValue MyValue { get; set; }`
`// A nullable RealmValue property is *not supported*`
`// public RealmValue? NullableRealmValueNotAllowed { get; set; }`
`}`
`private void TestRealmValue()`
`{`
`var obj = new MyRealmValueObject();`
`// set the value to null:`
`obj.MyValue = RealmValue.Null;`
`// or an int...`
`obj.MyValue = 1;`
`// or a string...`
`obj.MyValue = "abc";`
`// Use RealmValueType to check the type:`
`if (obj.MyValue.Type == RealmValueType.String)`
`{`
`var myString = obj.MyValue.AsString();`
`}`
`}` |
| Guid and ObjectId Properties
MongoDB.Bson.ObjectId is a MongoDB-specific 12-byte unique value, while the built-in .NET type Guid is a 16-byte universally-unique value. Both types are indexable, and either can be used as a Primary Key. | | | md | {
"tags": [
"Realm"
],
"pageDescription": "Review of existing and supported Realm Data Types for the different SDKs.",
"contentType": "Tutorial"
} | Realm Data Types | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/creating-user-profile-store-game-nodejs-mongodb | created | # Creating a User Profile Store for a Game With Node.js and MongoDB
When it comes to game development, or at least game development that has an online component to it, you're going to stumble into the territory of user profile stores. These are essentially records for each of your players and these records contain everything from account information to what they've accomplished in the game.
Take the game Plummeting People that some of us at MongoDB (Karen Huaulme, Adrienne Tacke, and Nic Raboy) are building, streaming, and writing about. The idea behind this game, as described in a previous article, is to create a Fall Guys: Ultimate Knockout tribute game with our own spin on it.
Since this game will be an online multiplayer game, each player needs to retain game-play information such as how many times they've won, what costumes they've unlocked, etc. This information would exist inside a user profile document.
In this tutorial, we're going to see how to design a user profile store and then build a backend component using Node.js and MongoDB Realm for interacting with it.
## Designing a Data Model for the Player Documents of a Game
To get you up to speed, Fall Guys: Ultimate Knockout is a battle royale style game where you compete for first place in several obstacle courses. As you play the game, you get karma points, crowns, and costumes to make the game more interesting.
Since we're working on a tribute game and not a straight up clone, we determined our Plummeting People game should have the following data stored for each player:
- Experience points (XP)
- Falls
- Steps taken
- Collisions with players or objects
- Losses
- Wins
- Pineapples (Currency)
- Achievements
- Inventory (Outfits)
- Plummie Tag (Username)
Of course, there could be much more information or much less information stored per player in any given game. In all honesty, the things we think we should store may evolve as we progress further in the development of the game. However, this is a good starting point.
Now that we have a general idea of what we want to store, it makes sense to convert these items into an appropriate data model for a document within MongoDB.
Take the following, for example:
``` json
{
"_id": "4573475234234",
"plummie_tag": "nraboy",
"xp": 298347234,
"falls": 328945783957,
"steps": 438579348573,
"collisions": 2345325,
"losses": 3485,
"wins": 3,
"created_at": 3498534,
"updated_at": 4534534,
"lifetime_hours_played": 5,
"pineapples": 24532,
"achievements":
{
"name": "Super Amazing Person",
"timestamp": 2345435
}
],
"inventory": {
"outfits": [
{
"id": 34345,
"name": "The Kilowatt Huaulme",
"timestamp": 2345345
}
]
}
}
```
Notice that we have the information previously identified. However, the structure is a bit different. In addition, you'll notice extra fields such as `created_at` and other timestamp-related data that could be helpful behind the scenes.
For achievements, an array of objects might be a good idea because the achievements might change over time, and each player will likely receive more than one during the lifetime of their gaming experience. Likewise, the `inventory` field is an object with arrays of objects because, while the current plan is to have an inventory of player outfits, that could later evolve into consumable items to be used within the game, or anything else that might expand beyond outfits.
One thing to note about the above user profile document model is that we're trying to store everything about the player in a single document. We're not trying to maintain relationships to other documents unless absolutely necessary. The document for any given player is like a log of their lifetime experience with the game. It can very easily evolve over time due to the flexible nature of having a JSON document model in a NoSQL database like MongoDB.
To get more insight into the design process of our user profile store documents, check out the on-demand Twitch recording we created.
## Create a Node.js Backend API With MongoDB Atlas to Interact With the User Profile Store
With a general idea of how we chose to model our player document, we could start developing the backend responsible for doing the create, read, update, and delete (CRUD) spectrum of operations against our database.
Since Express.js is a common, if not the most common, way to work with Node.js API development, it made sense to start there. What comes next will reproduce what we did during the Twitch stream.
From the command line, execute the following commands in a new directory:
``` none
npm init -y
npm install express mongodb body-parser --save
```
The above commands will initialize a new **package.json** file within the current working directory and then install Express.js, the MongoDB Node.js driver, and the Body Parser middleware for accepting JSON payloads.
Within the same directory as the **package.json** file, create a **main.js** file with the following Node.js code:
``` javascript
const { MongoClient, ObjectID } = require("mongodb");
const Express = require("express");
const BodyParser = require('body-parser');
const server = Express();
server.use(BodyParser.json());
server.use(BodyParser.urlencoded({ extended: true }));
const client = new MongoClient(process.env["ATLAS_URI"]);
var collection;
server.post("/plummies", async (request, response, next) => {});
server.get("/plummies", async (request, response, next) => {});
server.get("/plummies/:id", async (request, response, next) => {});
server.put("/plummies/:plummie_tag", async (request, response, next) => {});
server.listen("3000", async () => {
try {
await client.connect();
collection = client.db("plummeting-people").collection("plummies");
console.log("Listening at :3000...");
} catch (e) {
console.error(e);
}
});
```
There's quite a bit happening in the above code. Let's break it down!
You'll first notice the following few lines:
``` javascript
const { MongoClient, ObjectID } = require("mongodb");
const Express = require("express");
const BodyParser = require('body-parser');
const server = Express();
server.use(BodyParser.json());
server.use(BodyParser.urlencoded({ extended: true }));
```
We had previously downloaded the project dependencies, but now we are importing them for use in the project. Once imported, we're initializing Express and are telling it to use the body parser for JSON and URL encoded payloads coming in with POST, PUT, and similar requests. These requests are common when it comes to creating or modifying data.
Next, you'll notice the following lines:
``` javascript
const client = new MongoClient(process.env["ATLAS_URI"]);
var collection;
```
The `client` in this example assumes that your MongoDB Atlas connection string exists in your environment variables. To be clear, the connection string would look something like this:
``` none
mongodb+srv://<username>:<password>@plummeting-us-east-1.hrrxc.mongodb.net/
```
Yes, you could hard-code that value, but because the connection string will contain your username and password, it makes sense to use an environment variable or configuration file for security reasons.
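For example, on a bash-like shell, you could export the variable before starting the API (substituting your own credentials):
``` none
export ATLAS_URI="mongodb+srv://<username>:<password>@plummeting-us-east-1.hrrxc.mongodb.net/"
node main.js
```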
The `collection` variable is being defined because it will have our collection handle for use within each of our endpoint functions.
Speaking of endpoint functions, we're going to skip those for a moment. Instead, let's look at serving our API:
``` javascript
server.listen("3000", async () => {
try {
await client.connect();
collection = client.db("plummeting-people").collection("plummies");
console.log("Listening at :3000...");
} catch (e) {
console.error(e);
}
});
```
In the above code, we are serving our API on port 3000. When the server starts, we establish a connection to our MongoDB Atlas cluster. Once connected, we make use of the `plummeting-people` database and the `plummies` collection. In this circumstance, we're calling each player a **plummie**, hence the name of our user profile store collection. Neither the database nor the collection needs to exist prior to starting the application.
Time to focus on those endpoint functions.
To create a player — or plummie, in this case — we need to take a look at the POST endpoint:
``` javascript
server.post("/plummies", async (request, response, next) => {
try {
let result = await collection.insertOne(request.body);
response.send(result);
} catch (e) {
response.status(500).send({ message: e.message });
}
});
```
The above endpoint expects a JSON payload. Ideally, it should match the data model that we had defined earlier in the tutorial, but we're not doing any data validation, so anything at this point would work. With the JSON payload an `insertOne` operation is done and that payload is turned into a user profile. The result of the create is sent back to the user.
If you want to handle the validation of data, check out database-level schema validation or use a client-facing validation library like Joi.
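As a rough sketch (not part of the original project), a Joi schema could guard the POST endpoint like this:
``` javascript
const Joi = require("joi");

// Only a few fields shown here; unknown keys are allowed so the rest of the
// player document can still pass through.
const plummieSchema = Joi.object({
    "plummie_tag": Joi.string().required(),
    "xp": Joi.number().min(0),
    "wins": Joi.number().min(0)
}).unknown(true);

server.post("/plummies", async (request, response, next) => {
    try {
        const { error, value } = plummieSchema.validate(request.body);
        if (error) {
            // Reject payloads that don't match the expected shape
            return response.status(400).send({ message: error.message });
        }
        let result = await collection.insertOne(value);
        response.send(result);
    } catch (e) {
        response.status(500).send({ message: e.message });
    }
});
```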
With the user profile document created, you may need to fetch it at some point. To do this, take a look at the GET endpoint:
``` javascript
server.get("/plummies", async (request, response, next) => {
try {
let result = await collection.find({}).toArray();
response.send(result);
} catch (e) {
response.status(500).send({ message: e.message });
}
});
```
In the above example, all documents in the collection are returned because there is no filter specified. The above endpoint is useful if you want to find all user profiles, maybe for reporting purposes. If you want to find a specific document, you might do something like this:
``` javascript
server.get("/plummies/:plummie_tag", async (request, response, next) => {
try {
let result = await collection.findOne({ "plummie_tag": request.params.plummie_tag });
response.send(result);
} catch (e) {
response.status(500).send({ message: e.message });
}
});
```
The above endpoint takes a `plummie_tag`, which we're expecting to be a unique value. As long as the value exists on the `plummie_tag` field for a document, the profile will be returned.
Even though there isn't a game to play yet, we know that we're going to need to update these player profiles. Maybe the `xp` increased, or new `achievements` were gained. Whatever the reason, a PUT request is necessary and it might look like this:
``` javascript
server.put("/plummies/:plummie_tag", async (request, response, next) => {
try {
let result = await collection.updateOne(
{ "plummie_tag": request.params.plummie_tag },
{ "$set": request.body }
);
response.send(result);
} catch (e) {
response.status(500).send({ message: e.message });
}
});
```
In the above request, we are expecting a `plummie_tag` to be passed to represent the document we want to update. We are also expecting a payload to be sent with the data we want to update. Like with the `insertOne`, the `updateOne` is experiencing no prior validation. Using the `plummie_tag` we can filter for a document to change and then we can use the `$set` operator with a selection of changes to make.
The above endpoint will update any field that was passed in the payload. If the field doesn't exist, it will be created.
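For example, a request like the following (the values here are just for illustration) would bump a player's XP and win count, creating either field if it didn't already exist:
``` none
curl -X PUT http://localhost:3000/plummies/nraboy \
  -H "Content-Type: application/json" \
  -d '{"xp": 298347235, "wins": 4}'
```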
One might argue that user profiles can only be created or changed, but never removed. It is up to you whether or not the profile should have an `active` field or just remove it when requested. For our game, documents will never be deleted, but if you wanted to, you could do the following:
``` javascript
server.delete("/plummies/:plummie_tag", async (request, response, next) => {
try {
let result = await collection.deleteOne({ "plummie_tag": request.params.plummie_tag });
response.send(result);
} catch (e) {
response.status(500).send({ message: e.message });
}
});
```
The above code will take a `plummie_tag` from the game and delete any documents that match it in the filter.
It should be reiterated that these endpoints are expected to be called from within the game. So when you're playing the game and you create your player, it should be stored through the API.
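From JavaScript game or client code, creating a new player could look something like this sketch (the URL and field values are placeholders):
``` javascript
fetch("http://localhost:3000/plummies", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ "plummie_tag": "nraboy", "xp": 0, "falls": 0, "wins": 0 })
})
    .then(response => response.json())
    .then(result => console.log(result));
```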
## Realm Webhook Functions: An Alternative Method for Interacting With the User Profile Store
While Node.js with Express.js might be popular, it isn't the only way to build a user profile store API. In fact, it might not even be the easiest way to get the job done.
During the Twitch stream, we demonstrated how to offload the management of Express and Node.js to Realm.
As part of the MongoDB data platform, Realm offers many things Plummeting People can take advantage of as we build out this game, including triggers, functions, authentication, data synchronization, and static hosting. We very quickly showed how to re-create these APIs through Realm's HTTP Service from right inside of the Atlas UI.
To create our GET, POST, and DELETE endpoints, we first had to create a Realm application. Return to your Atlas UI and click **Realm** at the top. Then click the green **Start a New Realm App** button.
We named our Realm application **PlummetingPeople** and linked to the Atlas cluster holding the player data. All other default settings are fine:
Congrats! Realm Application Creation Achievement Unlocked! 👏
Now click the **3rd Party Services** menu on the left and then **Add a Service**. Select the HTTP service. We named ours **RealmOfPlummies**:
Click the green **Add a Service** button, and you'll be directed to **Add Incoming Webhook**.
Let's re-create our GET endpoint first. Once in the **Settings** tab, name your first webhook **getPlummies**. Enable **Respond with Result** and set the HTTP Method to **GET**. To make things simple, let's just run the webhook as the System and skip validation with **No Additional Authorization.** Make sure to click the **Review and Deploy** button at the top along the way.
In this service function editor, replace the example code with the following:
``` javascript
exports = async function(payload, response) {
// get a reference to the plummies collection
const collection = context.services.get("mongodb-atlas").db("plummeting-people").collection("plummies");
var plummies = await collection.find({}).toArray();
return plummies;
};
```
In the above code, note that MongoDB Realm interacts with our `plummies` collection through the global `context` variable. In the service function, we use that context variable to access all of our `plummies.` We can also add a filter to find a specific document or documents, just as we did in the Express + Node.js endpoint above.
Switch to the **Settings** tab of `getPlummies`, and you'll notice a Webhook URL has been generated.
We can test this endpoint out by executing it in our browser. However, if you have tools like Postman installed, feel free to try that as well. Click the **COPY** button and paste the URL into your browser.
If you receive an output showing your plummies, you have successfully created an API endpoint in Realm! Very cool. 💪😎
Now, let's step through that process again to create an endpoint to add new plummies to our game. In the same **RealmOfPlummies** service, add another incoming webhook. Name it `addPlummie` and set it as a **POST**. Switch to the function editor and replace the example code with the following:
``` javascript
exports = function(payload, response) {
console.log("Adding Plummie...");
const plummies = context.services.get("mongodb-atlas").db("plummeting-people").collection("plummies");
// parse the body to get the new plummie
const plummie = EJSON.parse(payload.body.text());
return plummies.insertOne(plummie);
};
```
If you go back to Settings and grab the Webhook URL, you can now use this to POST new plummies to our Atlas **plummeting-people** database.
And finally, the last two endpoints to `DELETE` and to `UPDATE` our players.
Name a new incoming webhook `removePlummie` and set as a POST. The following code will remove the `plummie` from our user profile store:
``` javascript
exports = async function(payload) {
console.log("Removing plummie...");
const ptag = EJSON.parse(payload.body.text());
let plummies = context.services.get("mongodb-atlas").db("plummeting-people").collection("plummies_kwh");
return plummies.deleteOne({"plummie_tag": ptag});
};
```
The final new incoming webhook `updatePlummie` and set as a PUT:
``` javascript
exports = async function(payload, response) {
console.log("Updating Plummie...");
var result = {};
if (payload.body) {
const plummies = context.services.get("mongodb-atlas").db("plummeting-people").collection("plummies_kwh");
const ptag = payload.query.plummie_tag;
console.log("plummie_tag : " + ptag);
// parse the body to get the new plummie update
var updatedPlummie = EJSON.parse(payload.body.text());
console.log(JSON.stringify(updatedPlummie));
return plummies.updateOne(
{"plummie_tag": ptag},
{"$set": updatedPlummie}
);
}
return ({ok:true});
};
```
With that, we have another option to handle all four endpoints, allowing complete CRUD operations on our `plummie` data - without needing to spin up and manage a Node.js and Express backend.
## Conclusion
You just saw some examples of how to design and create a user profile store for your next game. The user profile store used in this tutorial is an active part of a game that some of us at MongoDB (Karen Huaulme, Adrienne Tacke, and Nic Raboy) are building. It is up to you whether or not you want to develop your own backend using the MongoDB Node.js driver or take advantage of MongoDB Realm with webhook functions.
This particular tutorial is part of a series around developing a Fall Guys: Ultimate Knockout tribute game using Unity and MongoDB. | md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Learn how to create a user profile store for a game using MongoDB, Node.js, and Realm.",
"contentType": "Tutorial"
} | Creating a User Profile Store for a Game With Node.js and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/adl-sql-integration | created | # Atlas Data Lake SQL Integration to Form Powerful Data Interactions
>As of June 2022, the functionality previously known as Atlas Data Lake is now named Atlas Data Federation. Atlas Data Federation’s functionality is unchanged and you can learn more about it here. Atlas Data Lake will remain in the Atlas Platform, with newly introduced functionality that you can learn about here.
Modern platforms have a wide variety of data sources. As businesses grow, they have to constantly evolve their data management and have sophisticated, scalable, and convenient tools to analyse data from all sources to produce business insights.
MongoDB has developed a rich and powerful query language, including a very robust aggregation framework.
These were mainly done to optimize the way developers work with data and provide great tools to manipulate and query MongoDB documents.
Having said that, many developers, analysts, and tools still prefer the legacy SQL language to interact with the data sources. SQL has a strong foundation around joining data as this was a core concept of the legacy relational databases normalization model.
This makes SQL have a convenient syntax when it comes to describing joins.
Providing MongoDB users the ability to leverage SQL to analyse multi-source documents while having a flexible schema and data store is a compelling solution for businesses.
## Data Sources and the Challenge
Consider a requirement to create a single view to analyze data from operative different systems. For example:
- Customer data is managed in the user administration systems (REST API).
- Financial data is managed in a financial cluster (Atlas cluster).
- End-to-end transactions are stored in files on cold storage gathered from various external providers (cloud object storage - Amazon S3 or Microsoft Azure Blob Storage).
How can we combine and best join this data?
MongoDB Atlas Data Lake connects multiple data sources using the different source types. Once the data sources are mapped, we can create collections consuming this data. Those collections can have SQL schema generated, allowing us to perform sophisticated joins and do JDBC queries from various BI tools.
In this article, we will showcase the extreme power hidden in the Data Lake SQL interface.
## Setting Up My Data Lake
In the following view, I have created three main data sources:
- S3 Transaction Store (S3 sample data).
- Accounts from my Atlas clusters (Sample data sample_analytics.accounts).
- Customer data from a secure https source.
I mapped the stores into three collections under `FinTech` database:
- `Transactions`
- `Accounts`
- `CustomerDL`
Now, I can see them through a data lake connection as MongoDB collections.
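For reference, the storage configuration behind such a mapping looks roughly like the sketch below. The store names, bucket, project ID, and URLs here are made up, and the exact field names may differ from your own configuration:
```json
{
  "stores": [
    { "name": "s3TransactionStore", "provider": "s3", "bucket": "transactions-bucket", "region": "us-east-1" },
    { "name": "atlasFinanceCluster", "provider": "atlas", "clusterName": "FinanceCluster", "projectId": "<project-id>" },
    { "name": "customerHttpStore", "provider": "http", "urls": ["https://example.com/customers.json"] }
  ],
  "databases": [
    {
      "name": "FinTech",
      "collections": [
        { "name": "Transactions", "dataSources": [{ "storeName": "s3TransactionStore", "path": "/transactions/*" }] },
        { "name": "Accounts", "dataSources": [{ "storeName": "atlasFinanceCluster", "database": "sample_analytics", "collection": "accounts" }] },
        { "name": "CustomerDL", "dataSources": [{ "storeName": "customerHttpStore", "urls": ["https://example.com/customers.json"] }] }
      ]
    }
  ]
}
```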
Let's grab our data lake connection string from the Atlas UI.
This connection string can be used with our BI tools or client applications to run SQL queries.
## Connecting and Using $sql
Once we connect to the data lake via a mongosh shell, we can generate a SQL schema for our collections. This is required for the JDBC or $sql operators to recognise collections as SQL “tables.”
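Connecting with mongosh might look something like this (the hostname below is a placeholder; use the connection string copied from the Atlas UI):
```
mongosh "mongodb://fintechdatalake-abcde.a.query.mongodb.net/?ssl=true&authSource=admin" --username root
```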
#### Generate SQL schema for each collection:
```js
use admin;
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.customersDL"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.accounts"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.transactions"], sampleSize: 1000, setSchemas: true})
{
ok: 1,
schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}
```
#### Running SQL queries and joins using $sql stage:
```js
use FinTech;
db.aggregate([{
$sql: {
statement: "SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2",
format: "jdbc",
formatVersion: 2,
dialect: "mysql",
}
}])
```
The above query will return account information along with the transaction count for each account.
## Connecting Via JDBC
Let’s connect a powerful BI tool like Tableau with the JDBC driver.
Download the JDBC driver.
Set up the `connection.properties` file:
```
user=root
password=*******
authSource=admin
database=FinTech
ssl=true
compressors=zlib
```
#### Connect to Tableau
Click the “Other Databases (JDBC)” connector and load the connection.properties file pointing to our data lake URI.
Once the data is read successfully, the collections will appear on the right side.
#### Setting and Joining Data
We can drag and drop collections from different sources and link them together.
In my case, I connected `Transactions` => `Accounts` based on the `Account Id` field, and accounts and users based on the `Account Id` to `Accounts` field.
In this view, we will see a unified table for all accounts with usernames and their transactions start quarter.
## Summary
MongoDB has all the tools to read, transform, and analyse your documents for almost any use-case.
Whether your data is in an Atlas operational cluster, in a service, or on cold storage like cloud object storage, Atlas Data Lake will provide you with the ability to join the data in real time. With the option to use powerful join SQL syntax and SQL-based BI tools like Tableau, you can get value out of the data in no time.
Try Atlas Data Lake with your BI tools and SQL today.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how new SQL-based syntax can power your data lake insights in minutes. Integrate this capability with powerful BI tools like Tableau to get immediate value out of your data. ",
"contentType": "Article"
} | Atlas Data Lake SQL Integration to Form Powerful Data Interactions | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/swift/realm-minesweeper | created | # Building a Collaborative iOS Minesweeper Game with Realm
## Introduction
I wanted to build an app that we could use at events to demonstrate Realm Sync. It needed to be fun to interact with, and so a multiplayer game made sense. Tic-tac-toe is too simple to get excited about. I'm not a game developer and so _Call Of Duty_ wasn't an option. Then I remembered Microsoft's Minesweeper.
Minesweeper was a Windows fixture from 1990 until Windows 8 relegated it to the app store in 2012. It was a single-player game, but it struck me as something that could be a lot of fun to play with others. Some family beta-testing of my first version while waiting for a ferry proved that it did get people to interact with each other (even if most interactions involved shouting, "Which of you muppets clicked on that mine?!").
You can download the back end and iOS apps from the Realm-Sweeper repo, and get it up and running in a few minutes if you want to play with it.
This article steps you through some of the key aspects of setting up the backend Realm app, as well as the iOS code. Hopefully, you'll see how simple it is and try building something for yourself. If anyone's looking for ideas, then Sokoban could be interesting.
## Prerequisites
- Realm-Cocoa 10.20.1+
- iOS 15+
## The Minesweeper game
The gameplay for Minesweeper is very simple.
You're presented with a grid of gray tiles. You tap on a tile to expose what's beneath. If you expose a mine, game over. If there isn't a mine, then you'll be rewarded with a hint as to how many mines are adjacent to that tile. If you deduce (or guess) that a tile is covering a mine, then you can plant a flag to record that.
You win the game when you correctly flag every mine and expose what's behind every non-mined tile.
### What Realm-Sweeper adds
Minesweeper wasn't designed for touchscreen devices; you had to use a physical mouse. Realm-Sweeper brings the game into the 21st century by adding touch controls. Tap a tile to reveal what's beneath; tap and hold to plant a flag.
Minesweeper was a single-player game. All people who sign into Realm-Sweeper with the same user ID get to collaborate on the same game in real time.
You also get to configure the size of the grid and how many mines you'd like to hide.
## The data model
I decided to go for a simple data model that would put Realm sync to the test.
Each game is a single document/object that contains meta data (score, number of rows/columns, etc.) together with the grid of tiles (the board):
This means that even a modestly sized grid (20x20 tiles) results in a `Game` document/object with more than 2,000 attributes.
Every time you tap on a tile, the `Game` object has to be synced with all other players. Those players are also tapping on tiles, and those changes have to be synced too. If you tap on a tile which isn't adjacent to any mines, then the app will recursively ripple through exposing similar, connected tiles. That's a lot of near-simultaneous changes being made to the same object from different devices—a great test of Realm's automatic conflict resolution!
## The backend Realm app
If you don't want to set this up yourself, simply follow the instructions from the repo to import the app.
If you opt to build the backend app yourself, there are only two things to configure once you create the empty Realm app:
1. Enable email/password authentication. I kept it simple by opting to auto-confirm new users and sticking with the default password-reset function (which does nothing).
2. Enable partitioned Realm sync. Set the partition key to `partition` and enable developer mode (so that the schema will be created automatically when the iOS app syncs for the first time).
The `partition` field will be set to the username—allowing anyone who connects as that user to sync all of their games.
You can also add sync rules to ensure that a user can only sync their own games (in case someone hacks the mobile app). I always prefer using Realm functions for permissions. You can add this for both the read and write rules:
```json
{
"%%true": {
"%function": {
"arguments":
"%%partition"
],
"name": "canAccessPartition"
}
}
}
```
The `canAccessPartition` function is:
```js
exports = function(partition) {
const user = context.user.data.email;
return partition === user;
};
```
## The iOS app
I'd suggest starting by downloading, configuring, and running the app—just follow the instructions from the repo. That way, you can get a feel for how it works.
This isn't intended to be a full tutorial covering every line of code in the app. Instead, I'll point out some key components.
As always with Realm and MongoDB, it all starts with the data…
### Model
There's a single top-level Realm Object—`Game`:
```swift
class Game: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id: ObjectId
@Persisted var numRows = 0
@Persisted var numCols = 0
@Persisted var score = 0
@Persisted var startTime: Date? = Date()
@Persisted var latestMoveTime: Date?
@Persisted var secondsTakenToComplete: Int?
@Persisted var board: Board?
@Persisted var gameStatus = GameStatus.notStarted
@Persisted var winningTimeInSeconds: Int?
…
}
```
Most of the fields are pretty obvious.The most interesting is `board`, which contains the grid of tiles:
```swift
class Board: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var rows = List<Row>()
@Persisted var startingNumberOfMines = 0
...
}
```
`row` is a list of `Cells`:
```swift
class Row: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var cells = List<Cell>()
...
}
class Cell: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var isMine = false
@Persisted var numMineNeigbours = 0
@Persisted var isExposed = false
@Persisted var isFlagged = false
@Persisted var hasExploded = false
...
}
```
The model is also where the ~~business~~ game logic is implemented. This means that the views can focus on the UI. For example, `Game` includes a computed variable to check whether the game has been solved:
```swift
var hasWon: Bool {
guard let board = board else { return false }
if board.remainingMines != 0 { return false }
var result = true
board.rows.forEach() { row in
row.cells.forEach() { cell in
if !cell.isExposed && !cell.isFlagged {
result = false
return
}
}
if !result { return }
}
return result
}
```
### Views
As with any SwiftUI app, the UI is built up of a hierarchy of many views.
Here's a quick summary of the views that make up Real-Sweeper:
**`ContentView`** is the top-level view. When the app first runs, it will show the `LoginView`. Once the user has logged in, it shows `GameListView` instead. It's here that we set the Realm Sync partition (to be the `username` of the user that's just logged in):
```swift
GameListView()
.environment(\.realmConfiguration,
realmApp.currentUser!.configuration(partitionValue: username))
```
`ContentView` also includes the `LogoutButton` view.
**`LoginView`** allows the user to provide a username and password:
Those credentials are then used to register or log into the backend Realm app:
```swift
func userAction() {
Task {
do {
if newUser {
try await realmApp.emailPasswordAuth.registerUser(
email: email, password: password)
}
let _ = try await realmApp.login(
credentials: .emailPassword(email: email, password: password))
username = email
} catch {
errorMessage = error.localizedDescription
}
}
}
```
**`GameListView`** reads the list of this user's existing games.
```swift
@ObservedResults(Game.self,
sortDescriptor: SortDescriptor(keyPath: "startTime", ascending: false)) var games
```
It displays each of the games within a `GameSummaryView`. If you tap one of the games, then you jump to a `GameView` for that game:
```swift
NavigationLink(destination: GameView(game: game)) {
GameSummaryView(game: game)
}
```
Tap the settings button and you're sent to `SettingsView`.
Tap the "New Game" button and a new `Game` object is created and then stored in Realm by appending it to the `games` live query:
```swift
private func createGame() {
numMines = min(numMines, numRows * numColumns)
game = Game(rows: numRows, cols: numColumns, mines: numMines)
if let game = game {
$games.append(game)
}
startGame = true
}
```
**`SettingsView`** lets the user choose the number of tiles and mines to use:
If the user uses multiple devices to play the game (e.g., an iPhone and an iPad), then they may want different-sized boards (taking advantage of the extra screen space on the iPad). Because of that, the view uses the device's `UserDefaults` to locally persist the settings rather than storing them in a synced realm:
```swift
@AppStorage("numRows") var numRows = 10
@AppStorage("numColumns") var numColumns = 10
@AppStorage("numMines") var numMines = 15
```
**`GameSummaryView`** displays a summary of one of the user's current or past games.
**`GameView`** shows the latest stats for the current game at the top of the screen:
It uses the `LEDCounter` and `StatusButton` views for the summary.
Below the summary, it displays the `BoardView` for the game.
**`LEDCounter`** displays the provided number as three digits using a retro LED font:
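The repo contains the real implementation; a rough sketch of the idea might look like this (the font name here is an assumption, not necessarily the one used in the app):
```swift
import SwiftUI

struct LEDCounter: View {
    let value: Int

    var body: some View {
        // Pad to three digits and render with a digital-style font
        Text(String(format: "%03d", value))
            .font(.custom("Digital-7", size: 40))
            .foregroundColor(.red)
    }
}
```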
**`StatusButton`** uses a `ZStack` to display the symbol for the game's status on top of a tile image:
The view uses SwiftUI's `GeometryReader` function to discover how much space is available so that it can select an appropriate font size for the symbol:
```swift
GeometryReader { geo in
Text(status)
.font(.system(size: geo.size.height * 0.7))
}
```
**`BoardView`** displays the game's grid of tiles:
Each of the tiles is represented by a `CellView` view.
When a tile is tapped, this view exposes its contents:
```swift
.onTapGesture() {
expose(row: row, col: col)
}
```
On a tap-and-hold, a flag is dropped:
```swift
.onLongPressGesture(minimumDuration: 0.1) {
flag(row: row, col: col)
}
```
When my family tested the first version of the app, they were frustrated that they couldn't tell whether they'd held long enough for the flag to be dropped. This was an easy mistake to make as their finger was hiding the tile at the time—an example of where testing with a mouse and simulator wasn't a substitute for using real devices. It was especially frustrating as getting it wrong meant that you revealed a mine and immediately lost the game. Fortunately, this is easy to fix using iOS's haptic feedback:
```swift
func hapticFeedback(_ isSuccess: Bool) {
let generator = UINotificationFeedbackGenerator()
generator.notificationOccurred(isSuccess ? .success : .error)
}
```
You now feel a buzz when the flag has been dropped.
**`CellView`** displays an individual tile:
What's displayed depends on the contents of the `Cell` and the state of the game. It uses four further views to display different types of tile: `FlagView`, `MineCountView`, `MineView`, and `TileView`.
**`FlagView`**
**`MineCountView`**
**`MineView`**
**`TileView`**
## Conclusion
Realm-Sweeper gives a real feel for how quickly Realm is able to synchronize data over the internet.
I intentionally avoided optimizing how I updated the game data in Realm. When you see a single click exposing dozens of tiles, each cell change is an update to the `Game` object that needs to be synced.
Note that both instances of the game are running in iPhone simulators on an overworked Macbook in England. The Realm backend app is running in the US—that's a 12,000 km/7,500 mile round trip for each sync.
I took this approach as I wanted to demonstrate the performance of Realm synchronization. If an app like this became super-popular with millions of users, then it would put a lot of extra strain on the backend Realm app.
An obvious optimization would be to condense all of the tile changes from a single tap into a single write to the Realm object. If you're interested in trying that out, just fork the repo and make the changes. If you do implement the optimization, then please create a pull request. (I'd probably add it as an option within the settings so that the "slow" mode is still an option.)
Got questions? Ask them in our Community forum. | md | {
"tags": [
"Swift",
"Realm",
"iOS"
],
"pageDescription": "Using MongoDB Realm Sync to build an iOS multi-player version of the classic Windows game",
"contentType": "Tutorial"
} | Building a Collaborative iOS Minesweeper Game with Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-java-to-kotlin-sdk | created | # How to migrate from Realm Java SDK to Realm Kotlin SDK
> This article is targeted to existing Realm developers who want to understand how to migrate to Realm Kotlin SDK.
## Introduction
Android has changed a lot in recent years, notably after the Kotlin language became a first-class
citizen, and so has the Realm SDK. Realm has recently moved its much-awaited Kotlin SDK to beta,
enabling developers to use Realm more fluently with Kotlin and opening doors to the world of Kotlin
Multiplatform.
Let's understand the changes required when you migrate from the Java SDK to the Kotlin SDK, starting
from setup all the way through usage.
## Changes in setup
The new Realm Kotlin SDK is based on the Kotlin Multiplatform architecture, which enables you to have one
common module for all your data needs across all platforms. But this doesn't mean that, to use the new SDK,
you have to convert your existing Android app to a KMM app right away; you can do that later.
Let's understand the changes needed in the Gradle files to use the Realm Kotlin SDK by comparing the
previous implementation with the new one.
In project level `build.gradle`
Earlier with Java SDK
```kotlin
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath "com.android.tools.build:gradle:4.1.3"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.4.31"
// Realm Plugin
classpath "io.realm:realm-gradle-plugin:10.10.1"
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
```
With Kotlin SDK, we can **delete the Realm plugin** from `dependencies`
```kotlin
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath "com.android.tools.build:gradle:4.1.3"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:1.4.31"
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
```
In the module-level `build.gradle`
With Java SDK, we
1. Enabled Realm Plugin
2. Enabled Sync, if applicable
```groovy
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt'
id 'realm-android'
}
```
```groovy
android {
... ....
realm {
syncEnabled = true
}
}
```
With Kotlin SDK,
1. Replace ``id 'realm-android'`` with ``id("io.realm.kotlin") version "0.10.0"``
```groovy
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt'
id("io.realm.kotlin") version "0.10.0"
}
```
2. Remove the Realm block under android tag
```groovy
android {
... ....
}
```
3. Add Realm dependency under `dependencies` tag
```groovy
dependencies {
implementation("io.realm.kotlin:library-sync:0.10.0")
}
```
> If you are using only Realm local SDK, then you can add
> ```groovy
> dependencies {
> implementation("io.realm.kotlin:library-base:0.10.0")
> }
>```
With these changes, our Android app is ready to use Kotlin SDK.
## Changes in implementation
### Realm Initialization
Traditionally, before using Realm to query information in our project, we had to initialize it and
set up a few basic properties like the database name, version, and sync config. Let's update those
as well.
Steps with JAVA SDK :
1. Call `Realm.init()`
2. Setup Realm DB properties like name, version, migration rules etc using `RealmConfiguration`.
3. Setup logging
4. Configure Realm Sync
With Kotlin SDK :
1. Call `Realm.init()` - _This is not needed anymore_.
2. Setup Realm DB properties like db name, version, migration rules etc. using `RealmConfiguration`-
_This remains the same apart from a few minor changes_.
3. Setup logging - _This is moved to `RealmConfiguration`_
4. Configure Realm Sync - _No changes_
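Putting steps 2 and 3 together, a minimal local (non-sync) configuration with the Kotlin SDK could look like this sketch. The exact builder methods (for example, `log`) may differ slightly between SDK versions:
```kotlin
val config = RealmConfiguration.Builder(schema = setOf(VisitInfo::class))
    .name("visits.realm")
    .log(LogLevel.DEBUG) // logging is now configured on the RealmConfiguration itself
    .build()
val realm = Realm.open(config)
```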
### Changes to Models
No changes are required in model classes, except that you might have to remove a few currently
unsupported annotations like `@RealmClass`, which is used for embedded objects.
> Note: You can remove the `open` keyword from your `class` declarations, which was mandatory when
> using the Java SDK from Kotlin.
### Changes to querying
The most exciting part starts from here 😎(IMO).
Traditionally, the Realm SDK has been on top of the latest programming trends like Reactive
programming (Rx), LiveData, and many more, but with the shift in the Android programming
language from Java to Kotlin, developers were not able to fully utilize the power of the language
with Realm, as the underlying SDK was still in Java. A few of the notable gaps were support for
coroutines, Kotlin Flow, etc.
But with the Kotlin SDK, that has all changed, further reducing boilerplate code.
Let's understand this through examples.
Example 1: As a user, I would like to register my visit as soon as I open the app or screen.
Steps to complete this operation would be
1. Authenticate with Realm SDK.
2. Based on the user information, create a sync config with the partition key.
3. Open Realm instance.
4. Start a Realm Transaction.
5. Query for current user visit count and based on that add/update count.
With JAVA SDK:
```kotlin
private fun updateData() {
_isLoading.postValue(true)
fun onUserSuccess(user: User) {
val config = SyncConfiguration.Builder(user, user.id).build()
Realm.getInstanceAsync(config, object : Realm.Callback() {
override fun onSuccess(realm: Realm) {
realm.executeTransactionAsync {
var visitInfo = it.where(VisitInfo::class.java).findFirst()
visitInfo = visitInfo?.updateCount() ?: VisitInfo().apply {
partition = user.id
}.updateCount()
it.copyToRealmOrUpdate(visitInfo).apply {
_visitInfo.postValue(it.copyFromRealm(this))
}
_isLoading.postValue(false)
}
}
override fun onError(exception: Throwable) {
super.onError(exception)
// some error handling
_isLoading.postValue(false)
}
})
}
realmApp.loginAsync(Credentials.anonymous()) {
if (it.isSuccess) {
onUserSuccess(it.get())
} else {
_isLoading.postValue(false)
}
}
}
```
With Kotlin SDK:
```kotlin
private fun updateData() {
viewModelScope.launch(Dispatchers.IO) {
_isLoading.postValue(true)
val user = realmApp.login(Credentials.anonymous())
val config = SyncConfiguration.Builder(
user = user,
partitionValue = user.identity,
schema = setOf(VisitInfo::class)
).build()
val realm = Realm.open(configuration = config)
realm.write {
val visitInfo = this.query<VisitInfo>().first().find()
copyToRealm(visitInfo?.updateCount()
?: VisitInfo().apply {
partition = user.identity
visitCount = 1
})
}
_isLoading.postValue(false)
}
}
```
Upon a quick comparison, you would notice that the lines of code have decreased by 30%, and we are using
coroutines for the async call, which is the natural way of doing asynchronous programming in
Kotlin. Let's check this with one more example.
Example 2: As a user, I should be notified immediately about any change in the user visit info. This is
essentially observing changes to the visit count.
With Java SDK:
```kotlin
fun onRefreshCount() {
_isLoading.postValue(true)
fun getUpdatedCount(realm: Realm) {
val visitInfo = realm.where(VisitInfo::class.java).findFirst()
visitInfo?.let {
_visitInfo.value = it
_isLoading.postValue(false)
}
}
fun onUserSuccess(user: User) {
val config = SyncConfiguration.Builder(user, user.id).build()
Realm.getInstanceAsync(config, object : Realm.Callback() {
override fun onSuccess(realm: Realm) {
getUpdatedCount(realm)
}
override fun onError(exception: Throwable) {
super.onError(exception)
//TODO: Implementation pending
_isLoading.postValue(false)
}
})
}
realmApp.loginAsync(Credentials.anonymous()) {
if (it.isSuccess) {
onUserSuccess(it.get())
} else {
_isLoading.postValue(false)
}
}
}
```
With Kotlin SDK :
```kotlin
fun onRefreshCount(): Flow<SingleQueryChange<VisitInfo>> {
val user = runBlocking { realmApp.login(Credentials.anonymous()) }
val config = SyncConfiguration.Builder(
user = user,
partitionValue = user.identity,
schema = setOf(VisitInfo::class)
).build()
val realm = Realm.open(config)
return realm.query<VisitInfo>().first().asFlow()
}
```
Again, upon a quick comparison, you would notice that the lines of code have decreased drastically, by more
than **60%**, and apart from coroutines for the async call, we are using _Kotlin Flow_ to observe the
value changes.
With this, as mentioned earlier, we are able to further reduce our boilerplate code,
there is no callback hell, and writing code feels more natural now.
## Other major changes
Apart from the Realm Kotlin SDK being written in the Kotlin language, it is fundamentally a little different
from the Java SDK in a few ways:
- **Frozen by default**: All objects are now frozen. Unlike live objects, frozen objects do not
automatically update after the database writes. You can still access live objects within a write
transaction, but passing a live object out of a write transaction freezes the object.
- **Thread-safety**: All realm instances, objects, query results, and collections can now be
transferred across threads.
- **Singleton**: You now only need one instance of each realm.
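For example, reusing the `VisitInfo` model from the earlier snippets, a query result read outside a write transaction is a frozen snapshot, while the same object queried inside `realm.write` is live and mutable (a minimal sketch):
```kotlin
val frozenVisit = realm.query<VisitInfo>().first().find() // frozen snapshot
realm.write {
    // querying inside the write block returns a live, mutable object
    val liveVisit = query<VisitInfo>().first().find()
    liveVisit?.let { it.visitCount += 1 }
}
// frozenVisit still reflects the state at the time it was read
```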
## Should you migrate now?
There is no straight answer to this question; it really depends on usage, the complexity of the app, and time.
But I think this is the perfect time to evaluate the effort and changes required to migrate, as
the Realm Kotlin SDK is the future.
| md | {
"tags": [
"Realm",
"Kotlin",
"Java"
],
"pageDescription": "This article is targeted to existing Realm developers who want to understand how to migrate to Realm Kotlin SDK.",
"contentType": "Tutorial"
} | How to migrate from Realm Java SDK to Realm Kotlin SDK | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/advanced-modeling-realm-dotnet | created | # Advanced Data Modeling with Realm .NET
Realm's intuitive data model approach means that in most cases, you don't even think of Realm models as entities. You just declare your POCOs, have them inherit from `RealmObject`, and you're done. Now you have persistable models, with `INotifyPropertyChanged` capabilities all wired up, that are also "live"—i.e., every time you access a property, you get the latest state and not some snapshot from who knows how long ago. This is great and most of our users absolutely love the simplicity. Still, there are some use cases where being aware that you're working with a database can really bring your data models to the next level. In this blog post, we'll evaluate three techniques you can apply to make your models fit your needs even better.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## Constructor Validation
One of the core requirements of Realm is that all models need to have a parameterless constructor. This is needed because Realm needs to be able to instantiate an object without figuring out what arguments to pass to the constructor. What not many people know is that you can make this parameterless constructor private to communicate expectations to your callers. This means that if you have a `Person` class where you absolutely expect that a `Name` is provided upon object creation, you can have a public constructor with a `name` argument and a private parameterless one for use by Realm:
```csharp
class Person : RealmObject
{
public string Name { get; set; }
public Person(string name)
{
ValidateName(name);
Name = name;
}
// This is used by Realm, even though it's private
private Person()
{
}
}
```
And I know what some of you may be thinking: "Oh no! 😱 Does that mean Realm uses the suuuuper slow reflection to create object instances?" Fortunately, the answer is no. Instead, at compile time, Realm injects a nested helper class in each model that has a `CreateInstance` method. Since the helper class is nested in the model classes, it has access to private members and is thus able to invoke the private constructor.
## Property Access Modifiers
Similar to the point above, another relatively unknown feature of Realm is that persisted properties don't need to be public. You can either have the entire property be private or just one of the accessors. This synergizes nicely with the private constructor technique that we mentioned above. If you expose a constructor that explicitly validates the person's name, it would be fairly annoying to do all that work and have some code accidentally set the property to `null` the very next line. So it would make sense to make the setter of the name property above private:
```csharp
class Person : RealmObject
{
public string Name { get; private set; }
public Person(string name)
{
// ...
}
}
```
That way, you're communicating clearly to the class consumers that they need to provide the name at object creation time and that it can't be changed later. A very common use case here is to make the `Id` setter private and generate a random `Id` at object creation time:
```csharp
class Transaction : RealmObject
{
public Guid Id { get; private set; } = Guid.NewGuid();
}
```
Sometimes, it makes sense to make the entire property private—typically, when you want to expose a different public property that wraps it. If we go back to our `Person` and `Name` example, perhaps we want to allow changing the name, but we want to still validate the new name before we persist it. Then, we create a private autoimplemented property that Realm will use for persistence, and a public one that does the validation:
```csharp
class Person : RealmObject
{
[MapTo("Name")]
private string _Name { get; set; }
public string Name
{
get => _Name;
set
{
ValidateName(value);
_Name = value;
}
}
}
```
This is quite neat as it makes the public API of your model safe, while preserving its persistability. Of note is the `MapTo` attribute applied to `_Name`. It is not strictly necessary. I just added it to avoid having ugly column names in the database. You can use it or not use it. It's totally up to you. One thing to note when utilizing this technique is that Realm is completely unaware of the relationship between `Name` and `_Name`. This has two implications. 1) Notifications will be emitted for `_Name` only, and 2) You can't use LINQ queries to filter `Person` objects by name. Let's see how we can mitigate both:
For notifications, we can override `OnPropertyChanged` and raise a notification for `Name` whenever `_Name` changes:
```csharp
class Person : RealmObject
{
protected override void OnPropertyChanged(string propertyName)
{
base.OnPropertyChanged(propertyName);
if (propertyName == nameof(_Name))
{
RaisePropertyChanged(nameof(Name));
}
}
}
```
The code is fairly straightforward. `OnPropertyChanged` will be invoked whenever any property on the object changes and we just re-raise it for the related `Name` property. Note that, as an optimization, `OnPropertyChanged` will only be invoked if there are subscribers to the `PropertyChanged` event. So if you're testing this out and don't see the code get executed, make sure you added a subscriber first.
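To make that concrete, here's a small, hypothetical subscription example (it assumes you already have an open `Realm` instance named `realm` that contains `Person` objects):
```csharp
var person = realm.All<Person>().First();

// Without at least one PropertyChanged subscriber, OnPropertyChanged is never invoked.
person.PropertyChanged += (sender, args) =>
{
    if (args.PropertyName == nameof(Person.Name))
    {
        Console.WriteLine($"Name is now: {person.Name}");
    }
};
```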
The situation with queries is slightly harder to work around. The main issue is that because the property is private, you can't use it in a LINQ query—e.g., `realm.All<Person>().Where(p => p._Name == "Peter")` will result in a compile-time error. On the other hand, because Realm doesn't know that `Name` is tied to `_Name`, you can't use `p.Name == "Peter"` either. You can still use the string-based queries, though. Just remember to use the name that Realm knows about—i.e., the string argument of `MapTo` if you remapped the property name or the internal property (`_Name`) if you didn't:
```csharp
// _Name is mapped to 'Name' which is what we use here
var peters = realm.All<Person>().Filter("Name == 'Peter'");
```
## Using Unpersistable Data Types
Realm has a wide variety of supported data types—most primitive types in the Base Class Library (BCL), as well as advanced collections, such as sets and dictionaries. But sometimes, you'll come across a data type that Realm can't store yet, the most obvious example being enums. In such cases, you can build on top of the previous technique to expose enum properties in your models and have them be persisted as one of the supported data types:
```csharp
enum TransactionState
{
Pending,
Settled,
Error
}
class Transaction : RealmObject
{
private string _State { get; set; }
public TransactionState State
{
get => Enum.Parse<TransactionState>(_State);
set => _State = value.ToString();
}
}
```
Using this technique, you can persist many other types, as long as they can be converted losslessly to a persistable primitive type. In this case, we chose `string`, but we could have just as easily used integer. The string representation takes a bit more memory but is also more explicit and less error prone—e.g., if you rearrange the enum members, the data will still be consistent.
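For comparison, a rough sketch of the integer-backed variant could look like the following (this is an illustration, not code from the model above, and it carries the reordering caveat just mentioned):
```csharp
class Transaction : RealmObject
{
    // Persisted as an int: smaller, but sensitive to reordering or renumbering enum members.
    [MapTo("State")]
    private int _State { get; set; }

    public TransactionState State
    {
        get => (TransactionState)_State;
        set => _State = (int)value;
    }
}
```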
All that is pretty cool, but we can take it up a notch. By building on top of this idea, we can also devise a strategy for representing complex data types, such as `Vector3` in a Unity game or a `GeoCoordinate` in a location-aware app. To do so, we'll take advantage of embedded objects—a Realm concept that represents a complex data structure that is owned entirely by its parent. Embedded objects are a great fit for this use case because we want to have a strict 1:1 relationship and we want to make sure that deleting the parent also cleans up the embedded objects it owns. Let's see this in action:
```csharp
class Vector3Model : EmbeddedObject
{
// Casing of the properties here is unusual for C#,
// but consistent with the Unity casing.
private float x { get; set; }
private float y { get; set; }
private float z { get; set; }
public Vector3Model(Vector3 vector)
{
x = vector.x;
y = vector.y;
z = vector.z;
}
private Vector3Model()
{
}
public Vector3 ToVector3() => new Vector3(x, y, z);
}
class Powerup : RealmObject
{
[MapTo("Position")]
private Vector3Model _Position { get; set; }
public Vector3 Position
{
get => _Position?.ToVector3() ?? Vector3.zero;
set => _Position = new Vector3Model(value);
}
protected override void OnPropertyChanged(string propertyName)
{
base.OnPropertyChanged(propertyName);
if (propertyName == nameof(_Position))
{
RaisePropertyChanged(nameof(Position));
}
}
}
```
In this example, we've defined a `Vector3Model` that roughly mirrors Unity's `Vector3`. It has three float properties representing the three components of the vector. We've also utilized what we learned in the previous sections. It has a private constructor to force consumers to always construct it with a `Vector3` argument. We've also marked its properties as private as we don't want consumers directly interacting with them. We want users to always call `ToVector3` to obtain the Unity type. And for our `Powerup` model, we're doing exactly that in the publicly exposed `Position` property. Note that similarly to our `Person` example, we're making sure to raise a notification for `Position` whenever `_Position` changes.
And similarly to the example in the previous section, this approach makes querying via LINQ impossible and we have to fall back to the string query syntax if we want to find all powerups in a particular area:
```csharp
IQueryable<Powerup> PowerupsAroundLocation(Vector3 location, float radius)
{
// Note that this query returns a cube around the location, not a sphere.
return realm.All<Powerup>().Filter(
"Position.x > $0 AND Position.x < $1 AND Position.y > $2 AND Position.y < $3 AND Position.z > $4 AND Position.z < $5",
location.x - radius, location.x + radius,
location.y - radius, location.y + radius,
location.z - radius, location.z + radius);
}
```
## Conclusion
The list of techniques above is by no means meant to be exhaustive. Neither is it meant to imply that this is the only, or even "the right," way to use Realm. For most apps, simple POCOs with a list of properties are perfectly sufficient. But if you need to add extra validation or persist complex data types that you use a lot but Realm doesn't support natively, we hope that these examples will give you ideas for how to do that. And if you do come up with an ingenious way to use Realm, we definitely want to hear about it. Who knows? Perhaps we can feature it in our "Advanced^2 Data Modeling" article! | md | {
"tags": [
"Realm",
"C#"
],
"pageDescription": "Learn how to structure your Realm models to add validation, protect certain properties, and even persist complex objects coming from third-party packages.",
"contentType": "Article"
} | Advanced Data Modeling with Realm .NET | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/resumable-initial-sync | created | # Resumable Initial Sync in MongoDB 4.4
## Introduction
Hello, everyone. My name is Nuno and I have been working with MongoDB databases for almost eight years now as a sysadmin and as a Technical Services Engineer.
One of the most common challenges in MongoDB environments is when a replica set member requires a resync and the Initial Sync process is interrupted for some reason.
Interruptions like network partitions between the sync source and the node doing the initial sync cause the process to fail, forcing it to restart from scratch to ensure database consistency.
This was particularly problematic with large datasets, where an initial sync can take several days once the data reaches terabytes in size.
You may have already noticed that I am talking in the past tense as this is no longer a problem you need to face. I am very happy to share with you one of the latest enhancements introduced by MongoDB in v4.4: Resumable Initial Sync.
Resumable Initial Sync now enables nodes doing initial sync to survive events like transient network errors or a sync source restart when fetching data from the sync source node.
## Resumable Initial Sync
The time spent when recovering replica set members with Initial Sync procedures on large data environments has two common challenges:
- Falling off the oplog
- Transient network failures
MongoDB became more resilient to these types of failures in v3.4 by adding the ability to pull newly added oplog records during the data copy phase, and more recently in v4.4 by adding the ability to resume an interrupted initial sync where it left off.
## Behavioral Description
The initial sync process will restart the interrupted or failed command and keep retrying until the command succeeds, a non-resumable error occurs, or a period specified by the parameter initialSyncTransientErrorRetryPeriodSeconds passes (default: 24 hours). These restarts are constrained to use the same sync source, and are not tolerant to rollbacks on the sync source. That is, if the sync source experiences a rollback, the entire initial sync attempt will fail.
Resumable errors include retriable errors, i.e., those for which `ErrorCodes::isRetriableError` returns `true`, which includes all network errors as well as some other transient errors.
The errors `ErrorCodes::NamespaceNotFound`, `ErrorCodes::OperationFailed`, `ErrorCodes::CursorNotFound`, and `ErrorCodes::QueryPlanKilled` mean the collection may have been dropped, renamed, or modified in a way which caused the cursor to be killed. These errors will cause `ErrorCodes::InitialSyncFailure`, will be treated the same as transient retriable errors (except for not killing the cursor), will mark `ErrorCodes::isRetriableError` as `true`, and will allow the initial sync to resume where it left off.
On `ErrorCodes::NamespaceNotFound`, it will skip this entire collection and return success. Even if the collection has been renamed, simply resuming the query is sufficient since we are querying by `UUID`; the name change will be handled during `oplog` application.
All other errors are `non-resumable`.
## Configuring Custom Retry Period
The default retry period is 24 hours (86,400 seconds). A database administrator can choose to increase this period with the following command:
``` javascript
// Default is 86400
db.adminCommand({
setParameter: 1,
initialSyncTransientErrorRetryPeriodSeconds: 86400
})
```
>Note: The 24-hour value is the default period estimated for a database administrator to detect any ongoing failure and be able to act on restarting the sync source node.
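If you want to double-check what a node is currently configured with before changing it, you can read the parameter back with `getParameter` (run this against the node performing the initial sync):
``` javascript
db.adminCommand({
  getParameter: 1,
  initialSyncTransientErrorRetryPeriodSeconds: 1
})
```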
## Upgrade/Downgrade Requirements and Behaviors
The full resumable behavior will always be available between 4.4 nodes regardless of FCV - Feature Compatibility Version. Between 4.2 and 4.4 nodes, the initial sync will not be resumable during the query phase of the `CollectionCloner` (where we are actually reading data from collections), nor will it be resumable after collection rename, regardless of which node is 4.4. Resuming after transient failures in other commands will be possible when the syncing node is 4.4 and the sync source is 4.2.
## Diagnosis/Debuggability
During initial sync, the sync source node can become unavailable (either due to a network failure or process restart) and still, be able to resume and complete.
Here are examples of what messages to expect in the logs.
Initial Sync attempt successfully started:
``` none
{"t":{"$date":"2020-11-10T19:49:21.826+00:00"},"s":"I", "c":"INITSYNC", "id":21164, "ctx":"ReplCoordExtern-0","msg":"Starting initial sync attempt","attr":{"initialSyncAttempt":1,"initialSyncMaxAttempts":10}}
{"t":{"$date":"2020-11-10T19:49:22.905+00:00"},"s":"I", "c":"INITSYNC", "id":21173, "ctx":"ReplCoordExtern-1","msg":"Initial syncer oplog truncation finished","attr":{"durationMillis":0}}
```
Messages caused by network failures (or sync source node restart):
``` none
{"t":{"$date":"2020-11-10T19:50:04.822+00:00"},"s":"D1", "c":"INITSYNC", "id":21078, "ctx":"ReplCoordExtern-0","msg":"Transient error occurred during cloner stage","attr":{"cloner":"CollectionCloner","stage":"query","error":{"code":6,"codeName":"HostUnreachable","errmsg":"recv failed while exhausting cursor :: caused by :: Connection closed by peer"}}}
{"t":{"$date":"2020-11-10T19:50:04.823+00:00"},"s":"I", "c":"INITSYNC", "id":21075, "ctx":"ReplCoordExtern-0","msg":"Initial Sync retrying cloner stage due to error","attr":{"cloner":"CollectionCloner","stage":"query","error":{"code":6,"codeName":"HostUnreachable","errmsg":"recv failed while exhausting cursor :: caused by :: Connection closed by peer"}}}
```
Initial Sync is resumed after being interrupted:
``` none
{"t":{"$date":"2020-11-10T19:51:43.996+00:00"},"s":"D1", "c":"INITSYNC", "id":21139, "ctx":"ReplCoordExtern-0","msg":"Attempting to kill old remote cursor with id: {id}","attr":{"id":118250522569195472}}
{"t":{"$date":"2020-11-10T19:51:43.997+00:00"},"s":"D1", "c":"INITSYNC", "id":21133, "ctx":"ReplCoordExtern-0","msg":"Collection cloner will resume the last successful query"}
```
Data cloners resume:
``` none
{"t":{"$date":"2020-11-10T19:53:27.345+00:00"},"s":"D1", "c":"INITSYNC", "id":21072, "ctx":"ReplCoordExtern-0","msg":"Cloner finished running stage","attr":{"cloner":"CollectionCloner","stage":"query"}}
{"t":{"$date":"2020-11-10T19:53:27.347+00:00"},"s":"D1", "c":"INITSYNC", "id":21069, "ctx":"ReplCoordExtern-0","msg":"Cloner running stage","attr":{"cloner":"CollectionCloner","stage":"setupIndexBuildersForUnfinishedIndexes"}}
{"t":{"$date":"2020-11-10T19:53:27.349+00:00"},"s":"D1", "c":"INITSYNC", "id":21072, "ctx":"ReplCoordExtern-0","msg":"Cloner finished running stage","attr":{"cloner":"CollectionCloner","stage":"setupIndexBuildersForUnfinishedIndexes"}}
{"t":{"$date":"2020-11-10T19:53:27.350+00:00"},"s":"D1", "c":"INITSYNC", "id":21148, "ctx":"ReplCoordExtern-0","msg":"Collection clone finished","attr":{"namespace":"test.data"}}
{"t":{"$date":"2020-11-10T19:53:27.351+00:00"},"s":"D1", "c":"INITSYNC", "id":21057, "ctx":"ReplCoordExtern-0","msg":"Database clone finished","attr":{"dbName":"test","status":{"code":0,"codeName":"OK"}}}
```
Data cloning phase completes successfully. Oplog cloning phase starts:
``` none
{"t":{"$date":"2020-11-10T19:53:27.352+00:00"},"s":"I", "c":"INITSYNC", "id":21183, "ctx":"ReplCoordExtern-0","msg":"Finished cloning data. Beginning oplog replay","attr":{"databaseClonerFinishStatus":"OK"}}
{"t":{"$date":"2020-11-10T19:53:27.353+00:00"},"s":"I", "c":"INITSYNC", "id":21195, "ctx":"ReplCoordExtern-3","msg":"Writing to the oplog and applying operations until stopTimestamp before initial sync can complete","attr":{"stopTimestamp":{"":{"$timestamp":{"t":1605038002,"i":1}}},"beginFetchingTimestamp":{"":{"$timestamp":{"t":1605037760,"i":1}}},"beginApplyingTimestamp":{"":{"$timestamp":{"t":1605037760,"i":1}}}}}
{"t":{"$date":"2020-11-10T19:53:27.359+00:00"},"s":"I", "c":"INITSYNC", "id":21181, "ctx":"ReplCoordExtern-1","msg":"Finished fetching oplog during initial sync","attr":{"oplogFetcherFinishStatus":"CallbackCanceled: oplog fetcher shutting down","lastFetched":"{ ts: Timestamp(1605038002, 1), t: 296 }"}}
```
Initial Sync completes successfully and statistics are provided:
``` none
{"t":{"$date":"2020-11-10T19:53:27.360+00:00"},"s":"I", "c":"INITSYNC", "id":21191, "ctx":"ReplCoordExtern-1","msg":"Initial sync attempt finishing up"}
{"t":{"$date":"2020-11-10T19:53:27.360+00:00"},"s":"I", "c":"INITSYNC", "id":21192, "ctx":"ReplCoordExtern-1","msg":"Initial Sync Attempt Statistics","attr":{"statistics":{"failedInitialSyncAttempts":0,"maxFailedInitialSyncAttempts":10,"initialSyncStart":{"$date":"2020-11-10T19:49:21.826Z"},"initialSyncAttempts":],"appliedOps":25,"initialSyncOplogStart":{"$timestamp":{"t":1605037760,"i":1}},"initialSyncOplogEnd":{"$timestamp":{"t":1605038002,"i":1}},"totalTimeUnreachableMillis":203681,"databases":{"databasesCloned":3,"admin":{"collections":2,"clonedCollections":2,"start":{"$date":"2020-11-10T19:49:23.150Z"},"end":{"$date":"2020-11-10T19:49:23.452Z"},"elapsedMillis":302,"admin.system.keys":{"documentsToCopy":2,"documentsCopied":2,"indexes":1,"fetchedBatches":1,"start":{"$date":"2020-11-10T19:49:23.150Z"},"end":{"$date":"2020-11-10T19:49:23.291Z"},"elapsedMillis":141,"receivedBatches":1},"admin.system.version":{"documentsToCopy":1,"documentsCopied":1,"indexes":1,"fetchedBatches":1,"start":{"$date":"2020-11-10T19:49:23.291Z"},"end":{"$date":"2020-11-10T19:49:23.452Z"},"elapsedMillis":161,"receivedBatches":1}},"config":{"collections":3,"clonedCollections":3,"start":{"$date":"2020-11-10T19:49:23.452Z"},"end":{"$date":"2020-11-10T19:49:23.976Z"},"elapsedMillis":524,"config.system.indexBuilds":{"documentsToCopy":0,"documentsCopied":0,"indexes":1,"fetchedBatches":0,"start":{"$date":"2020-11-10T19:49:23.452Z"},"end":{"$date":"2020-11-10T19:49:23.591Z"},"elapsedMillis":139,"receivedBatches":0},"config.system.sessions":{"documentsToCopy":1,"documentsCopied":1,"indexes":2,"fetchedBatches":1,"start":{"$date":"2020-11-10T19:49:23.591Z"},"end":{"$date":"2020-11-10T19:49:23.801Z"},"elapsedMillis":210,"receivedBatches":1},"config.transactions":{"documentsToCopy":0,"documentsCopied":0,"indexes":1,"fetchedBatches":0,"start":{"$date":"2020-11-10T19:49:23.801Z"},"end":{"$date":"2020-11-10T19:49:23.976Z"},"elapsedMillis":175,"receivedBatches":0}},"test":{"collections":1,"clonedCollections":1,"start":{"$date":"2020-11-10T19:49:23.976Z"},"end":{"$date":"2020-11-10T19:53:27.350Z"},"elapsedMillis":243374,"test.data":{"documentsToCopy":29000000,"documentsCopied":29000000,"indexes":1,"fetchedBatches":246,"start":{"$date":"2020-11-10T19:49:23.976Z"},"end":{"$date":"2020-11-10T19:53:27.349Z"},"elapsedMillis":243373,"receivedBatches":246}}}}}}
{"t":{"$date":"2020-11-10T19:53:27.451+00:00"},"s":"I", "c":"INITSYNC", "id":21163, "ctx":"ReplCoordExtern-3","msg":"Initial sync done","attr":{"durationSeconds":245}}
```
The new InitialSync statistics from replSetGetStatus.initialSyncStatus can be useful to review the initial sync progress status.
Starting in MongoDB 4.2.1, replSetGetStatus.initialSyncStatus metrics are only available when run on a member during its initial sync (i.e., STARTUP2 state).
The metrics are:
- syncSourceUnreachableSince - The date and time at which the sync source became unreachable.
- currentOutageDurationMillis - The time in milliseconds that the sync source has been unavailable.
- totalTimeUnreachableMillis - The total time in milliseconds that the member has been unavailable during the current initial sync.
For each Initial Sync attempt from replSetGetStatus.initialSyncStatus.initialSyncAttempts:
- totalTimeUnreachableMillis - The total time in milliseconds that the member has been unavailable during the current initial sync.
- operationsRetried - Total number of all operation retry attempts.
- rollBackId - The sync source's rollback identifier at the start of the initial sync attempt.
An example of this output is:
``` none
replset:STARTUP2> db.adminCommand( { replSetGetStatus: 1 } ).initialSyncStatus
{
"failedInitialSyncAttempts" : 0,
"maxFailedInitialSyncAttempts" : 10,
"initialSyncStart" : ISODate("2020-11-06T20:16:21.649Z"),
"initialSyncAttempts" : ],
"appliedOps" : 0,
"initialSyncOplogStart" : Timestamp(1604693779, 1),
"syncSourceUnreachableSince" : ISODate("2020-11-06T20:16:32.950Z"),
"currentOutageDurationMillis" : NumberLong(56514),
"totalTimeUnreachableMillis" : NumberLong(56514),
"databases" : {
"databasesCloned" : 2,
"admin" : {
"collections" : 2,
"clonedCollections" : 2,
"start" : ISODate("2020-11-06T20:16:22.948Z"),
"end" : ISODate("2020-11-06T20:16:23.219Z"),
"elapsedMillis" : 271,
"admin.system.keys" : {
"documentsToCopy" : 2,
"documentsCopied" : 2,
"indexes" : 1,
"fetchedBatches" : 1,
"start" : ISODate("2020-11-06T20:16:22.948Z"),
"end" : ISODate("2020-11-06T20:16:23.085Z"),
"elapsedMillis" : 137,
"receivedBatches" : 1
},
"admin.system.version" : {
"documentsToCopy" : 1,
"documentsCopied" : 1,
"indexes" : 1,
"fetchedBatches" : 1,
"start" : ISODate("2020-11-06T20:16:23.085Z"),
"end" : ISODate("2020-11-06T20:16:23.219Z"),
"elapsedMillis" : 134,
"receivedBatches" : 1
}
},
"config" : {
"collections" : 3,
"clonedCollections" : 3,
"start" : ISODate("2020-11-06T20:16:23.219Z"),
"end" : ISODate("2020-11-06T20:16:23.666Z"),
"elapsedMillis" : 447,
"config.system.indexBuilds" : {
"documentsToCopy" : 0,
"documentsCopied" : 0,
"indexes" : 1,
"fetchedBatches" : 0,
"start" : ISODate("2020-11-06T20:16:23.219Z"),
"end" : ISODate("2020-11-06T20:16:23.348Z"),
"elapsedMillis" : 129,
"receivedBatches" : 0
},
"config.system.sessions" : {
"documentsToCopy" : 1,
"documentsCopied" : 1,
"indexes" : 2,
"fetchedBatches" : 1,
"start" : ISODate("2020-11-06T20:16:23.348Z"),
"end" : ISODate("2020-11-06T20:16:23.538Z"),
"elapsedMillis" : 190,
"receivedBatches" : 1
},
"config.transactions" : {
"documentsToCopy" : 0,
"documentsCopied" : 0,
"indexes" : 1,
"fetchedBatches" : 0,
"start" : ISODate("2020-11-06T20:16:23.538Z"),
"end" : ISODate("2020-11-06T20:16:23.666Z"),
"elapsedMillis" : 128,
"receivedBatches" : 0
}
},
"test" : {
"collections" : 1,
"clonedCollections" : 0,
"start" : ISODate("2020-11-06T20:16:23.666Z"),
"test.data" : {
"documentsToCopy" : 29000000,
"documentsCopied" : 714706,
"indexes" : 1,
"fetchedBatches" : 7,
"start" : ISODate("2020-11-06T20:16:23.666Z"),
"receivedBatches" : 7
}
}
}
}
replset:STARTUP2>
```
## Wrap Up
Upgrade your MongoDB database to the new v4.4 and take advantage of the new Resumable Initial Sync feature. Your deployments will now survive transient network errors or sync source restarts.
> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Discover the new Resumable Initial Sync feature in MongoDB v4.4",
"contentType": "Article"
} | Resumable Initial Sync in MongoDB 4.4 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-visual-studio-code-plugin | created | # How To Use The MongoDB Visual Studio Code Plugin
To make developers more productive when working with MongoDB, we built
MongoDB for Visual Studio
Code,
an extension that allows you to quickly connect to MongoDB and MongoDB
Atlas and work with your data to
build applications right inside your code editor. With MongoDB for
Visual Studio Code you can:
- Connect to a MongoDB or MongoDB
Atlas cluster, navigate
through your databases and collections, get a quick overview of your
schema, and see the documents in your collections;
- Create MongoDB Playgrounds, the fastest way to prototype CRUD
operations and MongoDB commands;
- Quickly access the MongoDB Shell, to launch the MongoDB Shell from
the command palette and quickly connect to the active cluster.
## Getting Started with MongoDB Atlas
### Create an Atlas Account
First things first, we will need to set up a MongoDB
Atlas account. And don't worry,
you can create an M0 MongoDB Atlas cluster for free. No credit card is
required to get started! To get up and running with a free M0 cluster,
follow the MongoDB Atlas Getting Started
guide, or follow the
steps below. First you will need to start at the MongoDB Atlas
registration page, and
fill in your account information. You can find more information about
how to create a MongoDB Atlas account in our
documentation
### Deploy a Free Tier Cluster
Once you log in, Atlas prompts you to build your first cluster. You need
to click "Build a Cluster." You will then select the Starter Cluster.
Starter clusters include the M0, M2, and M5 cluster tiers. These
low-cost clusters are suitable for users who are learning MongoDB or
developing small proof-of-concept applications.
Atlas supports M0 Free Tier clusters on Amazon Web Services
(AWS),
Google Cloud Platform
(GCP),
and Microsoft
Azure.
Atlas displays only the regions that support M0 Free Tier and M2/M5
Shared tier clusters.
Once you deploy your cluster, it can take up to 10 minutes for your
cluster to provision and become ready to use.
### Add Your Connection IP Address to Your IP Access List
You must add your IP address to the IP access
list
before you can connect to your cluster. This is important, as it
ensures that only you can access
the cluster in the cloud from your IP address. You also have the option
of allowing access from anywhere, though this means that anyone can have
network access to your cluster. This is a potential security risk if
your password and other credentials leak. From your Clusters view, click
the Connect button for your cluster.
### Configure your IP access list entry
Click Add Your Current IP Address.
### Create a Database User for Your Cluster
For security purposes, you must create a database user to access your
cluster.
Enter the new username and password. You'll then have the option of
selecting user privileges, including admin, read/write access, or
read-only access. From your Clusters view, click the Connect button for
your cluster.
In the **Create a MongoDB User** step of the dialog, enter a Username
and a Password for your database user. You'll use this username and
password combination to access data on your cluster.
>
>
>For information on configuring additional database users on your
>cluster, see Configure Database
>Users.
>
>
## Install MongoDB for Visual Studio Code
Next, we are going to connect to our new MongoDB Atlas database cluster
using the Visual Studio Code MongoDB
Plugin.
To install MongoDB for Visual Studio Code, simply search for it in the
Extensions list directly inside Visual Studio Code or head to the
"MongoDB for Visual Studio Code"
homepage
in the Visual Studio Code Marketplace.
## Connect Your MongoDB Data
MongoDB for Visual Studio Code can connect to MongoDB standalone
instances or clusters on MongoDB Atlas or self-hosted. Once connected,
you can **browse databases**, **collections**, and **read-only views**
directly from the tree view.
For each collection, you will see a list of sample documents and a quick
overview of the schema. This is very useful as a reference while writing
queries and aggregations.
Once installed there will be a new MongoDB tab that we can use to add
our connections by clicking "Add Connection". If you've used MongoDB
Compass before, then the form
should be familiar. You can enter your connection details in the form,
or use a connection string. I went with the latter as my database is
hosted on MongoDB Atlas.
To obtain your connection string, navigate to your "Clusters" page and
select "Connect".
Choose the "Connect using MongoDB Compass" option and copy the
connection string. Make sure to add your username and password in their
respective places before entering the string in Visual Studio Code.
Then paste this string into Visual Studio Code.
Once you've connected successfully, you should see an alert. At this
point, you can explore the data in your cluster, as well as your
schemas.
## Navigate Your Data
Once you connect to your deployment using MongoDB for Visual Studio
Code, use the left navigation to:
- Explore your databases, collections, read-only views, and documents.
- Create new databases and collections.
- Drop databases and collections.
## Databases and Collections
When you expand an active connection, MongoDB for Visual Studio Code
shows the databases in that deployment. Click a database to view the
collections it contains.
### View Collection Documents and Schema
When you expand a collection, MongoDB for Visual Studio Code displays
that collection's document count next to the Documents label in the
navigation panel.
When you expand a collection's documents, MongoDB for Visual Studio Code
lists the `_id` of each document in the collection. Click an `_id` value
to open that document in Visual Studio Code and view its contents.
Alternatively, right-click a collection and click View Documents to view
all the collection's documents in an array.
Opening collection documents provides a **read-only** view of your data.
To modify your data using MongoDB for Visual Studio Code, use a
JavaScript
Playground
or launch a shell by right-clicking your active deployment in the
MongoDB view in the Activity Bar.
#### Schema
Your collection's schema defines the fields and data types within the
collection. Due to MongoDB's flexible schema model, different documents
in a collection may contain different fields, and data types may vary
within a field. MongoDB can enforce schema
validation to
ensure your collection documents have the same shape.
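For example, a minimal (and purely hypothetical) JSON Schema validator
could be created from the MongoDB Shell like this; the collection name
and rules are only illustrative:
``` javascript
db.createCollection("people", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name"],
      properties: {
        // every document must include a string "name" field
        name: { bsonType: "string" }
      }
    }
  }
});
```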
When you expand a collection's schema, MongoDB for Visual Studio Code
lists the fields which appear in that collection's documents. If a field
exists in all documents and its type is consistent throughout the
collection, MongoDB for Visual Studio Code displays an icon indicating
that field's data type.
### Create a New Database
When you create a new database, you must populate it with an initial
collection. To create a new database:
1. Hover over the connection for the deployment where you want your
database to exist.
2. Click the Plus icon that appears.
3. In the prompt, enter a name for your new database.
4. Press the enter key.
5. Enter a name for the first collection in your new database.
6. Press the enter key.
### Create a New Collection
To create a new collection:
1. Hover over the database where you want your collection to exist.
2. Click the Plus icon that appears.
3. In the prompt, enter a name for your new collection.
4. Press the enter key to confirm your new collection.
## Explore Your Data with Playgrounds
MongoDB Playgrounds are the most convenient way to prototype and execute
CRUD operations and other MongoDB commands directly inside Visual Studio
Code. Use JavaScript environments to interact with your data. Prototype
queries, run aggregations, and more.
- Prototype your queries, aggregations, and MongoDB commands with
MongoDB syntax highlighting and intelligent autocomplete for MongoDB
shell API, MongoDB operators, and for database, collection, and
field names.
- Run your playgrounds and see the results instantly. Click the play
button in the tab bar to see the output.
- Save your playgrounds in your workspace and use them to document how
your application interacts with MongoDB
- Build aggregations quickly with helpful and well-commented stage
snippets
### Open the Visual Studio Code Command Palette.
To open a playground and begin interacting with your data, open Visual
Studio Code and press one of the following key combinations:
- Control + Shift + P on Windows or Linux.
- Command + Shift + P on macOS.
The Command Palette provides quick access to commands and keyboard
shortcuts.
### Find and run the "Create MongoDB Playground" command.
Use the Command Palette search bar to search for commands. All commands
related to MongoDB for Visual Studio Code are prefaced with MongoDB:.
When you run the MongoDB: Create MongoDB Playground command, MongoDB for
Visual Studio Code opens a playground pre-configured with a few
commands.
## Run a Playground
To run a playground, click the Play Button in Visual Studio Code's top
navigation bar.
You can use a MongoDB Playground to perform CRUD (create, read, update,
and delete) operations on documents in a collection on a connected
deployment. Use the
MongoDB CRUD Operators and
shell methods to
interact with your databases in MongoDB Playgrounds.
### Perform CRUD Operations
Let's run through the default MongoDB Playground template that's created
when you initialize a new Playground. In the default template, it
executes the following:
1. `use('mongodbVSCodePlaygroundDB')` switches to the
`mongodbVSCodePlaygroundDB` database.
2. db.sales.drop()
drops the sales collection, so the playground will start from a
clean slate.
3. Inserts eight documents into the mongodbVSCodePlaygroundDB.sales
collection.
1. Since the collection was dropped, the insert operations will
create the collection and insert the data.
2. For a detailed description of this method's parameters, see
insertOne()
in the MongoDB Manual.
4. Runs a query to read all documents sold on April 4th, 2014.
1. For a detailed description of this method's parameters, see
find()
in the MongoDB Manual.
``` javascript
// MongoDB Playground
// To disable this template go to Settings \| MongoDB \| Use Default Template For Playground.
// Make sure you are connected to enable completions and to be able to run a playground.
// Use Ctrl+Space inside a snippet or a string literal to trigger completions.
// Select the database to use.
use('mongodbVSCodePlaygroundDB');
// The drop() command destroys all data from a collection.
// Make sure you run it against proper database and collection.
db.sales.drop();
// Insert a few documents into the sales collection.
db.sales.insertMany([
{ '_id' : 1, 'item' : 'abc', 'price' : 10, 'quantity' : 2, 'date' : new Date('2014-03-01T08:00:00Z') },
{ '_id' : 2, 'item' : 'jkl', 'price' : 20, 'quantity' : 1, 'date' : new Date('2014-03-01T09:00:00Z') },
{ '_id' : 3, 'item' : 'xyz', 'price' : 5, 'quantity' : 10, 'date' : new Date('2014-03-15T09:00:00Z') },
{ '_id' : 4, 'item' : 'xyz', 'price' : 5, 'quantity' : 20, 'date' : new Date('2014-04-04T11:21:39.736Z') },
{ '_id' : 5, 'item' : 'abc', 'price' : 10, 'quantity' : 10, 'date' : new Date('2014-04-04T21:23:13.331Z') },
{ '_id' : 6, 'item' : 'def', 'price' : 7.5, 'quantity': 5, 'date' : new Date('2015-06-04T05:08:13Z') },
{ '_id' : 7, 'item' : 'def', 'price' : 7.5, 'quantity': 10, 'date' : new Date('2015-09-10T08:43:00Z') },
{ '_id' : 8, 'item' : 'abc', 'price' : 10, 'quantity' : 5, 'date' : new Date('2016-02-06T20:20:13Z') },
]);
// Run a find command to view items sold on April 4th, 2014.
db.sales.find({
date: {
$gte: new Date('2014-04-04'),
$lt: new Date('2014-04-05')
}
});
```
When you press the Play Button, this operation outputs the following
document to the Output view in Visual Studio Code:
``` javascript
{
acknowledged: 1,
insertedIds: {
'0': 2,
'1': 3,
'2': 4,
'3': 5,
'4': 6,
'5': 7,
'6': 8,
'7': 9
}
}
```
You can learn more about the basics of MQL and CRUD operations in the
post, Getting Started with Atlas and the MongoDB Query Language
(MQL).
### Run Aggregation Pipelines
Let's run through the last statement of the default MongoDB Playground
template. You can run aggregation
pipelines on your
collections in MongoDB for Visual Studio Code. Aggregation pipelines
consist of
stages
that process your data and return computed results.
Common uses for aggregation include:
- Grouping data by a given expression.
- Calculating results based on multiple fields and storing those
results in a new field.
- Filtering data to return a subset that matches given criteria.
- Sorting data.
When you run an aggregation, MongoDB for Visual Studio Code conveniently
outputs the results directly within Visual Studio Code.
This pipeline performs an aggregation in two stages:
1. The
$match
stage filters the data such that only sales from the year 2014 are
passed to the next stage.
2. The
$group
stage groups the data by item. The stage adds a new field to the
   output called totalSaleAmount, which is the sum of each
   item's price multiplied by its quantity.
``` javascript
// Run an aggregation to view total sales for each product in 2014.
const aggregation = [
{ $match: {
date: {
$gte: new Date('2014-01-01'),
$lt: new Date('2015-01-01')
}
} },
{ $group: {
_id : '$item', totalSaleAmount: {
$sum: { $multiply: [ '$price', '$quantity' ] }
}
} },
];
db.sales.aggregate(aggregation);
```
When you press the Play Button, this operation outputs the following
documents to the Output view in Visual Studio Code:
``` javascript
[
{
_id: 'abc',
totalSaleAmount: 120
},
{
_id: 'jkl',
totalSaleAmount: 20
},
{
_id: 'xyz',
totalSaleAmount: 150
}
]
```
See Run Aggregation
Pipelines
for more information on running the aggregation pipeline from the
MongoDB Playground.
## Terraform snippet for MongoDB Atlas
If you use Terraform to manage your infrastructure, MongoDB for Visual
Studio Code helps you get started with the MongoDB Atlas
Provider.
We aren't going to cover this feature today, but if you want to learn
more, check out Create an Atlas Cluster from a Template using
Terraform,
from the MongoDB manual.
## Summary
There you have it! MongoDB for Visual Studio Code Extension allows you
to connect to your MongoDB instance and enables you to interact in a way
that fits into your native workflow and development tools. You can
navigate and browse your MongoDB databases and collections, and
prototype queries and aggregations for use in your applications.
If you are a Visual Studio Code user, getting started with MongoDB for
Visual Studio Code is easy:
1. Install the extension from the
marketplace;
2. Get a free Atlas cluster
if you don't have a MongoDB server already;
3. Connect to it and start building a playground.
You can find more information about MongoDB for Visual Studio Code and
all its features in the
documentation.
>
>
>If you have any questions on MongoDB for Visual Studio Code, you can
>join in the discussion at the MongoDB Community
>Forums, and you can
>share feature requests using the MongoDB Feedback
>Engine.
>
>
>
>
>When you're ready to try out the MongoDB Visual Studio Code plugin for
>yourself, check out MongoDB Atlas, MongoDB's
>fully managed database-as-a-service. Atlas is the easiest way to get
>started with MongoDB and has a generous, forever-free tier.
>
>
## Related Links
Check out the following resources for more information:
- Ready to install MongoDB for Visual Studio
Code?
- MongoDB for Visual Studio Code
Documentation
- Getting Started with Atlas and the MongoDB Query Language
(MQL)
- Want to learn more about MongoDB? Be sure to take a class on the
MongoDB University
- Have a question, feedback on this post, or stuck on something be
sure to check out and/or open a new post on the MongoDB Community
Forums
- Want to check out more cool articles about MongoDB? Be sure to
check out more posts like this on the MongoDB Developer
Hub
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to connect to MongoDB from VS Code! Navigate your databases, use playgrounds to prototype queries and aggregations, and more!",
"contentType": "Tutorial"
} | How To Use The MongoDB Visual Studio Code Plugin | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/joining-collections-mongodb-dotnet-core-aggregation-pipeline | created | # Joining Collections in MongoDB with .NET Core and an Aggregation Pipeline
If you've been keeping up with my .NET Core series on MongoDB, you'll remember that we explored creating a simple console application as well as building a RESTful API with basic CRUD support. In both examples, we used basic filters when interacting with MongoDB from our applications.
But what if we need to do something a bit more complex, like join data from two different MongoDB collections?
In this tutorial, we're going to take a look at aggregation pipelines and some of the ways that you can work with them in a .NET Core application.
## The Requirements
Before we get started, there are a few requirements that must be met to be successful:
- Have a MongoDB Atlas cluster deployed and configured.
- Install .NET Core 6+.
- Install the MongoDB sample data sets.
We will be using .NET Core 6.0 for this particular tutorial. Older or newer versions might work, but there's a chance that some of the commands may be a little different. The expectation is that you already have a MongoDB Atlas cluster ready to go. This could be a free M0 cluster or better, but you'll need it properly configured with user roles and network access rules. You'll also need the MongoDB sample data sets to be attached.
If you need help with this, check out a previous tutorial I wrote on the topic.
## A Closer Look at the Data Model and Expected Outcomes
Because we're expecting to accomplish some fairly complicated things in this tutorial, it's probably a good idea to break down the data going into it and the data that we're expecting to come out of it.
In this tutorial, we're going to be using the **sample_mflix** database and the **movies** collection. We're also going to be using a custom **playlist** collection that we're going to add to the **sample_mflix** database.
To give you an idea of the data that we're going to be working with, take the following document from the **movies** collection:
```json
{
"_id": ObjectId("573a1390f29313caabcd4135"),
"title": "Blacksmith Scene",
"plot": "Three men hammer on an anvil and pass a bottle of beer around.",
"year": 1893,
// ...
}
```
Alright, so I didn't include the entire document because it is actually quite huge. Knowing every single field is not going to help or hurt the example as long as we're familiar with the `_id` field.
Next, let's look at a document in the proposed **playlist** collection:
```json
{
"_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
"username": "nraboy",
"items":
"573a1390f29313caabcd42e8",
"573a1391f29313caabcd8a82"
]
}
```
Knowing the fields in the above document is important as they'll be used throughout our aggregation pipelines.
One of the most important things to take note of between the two collections is the fact that the `_id` fields are `ObjectId` and the values in the `items` field are strings. More on this as we progress.
Now that we know our input documents, let's take a look at what we're expecting as a result of our queries. If I were to query for a playlist, I don't want the id values for each of the movies. I want them fully expanded, like the following:
```json
{
"_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
"username": "nraboy",
"items": [
{
"_id": ObjectId("573a1390f29313caabcd4135"),
"title": "Blacksmith Scene",
"plot": "Three men hammer on an anvil and pass a bottle of beer around.",
"year": 1893,
// ...
},
{
"_id": ObjectId("573a1391f29313caabcd8a82"),
"title": "The Terminator",
"plot": "A movie about some killer robots.",
"year": 1984,
// ...
}
]
}
```
This is where aggregation pipelines and joins come in, because we can't just do a normal filter on a `Find` operation, unless we wanted to perform multiple `Find` operations.
## Creating a New .NET Core Console Application with MongoDB Support
To keep things simple, we're going to be building a console application that uses our aggregation pipeline. You can take the logic and apply it towards a web application if that is what you're interested in.
From the CLI, execute the following:
```bash
dotnet new console -o MongoExample
cd MongoExample
dotnet add package MongoDB.Driver
```
The above commands will create a new .NET Core project and install the latest MongoDB driver for C#. Everything we do next will happen in the project's "Program.cs" file.
Open the "Program.cs" file and add the following C# code:
```csharp
using MongoDB.Driver;
using MongoDB.Bson;
MongoClient client = new MongoClient("ATLAS_URI_HERE");
IMongoCollection<BsonDocument> playlistCollection = client.GetDatabase("sample_mflix").GetCollection<BsonDocument>("playlist");
List<BsonDocument> results = playlistCollection.Find(new BsonDocument()).ToList();
foreach(BsonDocument result in results) {
Console.WriteLine(result["username"] + ": " + string.Join(", ", result["items"]));
}
```
The above code will connect to a MongoDB cluster, get a reference to our **playlist** collection, and dump all the documents from that collection into the console. Finding and returning all the documents in the collection is not a requirement for the aggregation pipeline, but it might help with the learning process.
The `ATLAS_URI_HERE` string can be obtained from the MongoDB Atlas Dashboard after clicking "Connect" for a particular cluster.
## Building an Aggregation Pipeline with .NET Core Using Raw BsonDocument Stages
We're going to explore a few different options towards creating an aggregation pipeline query with .NET Core. The first will use raw `BsonDocument` type data.
We know our input data and we know our expected outcome, so we need to come up with a few pipeline stages to bring it together.
Let's start with the first stage:
```csharp
BsonDocument pipelineStage1 = new BsonDocument{
{
"$match", new BsonDocument{
{ "username", "nraboy" }
}
}
};
```
The first stage of this pipeline uses the `$match` operator to find only documents where the `username` is "nraboy." This could be more than one because we're not treating `username` as a unique field.
With the filter in place, let's move to the next stage:
```csharp
BsonDocument pipelineStage2 = new BsonDocument{
{
"$project", new BsonDocument{
{ "_id", 1 },
{ "username", 1 },
{
"items", new BsonDocument{
{
"$map", new BsonDocument{
{ "input", "$items" },
{ "as", "item" },
{
"in", new BsonDocument{
{
"$convert", new BsonDocument{
{ "input", "$$item" },
{ "to", "objectId" }
}
}
}
}
}
}
}
}
}
}
};
```
Remember how the document `_id` fields were ObjectId and the `items` array were strings? For the join to be successful, they need to be of the same type. The second pipeline stage is more of a manipulation stage with the `$project` operator. We're defining the fields we want passed to the next stage, but we're also modifying some of the fields, in particular the `items` field. Using the `$map` operator we can take the string values and convert them to ObjectId values.
If your `items` array contained ObjectId instead of string values, this particular stage wouldn't be necessary. It might also not be necessary if you're using POCO classes instead of `BsonDocument` types. That is a lesson for another day though.
With our item values mapped correctly, we can push them to the next stage in the pipeline:
```csharp
BsonDocument pipelineStage3 = new BsonDocument{
{
"$lookup", new BsonDocument{
{ "from", "movies" },
{ "localField", "items" },
{ "foreignField", "_id" },
{ "as", "movies" }
}
}
};
```
The above pipeline stage is where the JOIN operation actually happens. We're looking into the **movies** collection and we're using the ObjectId fields from our **playlist** collection to join them to the `_id` field of our **movies** collection. The output from this JOIN will be stored in a new `movies` field.
The `$lookup` is like saying the following:
```
SELECT movies
FROM playlist
JOIN movies ON playlist.items = movies._id
```
Of course there is more to it than the above SQL statement because `items` is an array, something you can't natively work with in most SQL databases.
So as of right now, we have our joined data. However, it's not quite as elegant as what we wanted in our final outcome. This is because the `$lookup` output is an array which will leave us with a multidimensional array. Remember, `items` was an array and each `movies` is an array. Not the most pleasant thing to work with, so we probably want to further manipulate the data in another stage.
```csharp
BsonDocument pipelineStage4 = new BsonDocument{
{ "$unwind", "$movies" }
};
```
The above stage will take our new `movies` field and flatten it out with the `$unwind` operator. The `$unwind` operator basically takes each element of an array and creates a new result item to sit adjacent to the rest of the fields of the parent document. So if you have, for example, one document that has an array with two elements, after doing an `$unwind`, you'll have two documents.
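To make that concrete, here is a hypothetical, trimmed-down example. A single document like this entering the `$unwind` stage:
```json
{
    "_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
    "username": "nraboy",
    "movies": [
        { "title": "Blacksmith Scene" },
        { "title": "The Terminator" }
    ]
}
```
comes out as two documents, one per array element:
```json
{
    "_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
    "username": "nraboy",
    "movies": { "title": "Blacksmith Scene" }
}

{
    "_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
    "username": "nraboy",
    "movies": { "title": "The Terminator" }
}
```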
Our end goal, though, is to end up with a single dimension array of movies, so we can fix this with another pipeline stage.
```csharp
BsonDocument pipelineStage5 = new BsonDocument{
{
"$group", new BsonDocument{
{ "_id", "$_id" },
{
"username", new BsonDocument{
{ "$first", "$username" }
}
},
{
"movies", new BsonDocument{
{ "$addToSet", "$movies" }
}
}
}
}
};
```
The above stage will group our documents and add our unwound movies to a new `movies` field, one that isn't multidimensional.
So let's bring the pipeline stages together so they can be run in our application.
```csharp
BsonDocument[] pipeline = new BsonDocument[] {
pipelineStage1,
pipelineStage2,
pipelineStage3,
pipelineStage4,
pipelineStage5
};
List<BsonDocument> pResults = playlistCollection.Aggregate<BsonDocument>(pipeline).ToList();
foreach(BsonDocument pResult in pResults) {
Console.WriteLine(pResult);
}
```
Executing the code thus far should give us our expected outcome in terms of data and format.
Now, you might be thinking that the above five-stage pipeline was a lot to handle for a JOIN operation. There are a few things that you should be aware of:
- Our id values were not of the same type, which resulted in another stage.
- Our values to join were in an array, not a one-to-one relationship.
What I'm trying to say is that the length and complexity of your pipeline is going to depend on how you've chosen to model your data.
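For example, if the **playlist** documents had stored ObjectId values from the start, something like the hypothetical document below, the `$project` and `$map` conversion stage wouldn't be needed at all:
```json
{
    "_id": ObjectId("61d8bb5e2d5fe0c2b8a1007d"),
    "username": "nraboy",
    "items": [
        ObjectId("573a1390f29313caabcd42e8"),
        ObjectId("573a1391f29313caabcd8a82")
    ]
}
```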
## Using a Fluent API to Build Aggregation Pipeline Stages
Let's look at another way to accomplish our desired outcome. We can make use of the Fluent API that MongoDB offers instead of creating an array of pipeline stages.
Take a look at the following:
```csharp
var pResults = playlistCollection.Aggregate()
.Match(new BsonDocument{{ "username", "nraboy" }})
.Project(new BsonDocument{
{ "_id", 1 },
{ "username", 1 },
{
"items", new BsonDocument{
{
"$map", new BsonDocument{
{ "input", "$items" },
{ "as", "item" },
{
"in", new BsonDocument{
{
"$convert", new BsonDocument{
{ "input", "$$item" },
{ "to", "objectId" }
}
}
}
}
}
}
}
}
})
.Lookup("movies", "items", "_id", "movies")
.Unwind("movies")
.Group(new BsonDocument{
{ "_id", "$_id" },
{
"username", new BsonDocument{
{ "$first", "$username" }
}
},
{
"movies", new BsonDocument{
{ "$addToSet", "$movies" }
}
}
})
.ToList();
foreach(var pResult in pResults) {
Console.WriteLine(pResult);
}
```
In the above example, we used methods such as `Match`, `Project`, `Lookup`, `Unwind`, and `Group` to get our final result. For some of these methods, we didn't need to use a `BsonDocument` like we saw in the previous example.
## Conclusion
You just saw two ways to do a MongoDB aggregation pipeline for joining collections within a .NET Core application. Like previously mentioned, there are a few ways to accomplish what we want, all of which are going to be dependent on how you've chosen to model the data within your collections.
There is a third way, which we'll explore in another tutorial, and this uses LINQ to get the job done.
If you have questions about anything you saw in this tutorial, drop by the MongoDB Community Forums and get involved! | md | {
"tags": [
"C#",
"MongoDB"
],
"pageDescription": "Learn how to use the MongoDB aggregation pipeline to create stages that will join documents and collections in a .NET Core application.",
"contentType": "Tutorial"
} | Joining Collections in MongoDB with .NET Core and an Aggregation Pipeline | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-asyncopen-autoopen | created | # Open Synced Realms in SwiftUI using @Auto/AsyncOpen
## Introduction
We’re very happy to announce that v10.12.0 of the Realm Cocoa SDK includes our two new property wrappers `@AutoOpen` and `@AsyncOpen` for asynchronous opening of a realm for Realm Sync users. This new feature, which is a response to your community feedback, aligns with our goal to make our developer experience better and more effortless, integrating it with SwiftUI, and removing boilerplate code.
Up until now, the standard approach for opening a realm for any sync user is to call `Realm.asyncOpen()` using a user’s sync configuration, then publish the opened Realm to the view:
``` swift
enum AsyncOpenState {
case waiting
case inProgress(Progress)
case open(Realm)
case error(Error)
}
struct AsyncView: View {
@State var asyncOpenState: AsyncOpenState = .waiting
var body: some View {
switch asyncOpenState {
case .waiting:
ProgressView()
.onAppear(perform: initAsyncOpen)
case .inProgress(let progress):
ProgressView(progress)
case .open(let realm):
ContactsListView()
.environment(\.realm, realm)
case .error(let error):
ErrorView(error: error)
}
}
func initAsyncOpen() {
let app = App(id: "appId")
guard let currentUser = app.currentUser else { return }
let realmConfig = currentUser.configuration(partitionValue: "myPartition")
Realm.asyncOpen(configuration: realmConfig,
callbackQueue: DispatchQueue.main) { result in
switch result {
case .success(let realm):
asyncOpenState = .open(realm)
case .failure(let error):
asyncOpenState = .error(error)
}
}.addProgressNotification { syncProgress in
let progress = Progress(totalUnitCount: Int64(syncProgress.transferredBytes))
progress.completedUnitCount = Int64(syncProgress.transferredBytes)
asyncOpenState = .inProgress(progress)
}
}
}
```
With `@AsyncOpen` and `@AutoOpen`, we are reducing development time and boilerplate, making it easier, faster, and cleaner to implement Realm.asyncOpen(). `@AsyncOpen` and `@AutoOpen` give the user the possibility to cover two common use cases in synced apps.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building a sample app: Deploy Sample for Free!
## Prerequisites
- Realm Cocoa 10.12.0+
## @AsyncOpen
With the `@AsyncOpen` property wrapper, we have the same behavior as using `Realm.asyncOpen()`, but with a much more natural API for SwiftUI developers. Using this property wrapper prevents your app from trying to fetch the Realm file if there is no network connection, and it will only return a realm when it's synced with MongoDB Realm data. If there is no internet connection, then `@AsyncOpen` will throw an error.
Let’s take, for example, a game app which the user can play on both an iPhone and an iPad. If the data isn’t kept up to date, the player loses track of their current progress. In this case, it’s very important to have our data updated with the latest changes. This is the perfect use case for `@AsyncOpen`.
This property wrapper's API gives you the flexibility to optionally specify a MongoDB Realm AppId. If no AppId is provided, and you’ve only used one ID within your App, then that will be used. You can also provide a timeout for your asynchronous operation:
```swift
@AsyncOpen(appId: "appId",
partitionValue: "myPartition",
configuration: Realm.Configuration(objectTypes: [SwiftPerson.self]),
timeout: 20000)
var asyncOpen
```
Adding it to your SwiftUI App is as simple as declaring it in your view and have your view react to the state of the sync operation:
- Display a progress view while downloading or waiting for a user to be logged in.
- Display an error view if there is a failure during sync.
- Navigate to a new view after our realm is opened
Once the synced realm has been successfully opened, you can pass it to another view (embedded or via a navigation link):
```swift
struct AsyncOpenView: View {
@AsyncOpen(appId: "appId",
partitionValue: "myPartition",
configuration: Realm.Configuration(objectTypes: [SwiftPerson.self]),
timeout: 20000)
var asyncOpen
var body: some View {
VStack {
switch asyncOpen {
case .connecting:
ProgressView()
case .waitingForUser:
ProgressView("Waiting for user to logged in...")
case .open(let realm):
ListView()
.environment(\.realm, realm)
case .error(let error):
ErrorView(error: error)
case .progress(let progress):
ProgressView(progress)
}
}
}
}
```
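As a rough sketch of what the receiving view might look like (the `ListView` name matches the snippet above, and the `name` property on `SwiftPerson` is assumed for illustration), the injected realm is picked up automatically by property wrappers such as `@ObservedResults`:
```swift
import RealmSwift
import SwiftUI

struct ListView: View {
    // Reads from the realm injected via .environment(\.realm, realm)
    @ObservedResults(SwiftPerson.self) var people

    var body: some View {
        List(people, id: \.self) { person in
            // "name" is an assumed property on SwiftPerson
            Text(person.name)
        }
    }
}
```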
If you have been using Realm.asyncOpen() in your current SwiftUI App and want to maintain the same behavior, you may want to migrate to `@AsyncOpen`. It will simplify your code and make it more intuitive.
## @AutoOpen
`@AutoOpen` should be used when you want to work with the synced realm file even when there is no internet connection.
Let’s take, for example, Apple’s Notes app, which tries to sync your data if there is internet access and shows you all the notes synced from other devices. If there is no internet connection, then Notes shows you your local (possibly stale) data. This use case is perfect for the `@AutoOpen` property wrapper. When the user recovers a network connection, Realm will sync changes in the background, without the need to add any extra code.
The syntax for using `@AutoOpen` is the same as for `@AsyncOpen`:
```swift
struct AutoOpenView: View {
@AutoOpen(appId: "appId",
partitionValue: "myPartition",
              configuration: Realm.Configuration(objectTypes: [SwiftPerson.self]),
timeout: 10000)
var autoOpen
var body: some View {
VStack {
switch autoOpen {
case .connecting:
ProgressView()
case .waitingForUser:
ProgressView("Waiting for user to logged in...")
case .open(let realm):
ContactView()
.environment(\.realm, realm)
case .error(let error):
ErrorView(error: error)
case .progress(let progress):
ProgressView(progress)
}
}
}
}
```
## One Last Thing…
We added a new key to our set of Environment Values: a “partition value” environment key which is used by our new property wrappers `@AsyncOpen` and `@AutoOpen` to dynamically inject a partition value when it's derived and not static. For example, in the case of using the user id as a partition value, you can pass this environment value to the view where `@AsyncOpen` or `@AutoOpen` are used:
```swift
AsyncView()
.environment(\.partitionValue, user.id!)
```
## Conclusion
With these property wrappers, we continue to better integrate Realm into your SwiftUI apps. With the release of this feature, and more to come, we want to make it easier for you to incorporate our SDK and sync functionality into your apps, no matter whether you’re using UIKit or SwiftUI.
We are excited for our users to test these new features. Please share any feedback or ideas for new features in our community forum.
Documentation on both of these property wrappers can be found in our docs.
| md | {
"tags": [
"Realm"
],
"pageDescription": "Learn how to use the new Realm @AutoOpen and @AsyncOpen property wrappers to open synced realms from your SwiftUI apps.",
"contentType": "News & Announcements"
} | Open Synced Realms in SwiftUI using @Auto/AsyncOpen | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/nextjs-building-modern-applications | created | # Building Modern Applications with Next.js and MongoDB
>
>
>This article is out of date. Check out the official Next.js with MongoDB tutorial for the latest guide on integrating MongoDB with Next.js.
>
>
Developers have more choices than ever before when it comes to choosing the technology stack for their next application. Developer productivity is one of the most important factors in choosing a modern stack and I believe that Next.js coupled with MongoDB can get you up and running on the next great application in no time at all. Let's find out how and why!
If you would like to follow along with this tutorial, you can get the code from the GitHub repo. Also, be sure to sign up for a free MongoDB Atlas account to make it easier to connect your MongoDB database.
## What is Next.js
Next.js is a React based framework for building modern web applications. The framework comes with a lot of powerful features such as server side rendering, automatic code splitting, static exporting and much more that make it easy to build scalable and production ready apps. Its opinionated nature means that the framework is focused on developer productivity, but still flexible enough to give developers plenty of choice when it comes to handling the big architectural decisions.
For this tutorial, I'll assume that you are already familiar with React, and if so, you'll be up and running with Next.js in no time at all. If you are not familiar with React, I would suggest looking at resources such as the official React docs or taking a free React starter course to get familiar with the framework first.
## What We're Building: Macro Compliance Tracker
The app we're building today is called the Macro Compliance Tracker. If you're like me, you probably had a New Years Resolution of *"I'm going to get in better shape!"* This year, I am taking that resolution seriously, and have gotten a personal trainer and nutritionist. One interesting thing that I learned is that while the old adage of calories in needing to be less than calories out to lose weight is generally true, your macronutrients play just as important a role in weight loss.
There are many great apps that help you track your calories and macros. Unfortunately, most apps do not allow you to track a range. Another interesting thing I learned in my fitness journey this year is that, for many beginners, hitting exact daily macro goals is a challenge, and many folks give up when they fail to hit those targets consistently. For that reason, my coach suggests a target range for calories and macros rather than a hard set number.
So that's what we're building today. We'll use Next.js to build our entire application and MongoDB as our database to store our progress. Let's get into it!
## Setting up a Next.js Application
The easiest way to create a Next.js application is by using the official create-next-app npx command. To do that we'll simply open up our Terminal window and type: `npx create-next-app mct`. "mct" is going to be the name of our application as well as the directory where our code is going to live.
Execute this command and a default application will be created. Once the files are created navigate into the directory by running `cd mct` in the Terminal window and then execute `npm run dev`. This will start a development server for your Next.js application which you'll be able to access at `localhost:3000`.
Navigate to `localhost:3000` and you should see a page very similar to the one in the above screenshot. If you see the Welcome to Next.js page you are good to go. If not, I would suggest following the Next.js docs and troubleshooting tips to ensure proper setup.
## Next.js Directory Structure
Before we dive into building our application any further, let's quickly look at how Next.js structures our application. The default directory structure looks like this:
The areas we're going to be focused on are the pages, components, and public directories. The .next directory contains the build artifacts for our application, and we should generally avoid making direct changes to it.
The pages directory will contain our application pages, or another way to think of these is that each file here will represent a single route in our application. Our default app only has the index.js page created which corresponds with our home route. If we wanted to add a second page, for example, an about page, we can easily do that by just creating a new file called about.js. The name we give to the filename will correspond to the route. So let's go ahead and create an `about.js` file in the pages directory.
As I mentioned earlier, Next.js is a React based framework, so all your React knowledge is fully transferable here. You can create components either as functions or as classes. I will be using the function based approach. Feel free to grab the complete GitHub repo if you would like to follow along. Our About.js component will look like this:
``` javascript
import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'
const About = () => (
  <div>
    <Head>
      <title>About</title>
    </Head>
    <Nav />
    <h1>MACRO COMPLIANCE TRACKER!</h1>
    <p>This app will help you ensure your macros are within a selected range to help you achieve your New Years Resolution!</p>
  </div>
)
export default About
```
Go ahead and save this file. Next.js will automatically rebuild the application and you should be able to navigate to `http://localhost:3000/about` now and see your new component in action.
Next.js will automatically handle all the routing plumbing and ensure the right component gets loaded. Just remember, whatever you name your file in the pages directory is what the corresponding URL will be.
## Adding Some Style with Tailwind.css
Our app is looking good, but from a design perspective, it's looking pretty bare. Let's add Tailwind.css to spruce up our design and make it a little easier on the eyes. Tailwind is a very powerful CSS framework, but for brevity we'll just import the base styles from a CDN and won't do any customizations. To do this, we'll simply add a `<link>` tag that pulls the Tailwind stylesheet from a CDN inside the Head component of our pages.
Let's do this for our About component and also add some Tailwind classes to improve our design. Our next component should look like this:
``` javascript
import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'
const About = () => (
About
Macro Compliance Tracker!
This app will help you ensure your macros are within a selected range to help you achieve your New Years Resolution!
)
export default About
```
If we go and refresh our browser, the About page should look like this:
Good enough for now. If you want to learn more about Tailwind, check out their official docs here.
Note: If changes you make to your Next.js application, such as adding `className`s, are not reflected when you refresh the page, restart the dev server.
## Creating Our Application
Now that we have our Next.js application set up and have familiarized ourselves with how creating components and pages works, let's get into building our Macro Compliance Tracker app. For our first implementation of this app, we'll put all of our logic in the main index.js page. Open the page up and delete all the existing Next.js boilerplate.
Before we write the code, let's figure out what features we'll need. We'll want to show the user their daily calorie and macro goals, as well as if they're in compliance with their targeted range or not. Additionally, we'll want to allow the user to update their information every day. Finally, we'll want the user to be able to view previous days and see how they compare.
Let's create the UI for this first. We'll do it all in the Home component, and then start breaking it up into smaller individual components. Our code will look like this:
``` javascript
import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'
const Home = () => (
Home
Macro Compliance Tracker
Previous Day
1/23/2020
Next Day
1850
1700
1850
2000
Calories
195
150
160
170
Carbs
55
50
60
70
Fat
120
145
160
175
Protein
Results
Calories
Carbs
Fat
Protein
Save
Target
Calories
Carbs
Fat
Protein
Save
Variance
Calories
Carbs
Fat
Protein
Save
)
export default Home
```
And this will result in our UI looking like this:
There is a bit to unwind here. So let's take a look at it piece by piece. At the very top we have a simple header that just displays the name of our application. Next, we have our day information and selection options. After that, we have our daily results showing whether we are in compliance or not for the selected day. If we are within the suggested range, the background is green. If we are over the range, meaning we've had too much of a particular macro, the background is red, and if we under-consumed a particular macro, the background is blue. Finally, we have our form which allows us to update our daily results, our target calories and macros, as well as variance for our range.
Our code right now is all in one giant component and fairly static. Next let's break up our giant component into smaller parts and add our front end functionality so we're at least working with non-static data. We'll create our components in the components directory and then import them into our index.js page component. Components we create in the components directory can be used across multiple pages with ease, giving us reusability if we add more pages to our application.
The first component that we'll create is the result component. The result component is the green, red, or blue block that displays our result as well as our target and variance ranges. Our component will look like this:
``` javascript
import React, {useState, useEffect} from 'react'
const Result = ({results}) => {
let bg, setBg] = useState("");
useEffect(() => {
setBackground()
});
const setBackground = () => {
let min = results.target - results.variant;
let max = results.target + results.variant;
if(results.total >= min && results.total <= max) {
setBg("bg-green-500");
} else if ( results.total < min){
setBg("bg-blue-500");
} else {
setBg("bg-red-500")
}
}
return (
{results.total}
{results.target - results.variant}
{results.target}
{results.target + results.variant}
{results.label}
)
}
export default Result
```
This will allow us to feed this component dynamic data and based on the data provided, we'll display the correct background, as well as target ranges for our macros. We can now simplify our index.js page component by removing all the boilerplate code and replacing it with:
``` xml
```
Let's also go ahead and create some dummy data for now. We'll get to retrieving live data from MongoDB soon, but for now let's just create some data in-memory like so:
``` javascript
const Home = () => {
let data = {
calories: {
label: "Calories",
total: 1840,
target: 1840,
variant: 15
},
carbs: {
label: "Carbs",
total: 190,
target: 160,
variant: 15
},
fat: {
label: "Fat",
total: 55,
target: 60,
variant: 10
},
protein: {
label: "Protein",
total: 120,
target: 165,
variant: 10
}
}
const [results, setResults] = useState(data);
return ( ... )}
```
If we look at our app now, it won't look very different at all. And that's ok. All we've done so far is change how our UI is rendered, moving it from hard coded static values, to an in-memory object. Next let's go ahead and make our form work with this in-memory data. Since our forms are very similar, we can create a component here as well and re-use the same component.
We will create a new component called MCTForm and in this component we'll pass in our data, a name for the form, and an onChange handler that will update the data dynamically as we change the values in the input boxes. Also, for simplicity, we'll remove the Save button and move it outside of the form. This will allow the user to make changes to their data in the UI, and when the user wants to lock in the changes and save them to the database, then they'll hit the Save button. So our Home component will now look like this:
``` javascript
const Home = () => {
let data = {
calories: {
label: "Calories",
total: 1840,
target: 1850,
variant: 150
},
carbs: {
label: "Carbs",
total: 190,
target: 160,
variant: 15
},
fat: {
label: "Fat",
total: 55,
target: 60,
variant: 10
},
protein: {
label: "Protein",
total: 120,
target: 165,
variant: 10
}
}
const [results, setResults] = useState(data);
const onChange = (e) => {
const data = { ...results };
let name = e.target.name;
let resultType = name.split(" ")[0].toLowerCase();
let resultMacro = name.split(" ")[1].toLowerCase();
data[resultMacro][resultType] = e.target.value;
setResults(data);
}
return (
Home
Macro Compliance Tracker
Previous Day
1/23/2020
Next Day
Save
)}
export default Home
```
Aside from cleaning up the UI code, we also added an onChange function that will be called every time the value of one of the input boxes changes. The onChange function will determine which box was changed and update the data value accordingly as well as re-render the UI to show the new changes.
Next, let's take a look at our implementation of the `MCTForm` component.
``` javascript
import React from 'react'
const MCTForm = ({data, item, onChange}) => {
return(
{item}
Calories
onChange(e)}>
Carbs
onChange(e)}>
Fat
onChange(e)}>
Protein
onChange(e)}>
)
}
export default MCTForm
```
As you can see this component is in charge of rendering our forms. Since the input boxes are the same for all three types of forms, we can reuse the component multiple times and just change the type of data we are working with.
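For reference, using the component from the Home component's JSX might look something like the sketch below. The exact `item` labels are an assumption on my part; whatever you pass must match the first word of each input's `name` attribute so that `onChange` can route the value to the right field (`total`, `target`, or `variant`):

``` javascript
// A sketch of wiring up the three reusable forms (item strings are assumed).
<div>
  <MCTForm data={results} item="Total" onChange={onChange} />
  <MCTForm data={results} item="Target" onChange={onChange} />
  <MCTForm data={results} item="Variant" onChange={onChange} />
</div>
```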
Again if we look at our application in the browser now, it doesn't look much different. But now the form works. We can replace the values and the application will be dynamically updated showing our new total calories and macros and whether or not we are in compliance with our goals. Go ahead and play around with it for a little bit to make sure it all works.
## Connecting Our Application to MongoDB
Our application is looking good. It also works. But, the data is all in memory. As soon as we refresh our page, all the data is reset to the default values. In this sense, our app is not very useful. So our next step will be to connect our application to a database so that we can start seeing our progress over time. We'll use MongoDB and MongoDB Atlas to accomplish this.
## Setting Up Our MongoDB Database
Before we can save our data, we'll need a database. For this I'll use MongoDB and MongoDB Atlas to host my database. If you don't already have MongoDB Atlas, you can sign up and use it for free here, otherwise go into an existing cluster and create a new database. Inside MongoDB Atlas, I will use an existing cluster and set up a new database called MCT. With this new database created, I will create a new collection called daily that will store my daily results, target macros, as well as allowed variants.
With my database set up, I will also add a few days worth of data. Feel free to add your own data or if you'd like the dataset I'm using, you can get it here. I will use MongoDB Compass to import and view the data, but you can import the data however you want: use the CLI, add in manually, or use Compass.
Thanks to MongoDB's document model, I can represent the data exactly as I had it in-memory. The only additional fields I will have in my MongoDB model is an `_id` field that will be a unique identifier for the document and a date field that will represent the data for a specific date. The image below shows the data model for one document in MongoDB Compass.
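To make that concrete, here is a sketch of what one daily document looks like — the same shape as the in-memory object from earlier, plus the `_id` and `date` fields. All of the values below are illustrative, not the actual sample data:

``` javascript
// Illustrative document from the MCT.daily collection (values are made up).
{
  "_id": ObjectId("5e2ab6a9fde4f5d7f0123456"),   // unique identifier
  "date": ISODate("2020-01-23T00:00:00Z"),       // the day this entry covers
  "calories": { "label": "Calories", "total": 1840, "target": 1850, "variant": 150 },
  "carbs":    { "label": "Carbs",    "total": 190,  "target": 160,  "variant": 15 },
  "fat":      { "label": "Fat",      "total": 55,   "target": 60,   "variant": 10 },
  "protein":  { "label": "Protein",  "total": 120,  "target": 165,  "variant": 10 }
}
```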
Now that we have some real data to work with, let's go ahead and connect our Next.js application to our MongoDB Database. Since Next.js is a React based framework that's running Node server-side we will use the excellent Mongo Node Driver to facilitate this connection.
## Connecting Next.js to MongoDB Atlas
Our pages and components directory renders both server-side on the initial load as well as client-side on subsequent page changes. The MongoDB Node Driver works only on the server side and assumes we're working on the backend. Not to mention that our credentials to MongoDB need to be secure and not shared to the client ever.
Not to worry though, this is where Next.js shines. In the pages directory, we can create an additional special directory called api. In this API directory, as the name implies, we can create api endpoints that are executed exclusively on the backend. The best way to see how this works is to go and create one, so let's do that next. In the pages directory, create an api directory, and there create a new file called daily.js.
In the `daily.js` file, add the following code:
``` javascript
export default (req, res) => {
res.statusCode = 200
res.setHeader('Content-Type', 'application/json')
res.end(JSON.stringify({ message: 'Hello from the Daily route' }))
}
```
Save the file, go to your browser and navigate to `localhost:3000/api/daily`. What you'll see is the JSON response of `{message:'Hello from the Daily route'}`. This code is only ever run server side and the only thing the browser receives is the response we send. This seems like the perfect place to set up our connection to MongoDB.
While we can set the connection in this daily.js file, in a real world application, we are likely to have multiple API endpoints and for that reason, it's probably a better idea to establish our connection to the database in a middleware function that we can pass to all of our api routes. So as a best practice, let's do that here.
Create a new middleware directory at the root of the project structure alongside pages and components and call it middleware. The middleware name is not reserved so you could technically call it whatever you want, but I'll stick to middleware for the name. In this new directory create a file called database.js. This is where we will set up our connection to MongoDB as well as instantiate the middleware so we can use it in our API routes.
Our `database.js` middleware code will look like this:
``` javascript
import { MongoClient } from 'mongodb';
import nextConnect from 'next-connect';
const client = new MongoClient('{YOUR-MONGODB-CONNECTION-STRING}', {
useNewUrlParser: true,
useUnifiedTopology: true,
});
async function database(req, res, next) {
req.dbClient = client;
req.db = client.db('MCT');
return next();
}
const middleware = nextConnect();
middleware.use(database);
export default middleware;
```
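As an aside, you may prefer not to hard-code credentials in source code. A minimal variation of the client setup that reads the URI from an environment variable could look like the sketch below — the `MONGODB_URI` name and the `.env.local` file are my assumptions, not part of the original setup:

``` javascript
import { MongoClient } from 'mongodb';

// Same client as above, but the URI comes from the environment
// (for example, a MONGODB_URI entry in .env.local) instead of the source code.
const client = new MongoClient(process.env.MONGODB_URI, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});
```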
If you are following along, be sure to replace the `{YOUR-MONGODB-CONNECTION-STRING}` variable with your connection string, as well as ensure that the client.db matches the name you gave your database. Also be sure to run `npm install --save mongodb next-connect` to ensure you have all the correct dependencies. Database names are case sensitive by the way. Save this file and now open up the daily.js file located in the pages/api directory.
We will have to update this file. Since now we want to add a piece of middleware to our function, we will no longer be using an anonymous function here. We'll utilize next-connect to give us a handler chain as well as allow us to chain middleware to the function. Let's take a look at what this will look like.
``` javascript
import nextConnect from 'next-connect';
import middleware from '../../middleware/database';
const handler = nextConnect();
handler.use(middleware);
handler.get(async (req, res) => {
let doc = await req.db.collection('daily').findOne()
console.log(doc);
res.json(doc);
});
export default handler;
```
As you can see we now have a handler object that gives us much more flexibility. We can use different HTTP verbs, add our middleware, and more. What the code above does, is that it connects to our MongoDB Atlas cluster and from the MCT database and daily collection, finds and returns one item and then renders it to the screen. If we hit `localhost:3000/api/daily` now in our browser we'll see this:
Woohoo! We have our data and the data model matches our in-memory data model, so our next step will be to use this real data instead of our in-memory sample. To do that, we'll open up the index.js page.
Our main component is currently instantiated with an in-memory data model that the rest of our app acts upon. Let's change this. Next.js gives us a couple of different ways to do this. We can always get the data async from our React component, and if you've used React in the past this should be second nature, but since we're using Next.js I think there is a different and perhaps better way to do it.
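For comparison, the familiar client-side approach would look roughly like the sketch below — fetching from the API route inside a `useEffect` hook after the component mounts. It's shown only as a reference; it's not what we'll use here:

``` javascript
import React, { useState, useEffect } from 'react'

const Home = () => {
  const [results, setResults] = useState(null);

  // Fetch from our API route once, after the component mounts.
  useEffect(() => {
    fetch('/api/daily')
      .then(res => res.json())
      .then(json => setResults(json));
  }, []);

  if (!results) return <p>Loading...</p>;
  return <pre>{JSON.stringify(results, null, 2)}</pre>; // placeholder render
}

export default Home
```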
Each Next.js page component allows us to fetch data server-side thanks to a function called `getStaticProps`. When this function is called, the initial page load is rendered server-side, which is great for SEO. The page doesn't render until this function completes. In `index.js`, we'll make the following changes:
``` javascript
import fetch from 'isomorphic-unfetch'
const Home = ({data}) => { ... }
export async function getStaticProps(context) {
const res = await fetch("http://localhost:3000/api/daily");
const json = await res.json();
return {
props: {
data: json,
},
};
}
export default Home
```
Install the `isomorphic-unfetch` library by running `npm install --save isomorphic-unfetch`, then below your Home component add the `getStaticProps` method. In this method we're just making a fetch call to our daily API endpoint and storing that json data in a prop called data. Since we created a data prop, we then pass it into our Home component, and at this point, we can go and remove our in-memory data variable. Do that, save the file, and refresh your browser.
Congrats! Your data is now coming live from MongoDB. But at the moment, it's only giving us one result. Let's make a few final tweaks so that we can see daily results, as well as update the data and save it in the database.
## View Macro Compliance Tracker Data By Day
The first thing we'll do is add the ability to hit the Previous Day and Next Day buttons and display the corresponding data. We won't be creating a new endpoint since I think our daily API endpoint can do the job, we'll just have to make a few enhancements. Let's do those first.
Our new daily.js API file will look as such:
``` javascript
import { ObjectID } from 'mongodb'; // needed for the empty data model's _id below

handler.get(async (req, res) => {
const { date } = req.query;
const dataModel = { "_id": new ObjectID(), "date": date, "calories": { "label": "Calories", "total": 0, "target": 0, "variant": 0 }, "carbs": { "label": "Carbs", "total": 0, "target": 0, "variant": 0 }, "fat": { "label" : "Fat", "total": 0, "target": 0, "variant": 0 }, "protein": { "label" : "Protein", "total": 0, "target": 0, "variant": 0 }}
let doc = {}
if(date){
doc = await req.db.collection('daily').findOne({date: new Date(date)})
} else {
doc = await req.db.collection('daily').findOne()
}
if(doc == null){
doc = dataModel
}
res.json(doc)
});
```
We made a couple of changes here so let's go through them one by one. The first thing we did was we are looking for a date query parameter to see if one was passed to us. If a date parameter was not passed, then we'll just return the first document found using the `findOne` method. But, if we did receive a date, then we'll query our MongoDB database against that date and return the data for that specified date.
Next, as our data set is not exhaustive, if we go too far forwards or backwards, we'll eventually run out of data to display, so we'll create an empty in-memory object that serves as our data model. If we don't have data for a specified date in our database, we'll just set everything to 0 and serve that. This way we don't have to do a whole lot of error handling on the front end and can always count on our backend to serve some type of data.
Now, open up the `index.js` page and let's add the functionality to see the previous and next days. We'll make use of dayjs to handle our dates, so install it by running `npm install --save dayjs` first. Then make the following changes to your `index.js` page:
``` javascript
// Other Imports ...
import dayjs from 'dayjs'
const Home = ({data}) => {
    const [results, setResults] = useState(data);
const onChange = (e) => {
}
const getDataForPreviousDay = async () => {
let currentDate = dayjs(results.date);
let newDate = currentDate.subtract(1, 'day').format('YYYY-MM-DDTHH:mm:ss')
const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)
const json = await res.json()
setResults(json);
}
const getDataForNextDay = async () => {
let currentDate = dayjs(results.date);
let newDate = currentDate.add(1, 'day').format('YYYY-MM-DDTHH:mm:ss')
const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)
const json = await res.json()
setResults(json);
}
return (
Previous Day
{dayjs(results.date).format('MM/DD/YYYY')}
Next Day
)}
```
We added two new methods, one to get the data from the previous day and one to get the data from the following day. In our UI we also made the date label dynamic so that it displays and tells us what day we are currently looking at. With these changes go ahead and refresh your browser and you should be able to see the new data for days you have entered in your database. If a particular date does not exist, it will show 0's for everything.
## Saving and Updating Data In MongoDB
Finally, let's close out this tutorial by adding the final piece of functionality to our app, which will be to make updates and save new data into our MongoDB database. Again, I don't think we need a new endpoint for this, so we'll use our existing daily.js API. Since we're using the handler convention and currently just handle the GET verb, let's add onto it by adding logic to handle a POST to the endpoint.
``` javascript
handler.post(async (req, res) => {
let data = req.body;
data = JSON.parse(data);
data.date = new Date(data.date);
let doc = await req.db.collection('daily').updateOne({date: new Date(data.date)}, {$set:data}, {upsert: true})
res.json({message: 'ok'});
})
```
The code is pretty straightforward. We'll get our data in the body of the request, parse it, and then save it to our MongoDB daily collection using the `updateOne()` method. Let's take a closer look at the values we're passing into the `updateOne()` method.
The first value we pass will be what we match against, so in our collection if we find that the specific date already has data, we'll update it. The second value will be the data we are setting and in our case, we're just going to set whatever the front-end client sends us. Finally, we are setting the upsert value to true. What this will do is, if we cannot match on an existing date, meaning we don't have data for that date already, we'll go ahead and create a new record.
With our backend implementation complete, let's add the functionality on our front end so that when the user hits the Save button, the data gets properly updated. Open up the index.js file and make the following
changes:
``` javascript
const Home = ({data}) => {
const updateMacros = async () => {
const res = await fetch('http://localhost:3000/api/daily', {
method: 'post',
body: JSON.stringify(results)
})
}
return (
Save
)}
```
Our new updateMacros method will make a POST request to our daily API endpoint with the new data. Try it now! You should be able to update existing macros or create data for new days that you don't already have any data for. We did it!
## Putting It All Together
We went through a lot in today's tutorial. Next.js is a powerful framework for building modern web applications and having a flexible database powered by MongoDB made it possible to build a fully fledged application in no time at all. There were a couple of items we omitted for brevity such as error handling and deployment, but feel free to clone the application from GitHub, sign up for MongoDB Atlas for free, and build on top of this foundation. | md | {
"tags": [
"JavaScript",
"Next.js"
],
"pageDescription": "Learn how to couple Next.js and MongoDB for your next-generation applications.",
"contentType": "Tutorial"
} | Building Modern Applications with Next.js and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/integrating-mongodb-amazon-apache-kafka | created | # Integrating MongoDB with Amazon Managed Streaming for Apache Kafka (MSK)
Amazon Managed Streaming for Apache Kafka (MSK) is a fully managed, highly available Apache Kafka service. MSK makes it easy to ingest and process streaming data in real time and leverage that data easily within the AWS ecosystem. By being able to quickly stand up a Kafka solution, you spend less time managing infrastructure and more time solving your business problems, dramatically increasing productivity. MSK also supports integration of data sources such as MongoDB via the AWS MSK Connect (Connect) service. This Connect service works with the MongoDB Connector for Apache Kafka, enabling you to easily integrate MongoDB data.
In this blog post, we will walk through how to set up MSK, configure the MongoDB Connector for Apache Kafka, and create a secured VPC Peered connection with MSK and a MongoDB Atlas cluster. The high-level process is as follows:
* Configure Amazon Managed Streaming for Apache Kafka
* Configure EC2 client
* Configure a MongoDB Atlas Cluster
* Configure Atlas Private Link including VPC and subnet of the MSK
* Configure plugin in MSK for MongoDB Connector
* Create topic on MSK Cluster
* Install MongoSH command line tool on client
* Configure MongoDB Connector as a source or sink
In this example, we will have two collections in the same MongoDB cluster—the “source” and the “sink.” We will insert sample data into the source collection from the client, and this data will be consumed by MSK via the MongoDB Connector for Apache Kafka running as an MSK connector. As data arrives in the MSK topic, another instance of the MongoDB Connector for Apache Kafka will write the data to the MongoDB Atlas cluster “sink” collection. To align with best practices for secure configuration, we will set up an AWS Network Peered connection between the MongoDB Atlas cluster and the VPC containing MSK and the client EC2 instance.
## Configure AWS Managed Service for Kafka
To create an Amazon MSK cluster using the AWS Management Console, sign in to the AWS Management Console, and open the Amazon MSK console.
* Choose Create cluster and select Quick create.
* For Cluster name, enter MongoDBMSKCluster.
* For Apache Kafka version, select one that is 2.6.2 or above.
* For Broker type, select kafka.t3.small.
* From the table under All cluster settings, copy the values of the following settings and save them because you will need them later in this blog:
* VPC
* Subnets
* Security groups associated with VPC
* Choose “Create cluster.”
## Configure an EC2 client
Next, let's configure an EC2 instance to create a topic. This is where the MongoDB Atlas source collection will write to. This client can also be used to query the MSK topic and monitor the flow of messages from the source to the sink.
To create a client machine, open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
* Choose Launch instances.
* Choose Select to create an instance of Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type.
* Choose the t2.micro instance type by selecting the check box.
* Choose Next: Configure Instance Details.
* Navigate to the Network list and choose the VPC whose ID you saved in the previous step.
* Go to Auto-assign Public IP list and choose Enable.
* In the menu near the top, select Add Tags.
* Enter Name for the Key and MongoDBMSKCluster for the Value.
* Choose Review and Launch, and then click Launch.
* Choose Create a new key pair, enter MongoDBMSKKeyPair for Key pair name, and then choose Download Key Pair. Alternatively, you can use an existing key pair if you prefer.
* Start the new instance by pressing Launch Instances.
Next, we will need to configure the networking to allow connectivity between the client instance and the MSK cluster.
* Select View Instances. Then, in the Security Groups column, choose the security group that is associated with the client instance you just created.
* Copy the name of the security group, and save it for later.
* Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
* In the navigation pane, click on Security Groups. Find the security group whose ID you saved in Step 1 (Create an Amazon MSK Cluster). Choose this row by selecting the check box in the first column.
* In the Inbound Rules tab, choose Edit inbound rules.
* Choose Add rule.
* In the new rule, choose All traffic in the Type column. In the second field in the Source column, select the security group of the client machine. This is the group whose name you saved earlier in this step.
* Click Save rules.
The cluster's security group can now accept traffic that comes from the client machine's security group.
## Create MongoDB Atlas Cluster
To create a MongoDB Atlas Cluster, follow the Getting Started with Atlas tutorial. Note that in this blog, you will need to create an M30 Atlas cluster or above—as VPC peering is not available for M0, M2, and M5 clusters.
Once the cluster is created, configure an AWS private endpoint in the Atlas Network Access UI supplying the same subnets and VPC.
* Click on Network Access.
* Click on Private Endpoint, and then the Add Private Endpoint button.
* Fill out the VPC and subnet IDs from the previous section.
* SSH into the client machine created earlier and issue the **aws ec2 create-vpc-endpoint** command that the Atlas portal displays for your private endpoint.
* Note that you may have to first configure the AWS CLI using **aws configure** before you can create the VPC endpoint through this tool. See Configuration Basics for more information.
## Configure MSK plugin
Next, we need to create a custom plugin for MSK. This custom plugin will be the MongoDB Connector for Apache Kafka. For reference, note that the connector will need to be uploaded to an S3 repository **before** you can create the plugin. You can download the MongoDB Connector for Apache Kafka from Confluent Hub.
* Select “Create custom plugin” from the Custom Plugins menu within MSK.
* Fill out the custom plugin form, including the S3 location of the downloaded connector, and click “Create custom plugin.”
## Create topic on MSK cluster
When we start reading the data from MongoDB, we also need to create a topic in MSK to accept the data. On the client EC2 instance, let’s install Apache Kafka, which includes some basic tools.
To begin, run the following command to install Java:
**sudo yum install java-1.8.0**
Next, run the command below to download Apache Kafka.
**wget https://archive.apache.org/dist/kafka/2.6.2/kafka_2.12-2.6.2.tgz**
Building off the previous step, run this command in the directory where you downloaded the TAR file:
**tar -xzf kafka_2.12-2.6.2.tgz**
The distribution of Kafka includes a **bin** folder with tools that can be used to manage topics. Go to the **kafka_2.12-2.6.2** directory.
To create the topic that will be used to write MongoDB events, issue this command:
`bin/kafka-topics.sh --create --zookeeper (INSERT YOUR ZOOKEEPER INFO HERE) --replication-factor 1 --partitions 1 --topic MongoDBMSKDemo.Source`
Also, remember that you can copy the Zookeeper server endpoint from the “View Client Information” page on your MSK Cluster. In this example, we are using plaintext.
## Configure source connector
Once the plugin is created, we can create an instance of the MongoDB connector by selecting “Create connector” from the Connectors menu.
* Select the MongoDB plug in and click “Next.”
* Fill out the form as follows:
Connector name: **MongoDB Source Connector**
Cluster Type: **MSK Connector**
Select the MSK cluster that was created previously, and select “None” under the authentication drop down menu.
Enter your connector configuration (shown below) in the configuration settings text area.
`connector.class=com.mongodb.kafka.connect.MongoSourceConnector
database=MongoDBMSKDemo
collection=Source
tasks.max=1
connection.uri=(MONGODB CONNECTION STRING HERE)
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter=org.apache.kafka.connect.storage.StringConverter`
**Note**: You can find your Atlas connection string by clicking on the Connect button on your Atlas cluster. Select “Private Endpoint” if you have already configured the Private Endpoint above, then press “Choose a connection method.” Next, select “Connect your application” and copy the **mongodb+srv** connection string.
In the “Access Permissions” section, you will need to create an IAM role with the required trust policy.
Once this is done, click “Next.” The last section will offer you the ability to use logs—which we highly recommend, as it will simplify the troubleshooting process.
## Configure sink connector
Now that we have the source connector up and running, let’s configure a sink connector to complete the round trip. Create another instance of the MongoDB connector by selecting “Create connector” from the Connectors menu.
Select the same plugin that was created previously, and fill out the form as follows:
Connector name: **MongoDB Sink Connector**
Cluster type: **MSK Connector**
Select the MSK cluster that was created previously and select “None” under the authentication drop down menu.
Enter your connector configuration (shown below) in the Configuration Settings text area.
`connector.class=com.mongodb.kafka.connect.MongoSinkConnector
database=MongoDBMSKDemo
collection=Sink
tasks.max=1
topics=MongoDBMSKDemo.Source
connection.uri=(MongoDB Atlas Connection String Goes Here)
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter=org.apache.kafka.connect.storage.StringConverter`
In the Access Permissions section, select the IAM role created earlier that has the required trust policy. As with the previous connector, be sure to leverage a log service like CloudWatch.
Once the connector is successfully configured, we can test the round trip by writing to the Source collection and seeing the same data in the Sink collection.
We can insert data in one of two ways: either through the intuitive Atlas UI, or with the new MongoSH (mongoshell) command line tool. Using MongoSH, you can interact directly with a MongoDB cluster to test queries, perform ad hoc database operations, and more.
For your reference, we’ve added a section on how to use the mongoshell on your client EC2 instance below.
## Install MongoDB shell on client
On the client EC2 instance, create a **/etc/yum.repos.d/mongodb-org-5.0.repo** file by typing:
`sudo nano /etc/yum.repos.d/mongodb-org-5.0.repo`
Paste in the following:
`[mongodb-org-5.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/5.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc`
Next, install the MongoSH shell with this command:
`sudo yum install -y mongodb-mongosh`
Use the template below to connect to your MongoDB cluster via mongoshell:
`mongosh "(paste in your Atlas connection string here)"`
Once connected, type:
`use MongoDBMSKDemo
db.Source.insertOne({"Testing":123})`
To check the data on the sink collection, use this command:
`db.Sink.find({})`
If you run into any issues, be sure to check the log files. In this example, we used CloudWatch to read the events that were generated from MSK and the MongoDB Connector for Apache Kafka.
## Summary
Amazon Managed Streaming for Apache Kafka (MSK) is a fully managed, secure, and highly available Apache Kafka service that makes it easy to ingest and process streaming data in real time. MSK allows you to import Kafka connectors such as the MongoDB Connector for Apache Kafka. These connectors make working with data sources seamless within MSK. In this article, you learned how to set up MSK, MSK Connect, and the MongoDB Connector for Apache Kafka. You also learned how to set up a MongoDB Atlas cluster and configure it to use AWS network peering. To continue your learning, check out the following resources:
MongoDB Connector for Apache Kafka Documentation
Amazon MSK Getting Started
Amazon MSK Connect Getting Started | md | {
"tags": [
"Java",
"MongoDB",
"Kafka"
],
"pageDescription": "In this article, learn how to set up Amazon MSK, configure the MongoDB Connector for Apache Kafka, and how it can be used as both a source and sink for data integration with MongoDB Atlas running in AWS.",
"contentType": "Tutorial"
} | Integrating MongoDB with Amazon Managed Streaming for Apache Kafka (MSK) | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/christmas-2021-mongodb-data-api | created | # Christmas Lights and Webcams with the MongoDB Data API
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
When I set out to demonstrate how the MongoDB Atlas Data API allows modern microcontrollers to communicate directly with MongoDB Atlas, I initially built a pager. You know, those buzzy things that go off during the holiday season to let you know there is bad news and you need to go fix something right now. I quickly realized this was not what people wanted to be reminded of during the holiday season, nor did it allow everyone viewing to interact… Well, I could let them page me and interrupt my holiday, but how would they know I got it? In the end, I decided to put my Christmas tree on the internet instead.
Looking at the problem, it meant I needed to provide two parts: a way to control the tree lights using Atlas and an API, and a way to view the tree. In this holiday special article, I describe how to do both: create API-controlled fairy lights, and build a basic MongoDB-powered IP surveillance camera.
Before I bore you with details of breadboard voltages, SRAM banks, and base64 encoding, here is a link to the live view of the tree with details of how you can change the light colours. It may take a few seconds before you see your change.
#### https://xmastree-lpeci.mongodbstitch.com/
This is not a step-by-step tutorial. I'm sorry but that would be too long. However, if you are familiar with Arduino and other Maker tools, or are prepared to google a few how-tos, it should provide you with all the information you need to create your own setup. Otherwise, it's simply a fascinating read about tiny computers and their challenges.
The MongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas, where a MongoDB driver library is either not available or not desirable. In this case, I am looking at how to call it from an ESP32 Microcontroller using the Arduino APIs and C++/Wiring.
## Prerequisites
You will need the Arduino IDE to upload code to our microcontrollers.
You will also need an Atlas cluster for which you have enabled the Data API, and our endpoint URL and API key. You can learn how to get these in this article or this video if you do not have them already.
If you want to directly upload the Realm application to enable the API and viewer, you will need the Realm command-line interface.
You will also need the following hardware or similar:
##### Lights
* JZK ESP-32S ESP32 Development Board ($10 Here)
* Neopixel compatible light string ($20 Here)
* Breadboard, Power Regulator, and Dupont Cables ($5 here)
* 9v x 3A Power supply ($14 here)
* 1000 microfarad capacitor
* 330 ohm resistor
##### Webcam
* ESP32 AI Thinker Camera ($10 here)
* USB Power Supply
### Creating the Christmas Light Hardware
Neopixel 24-bit RGB individually addressable LEDs have become somewhat ubiquitous in maker projects. Aside from occasional finicky power requirements, they are easy to use and well supported and need only power and one data line to be connected. Neopixels and clones have also dropped in price dramatically since I first tried to make smart Christmas lights back in (checks email…) 2014, when I paid $14 for four of them and even then struggled to solder them nicely. I saw a string of 50 LEDs at under $20 on very fine wire and thought I had to try again with the Christmas tree idea.
Neopixels on a String
Overall, the circuit for the tree is very simple. Neopixels don't need a lot of supporting hardware, just a capacitor across the power cables to avoid any sudden power spikes. They do need to be run at 5v, though. Initially, I was using 3.3v as that was what the ESP board data pins output, but that resulted in the blue colour being dim (it was underpowered), with equal red, green, and blue values giving an orange colour rather than white.
Since moving to an all 5v power circuit and 3.3v just on the data line, it's a much better colour, although given the length and fineness of the wire, you can see the furthest away neopixels are dimmer, especially in blue. Looping wires from the end back to the start like a household ring-main would be a good fix for this but I'd already cut the JST connector off at the top.
My board isn't quite as neatly laid out. I had to take 5v from the power pins directly. It works just as well though. (DC Power Supply not shown in the image, Pixel string out of shot.)
## Developing the Controller Software for the ESP32
Source Code Location:
https://github.com/mongodb-developer/xmas_2021_tree_camera/tree/main/ESP32-Arduino/mongo_xmastree
I used the current Arduino IDE to develop the controller software as it is so well supported and there are many great resources to help. I only learned about the 2.0 beta after I finished it.
Arduino IDE is designed to be simple and easy to use
After the usual messing about and selecting a neopixel library (my first two choices didn't work correctly as neopixels are unusually sensitive to the specific processor and board due to strict timing requirements), I got them to light up and change colour, so I set about pulling the colour values from Atlas.
Unlike the original Arduino boards with their slow 16 bit CPUs, today's ESP32 are fast and have plenty of RAM (500KB!), and built-in WiFi. It also has some hardware support for TLS encryption calculations. For a long time, if you wanted to have a Microcontroller talk to the Internet, you had to use an unencrypted HTTP connection, but no more. ESP32 boards can talk to anything.
Like most Makers, I took some code I already had, read some tutorial hints and tips, and mashed it all together. And it was surprisingly easy to put this code together. You can see what I ended up with here (and no, that's not my real WiFi password).
The `setup()` function starts a serial connection for debugging, connects to the WiFi network, initialises the LED string, and sets the clock via NTP. I'm not sure that's required but HTTPS might want to check certificate expiry and ESP32s have no real-time clock.
The `loop()` function just checks if 200ms have passed, and if so, calls the largest function, `getLightDefinition()`.
I have to confess, the current version of this code is sub-optimal. I optimised the camera code as you will see later but didn't backport the changes. This means it creates and destroys an SSL and then HTTPS connection every time it's called, which on this low powered hardware can take nearly a second. But I didn't need a super-fast update time here.
## Making an HTTPS Call from an ESP32
Once it creates a WiFiClientSecure, it then sets a root CA certificate for it. This is required to allow it to make an HTTPS connection. What I don't understand is why *this* certificate works as it's not in the chain of trust for Atlas. I suspect the ESP32 just ignores cases where it cannot validate the server, but it does demand to be given some form of root CA. Let me know if you have an answer to that.
Once you have a WiFiClientSecure, which encapsulates TLS rather than TCP, the rest of the code is the same as an HTTP connection, but you pass the TLS-enabled WiFiClientSecure in the constructor of the HTTPClient object. And of course, give it an HTTPS URL to work with.
To authenticate to the Data API, all I need to do is pass a header called "api-key" with the Atlas API key. This is very simple and can be done with the following fragment.
```
HTTPClient https;
if (https.begin(*client, AtlasAPIEndpoint)) { // HTTPS
/* Headers Required for Data API*/
https.addHeader("Content-Type", "application/json");
https.addHeader("api-key", AtlasAPIKey);
```
The Data API uses POSTed JSON for all calls. This makes it cleaner for more complex requests and avoids any caching issues. You could argue that find() operations, which are read-only, should use GET, but that would require passing JSON as part of the URL and that is ugly and has security and size limitations, so all the calls use POST and a JSON Body.
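To make the shape of these requests concrete, here is roughly the same find call issued from JavaScript rather than from the microcontroller. The endpoint URL and API key are placeholders for your own values; the body fields mirror the ones the ESP32 code builds below:

``` javascript
// Sketch: calling the Data API's find action from Node.js (URL and key are placeholders).
const findLatestPattern = async () => {
  const response = await fetch("https://<your-data-api-endpoint>/action/find", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "<your-atlas-data-api-key>"
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "xmastree",
      collection: "patterns",
      filter: { device: "tree_1" },
      sort: { _id: -1 },
      limit: 1
    })
  });
  const { documents } = await response.json();
  return documents[0]; // the most recently inserted light pattern
};
```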
## Writing JSON on an ESP32 Using Arduino JSON
I was amazed at how easy and efficient the ArduinoJSON library was to use. If you aren't used to computers with less than 1MB of total RAM and a 240MHz CPU, you may think of JSON as just a good go-to data format. But the truth is JSON is far from efficient when it comes to processing. This is one reason MongoDB uses BSON. I think only XML takes more CPU cycles to read and write than JSON does. Benoît Blanchon has done an amazing job developing this lightweight and efficient but comprehensive library.
This might be a good time to mention that although ESP32-based systems can run https://micropython.org/, I chose to build this using the Arduino IDE and C++/Wiring. This is a bit more work but possibly required for some of the libraries I used.
This snippet shows what a relatively small amount of code is required to create a JSON payload and call the Atlas Data API to get the latest light pattern.
```
DynamicJsonDocument payload (1024);
payload"dataSource"] = "Cluster0";
payload["database"] = "xmastree";
payload["collection"] = "patterns";
payload["filter"]["device"] = "tree_1";
if(strcmp(lastid,"0")) payload["filter"]["_id"]["$gt"]["$oid"] = lastid;
payload["limit"] = 1;
payload["sort"]["_id"] = -1;
String JSONText;
size_t JSONlength = serializeJson(payload, JSONText);
Serial.println(JSONText);
int httpCode = https.sendRequest("POST", JSONText);
```
## Using Explicit BSON Data Types to Search via EJSON
To avoid fetching the light pattern every 500ms, I included a query to say only fetch the latest pattern `sort({_id:-1}).limit(1)` and only if the _id field is greater than the last one I fetched. My _id field is using the default ObjectID data type, which means as I insert them, they are increasing in value automatically.
Note that to search for a field of type ObjectID, a MongoDB-specific Binary GUID data type, I had to use Extended JSON (EJSON) and construct a query that goes `{ _id : { $gt : {$oid : "61bb4a79ee3a9009e25f9111"}}}`. If I used just `{_id:"61bb4a79ee3a9009e25f9111"}`, Atlas would be searching for that string, not for an ObjectId with that binary value.
## Parsing a JSON Payload with Arduino and ESP32
The [ArduinoJSON library also made parsing my incoming response very simple too—both to get the pattern of lights but also to get the latest value of _id to use in future queries. Currently, the Data API only returns JSON, not EJSON, so you don't need to worry about parsing any BSON types—for example, our ObjectId.
```
if (httpCode == HTTP_CODE_OK || httpCode == HTTP_CODE_MOVED_PERMANENTLY) {
String payload = https.getString();
DynamicJsonDocument description(32687);
DeserializationError error = deserializeJson(description, payload);
if (error) {
Serial.println(error.f_str());
delete client;
return;
}
if(description"documents"].size() == 0) {
Serial.println("No Change to Lights");
delete client; return;}
JsonVariant lights = description["documents"][0]["state"];
    if(! lights.is<JsonArray>()) {
Serial.println("state is not an array");
delete client;
return;
}
    setLights(lights.as<JsonArray>());
strncpy(lastid,description["documents"][0]["_id"],24);
```
Using ArduinoJSON, I can even pass a JSONArray to a function without needing to know what it is an array of. I can inspect the destination function and deal with different data types appropriately. I love when libraries in strongly typed languages like C++ that deal with dynamic data structures provide this type of facility.
This brings us to the last part of our lights code: setting the lights. This makes use of JsonVariant, a type you can inspect and convert at runtime to the C++ type you need.
```
void setLights(const JsonArray& lights)
{
Serial.println(lights.size());
int light_no;
for (JsonVariant v : lights) {
int r = (int) v["r"].as();
int g = (int) v["g"].as();
int b = (int) v["b"].as();
RgbColor light_colour(r,g,b);
strip.SetPixelColor(light_no,light_colour);
light_no++;
}
Serial.println("Showing strip");
strip.Show();
}
```
## Creating a Pubic Lighting Control API with MongoDB Realm
Whilst I could set the values of the lights with the MongoDB Shell, I wanted a safe way to allow anyone to set the lights. The simplest and safest way to do that was to create an API. And whilst I could have used the Amazon API Gateway and AWS Lambda, I chose to use hosted functions in Realm instead. After all, I do work for MongoDB.
I created an HTTPS endpoint in the Realm GUI, named it /lights, marked it as requiring a POST, and that it sends a response. I then, in the second part, said it should call a function.
I then added the following function, running as system, taking care to sanitise the input and take only what I was expecting from it.
```
// This function is the endpoint's request handler.
exports = async function({ query, headers, body}, response) {
try {
const payload = JSON.parse(body.text());
console.log(JSON.stringify(payload))
state = payload.state;
if(!state) { response.StatusCode = 400; return "Missing state" }
if(state.length != 50) { response.StatusCode = 400; return "Must be 50 states"}
    newstate = [];
    for(x=0;x<50;x++) {
      r = parseInt(state[x].r);
      g = parseInt(state[x].g);
      b = parseInt(state[x].b);
      if(r<0||r>255||g<0||g>255||b<0||b>255) { response.StatusCode = 400; return "Value out of range"}
newstate.push({r,g,b})
}
doc={device:"tree_1",state:newstate};
const collection = context.services.get("mongodb-atlas").db("xmastree").collection("patterns")
rval = await collection.insertOne(doc)
response.StatusCode = 201; return rval;
} catch(e) {
console.error(e);
response.StatusCode = 500; return `Internal error, Sorry. ${e}`;
}
return "Eh?"
};
```
I now had the ability to change the light colours by posting to the URL shown on the web page.
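For example, from the browser console or a small Node script, a request that paints all 50 pixels a random colour could look like the sketch below. The endpoint URL is a placeholder — use the one shown on the web page — and the `{ state: [...] }` body shape follows the validation done in the function above:

``` javascript
// Sketch: set all 50 lights to random colours via the /lights endpoint (URL is a placeholder).
const state = Array.from({ length: 50 }, () => ({
  r: Math.floor(Math.random() * 256),
  g: Math.floor(Math.random() * 256),
  b: Math.floor(Math.random() * 256)
}));

const setTreeLights = async () => {
  const res = await fetch("https://<your-realm-https-endpoint>/lights", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ state })
  });
  console.log(await res.json()); // the insertOne result for the new pattern document
};
```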
## Creating the Webcam Hardware
This was all good, and if you were making a smart Christmas tree for yourself, you could stop there. But I needed to allow others to see it. I had honestly considered just an off-the-shelf webcam and a Twitch stream, but I stumbled across what must be the bargain of the decade: the AI Thinker ESP32 Cam. These are super low-cost ESP32 chips with a camera, an SD card slot, a bright LED light, and enough CPU and RAM to do some slow but capable AI inferencing—for example, to recognize faces—and they cost $10 or less. It comes in two parts: the camera board itself, which has no USB circuitry and so needs a USB to FTDI programmer or similar, and a handy USB to FTDI docking board you can use to program it. The docking board also adds a reset button and lets you power the camera from USB, which is what I do.
**ESP CAM: 160MHz CPU, 4MB RAM + 520K Fast RAM, Wifi, Bluetooth, Camera, SD Card slot for $10**
There was nothing I had to do for the hardware except clip these two components together, plug in a USB cable (being careful not to snap the USB socket off as I did the first time I tried), and mount it on a tripod.
## Writing the Webcam Software
Source Code : https://github.com/mongodb-developer/xmas_2021_tree_camera/tree/main/ESP32-Arduino/mongo_cam
Calling the Data API with a POST should have been much the same as it was for the lights: a different endpoint, inserting images into a collection rather than finding them, but otherwise identical. However, this time I hit some other challenges. After a lot of searching, debugging, and reading the library source, I'd like to highlight the difficult parts so that if you do anything like this, it will help.
## Sending a Larger Payload with ESP32 HTTP POST by Using a JSONStream
I quickly discovered that unless the image resolution was configured to be tiny, the POST requests failed, arriving mangled at the Data API. Researching, I found there was a limit on the size of a POST imposed by the HTTP library. If the payload was supplied as a string, it would be passed to the TLS layer, which had its own limit and only sent part of it. The HTTP layer, then, rather than sending the next part, simply returned an error. This seemed to kick in at about 14KB of data.
Reading the source, I realised this did not happen if, instead of posting the body as a string, you sent a stream: a class, like a filehandle or a queue, that the consumer can query for data until it's empty. The HTTP library, in this case, would send the whole buffer, only 1.4KB at a time; but as the latency to the Data API was low, this worked admirably.
I, therefore, wrote a stream class that converts a JSON document into a stream of its string representation.
```
class JSONStream: public Stream {
  private:
    uint8_t *buffer;
    size_t buffer_size;
    size_t served;
    int start;
    int end;

  public:
    // Serialize the JSON document once into capability-allocated RAM,
    // then serve it back in chunks as the HTTP client asks for bytes.
    JSONStream(DynamicJsonDocument &payload ) {
      int jsonlen = measureJson(payload);
      this->buffer = (uint8_t*) heap_caps_calloc(jsonlen + 1, 1, MALLOC_CAP_8BIT);
      this->buffer_size = serializeJson(payload, this->buffer, jsonlen + 1);
      this->served = 0;
      this->start = millis();
    }
    ~JSONStream() {
      heap_caps_free((void*)this->buffer);
    }

    void clear() {}
    size_t write(uint8_t) { return 0; } // read-only stream: writes are ignored
    int available() {
      size_t whatsleft = buffer_size - served;
      if (whatsleft == 0) return -1;
      return whatsleft;
    }
    int peek() {
      return 0;
    }
    void flush() { }
    int read() { return -1; } // single-byte reads unused; readBytes() does the work
    // Copy out the next chunk of the serialized JSON
    size_t readBytes(uint8_t *outbuf, size_t nbytes) {
      //Serial.println(millis()-this->start);
      if (nbytes > buffer_size - served) {
        nbytes = buffer_size - served;
      }
      memcpy(outbuf, buffer + served, nbytes);
      served = served + nbytes;
      return nbytes;
    }
};
```
I then use this to wrap an ArduinoJson document and stream its JSON string to the HTTP client:
```
DynamicJsonDocument payload (1024);
payload"dataSource"] = "Cluster0";
payload["database"] = "espcam";
payload["collection"] = "frames";
time_t nowSecs = time(nullptr);
char datestring[32];
sprintf(datestring, "%lu000", nowSecs);
payload["document"]["time"]["$date"]["$numberLong"] = datestring; /*Encode Date() as EJSON*/
const char* base64Image = base64EncodeImage(fb) ;
payload["document"]["img"]["$binary"]["base64"] = base64Image; /*Encide as a Binary() */
payload["document"]["img"]["$binary"]["subType"] = "07";
JSONStream *buffer = new JSONStream(payload);
int httpCode = https.sendRequest("POST", buffer, buffer->available());
```
## Allocating more than 32KB RAM on an ESP32 Using Capability Defined RAM
This was simple, generally, except where I tried to allocate 40KB of RAM using malloc and discovered the default behaviour is to allocate that on the stack, which was too small. I therefore had to use heap_caps_calloc() with MALLOC_CAP_8BIT to be more specific about exactly where I wanted my RAM allocated. And, of course, I had to use the associated heap_caps_free() to free it. This is doubly important on a device that has both SRAM and PSRAM, with different speeds and hardware access paths.
## Sending Dates to the Data API with a Microcontroller and C++
A pair of related challenges I ran into involved sending data that wasn't text or numbers. I needed a date in my documents so I could use a TTL index to delete them once they were a few days old; holding a huge number of images would quickly fill my free tier data quota. This is easy with EJSON. You send JSON of the form `{ $date: { $numberLong: "xxxxxxxxx" } }`, where the string is the number of milliseconds since 1-1-1970. Sounds easy enough. However, being a 32-bit machine, the ESP32 really didn't like printing 64-bit numbers, and I tried a lot of bit-shifting, masking, and printing two 32-bit unsigned numbers until I realised I could simply print the 32-bit seconds since 1-1-1970 and add "000" on the end.
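The date fragment of the payload therefore ends up looking something like this (the timestamp value is just illustrative):
```
// EJSON date: milliseconds since the epoch, sent as a string
{ "time": { "$date": { "$numberLong": "1640995200000" } } }
```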
## Base 64 Encoding on the ESP32 to Send Via JSON
The other was how to send a Binary() datatype to MongoDB to hold the image. The EJSON representation of that is `{$binary: {base64: "Base 64 String of Data", subType: "..."}}`, but it was very unclear how to get an ESP32 to do base64 encoding. Many people seemed to have written their own, and things I tried failed until I eventually found a working library and applied what I know about allocating capability memory. That led me to the code below, which can easily be adapted to any binary buffer if you also know the length.
```
#include "mbedtls/base64.h"
const char* base64EncodeImage(camera_fb_t *fb)
{
/* Base 64 encode the image - this was the simplest way*/
unsigned char* src = fb->buf;
size_t slen = fb->len;
size_t dlen = 0;
int err = mbedtls_base64_encode(NULL, 0 , &dlen, src, slen);
/* For a larger allocation like this you need to use capability allocation*/
const char *dst = (char*) heap_caps_calloc(dlen, 1, MALLOC_CAP_8BIT);
size_t olen;
err = mbedtls_base64_encode((unsigned char*)dst, dlen , &olen, src, slen);
if (err != 0) {
Serial.printf("error base64 encoding, error %d, buff size: %d", err, olen);
return NULL;
}
return dst;
}
```
## Viewing the Webcam Images with MongoDB Realm
Having put all that together, I needed a way to view it. And for this, I decided that, rather than create a web service, I would use Realm Web and QueryAnywhere with read-only security rules and anonymous users.
This is easy to set up by clicking a few checkboxes in your Realm app. Then, in a web page (hosted for free in Realm Hosting), I can simply add code like the following to poll for new images (again, using the only-fetch-if-changed trick with _id).
```
```
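A minimal sketch of that polling logic, using the Realm Web SDK with anonymous authentication, might look something like the code below. The app ID, service, database, and collection names are taken from elsewhere in this article but are still assumptions, and `showFrame()` is a placeholder for whatever decodes the stored image bytes into an `<img>` element; treat this as illustrative rather than the exact code running on the page.
```
// Assumes the Realm Web SDK has been loaded in the page, exposing the global `Realm`
const app = new Realm.App({ id: "xmastree-lpeci" });
let lastId = null;

async function pollLatestFrame() {
  const user = app.currentUser || (await app.logIn(Realm.Credentials.anonymous()));
  const frames = user.mongoClient("mongodb-atlas").db("espcam").collection("frames");
  // Only fetch a document if its _id is newer than the one we already have
  const query = lastId ? { _id: { $gt: lastId } } : {};
  const frame = await frames.findOne(query, { sort: { _id: -1 } });
  if (frame) {
    showFrame(frame.img); // placeholder: decode the binary image and update the page
    lastId = frame._id;
  }
}

setInterval(pollLatestFrame, 5000); // check for a new frame every few seconds
```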
You can see this in action at https://xmastree-lpeci.mongodbstitch.com/. Use *view-source* or the developer console in Chrome to see the code, or look at it in GitHub here.
## Conclusion
I don't have a deep or dramatic conclusion for this as I did this mostly for fun. I personally learned a lot about connecting the smallest computers to the modern cloud. Some things we can take for granted on our desktops and laptops due to an abundance of RAM and CPU still need thought and consideration. The Atlas Data API, though, worked exactly the same way as it does on these larger platforms, which is awesome. Next time, I'll use Micropython or even UIFlow Block coding and see if it's even easier.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "I built a Christmas tree with an API so you, dear reader, can control the lights as well as a webcam to view it. All built using ESP32 Microcontrollers, Neopixels and the MongoDB Atlas Data API.",
"contentType": "Article"
} | Christmas Lights and Webcams with the MongoDB Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/real-time-location-updates-stitch-change-streams-mapbox | created | # Real-Time Location Updates with MongoDB Stitch, Change Streams, and Mapbox
>
>
>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.
When it comes to modern web applications, interactions often need to be done in real-time. This means that instead of periodically checking in for changes, watching or listening for changes often makes more sense.
Take the example of tracking something on a map. When it comes to package shipments, device tracking, or anything else where you need to know the real-time location, watching for those changes in location is great. Imagine needing to know where your fleet is so that you can dispatch drivers to a nearby incident.
When it comes to MongoDB, watching for changes can be done through change streams. These change streams can be used in any of the drivers, including front-end applications with MongoDB Stitch.
In this tutorial, we're going to leverage MongoDB Stitch change streams. When the location data in our NoSQL documents change, we're going to update the information on an interactive map powered by Mapbox.
Take the following animated image for example:
Rather than building an Internet of Things (IoT) device to track and submit GPS data, we're going to simulate the experience by directly changing our documents in MongoDB. When the update operations are complete, the front-end application with the interactive map is watching for those changes and responding appropriately.
## The Requirements
To be successful with this example, you'll need to have a few things ready to go prior:
- A MongoDB Atlas cluster
- A MongoDB Stitch application
- A Mapbox account
For this example, the data will exist in MongoDB Atlas. Since we're planning on interacting with our data using a front-end application, we'll be using MongoDB Stitch. A Stitch application should be created within the MongoDB Cloud and connected to the MongoDB Atlas cluster prior to exploring this tutorial.
>Get started with MongoDB Atlas and Stitch for FREE in the MongoDB Cloud.
Mapbox will be used as our interactive map. Since Mapbox is a service, you'll need to have created an account and have access to your access token.
In the animated image, I'm using the MongoDB Visual Studio Code plugin for interacting with the documents in my collection. You can do the same or use another tool such as Compass, the CLI, or the data explorer within Atlas to get the job done.
## Understanding the Document Model for the Location Tracking Example
Because we're only planning on moving a marker around on a map, the data model that we use doesn't need to be extravagant. For this example, the following is more than acceptable:
``` json
{
"_id": "5ec44f70fa59d66ba0dd93ae",
"coordinates": [
-121.4252,
37.7397
],
"username": "nraboy"
}
```
In the above example, the coordinates array has the first item representing the longitude and the second item representing the latitude. We're including a username to show that we are going to watch for changes based on a particular document field. In a polished application, all users probably wouldn't be watching for changes for all documents. Instead they'd probably be watching for changes of documents that belong to them.
While we could put authorization rules in place for users to access certain documents, it is out of the scope of this example. Instead, we're going to mock it.
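Simulating a device on the move is then just a matter of changing those coordinates. An update along the lines of the following, issued from the VS Code plugin, Compass, or the shell, is enough to fire the change stream we'll watch for shortly (the collection name here is an assumption):
``` javascript
// Hypothetical collection name; any tool that can run an update works here
db.locations.updateOne(
    { username: "nraboy" },
    { $set: { coordinates: [-121.4300, 37.7420] } }
);
```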
## Building a Real-Time Location Tracking Application with Mapbox and the Stitch SDK
Now we're going to build our client-facing application which consists of Mapbox, some basic HTML and JavaScript, and MongoDB Stitch.
Let's start by adding the following boilerplate code:
``` xml
```
The above code sets us up by including the Mapbox and MongoDB Stitch SDKs. When it comes to querying MongoDB and interacting with the map, we're going to be doing that from within the ` | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to use change streams with MongoDB Stitch to update location on a Mapbox map in real-time.",
"contentType": "Article"
} | Real-Time Location Updates with MongoDB Stitch, Change Streams, and Mapbox | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/ops-manager/enterprise-operator-kubernetes-openshift | created | # Introducing the MongoDB Enterprise Operator for Kubernetes and OpenShift
Today more DevOps teams are leveraging the power of containerization,
and technologies like Kubernetes and Red Hat OpenShift, to manage
containerized database clusters. To support teams building cloud-native
apps with Kubernetes and OpenShift, we are introducing a Kubernetes
Operator (beta) that integrates with Ops Manager, the enterprise
management platform for MongoDB. The operator enables a user to deploy
and manage MongoDB clusters from the Kubernetes API, without having to
manually configure them in Ops Manager.
With this Kubernetes integration, you can consistently and effortlessly
run and deploy workloads wherever they need to be, standing up the same
database configuration in different environments, all controlled with a
simple, declarative configuration. Operations teams can also offer
developers new services like MongoDB-as-a-Service, providing them with
a fully managed database alongside other products and services, all
managed by Kubernetes and OpenShift.
In this blog, we'll cover the following:
- Brief discussion on the container revolution
- Overview of MongoDB Ops Manager
- How to Install and configure the MongoDB Enterprise Operator for
Kubernetes
- Troubleshooting
- Where to go for more information
## The containerization movement
If you ever visited an international shipping port or drove down an
interstate highway you may have seen large rectangular metal containers
generally referred to as intermodal containers. These containers are
designed and built using the same specifications even though the
contents of these boxes can vary greatly. The consistent design not only
enables these containers to freely move from ship, to rail, and to
truck, they also allow this movement without unloading and reloading the
cargo contents.
This same concept of a container can be applied to software applications
where the application is the contents of the container along with its
supporting frameworks and libraries. The container can be freely moved
from one platform to another all without disturbing the application.
This capability makes it easy to move an application from an on-premise
datacenter server to a public cloud provider, or to quickly stand up
replica environments for development, test, and production usage.
MongoDB 4.0 introduces the MongoDB Enterprise Operator for Kubernetes
which enables a user to deploy and manage MongoDB clusters from the
Kubernetes API, without the user having to connect directly to Ops
Manager or Cloud Manager
(the hosted version of Ops Manager, delivered as a
service).
While MongoDB is fully supported in a containerized environment, you
need to make sure that the benefits you get from containerizing the
database exceed the cost of managing the configuration. As with any
production database workload, these containers should use persistent
storage and will require additional configuration depending on the
underlying container technology used. To help facilitate the management
of the containers themselves, DevOps teams are leveraging the power of
orchestration technologies like Kubernetes and Red Hat OpenShift. While
these technologies are great at container management, they are not aware
of application specific configurations and deployment topologies such as
MongoDB replica sets and sharded clusters. For this reason, Kubernetes
has Custom Resources and Operators which allow third-parties to extend
the Kubernetes API and enable application aware deployments.
Later in this blog you will learn how to install and get started with
the MongoDB Enterprise Operator for Kubernetes. First let's cover
MongoDB Ops Manager, which is a key piece in efficient MongoDB cluster
management.
## Managing MongoDB
Ops Manager is an
enterprise class management platform for MongoDB clusters that you run
on your own infrastructure. The capabilities of Ops Manager include
monitoring, alerting, disaster recovery, scaling, deploying and
upgrading of replica sets and sharded clusters, and other MongoDB
products, such as the BI Connector. While a thorough discussion of Ops
Manager is out of scope of this blog it is important to understand the
basic components that make up Ops Manager as they will be used by the
Kubernetes Operator to create your deployments.
A simplified Ops Manager architecture is shown in Figure 2 below. Note
that there are other agents that Ops Manager uses to support features
like backup but these are outside the scope of this blog and not shown.
For complete information on MongoDB Ops Manager architecture, see the
MongoDB Ops Manager online documentation.
The MongoDB HTTP Service provides a web application for administration.
These pages are simply a front end to a robust set of Ops Manager REST
APIs that are hosted in the Ops Manager HTTP Service. It is through
these REST
APIs that
the Kubernetes Operator will interact with Ops Manager.
## MongoDB Automation Agent
With a typical Ops Manager deployment there are many management options
including upgrading the cluster to a different version, adding
secondaries to an existing replica set and converting an existing
replica set into a sharded cluster. So how does Ops Manager go about
upgrading each node of a cluster or spinning up new MongoD instances? It
does this by relying on a locally installed service called the Ops
Manager Automation Agent which runs on every single MongoDB node in the
cluster. This lightweight service is available on multiple operating
systems so regardless if your MongoDB nodes are running in a Linux
Container or Windows Server virtual machine or your on-prem PowerPC
Server, there is an Automation Agent available for that platform. The
Automation Agents receive instructions from Ops Manager REST APIs to
perform work on the cluster node.
## MongoDB Monitoring Agent
When Ops Manager shows statistics such as database size and inserts per
second it is receiving this telemetry from the individual nodes running
MongoDB. Ops Manager relies on the Monitoring Agent to connect to your
MongoDB processes, collect data about the state of your deployment, then
send that data to Ops Manager. There can be one or more Monitoring
Agents deployed in your infrastructure for reliability but only one
primary agent per Ops Manager Project is collecting data. Ops Manager is
all about automation and as soon as you have the automation agent
deployed, other supporting agents like the Monitoring agent are deployed
for you. In the scenario where the Kubernetes Operator has issued a
command to deploy a new MongoDB cluster in a new project, Ops Manager
will take care of deploying the monitoring agent into the containers
running your new MongoDB cluster.
## Getting started with MongoDB Enterprise Operator for Kubernetes
Ops Manager is an integral part of automating a MongoDB cluster with
Kubernetes. To get started you will need access to an Ops Manager 4.0+
environment or MongoDB Cloud Manager.
The MongoDB Enterprise Operator for Kubernetes is compatible with
Kubernetes v1.9 and above. It also has been tested with Openshift
version 3.9. You will need access to a Kubernetes environment. If you do
not have access to a Kubernetes environment, or just want to stand up a
test environment, you can use minikube which deploys a local single node
Kubernetes cluster on your machine. For additional information and setup
instructions check out the following URL:
https://kubernetes.io/docs/setup/minikube.
The following sections will cover the three step installation and
configuration of the MongoDB Enterprise Operator for Kubernetes. The
order of installation will be as follows:
- Step 1: Installing the MongoDB Enterprise Operator via a helm or
yaml file
- Step 2: Creating and applying a Kubernetes ConfigMap file
- Step 3: Create the Kubernetes secret object which will store the Ops
Manager API Key
## Step 1: Installing MongoDB Enterprise Operator for Kubernetes
To install the MongoDB Enterprise Operator for Kubernetes you can use
helm, the Kubernetes package manager, or pass a yaml file to kubectl.
The instructions for both of these methods is as follows, pick one and
continue to step 2.
To install the operator via Helm:
To install with Helm you will first need to clone the public repo
Change directories into the local copy and run the following command on
the command line:
``` shell
helm install helm_chart/ --name mongodb-enterprise
```
To install the operator via a yaml file:
Run the following command from the command line:
``` shell
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/mongodb-enterprise.yaml
```
At this point the MongoDB Enterprise Operator for Kubernetes is
installed and now needs to be configured. First, we must create and
apply a Kubernetes ConfigMap file. A Kubernetes ConfigMap file holds
key-value pairs of configuration data that can be consumed in pods or
used to store configuration data. In this use case the ConfigMap file
will store configuration information about the Ops Manager deployment we
want to use.
## Step 2: Creating the Kubernetes ConfigMap file
For the Kubernetes Operator to know what Ops Manager you want to use you
will need to obtain some properties from the Ops Manager console and
create a ConfigMap file. These properties are as follows:
- **Base Url**: The URL of your Ops Manager or Cloud Manager.
- **Project Id**: The id of an Ops Manager Project which the
Kubernetes Operator will deploy into.
- **User**: An existing Ops Manager username.
- **Public API Key**: Used by the Kubernetes Operator to connect to
the Ops Manager REST API endpoint.
If you already know how to obtain these values, copy them down and
proceed to Step 3.
**Base Url**
The Base Url is the URL of your Ops Manager or Cloud Manager.
>
>
>Note: If you are using Cloud Manager, the Base Url is https://cloud.mongodb.com.
>
>
>
To obtain the Base Url in Ops Manager, copy the Url used to connect to
your Ops Manager server from your browser's navigation bar. You can
also perform the following:
Login to Ops Manager and click on the Admin button. Next select the "Ops
Manager Config" menu item. You will be presented with a screen similar
to the figure below:
Copy down the value displayed in the URL To Access Ops Manager box.
Note: If you don't have access to the Admin drop down you will have to
copy the Url used to connect to your Ops Manager server from your
browser's navigation bar.
**Project Id**
The Project Id is the id of an Ops Manager Project which the Kubernetes
Operator will deploy into.
An Ops Manager Project is a logical organization of MongoDB clusters and
also provides a security boundary. One or more
Projects
are a part of an Ops Manager Organization. If you need to create an
Organization click on your user name at the upper right side of the
screen and select, "Organizations". Next click on the "+ New
Organization" button and provide a name for your Organization. Once you
have an Organization you can create a Project.
To create a new Project, click on your Organization name. This will
bring you to the Projects page and from here click on the "+ New
Project" button and provide a unique name for your Project. If you are
not an Ops Manager administrator you may not have this option and will
have to ask your administrator to create a Project.
Once the Project is created or if you already have a Project created on
your behalf by an administrator you can obtain the Project Id by
clicking on the Settings menu option as shown in the Figure below.
Copy the Project ID.
**User**
The User is an existing Ops Manager username.
To see the list of Ops Manager users return to the Project and click on
the "Users & Teams" menu. You can use any Ops Manager user who has at
least Project Owner access. If you'd like to create another username,
click on the "Add Users & Team" button as shown in Figure 6.
Copy down the email of the user you would like the Kubernetes Operator
to use when connecting to Ops Manager.
**Public API Key**
The Ops Manager API Key is used by the Kubernetes Operator to connect to
the Ops Manager REST API endpoint. You can create an API Key by clicking
on your username on the upper right hand corner of the Ops Manager
console and selecting, "Account" from the drop down menu. This will open
the Account Settings page as shown in Figure 7.
Click on the "Public API Access" tab. To create a new API key click on
the "Generate" button and provide a description. Upon completion you
will receive an API key as shown in Figure 8.
Be sure to copy the API Key as it will be used later as a value in a
configuration file. **It is important to copy this value while the
dialog is up since you can not read it back once you close the dialog**.
If you missed writing the value down you will need to delete the API Key
and create a new one.
*Note: If you are using MongoDB Cloud Manager or have Ops Manager
deployed in a secured network you may need to allow the IP range of your
Kubernetes cluster so that the Operator can make requests to Ops Manager
using this API Key.*
Now that we have acquired the necessary Ops Manager configuration
information we need to create a Kubernetes ConfigMap file for the
Kubernetes Project. To do this use a text editor of your choice and
create the following yaml file, substituting the bold placeholders for
the values you obtained in the Ops Manager console. For sample purposes
we can call this file "my-project.yaml".
``` yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap-name>
  namespace: mongodb
data:
  projectId: <project-id>
  baseUrl: <ops-manager-url>
```
Figure 9: Sample ConfigMap file
Note: The format of the ConfigMap file may change over time as features
and capabilities get added to the Operator. Be sure to check with the
MongoDB documentation if you are having problems submitting the
ConfigMap file.
Once you create this file you can apply the ConfigMap to Kubernetes
using the following command:
``` shell
kubectl apply -f my-project.yaml
```
## Step 3: Creating the Kubernetes Secret
For a user to be able to create or update objects in an Ops Manager
Project they need a Public API Key. Earlier in this section we created a
new API Key and you hopefully wrote it down. This API Key will be held
by Kubernetes as a Secret object. You can create this Secret with the
following command:
``` shell
kubectl -n mongodb create secret generic <credentials-name> --from-literal="user=<ops-manager-username>" --from-literal="publicApiKey=<public-api-key>"
```
Make sure you replace the User and Public API key values with those you
obtained from your Ops Manager console. You can pick any name for the
credentials - just make a note of it as you will need it later when you
start creating MongoDB clusters.
Now we're ready to start deploying MongoDB Clusters!
## Deploying a MongoDB Replica Set
Kubernetes can deploy a MongoDB standalone, replica set or a sharded
cluster. To deploy a 3 node replica set create the following yaml file:
``` yaml
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: <my-replica-set>
  namespace: mongodb
spec:
  members: 3
  version: 3.6.5
  persistent: false
  project: <configmap-name>
  credentials: <credentials-name>
```
Figure 10: simple-rs.yaml file describing a three node replica set
The name of your new cluster can be any name you choose. The names of
the Ops Manager Project ConfigMap and the credentials secret were
defined previously.
To submit the request for Kubernetes to create this cluster simply pass
the name of the yaml file you created to the following kubectl command:
``` shell
kubectl apply -f simple-rs.yaml
```
After a few minutes your new cluster will show up in Ops Manager as
shown in Figure 11.
Notice that Ops Manager installed not only the Automation Agents on
these three containers running MongoDB, it also installed Monitoring
Agent and Backup Agents.
## A word on persistent storage
What good would a database be if anytime the container died your data
went to the grave as well? Probably not a good situation and maybe one
where tuning up the resumé might be a good thing to do as well. Up until
recently, the lack of persistent storage and consistent DNS mappings
were major issues with running databases within containers. Fortunately,
recent work in the Kubernetes ecosystem has addressed this concern and
new features like `PersistentVolumes` and `StatefulSets` have emerged
allowing you to deploy databases like MongoDB without worrying about
losing data because of hardware failure or the container moved elsewhere
in your datacenter. Additional configuration of the storage is required
on the Kubernetes cluster before you can deploy a MongoDB Cluster that
uses persistent storage. In Kubernetes there are two types of persistent
volumes: static and dynamic. The Kubernetes Operator can provision
MongoDB objects (i.e. standalone, replica set and sharded clusters)
using either type.
## Connecting your application
Connecting to MongoDB deployments in Kubernetes is no different than
other deployment topologies. However, it is likely that you'll need to
address the network specifics of your Kubernetes configuration. To
abstract the deployment specific information such as hostnames and ports
of your MongoDB deployment, the MongoDB Enterprise Operator for
Kubernetes uses Kubernetes Services.
### Services
Each MongoDB deployment type will have two Kubernetes services generated
automatically during provisioning. For example, suppose we have a single
3 node replica set called "my-replica-set", then you can enumerate the
services using the following statement:
``` shell
kubectl get all -n mongodb --selector=app=my-replica-set-svc
```
This statement yields the following results:
``` shell
NAME READY STATUS RESTARTS AGE
pod/my-replica-set-0 1/1 Running 0 29m
pod/my-replica-set-1 1/1 Running 0 29m
pod/my-replica-set-2 1/1 Running 0 29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-replica-set-svc ClusterIP None <none> 27017/TCP 29m
service/my-replica-set-svc-external NodePort 10.103.220.236 <none> 27017:30057/TCP 29m
NAME DESIRED CURRENT AGE
statefulset.apps/my-replica-set 3 3 29m
```
**Note the appended string "-svc" to the name of the replica set.**
The service with "-external" is a NodePort - which means it's exposed to
the overall cluster DNS name on port 30057.
Note: If you are using Minikube you can obtain the IP address of the
running replica set by issuing the following:
``` shell
minikube service list
```
In our example which used minikube the result set contained the
following information: mongodb my-replica-set-svc-external
Now that we know the IP of our MongoDB cluster we can connect using the
Mongo Shell or whatever application or tool you would like to use.
## Basic Troubleshooting
If you are having problems submitting a deployment you should read the
logs. Issues like authentication issues and other common problems can be
easily detected in the log files. You can view the MongoDB Enterprise
Operator for Kubernetes log files via the following command:
``` shell
kubectl logs -f deployment/mongodb-enterprise-operator -n mongodb
```
You can also use kubectl to see the logs of the database pods. The main
container process continually tails the Automation Agent logs, which
can be seen with the following statement:
``` shell
kubectl logs <pod-name> -n mongodb
```
Note: You can enumerate the list of pods using
``` shell
kubectl get pods -n mongodb
```
Another common troubleshooting technique is to shell into one of the
containers running MongoDB. Here you can use common Linux tools to view
the processes, troubleshoot, or even check mongo shell connections
(sometimes helpful in diagnosing network issues).
``` shell
kubectl exec -it <pod-name> -n mongodb -- /bin/bash
```
Once inside the container, running `ps -ef` shows output like the following:
``` shell
UID PID PPID C STIME TTY TIME CMD
mongodb 1 0 0 16:23 ? 00:00:00 /bin/sh -c supervisord -c /mongo
mongodb 6 1 0 16:23 ? 00:00:01 /usr/bin/python /usr/bin/supervi
mongodb 9 6 0 16:23 ? 00:00:00 bash /mongodb-automation/files/a
mongodb 25 9 0 16:23 ? 00:00:00 tail -n 1000 -F /var/log/mongodb
mongodb 26 1 4 16:23 ? 00:04:17 /mongodb-automation/files/mongod
mongodb 45 1 0 16:23 ? 00:00:01 /var/lib/mongodb-mms-automation/
mongodb 56 1 0 16:23 ? 00:00:44 /var/lib/mongodb-mms-automation/
mongodb 76 1 1 16:23 ? 00:01:23 /var/lib/mongodb-mms-automation/
mongodb 8435 0 0 18:07 pts/0 00:00:00 /bin/bash
```
From inside the container we can make a connection to the local MongoDB
node easily by running the mongo shell via the following command:
``` shell
/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-3.6.5/bin/mongo --port 27017
```
Note: The MongoDB version in this path may be different than 3.6.5, so
be sure to check the directory path on your system.
## Where to go for more information
More information will be available on the MongoDB documentation
website in the near future. Until
then check out these resources for more information:
GitHub:
To see all MongoDB operations best practices, download our whitepaper:
| md | {
"tags": [
"Ops Manager",
"Kubernetes"
],
"pageDescription": "Introducing a Kubernetes Operator (beta) that integrates with Ops Manager, the enterprise management platform for MongoDB.",
"contentType": "News & Announcements"
} | Introducing the MongoDB Enterprise Operator for Kubernetes and OpenShift | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/polymorphic-pattern | created | # Building with Patterns: The Polymorphic Pattern
## Introduction
One frequently asked question when it comes to MongoDB is "How do I
structure my schema in MongoDB for my application?" The honest answer
is, it depends. Does your application do more reads than writes? What
data needs to be together when read from the database? What performance
considerations are there? How large are the documents? How large will
they get? How do you anticipate your data will grow and scale?
All of these questions, and more, factor into how one designs a database
schema in MongoDB. It has been said that MongoDB is schemaless. In fact,
schema design is very important in MongoDB. The hard fact is that most
performance issues we've found trace back to poor schema design.
Over the course of this series, Building with Patterns, we'll take a
look at twelve common Schema Design Patterns that work well in MongoDB.
We hope this series will establish a common methodology and vocabulary
you can use when designing schemas. Leveraging these patterns allows for
the use of "building blocks" in schema planning, resulting in more
methodology being used than art.
MongoDB uses a document data
model. This
model is inherently flexible, allowing for data models to support your
application needs. The flexibility also can lead to schemas being more
complex than they should be. When thinking of schema design, we should be
thinking of performance, scalability, and simplicity.
Let's start our exploration into schema design with a look at what can
be thought of as the base for all patterns, the *Polymorphic Pattern*. This
pattern is utilized when we have documents that have more similarities
than differences. It's also a good fit for when we want to keep
documents in a single collection.
## The Polymorphic Pattern
When all documents in a collection are of similar, but not identical,
structure, we call this the Polymorphic Pattern. As mentioned, the
Polymorphic Pattern is useful when we want to access (query) information
from a single collection. Grouping documents together based on the
queries we want to run (instead of separating the object across tables
or collections) helps improve performance.
Imagine that our application tracks professional sports athletes across
all different sports.
We still want to be able to access all of the athletes in our
application, but the attributes of each athlete are very different. This
is where the Polymorphic Pattern shines. In the example below, we store
data for athletes from two different sports in the same collection. The
data stored about each athlete does not need to be the same even though
the documents are in the same collection.
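For example, a bowling athlete and a tennis athlete might sit side by
side in the same collection like this (the field names and values here
are purely illustrative):
``` javascript
{
    "sport": "bowling",
    "athlete_name": "Earl Anthony",
    "career_titles": 43,
    "hand": "left"
},
{
    "sport": "tennis",
    "athlete_name": "Martina Navratilova",
    "career_singles_titles": 167,
    "grand_slam_singles_titles": 18,
    "plays": "left-handed"
}
```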
Professional athlete records have some similarities, but also some
differences. With the Polymorphic Pattern, we are easily able to
accommodate these differences. If we were not using the Polymorphic
Pattern, we might have a collection for Bowling Athletes and a
collection for Tennis Athletes. When we wanted to query on all athletes,
we would need to do a time-consuming and potentially complex join.
Instead, since we are using the Polymorphic Pattern, all of our data is
stored in one Athletes collection and querying for all athletes can be
accomplished with a simple query.
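With everything in one collection, a single query covers every sport,
and filtering on a shared field (such as the sport name used in the
illustrative documents above) narrows things down when needed:
``` javascript
// All athletes, regardless of sport
db.athletes.find({});

// Only the tennis players
db.athletes.find({ sport: "tennis" });
```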
This design pattern can flow into embedded sub-documents as well. In the
above example, Martina Navratilova didn't just compete as a single
player, so we might want to structure her record as follows:
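Her document might then look something like the following, with an
embedded array of sub-documents, one per event (again, the exact
fields and figures are illustrative):
``` javascript
{
    "sport": "tennis",
    "athlete_name": "Martina Navratilova",
    "plays": "left-handed",
    "events": [
        {
            "event": "singles",
            "career_titles": 167,
            "grand_slam_titles": 18
        },
        {
            "event": "doubles",
            "career_titles": 177,
            "grand_slam_titles": 31
        }
    ]
}
```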
From an application development standpoint, when using the Polymorphic
Pattern we're going to look at specific fields in the document or
sub-document to be able to track differences. We'd know, for example,
that a tennis player athlete might be involved with different events,
while a different sports player may not be. This will, typically,
require different code paths in the application code based on the
information in a given document. Or, perhaps, different classes or
subclasses are written to handle the differences between tennis,
bowling, soccer, and rugby players.
## Sample Use Case
One example use case of the Polymorphic Pattern is Single View
applications. Imagine
working for a company that, over the course of time, acquires other
companies with their technology and data patterns. For example, each
company has many databases, each modeling "insurances with their
customers" in a different way. Then you buy those companies and want to
integrate all of those systems into one. Merging these different systems
into a unified SQL schema is costly and time-consuming.
MetLife was able to leverage MongoDB and the
Polymorphic Pattern to build their single view application in a few
months. Their Single View application aggregates data from multiple
sources into a central repository allowing customer service, insurance
agents, billing, and other departments to get a 360° picture of a
customer. This has allowed them to provide better customer service at a
reduced cost to the company. Further, using MongoDB's flexible data
model and the Polymorphic Pattern, the development team was able to
innovate quickly to bring their product online.
A Single View application is one use case of the Polymorphic Pattern. It
also works well for things like product catalogs where a bicycle has
different attributes than a fishing rod. Our athlete example could
easily be expanded into a more full-fledged content management system
and utilize the Polymorphic Pattern there.
## Conclusion
The Polymorphic Pattern is used when documents have more similarities
than they have differences. Typical use cases for this type of schema
design would be:
- Single View applications
- Content management
- Mobile applications
- A product catalog
The Polymorphic Pattern provides an easy-to-implement design that allows
for querying across a single collection and is a starting point for many
of the design patterns we'll be exploring in upcoming posts. The next
pattern we'll discuss is the
Attribute Pattern.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.",
"contentType": "Article"
} | Building with Patterns: The Polymorphic Pattern | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/triggers-tricks-auto-increment-fields | created | # Triggers Treats and Tricks - Auto-Increment a Running ID Field
In this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.
Essentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.
- **Database triggers:** We have triggers that can be scheduled based on database events—like `deletes`, `inserts`, `updates`, and `replaces`—called database triggers.
- **Scheduled triggers:** We can schedule a trigger based on a `cron` expression via scheduled triggers.
- **Authentication triggers:** These triggers are only relevant for Realm authentication. They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.
For this blog post, I would like to showcase an auto-increment of a running ID in a collection, similar to the use of a sequence in a relational database. A sequence in relational databases like Oracle or SQL Server lets you maintain a running ID for your table rows.
If we translate this into a `students` collection example, we would like to get the `studentsId` field auto-incremented.
``` javascript
{
studentsId : 1,
studentName : "Mark Olsen",
age : 15,
phone : "+1234546789",
},
{
studentsId : 2,
studentName : "Peter Parker",
age : 17,
phone : "+1234546788",
}
```
I wanted to share an interesting solution based on triggers, and throughout this article, we will use a students collection example with `studentsId` field to explain the discussed approach.
## Prerequisites
First, verify that you have an Atlas project with owner privileges to create triggers.
- MongoDB Atlas account, Atlas cluster
- A MongoDB Realm application or access to MongoDB Atlas triggers.
> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
>
## The Idea Behind the Main Mechanism
Three main components allow our auto-increment mechanism to function.
## 1. Define a Source Collection
We should pick the collection that we need the auto-increment to work upon (`students`) and we can define a unique index on that field. This is not a must but it makes sense:
``` javascript
db.students.createIndex({studentsId : 1}, {unique : true});
```
## 2. Define a Generic Function to Auto-Increment the ID
In order for us to reuse the auto-increment code for more than one collection, I've decided to build a generic function and later associate it with the relevant triggers. Let's call the function `autoIncrement`. This function will receive an "insert" event from the source collection and increment a helper `counters` collection document that stores the current counter per collection. It uses `findOneAndUpdate` to return an automatically incremented value per the relevant source namespace, using the \_id as the namespace identifier. Once retrieved, the source collection document is updated with a generic field named after the collection plus an `Id` suffix (in this example, `studentsId`).
```javascript
exports = async function(changeEvent) {
// Source document _id
const docId = changeEvent.fullDocument._id;
// Get counter and source collection instances
const counterCollection = context.services.get("<atlas-service-name>").db(changeEvent.ns.db).collection("counters");
const targetCollection = context.services.get("<atlas-service-name>").db(changeEvent.ns.db).collection(changeEvent.ns.coll);
// atomically increment and retrieve a sequence relevant to the current namespace (db.collection)
const counter = await counterCollection.findOneAndUpdate({_id: changeEvent.ns },{ $inc: { seq_value: 1 }}, { returnNewDocument: true, upsert : true});
// Set a generic field Id
const doc = {};
doc[`${changeEvent.ns.coll}Id`] = counter.seq_value;
const updateRes = await targetCollection.updateOne({_id: docId},{ $set: doc});
console.log(`Updated ${JSON.stringify(changeEvent.ns)} with counter ${counter.seq_value} result: ${JSON.stringify(updateRes)}`);
};
```
>Important: Replace `<atlas-service-name>` with the name of your linked cluster service. The default value is "mongodb-atlas" if you have only one cluster linked to your Realm application.
Note that when we query and increment the counter, we expect to get the new version of the document `returnNewDocument: true` and `upsert: true` in case this is the first document.
The `counter` collection document after the first run on our student collection will look like this:
``` javascript
{
_id: {
db: "app",
coll: "students"
},
seq_value: 1
}
```
## 3. Building the Trigger on Insert Operation and Associating it with Our Generic Function
Now let's define our trigger based on our Atlas cluster service and our database and source collection, in my case, `app.students`.
Please make sure "Event Ordering" is toggled to "ON" and that the "insert" operation is selected.
Now let's associate it with our pre-built function: `autoIncrement`.
Once we insert a document into the collection, it will automatically be updated with a running unique number in `studentsId`.
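For example, a plain insert that leaves the running ID out entirely ends up with a populated `studentsId` a moment later (the student details here are made up):
``` javascript
// Insert a student without any running ID...
db.students.insertOne({ studentName: "Jane Doe", age: 16, phone: "+1234546787" });

// ...and shortly afterwards the trigger has set it:
// { _id: ObjectId("..."), studentName: "Jane Doe", age: 16, phone: "+1234546787", studentsId: 3 }
```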
## Wrap Up
With the presented technique, we can leverage triggers to auto-increment and populate id fields. This may open your mind to other ideas to design your next flows on MongoDB Realm.
In the following article in this series, we will use triggers to auto-translate our documents and benefit from Atlas Search's multilingual abilities.
> If you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas"
],
"pageDescription": "In this article, we will explore a trick that lets us auto-increment a running ID using a trigger.",
"contentType": "Article"
} | Triggers Treats and Tricks - Auto-Increment a Running ID Field | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/use-function-accumulator-operators | created | # How to Use Custom Aggregation Expressions in MongoDB 4.4
The upcoming release of MongoDB 4.4 makes it easier than ever to work with, transform, access, and make sense of your data. This release, the beta of which you can try right now, comes with a couple of new operators that make it possible to write custom functions to extend the MongoDB Query Language. This feature, called Custom Aggregation Expressions, allows you to write JavaScript functions that execute as part of an aggregation pipeline stage. These come in handy when you need to implement behavior that is not supported by the MongoDB Query Language by default.
The MongoDB Query Language has many operators, or functions, that allow you to manipulate and transform your data to fit your application's use case. Operators such as $avg , $concat , and $filter make it easy for developers to query, manipulate, and transform their dataset directly at the database level versus having to write additional code and transforming the data elsewhere. While there are operators for almost anything you can think of, there are a few edge cases where a provided operator or series of operators won't be sufficient, and that's where custom aggregation expressions come in.
In this blog post, we'll learn how we can extend the MongoDB Query Language to suit our needs by writing our own custom aggregation expressions using the new $function and $accumulator operators. Let's dive in!
## Prerequisites
For this tutorial you'll need:
- MongoDB 4.4.
- MongoDB Compass.
- Familiarity with MongoDB Aggregation Framework.
## Custom Aggregation Expressions
MongoDB 4.4 comes with two new operators: $function and $accumulator . These two operators allow us to write custom JavaScript functions that can be used in a MongoDB aggregation pipeline. We are going to look at examples of how to use both by implementing our own custom aggregation expressions.
To get the most value out of this blog post, I will assume that you are already familiar with the MongoDB aggregation framework. If not, I suggest checking out the docs and following a tutorial or two and becoming familiar with how this feature works before diving into this more advanced topic.
Before we get into the code, I want to briefly talk about why you would care about this feature in the first place. The first reason is delivering higher performance to your users. If you can get the exact data you need directly out of the database in one trip, without having to do additional processing and manipulating, you will be able to serve and fulfill requests quicker. Second, custom aggregation expressions allow you to take care of edge cases directly in your aggregation pipeline stage. If you've worked with the aggregation pipeline in the past, you'll feel right at home and be productive in no time. If you're new to the aggregation pipeline, you'll only have to learn it once. By the time you find yourself with a use case for the `$function` or `$accumulator` operators, all of your previous knowledge will transfer over. I think those are two solid reasons to care about custom aggregation expressions: better performance for your users and increased developer productivity.
The one caveat to the liberal use of the `$function` and `$accumulator` operators is performance. Executing JavaScript inside of an aggregation expression is resource intensive and may reduce performance. You should always opt to use existing, highly optimized operators first, especially if they can get the job done for your use case. Only consider using `$function` and `$accumulator` if an existing operator cannot fulfill your application's needs.
## $function Operator
The first operator we'll take a look at is called `$function`. As the name implies, this operator allows you to implement a custom JavaScript function to implement any sort of behavior. The syntax for this operator is:
```
{
$function: {
body: <code>,
args: <array expression>,
lang: "js"
}
}
```
The `$function` operator has three properties. The `body` , which is going to be our JavaScript function, an `args` array containing the arguments we want to pass into our function, and a `lang` property specifying the language of our `$function`, which as of MongoDB 4.4 only supports JavaScript.
The `body` property holds our JavaScript function as either a type of BSON Code or String. In our examples in this blog post, we'll write our code as a String. Our JavaScript function will have a signature that looks like this:
```
function(arg){
return arg
}
```
From a cursory glance, it looks like a standard JavaScript function. You can pass in `n` number of arguments, and the function returns a result. The arguments within the `body` property will be mapped to the arguments provided in the `args` array property, so you'll need to make sure you pass in and capture all of the provided arguments.
### Implementing the $function Operator
Now that we know the properties of the `$function` operator, let's use it in an aggregation pipeline. To get started, let's choose a data set to work from. We'll use one of the provided MongoDB sample datasets that you can find on MongoDB Atlas. If you don't already have a cluster set up, you can do so by creating a free MongoDB Atlas account. Loading the sample datasets is as simple as clicking the "..." button on your cluster and selecting the "Load Sample Dataset" option.
Once you have the sample dataset loaded, let's go ahead and connect to our MongoDB cluster. Whenever learning something new, I prefer to use a visual approach, so for this tutorial, I'll rely on MongoDB Compass. If you already have MongoDB Compass installed, connect to your cluster that has the sample dataset loaded, otherwise download the latest version here, and then connect.
Whether you are using MongoDB Compass or connecting via the mongo shell, you can find your MongoDB Atlas connection string by clicking the "Connect" button on your cluster, choosing the type of app you'll be using to connect with, and copying the string, which will look like this: `mongodb+srv://mongodb:<password>@cluster0-tdm0q.mongodb.net/test`.
Once you are connected, the dataset that we will work with is called `sample_mflix` and the collection `movies`. Go ahead and connect to that collection and then navigate to the "Aggregations" tab. To ensure that everything works fine, let's write a very simple aggregation pipeline using the new `$function` operator. From the dropdown, select the `$addFields` operator and add the following code as its implementation:
```
{
fromFunction: {$function: {body: "function(){return 'hello'}", args: [], lang: 'js'}}
}
```
If you are using the mongo shell to execute these queries the code will look like this:
```
db.movies.aggregate([
{
$addFields: {
fromFunction: {
$function: {
body: "function(){return 'hello'}",
args: [],
lang: 'js'
}
}
}
}
])
```
If you look at the output in MongoDB Compass and scroll to the bottom of each returned document, you'll see that each document now has a field called `fromFunction` with the text `hello` as its value. We could have simply passed the string "hello" instead of using the `$function` operator, but the reason I wanted to do this was to ensure that your version of MongoDB Compass supports the `$function` operator and this is a minimal way to test it.
*Basic example of the $function operator*
Next, let's implement a custom function that actually does some work. Let's add a new field to every movie that holds Ado's review score (or perhaps your own?).
I'll name my field `adoScore`. Now, my rating system is unique. Depending on the day and my mood, I may like a movie more or less, so we'll start figuring out Ado's score of a movie by randomly assigning it a value between 0 and 5. So we'll have a base that looks like this: `let base = Math.floor(Math.random() * 6);`.
Next, if critics like the movie, then I do too, so let's say that if a movie has an IMDB score of over 8, we'll give it +1 to Ado's score. Otherwise, we'll leave it as is. For this, we'll pass in the `imdb.rating` field into our function.
Finally, movies that have won awards also get a boost in Ado's scoring system. So for every award nomination a movie receives, the total Ado score will increase by 0.25, and for every award won, the score will increase by 0.5. To calculate this, we'll have to provide the `awards` field into our function as well.
Since nothing is perfect, we'll add a custom rule to our function: if the total score exceeds 10, we'll just output the final score to be 9.9. Let's see what this entire function looks like:
```
{
adoScore: {$function: {
body: "function(imdb, awards){let base = Math.floor(Math.random() * 6) \n let imdbBonus = 0 \n if(imdb > 8){ imdbBonus = 1} \n let nominations = (awards.nominations * 0.25) \n let wins = (awards.wins * 0.5) \n let final = base + imdbBonus + nominations + wins \n if(final > 10){final = 9.9} \n return final}",
args: "$imdb.rating", "$awards"],
lang: 'js'}}
}
```
To make the JavaScript function easier to read, here it is in non-string form:
```
function(imdb, awards){
let base = Math.floor(Math.random() * 6)
let imdbBonus = 0
if(imdb > 8){ imdbBonus = 1}
let nominations = awards.nominations * 0.25
let wins = awards.wins * 0.5
let final = base + imdbBonus + nominations + wins
if(final > 10){final = 9.9}
return final
}
```
And again, if you are using the mongo shell, the code will look like:
```
db.movies.aggregate([
{
$addFields: {
adoScore: {
$function: {
body: "function(imdb, awards){let base = Math.floor(Math.random() * 6) \n let imdbBonus = 0 \n if(imdb > 8){ imdbBonus = 1} \n let nominations = (awards.nominations * 0.25) \n let wins = (awards.wins * 0.5) \n let final = base + imdbBonus + nominations + wins \n if(final > 10){final = 9.9} \n return final}",
args: ["$imdb.rating", "$awards"],
lang: 'js'
}
}
}
}
])
```
Running the above `$addFields` aggregation , which uses the `$function` operator, will produce a result that adds a new `adoScore` field to the end of each document. This field will contain a numeric value ranging from 0 to 9.9. In the `$function` operator, we passed our custom JavaScript function into the `body` property. As we iterated through our documents, the `$imdb.rating` and `$awards` fields from each document were passed into our custom function.
Using dot notation, we've seen how to specify any sub-document you may want to use in an aggregation. We also learned how to use an entire field and it's subfields in an aggregation, as we've seen with the `$awards` parameter in our earlier example. Our final result looks like this:
*Ado review score computed with the $function operator*
This is just scratching the surface of what we can do with the `$function` operator. In our above example, we paired it with the `$addFields` operator, but we can also use `$function` as an alternative to the `$where` operator, or with other operators as well. Check out the `$function` docs for more information.
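As a quick illustration of that last point, a `$function` can sit inside `$expr` in a `$match` stage much the way a `$where` clause would; the filter below, which is purely illustrative, keeps only movies with long titles:
```
db.movies.aggregate([
  {
    $match: {
      $expr: {
        $function: {
          body: "function(title){ return title && title.length > 40 }",
          args: ["$title"],
          lang: "js"
        }
      }
    }
  }
])
```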
## $accumulator Operator
The next operator that we'll look at, which also allows us to write custom JavaScript functions, is called the $accumulator operator and is a bit more complex. This operator allows us to define a custom accumulator function with JavaScript. Accumulators are operators that maintain their state as documents progress through the pipeline. Much of the same rules apply to the `$accumulator` operator as they do to `$function`. We'll start by taking a look at the syntax for the `$accumulator` operator:
```
{
$accumulator: {
    init: <code>,
    initArgs: <array expression>, // Optional
    accumulate: <code>,
    accumulateArgs: <array expression>,
    merge: <code>,
    finalize: <code>, // Optional
    lang: <string>
}
}
```
We have a couple of additional fields to discuss. Rather than just one `body` field that holds a JavaScript function, the `$accumulator` operator gives us four additional places to write JavaScript:
- The `init` field that initializes the state of the accumulator.
- The `accumulate` field that accumulates documents coming through the pipeline.
- The `merge` field that is used to merge multiple states.
- The `finalize` field that is used to update the result of the accumulation.
For arguments, we have two places to provide them: the `initArgs` that get passed into our `init` function, and the `accumulateArgs` that get passed into our `accumulate` function. The process for defining and passing the arguments is the same here as it is for the `$function` operator. It's important to note that for the `accumulate` function the first argument is the `state` rather than the first item in the `accumulateArgs` array.
Finally, we have to specify the `lang` field. As before, it will be `js` as that's the only supported language as of the MongoDB 4.4 release.
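As a hedged sketch of how the optional pieces fit together (the values and field choice here are purely illustrative and are not used in the example that follows), an accumulator that uses `initArgs` and `finalize` could look like this:
```
{
  $accumulator: {
    init: "function(startingTotal){ return { total: startingTotal, count: 0 } }",
    initArgs: [0],
    accumulate: "function(state, rating){ return { total: state.total + rating, count: state.count + 1 } }",
    accumulateArgs: ["$imdb.rating"],
    merge: "function(s1, s2){ return { total: s1.total + s2.total, count: s1.count + s2.count } }",
    // finalize turns the accumulated state into the final output value
    finalize: "function(state){ return state.count > 0 ? state.total / state.count : null }",
    lang: 'js'
  }
}
```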
### Implementing the $accumulator Operator
To see a concrete example of the `$accumulator` operator in action, we'll continue to use our `sample_mflix` dataset. We'll also build on top of the `adoScore` we added with the `$function` operator. We'll pair our `$accumulator` with a `$group` operator and return the number of movies released each year from our dataset, as well as how many movies are deemed watchable by Ado's scoring system (meaning they have a score greater than 8). Our `$accumulator` function will look like this:
```
{
_id: "$year",
consensus: {
$accumulator: {
init: "function(){return {total:0, worthWatching: 0}}",
accumulate: "function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}",
      accumulateArgs: ["$adoScore"],
merge: "function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}",
}
}
}
```
And just to display the JavaScript functions in non-string form for readability:
```
// Init
function(){
return { total:0, worthWatching: 0 }
}
// Accumulate
function(state, adoScore){
let worthIt = 0;
if(adoScore > 8){ worthIt = 1};
return {
total: state.total + 1,
worthWatching: state.worthWatching + worthIt }
}
// Merge
function(state1, state2){
return {
total: state1.total + state2.total,
worthWatching: state1.worthWatching + state2.worthWatching
}
}
```
If you are running the above aggregation using the mongo shell, the query will look like this:
```
db.movies.aggregate([
{
$group: {
_id: "$year",
consensus: {
$accumulator: {
init: "function(){return {total:0, worthWatching: 0}}",
accumulate: "function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}",
accumulateArgs:["$adoScore"],
merge: "function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}",
}
}
}
}
])
```
The result of running this query on the `sample_mflix` database will look like this:
*Image: $accumulator function*
Note: Since the `adoScore` function does rely on `Math.random()` for part of its calculation, you may get varying results each time you run the aggregation.
Just like the `$function` operator, writing a custom accumulator and using the `$accumulator` operator should only be done when existing operators cannot fulfill your application's use case. Similarly, we are also just scratching the surface of what is achievable by writing your own accumulator. Check out the docs for more.
Before we close out this blog post, let's take a look at what our completed aggregation pipeline will look like combining both our `$function` and `$accumulator` operators. If you are using the `sample_mflix` dataset, you should be able to run both examples with the following aggregation pipeline code:
```
db.movies.aggregate([
{
'$addFields': {
'adoScore': {
'$function': {
'body': 'function(imdb, awards){let base = Math.floor(Math.random() * 6) \n let imdbBonus = 0 \n if(imdb > 8){ imdbBonus = 1} \n let nominations = (awards.nominations * 0.25) \n let wins = (awards.wins * 0.5) \n let final = base + imdbBonus + nominations + wins \n if(final > 10){final = 9.9} \n return final}',
'args': [
'$imdb.rating', '$awards'
],
'lang': 'js'
}
}
}
}, {
'$group': {
'_id': '$year',
'consensus': {
'$accumulator': {
'init': 'function(){return {total:0, worthWatching: 0}}',
'accumulate': 'function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}',
'accumulateArgs': [
'$adoScore'
],
'merge': 'function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}'
}
}
}
}
])
```
## Conclusion
The new `$function` and `$accumulator` operators released in MongoDB 4.4 improve developer productivity and allow MongoDB to handle many more edge cases out of the box. Just remember that these new operators, while powerful, should only be used if existing operators cannot get the job done as they may degrade performance!
Whether you are trying to use new functionality with these operators, fine-tuning your MongoDB cluster to get better performance, or are just trying to get more done with less, MongoDB 4.4 is sure to provide a few new and useful things for you. You can try all of these features out today by deploying a MongoDB 4.4 beta cluster on MongoDB Atlas for free.
If you have any questions about these new operators or this blog post, head over to the MongoDB Community forums and I'll see you there.
Happy experimenting!
>
>
>**Safe Harbor Statement**
>
>The development, release, and timing of any features or functionality
>described for MongoDB products remains at MongoDB's sole discretion.
>This information is merely intended to outline our general product
>direction and it should not be relied on in making a purchasing decision
>nor is this a commitment, promise or legal obligation to deliver any
>material, code, or functionality. Except as required by law, we
>undertake no obligation to update any forward-looking statements to
>reflect events or circumstances after the date of such statements.
>
>
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Learn how to use custom aggregation expressions in your MongoDB aggregation pipeline operations.",
"contentType": "Tutorial"
} | How to Use Custom Aggregation Expressions in MongoDB 4.4 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/learn-mongodb-university-online-free-mooc | created | # Learn MongoDB with MongoDB University Free Courses
## Introduction
Your cheap boss doesn't want to pay for this awesome MongoDB Training
you found online? In-person trainings are quite challenging given the
current situation we are all facing with COVID-19. Also, is this
training even up-to-date with the most recent MongoDB release?
Who is better than MongoDB to teach you MongoDB? MongoDB
University offers free courses for
beginners and more advanced MongoDB users. Our education team is
dedicated to keeping these courses up-to-date and building new content around
our latest features.
In this blog post, we will have a look at these
courses and what you
can learn from each of them.
## Course Format
MongoDB courses are online and free. You can do them at your pace and at
the most convenient time for you.
Each course contains a certain number of chapters, and each chapter
contains a few items.
An item usually contains a five- to 10-minute video in English, focused
on one specific topic you are learning, and a little quiz or a
guided lab, which are here to make sure that you understood the core
concepts of that particular item.
## Learning Paths
MongoDB University offers two
different learning paths to suit you best. There are also some more
courses that are not in the learning paths, but you are completely free
to create your own learning path and follow whichever courses you want
to. These two are just general guidelines if you don't know where to
start.
- If you are a developer, you will probably want to start with the
developer
path.
- If you are a DBA, you will more likely prefer the DBA
path.
### Developer Path
The developer path contains six recommended trainings, which I will
describe more in detail in the next section.
- M001: MongoDB Basics.
- M103: Basic Cluster Administration.
- M121: Aggregation Framework.
- M220: MongoDB for Developers.
- M201: MongoDB Performance.
- M320: MongoDB Data Modeling.
### DBA Path
The DBA path contains five recommended trainings, which I will also
describe in detail in the next section.
- M001: MongoDB Basics.
- M103: Basic Cluster Administration.
- M201: MongoDB Performance.
- M310: MongoDB Security.
- M312: Diagnostics and Debugging.
## MongoDB University Courses
Let's see all the courses available in more details.
### M001 - MongoDB Basics
Level: Introductory
In this six-chapter course, you will get your hands on all the basics,
including querying, computing, connecting to, storing, indexing, and
analyzing your data.
Learn more and
register.
### M100 - MongoDB for SQL Pros
Level: Introductory
In this four-chapter course, you will build a solid understanding of how
MongoDB differs from relational databases. You will learn how to model
in terms of documents and how to use MongoDB's drivers to easily access
the database.
Learn more and
register.
### M103 - Basic Cluster Administration
Level: Introductory
In this four-chapter course, you'll build standalone nodes, replica
sets, and sharded clusters from scratch. These will serve as platforms
to learn how administration varies depending on the makeup of a cluster.
Learn more and
register.
### M121 - The MongoDB Aggregation Framework
Level: Introductory
In this seven-chapter course, you'll build an understanding of how to
use MongoDB Aggregation Framework pipeline, document transformation, and
data analysis. We will look into the internals of the Aggregation
Framework alongside optimization and pipeline building practices.
Learn more and
register.
### A300 - Atlas Security
Level: Intermediate
In this one-chapter course, you'll build a solid understanding of Atlas
security features such as:
- Threat Modeling and Security Concepts
- Data Flow
- Network Access Control
- Authentication and Authorization
- Encryption
- Logging
- Compliance
- Configuring VPC Peering
- VPC Peering Lab
Learn more and
register.
### M201 - MongoDB Performance
Level: Intermediate
In this five-chapter course, you'll build a good understanding of how to
analyze the different trade-offs of commonly encountered performance
scenarios.
Learn more and
register.
### M220J - MongoDB for Java Developers
Level: Intermediate
In this five-chapter course, you'll build the back-end for a
movie-browsing application called MFlix.
Using the MongoDB Java Driver, you will implement MFlix's basic
functionality. This includes basic and complex movie searches,
registering new users, and posting comments on the site.
You will also add more features to the MFlix application. This includes
writing analytical reports, increasing the durability of MFlix's
connection with MongoDB, and implementing security best practices.
Learn more and
register.
### M220JS - MongoDB for JavaScript Developers
Level: Intermediate
Same as the one above but with JavaScript and Node.js.
Learn more and
register.
### M220N - MongoDB for .NET Developers
Level: Intermediate
Same as the one above but with C# and .NET.
Learn more and
register.
### M220P - MongoDB for Python Developers
Level: Intermediate
Same as the one above but with Python.
Learn more and
register.
### M310 - MongoDB Security
Level: Advanced
In this three-chapter course, you'll build an understanding of how to
deploy a secure MongoDB cluster, configure the role-based authorization
model to your needs, set up encryption, do proper auditing, and follow
security best practices.
Learn more and
register.
### M312 - Diagnostics and Debugging
Level: Advanced
In this five-chapter course, you'll build a good understanding of the
tools you can use to diagnose the most common issues that arise in
production deployments, and how to fix those problems when they arise.
Learn more and
register.
### M320 - Data Modeling
Level: Advanced
In this five-chapter course, you'll build a solid understanding of
frequent patterns to apply when modeling and will be able to apply those
in your designs.
Learn more and
register.
## Get MongoDB Certified
If you have built enough experience with MongoDB, you can get
certified and be
officially recognised as a MongoDB expert.
Two certifications are available:
- C100DEV:
MongoDB Certified Developer Associate Exam.
- C100DBA:
MongoDB Certified DBA Associate Exam.
Once certified, you will appear in the list of MongoDB Certified
Professionals which can be found in the MongoDB Certified Professional
Finder.
## Wrap-Up
MongoDB University is the best place
to learn MongoDB. There is content available for beginners and more
advanced users.
MongoDB official certifications are definitely a great addition to your
LinkedIn profile too once you have built enough experience with MongoDB.
>
>
>If you have questions, please head to our developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Presentation of MongoDB's free courses in the MongoDB University online.",
"contentType": "News & Announcements"
} | Learn MongoDB with MongoDB University Free Courses | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-sample-datasets | created | # The MongoDB Atlas Sample Datasets
Did you know that MongoDB Atlas provides a complete set of example data to help you learn faster? The Load Sample Data feature enables you to load eight datasets into your database to explore. You can use this with the MongoDB Atlas M0 free tier to try out MongoDB Atlas and MongoDB's features. The sample data helps you try out features such as indexing, querying including geospatial, and aggregations, as well as using MongoDB Tooling such as MongoDB Charts and MongoDB Compass.
In the rest of this post, we'll explore why it was created, how to first load the sample data, and then we'll outline what the datasets contain. We'll also cover how you can download these datasets to use them on your own local machine.
## Table of Contents
- Why Did We Create This Sample Data Set?
- Loading the Sample Data Set into Your Atlas Cluster
- A Deeper Dive into the Atlas Sample Data
- Sample AirBnB Listings Dataset
- Sample Analytics Dataset
- Sample Geospatial Dataset
- Sample Mflix Dataset
- Sample Restaurants Dataset
- Sample Supply Store Dataset
- Sample Training Dataset
- Sample Weather Dataset
- Downloading the Dataset for Use on Your Local Machine
- Wrap Up
## Why Did We Create This Sample Data Set?
Before diving into how we load the sample data, it's worth highlighting why we built the feature in the first place. We built this feature because often people would create a new empty Atlas cluster and they'd then have to wait until they wrote their application or imported data into it before they were able to learn and explore the platform. Atlas's Sample Data was the solution. It removes this roadblock and quickly allows you to get a feel for how MongoDB works with different types of data.
## Loading the Sample Data Set into Your Atlas Cluster
Loading the Sample Data requires an existing Atlas cluster and three steps.
- In your left navigation pane in Atlas, click Clusters, then choose which cluster you want to load the data into.
- For that cluster, click the Ellipsis (...) button.
- Then, click the button "Load Sample Dataset."
- Click the correspondingly named button, "Load Sample Dataset."
This process will take a few minutes to complete, so let's look at exactly what kind of data we're going to load. Once the process is completed, you should see a banner on your Atlas Cluster similar to this image below.
## A Deeper Dive into the Atlas Sample Data
The Atlas Sample Datasets are comprised of eight databases and their associated collections. Each individual dataset is documented to illustrate the schema, the collections, the indexes, and a sample document from each collection.
### Sample AirBnB Listings Dataset
This dataset consists of a single collection of AirBnB reviews and listings. There are indexes on the `property type`, `room type`, `bed`, `name`, and on the `location` fields as well as on the `_id` of the documents.
The data is a randomized subset of the original publicly available AirBnB dataset. It covers several different cities around the world. This dataset is used extensively in MongoDB University courses.
You can find more details on the Sample AirBnB Documentation page.
### Sample Analytics Dataset
This dataset consists of three collections of randomly generated financial services data. There are no additional indexes beyond the `_id` index on each collection. The collections represent accounts, transactions, and customers.
The transactions collection uses the Bucket Pattern to hold a set of transactions for a period. It was built for MongoDB's private training, specifically for the MongoDB for Data Analysis course.
The advantages in using this pattern are a reduction in index size when compared to storing each transaction in a single document. It can potentially simplify queries and it provides the ability to use pre-aggregated data in our documents.
``` json
// transaction collection document example
{
"account_id": 794875,
"transaction_count": 6,
"bucket_start_date": {"$date": 693792000000},
"bucket_end_date": {"$date": 1473120000000},
"transactions":
{
"date": {"$date": 1325030400000},
"amount": 1197,
"transaction_code": "buy",
"symbol": "nvda",
"price": "12.7330024299341033611199236474931240081787109375",
"total": "15241.40390863112172326054861"
},
{
"date": {"$date": 1465776000000},
"amount": 8797,
"transaction_code": "buy",
"symbol": "nvda",
"price": "46.53873172406391489630550495348870754241943359375",
"total": "409401.2229765902593427995271"
},
{
"date": {"$date": 1472601600000},
"amount": 6146,
"transaction_code": "sell",
"symbol": "ebay",
"price": "32.11600884852845894101847079582512378692626953125",
"total": "197384.9903830559086514995215"
},
{
"date": {"$date": 1101081600000},
"amount": 253,
"transaction_code": "buy",
"symbol": "amzn",
"price": "37.77441226157566944721111212857067584991455078125",
"total": "9556.926302178644370144411369"
},
{
"date": {"$date": 1022112000000},
"amount": 4521,
"transaction_code": "buy",
"symbol": "nvda",
"price": "10.763069758141103449133879621513187885284423828125",
"total": "48659.83837655592869353426977"
},
{
"date": {"$date": 936144000000},
"amount": 955,
"transaction_code": "buy",
"symbol": "csco",
"price": "27.992136535152877030441231909207999706268310546875",
"total": "26732.49039107099756407137647"
}
]
}
```
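As a hedged sketch of how this bucketed data can be queried (illustrative only, using the field names from the document above), the embedded `transactions` array can be unwound and re-aggregated in a single pipeline:
``` javascript
// Count transactions and sum traded share amounts per account by
// unwinding the pre-aggregated buckets. Illustrative example only.
db.transactions.aggregate([
  { $unwind: "$transactions" },
  { $group: {
      _id: "$account_id",
      totalTransactions: { $sum: 1 },
      totalShares: { $sum: "$transactions.amount" }
  } },
  { $sort: { totalTransactions: -1 } },
  { $limit: 5 }
])
```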
You can find more details on the Sample Analytics Documentation page.
### Sample Geospatial Dataset
This dataset consists of a single collection with information on shipwrecks. It has an additional index on the `coordinates` field (GeoJSON). This index is a Geospatial 2dsphere index. This dataset was created to help explore the possibility of geospatial queries within MongoDB.
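As a hedged sketch of the kind of query this index enables (the point and distance are arbitrary illustration values), you could look for shipwrecks near a given coordinate like this:
``` javascript
// Find shipwrecks within roughly 10 km of an arbitrary point,
// sorted from nearest to furthest using the 2dsphere index.
db.shipwrecks.find({
  coordinates: {
    $nearSphere: {
      $geometry: { type: "Point", coordinates: [ -74.0, 40.5 ] },
      $maxDistance: 10000
    }
  }
})
```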
The image below was created in MongoDB Charts and shows all of the shipwrecks on the eastern seaboard of North America.
You can find more details on the Sample Geospatial Documentation page.
### Sample Mflix Dataset
This dataset consists of five collections with information on movies, movie theatres, movie metadata, and user movie reviews and their ratings for specific movies. The data is a subset of the IMDB dataset. There are three additional indexes beyond `_id`: on the sessions collection on the `user_id` field, on the theatres collection on the `location.geo` field, and on the users collection on the `email` field. You can see this dataset used in this MongoDB Charts tutorial.
The Atlas Search Movies site uses this data and MongoDB's Atlas Search to provide a searchable movie catalog.
This dataset is the basis of our Atlas Search tutorial.
You can find more details on the Sample Mflix Documentation page.
### Sample Restaurants Dataset
This dataset consists of two collections with information on restaurants and neighbourhoods in New York. There are no additional indexes. This dataset is the basis of our Geospatial tutorial. The restaurant document only contains the location and the name for a given restaurant.
``` json
// restaurants collection document example
{
location: {
type: "Point",
    coordinates: [-73.856077, 40.848447]
},
name: "Morris Park Bake Shop"
}
```
In order to use the collections for geographical searching, we need to add an index, specifically a 2dsphere index. We can add this index and then search for all restaurants in a one-kilometer radius of a given location, with the results being sorted by those closest to those furthest away. The code below creates the index, then adds a helper variable to represent 1km, which our query then uses with the $nearSphere criteria to return the list of restaurants within 1km of that location.
``` javascript
db.restaurants.createIndex({ location: "2dsphere" })
var ONE_KILOMETER = 1000
db.restaurants.find({ location: { $nearSphere: { $geometry: { type: "Point", coordinates: [ -73.93414657, 40.82302903 ] }, $maxDistance: ONE_KILOMETER } } })
```
You can find more details on the Sample Restaurants Documentation page.
### Sample Supply Store Dataset
This dataset consists of a single collection with information on mock sales data for a hypothetical office supplies company. There are no additional indexes. This is the second dataset used in the MongoDB Chart tutorials.
The sales collection uses the Extended Reference pattern to hold both the items sold and their details as well as information on the customer who purchased these items. This pattern includes frequently accessed fields in the main document to improve performance at the cost of additional data duplication.
``` json
// sales collection document example
{
"_id": {
"$oid": "5bd761dcae323e45a93ccfe8"
},
"saleDate": {
"$date": { "$numberLong": "1427144809506" }
},
"items":
{
"name": "notepad",
"tags": [ "office", "writing", "school" ],
"price": { "$numberDecimal": "35.29" },
"quantity": { "$numberInt": "2" }
},
{
"name": "pens",
"tags": [ "writing", "office", "school", "stationary" ],
"price": { "$numberDecimal": "56.12" },
"quantity": { "$numberInt": "5" }
},
{
"name": "envelopes",
"tags": [ "stationary", "office", "general" ],
"price": { "$numberDecimal": "19.95" },
"quantity": { "$numberInt": "8" }
},
{
"name": "binder",
"tags": [ "school", "general", "organization" ],
"price": { "$numberDecimal": "14.16" },
"quantity": { "$numberInt": "3" }
}
],
"storeLocation": "Denver",
"customer": {
"gender": "M",
"age": { "$numberInt": "42" },
"email": "[email protected]",
"satisfaction": { "$numberInt": "4" }
},
"couponUsed": true,
"purchaseMethod": "Online"
}
```
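Because the line items and customer details are embedded, questions such as "what is the revenue per store?" can be answered in a single aggregation with no joins. Here is a hedged sketch using the field names from the document above:
``` javascript
// Multiply each line item's price by its quantity, then sum per store.
// Illustrative example only.
db.sales.aggregate([
  { $unwind: "$items" },
  { $group: {
      _id: "$storeLocation",
      revenue: { $sum: { $multiply: [ "$items.price", "$items.quantity" ] } }
  } },
  { $sort: { revenue: -1 } }
])
```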
You can find more details on the Sample Supply Store Documentation page.
### Sample Training Dataset
This dataset consists of nine collections with no additional indexes. It represents a selection of realistic data and is used in the MongoDB private training courses.
It includes a number of public, well-known data sources such as the OpenFlights, NYC's OpenData, and NYC's Citibike Data.
The routes collection uses the Extended Reference pattern to hold OpenFlights data on airline routes between airports. It references airline information in the `airline` sub document, which has details about the specific plane on the route. This is another example of improving performance at the cost of minor data duplication for fields that are likely to be frequently accessed.
``` json
// routes collection document example
{
"_id": {
"$oid": "56e9b39b732b6122f877fa5c"
},
"airline": {
"alias": "2G",
"iata": "CRG",
"id": 1654,
"name": "Cargoitalia"
},
"airplane": "A81",
"codeshare": "",
"dst_airport": "OVB",
"src_airport": "BTK",
"stops": 0
}
```
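Because the airline details are embedded in each route, we can group by airline without a `$lookup` back to a separate airlines collection. A hedged sketch, using the field names from the document above:
``` javascript
// Top five airlines by number of routes. Illustrative example only.
db.routes.aggregate([
  { $group: { _id: "$airline.name", totalRoutes: { $sum: 1 } } },
  { $sort: { totalRoutes: -1 } },
  { $limit: 5 }
])
```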
You can find more details on the Sample Training Documentation page.
### Sample Weather Dataset
This dataset consists of a single collection with no additional indexes. It represents detailed weather reports from locations across the world. It holds geospatial data on the locations in the form of legacy coordinate pairs.
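Legacy coordinate pairs are indexed with a `2d` index rather than a `2dsphere` index. The sketch below is only illustrative — the collection and field names (`data` and `position.coordinates`) are assumptions, so check the dataset documentation for the actual names before running it:
``` javascript
// Create a 2d index on the legacy coordinate pairs, then find a few
// reports within a radius (in degrees) of an arbitrary point.
db.data.createIndex({ "position.coordinates": "2d" })
db.data.find({
  "position.coordinates": { $geoWithin: { $center: [ [ -47.9, 47.6 ], 5 ] } }
}).limit(5)
```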
You can find more details on the Sample Weather Documentation page.
If you have ideas or suggestions for new datasets, we are always interested. Let us know on the developer community website.
### Downloading the Dataset for Use on Your Local Machine
It is also possible to download and explore these datasets on your own local machine. You can download the complete sample dataset via the wget command:
``` shell
wget https://atlas-education.s3.amazonaws.com/sampledata.archive
```
Note: You can also use the curl command:
``` shell
curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive
```
Check that you have a local `mongod` instance running, or start a new one at this point. This `mongod` will be used in conjunction with `mongorestore` to unpack and host a local copy of the sample dataset. You can find more details on starting `mongod` instances on this documentation page.
This section assumes that you're connecting to a relatively straightforward setup, with a default authentication database and some authentication set up. (You should *always* create some users for authentication!)
If you don't provide any connection details to `mongorestore`, it will attempt to connect to MongoDB on your local machine, on port 27017 (which is MongoDB's default). This is the same as providing `--host localhost:27017`.
``` bash
mongorestore --archive=sampledata.archive
```
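If your local `mongod` does have authentication enabled (as recommended above), you can pass credentials to `mongorestore`. For example, assuming a user named `admin` in the `admin` authentication database (you'll be prompted for the password):
``` bash
mongorestore --host localhost:27017 \
  --username admin \
  --authenticationDatabase admin \
  --archive=sampledata.archive
```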
You can use a variety of tools to view your documents. You can use MongoDB Compass, the CLI, or the MongoDB Visual Studio Code (VSCode) plugin to interact with the documents in your collections. You can find out how to use MongoDB Playground for VSCode and integrate MongoDB into a Visual Studio Code environment.
If you find the sample data useful for building or helpful, let us know on the community forums!
## Wrap Up
These datasets offer a wide selection of data that you can use to both explore MongoDB's features and prototype your next project without having to worry about where you'll find the data.
Check out the documentation on Load Sample Data to learn more on these datasets and load it into your Atlas Cluster today to start exploring it!
To learn more about schema patterns and MongoDB, please check out our blog series Building with Patterns and the free MongoDB University Course M320: Data Modeling to level up your schema design skills.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Explaining the MongoDB Atlas Sample Data and diving into its various datasets",
"contentType": "Article"
} | The MongoDB Atlas Sample Datasets | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/using-expo-realm-expo-dev-client | created | # Using Expo and Realm React Native with expo-dev-client
In our last post on how to build an offline-first React Native mobile app with Expo and Realm React Native, we talked about a limitation of using Realm React Native and Expo where we stated that Realm React Native is not compatible with Expo-managed workflows. Well, wait no more, because now Expo works with Realm React Native and we have a nice custom development client that will have roughly the same functionality as Expo Go.
## Creating a React Native app using Expo and Realm React Native in one simple step
Yes, it sounds like clickbait, but it's true. If you want to build a full application that uses TypeScript, just type in your terminal:
```bash
npx expo-cli init ReactRealmTSTemplateApp -t @realm/expo-template-ts
```
If you'd rather do JavaScript, just type:
```bash
npx expo-cli init ReactRealmJSTemplateApp -t @realm/expo-template-js
```
After either of these two, change to the directory containing the project that has just been created and start the iOS or Android app:
```bash
cd ReactRealmJSTemplateApp
yarn android
```
Or
```bash
cd ReactRealmJSTemplateApp
yarn ios
```
This will create a prebuilt Expo app. That is, you'll see `ios` and `android` folders in your project and this won't be a managed Expo app, where all the native details are hidden and Expo takes care of everything. Having said that, you don't need to go into the `ios` or `android` folders unless you need to add some native code in Swift or Kotlin.
Once launched, the app will ask to open in `ReactRealmJSTemplateApp`, not in Expo Go. This means we're running this nice, custom, dev client that will bring us most of the Expo Go experience while also working with Realm React Native.
We can install our app and use it using `yarn ios/android`. If we want to start the dev-client to develop, we can also use `yarn start`.
## Adding our own code
This template is a quick way to start with Realm React Native, so it includes all code you'll need to write your own Realm React Native application:
* It adds the versions of Expo (^44.0.6), React Native (0.64.3), and Realm (^10.13.0) that work together.
* It also adds `expo-dev-client` and `@realm/react` packages, to make the custom development client part work.
* Finally, in `app`, you'll find sample code to create your own model object, initialize a connection with Atlas Device Sync, save and fetch data, etc.
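As a rough idea of what the sample model code in `app` looks like, here is a minimal, hedged sketch of a Realm model definition — the `Link` name and its fields are assumptions for illustration, not the template's exact code:
```javascript
// Hypothetical model, similar in shape to what the template ships with.
const LinkSchema = {
  name: "Link",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    name: "string",
    url: "string",
  },
};

export default LinkSchema;
```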
But I want to reuse the Read it Later - Maybe app I wrote for the last post on Expo and Realm React Native. Well, I just need to delete all JavaScript files inside `app`, copy over all my code from that App, and that's all. Now my old app's code will work with this custom dev client!
## Putting our new custom development client to work
Showing the debug menu is explained in the React Native debug documentation, but you just need to:
> Use the ⌘D keyboard shortcut when your app is running in the iOS Simulator, or ⌘M when running in an Android emulator on macOS, and Ctrl+M on Windows and Linux.
| Android Debug Menu | iOS Debug Menu |
|--------------|-----------|
| | |
As this is an Expo app, we can also show the Expo menu by just pressing `m` from terminal while our app is running.
## Now do Hermes and react-native-reanimated
The Realm React Native SDK has a `hermes` branch that is indeed compatible with Hermes. So, it'll work with `react-native-reanimated` v2 but not with Expo, due to the React Native version the Expo SDK is pinned to.
So, right now, you have to choose:
* Have Expo + Realm working out of the box.
* Or start your app using Realm React Native + Hermes (not using Expo).
Both the Expo team and the Realm JavaScript SDK teams are working hard to make everything work together, and we'll update you with a new post in the future on using React Native Reanimated + Expo + Hermes + Realm (when all required dependencies are in place).
## Recap
In this post, we've shown how simple it is now to create a React Native application that uses Expo + Realm React Native. This still won't work with Hermes, but watch this space as Realm is already compatible with it!
## One more thing
Our community has also started to leverage our new capabilities here. Watch this video from Aaron Saunders explaining how to use Realm React Native and Expo to build a React Native app.
And, as always, you can hang out in our Community Forums and ask questions (and get answers) about your React Native development with Expo, Realm React Native and MongoDB.
| md | {
"tags": [
"Realm",
"JavaScript",
"TypeScript",
"React Native"
],
"pageDescription": "Now we can write our React Native Expo Apps using Realm, React Native and use a custom-dev-client to get most of the functionality of Expo Go, in just one simple step.",
"contentType": "Tutorial"
} | Using Expo and Realm React Native with expo-dev-client | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/use-atlas-on-heroku | created | # How to Deploy MongoDB on Heroku
## Can I deploy MongoDB on Heroku?
Yes! It's easy to set up and free to use with MongoDB Atlas.
As we begin building more cloud-native applications, choosing the right services and tools can be quite overwhelming. Luckily, when it comes to choosing a cloud database service, MongoDB Atlas may be the easiest choice yet!
When paired with Heroku, one of the most popular PaaS solutions for developers, you'll be able to build and deploy fully managed cloud applications in no time. The best part? MongoDB Atlas integrates easily with Heroku applications. All you need to do is set your Atlas cluster's connection string to a Heroku config variable. That's really all there is to it!
If you're already familiar with MongoDB, using MongoDB Atlas with your cloud applications is a natural choice. MongoDB Atlas is a fully-managed cloud database service for MongoDB that automates the management of MongoDB clusters in the cloud. Offering features such as automated backup, auto-scaling, multi-AZ fault tolerance, and a full suite of management and analytics tools, Atlas is the most sophisticated DBaaS anywhere, and is just a few clicks away.
To see how quick it is to get up and running with MongoDB Atlas, just follow the next few steps to set up your first free cluster. Then, see how quickly you can connect your new Atlas cluster to your Heroku application by following the step-by-step instructions later on in this tutorial.
## Prerequisites
This tutorial assumes the following:
- You are familiar with MongoDB and have written applications that use MongoDB.
- You are familiar with Heroku and know how to deploy Heroku apps.
- You have the Heroku CLI installed.
- You are familiar with and have Git installed.
With these assumptions in mind, let's get started!
## Setting up your Atlas Cluster in 5 steps (or less!)
### Step 1: Create an Atlas account
>💡 If you already created a MongoDB account using your email address, you can skip this step! Sign into your account instead.
You can register for an Atlas account with your email address or your Google Account.
### Step 2: Create your organization and project
After registering, Atlas will prompt you to create an organization and project where you can deploy your cluster.
### Step 3: Deploy Your first cluster
You'll now be able to select from a range of cluster options. For this tutorial, we'll select the Shared Clusters option, which is Atlas's Free Tier cluster. Click "Create a cluster" under the Shared Clusters option:
On the next page, you'll be prompted to choose a few options for your cluster:
*Cloud provider & region*
Choose where you want to deploy your cluster to. It is important to select the available region closest to your application, and ideally the same region, in order to minimize latency. In our case, let's choose the N. Virginia (us-east-1) region, with AWS as our cloud provider (since we're deploying on Heroku, and that is where Heroku hosts its infrastructure):
*Cluster tier*
Here, you'll see the cluster tiers available for the shared clusters option. You can view a comparison of RAM, Storage, vCPU, and Base Price between the tiers to help you choose the right tier. For our tutorial, leave the default M0 Sandbox tier selected:
*Additional settings*
Depending on the tier you choose, some additional options may be available for you. This includes the MongoDB version you wish to deploy and, for M2 clusters and up, Backup options. For this tutorial, select the latest version, MongoDB 4.4:
*Cluster name*
Lastly, you can give your cluster a name. Keep in mind that once your cluster is created, you won't be able to change it! Here, we'll name our cluster `leaflix-east` to help us know which project and region this cluster will be supporting:
That's it! Be sure to review your options one last time before clicking the "Create Cluster" button.
### Step 4: Create a database user for your cluster
Atlas requires clients to authenticate as MongoDB database users to access clusters, so let's create one real quick for your cluster.
As you can see in the GIF above, creating a database user is straightforward. First navigate to the "Database Access" section (located under "Security" in the left-hand navigation bar). Click on "Create a new Database User". A prompt will appear where you can choose this user's authentication method and database user privileges.
Select the "Password" authentication method and give this user a username and password. As a convenience, you can even autogenerate a secure password right in Atlas, which we highly recommend.
>💡 After autogenerating your password, be sure to click Copy and store it in a safe place for now. We'll need it later when connecting to our cluster!
Choose a built-in role for this user. For this tutorial, I'm choosing "Atlas admin" which grants the most privileges.
Finally, click the "Add User" button. You've created your cluster's first database user!
### Step 5: Grant authorized IP addresses access to your cluster
The last step in setting up your cluster is to choose which IP addresses are allowed to access it. To quickly get up and running, set your cluster to allow access from anywhere:
**Congratulations! You've just successfully set up your Atlas cluster!**
>💡 Note: You probably don't want to allow this type of access in a production environment. Instead, you'll want to identify the exact IP addresses you know your application will be hosted on and explicitly set which IP addresses, or IP ranges, should have access to your cluster. After setting up your Heroku app, follow the steps in the "Configuring Heroku IP Addresses in Atlas" section below to see how to add the proper IP addresses for your Heroku app.
## Configuring Heroku to point to MongoDB Atlas Cluster using config vars
Quickly setting up our Atlas cluster was pretty exciting, but we think you'll find this section even more thrilling!
Atlas-backed, Heroku applications are simple to set up. All you need to do is create an application-level config var that holds your cluster's connection string. Once set up, you can securely access that config var within your application!
Here's how to do it:
### Step 1: Log into the Heroku CLI
``` bash
heroku login
```
This command opens your web browser to the Heroku login page. If you're already logged in, just click the "Log in" button. Alternatively, you can use the -i flag to log in via the command line.
### Step 2: Clone My Demo App
To continue this tutorial, I've created a demo Node application that uses MongoDB Atlas and is an app I'd like to deploy to Heroku. Clone it, then navigate to its directory:
``` bash
git clone https://github.com/adriennetacke/mongodb-atlas-heroku-leaflix-demo.git
cd mongodb-atlas-heroku-leaflix-demo
```
### Step 3: Create the Heroku app
``` bash
heroku create leaflix
```
As you can see, I've named mine `leaflix`.
### Get your Atlas Cluster connection string
Head back to your Atlas cluster's dashboard as we'll need to grab our connection string.
Click the "Connect" button.
Choose the "Connect your application" option.
Here, you'll see the connection string we'll need to connect to our cluster. Copy the connection string.
Paste the string into an editor; we'll need to modify it a bit before we can set it to a Heroku config variable.
As you can see, Atlas has conveniently added the username of the database user we previously created. To complete the connection string and make it valid, replace `<password>` with your own database user's password and `<dbname>` with `sample_mflix`, which is the sample dataset our demo application will use.
>💡 If you don't have your database user's password handy, autogenerate a new one and use that in your connection string. Just remember to update it if you autogenerate it again! You can find the password by going to Database Access \> Clicking "Edit" on the desired database user \> Edit Password \> Autogenerate Secure Password
### Set a MONGODB_URI config var
Now that we've properly formed our connection string, it's time to store it in a Heroku config variable. Let's set our connection string to a config var called MONGODB_URI:
``` bash
heroku config:set MONGODB_URI="mongodb+srv://yourUsername:yourPassword@yourClusterName.mongodb.net/sample_mflix?retryWrites=true&w=majority"
```
Some important things to note here:
- This command is all one line.
- Since the format of our connection string contains special characters, it is necessary to wrap it within quotes.
That's all there is to it! You've now properly added your Atlas cluster's connection string as a Heroku config variable, which means you can securely access that string once your application is deployed to Heroku.
>💡 Alternatively, you can also add this config var via your app's "Settings" tab in the Heroku Dashboard. Head to your apps \> leaflix \> Settings. Within the Config Vars section, click the "Reveal Config Vars" button, and add your config var there.
The last step is to modify your application's code to access these variables.
## Connecting your app to MongoDB Atlas Cluster using Heroku config var values
In our demo application, you'll see that we have hard-coded our Atlas cluster connection string. We should refactor our code to use the Heroku config variable we previously created.
Config vars are exposed to your application's code as environment variables. Accessing these variables will depend on your application's language; for example, you'd use `System.getenv('key')` calls in Java or `ENV['key']` calls in Ruby.
Knowing this, and knowing our application is written in Node, we can access our Atlas cluster via the `process.env` property, made available to us in Node.js. In the `server.js` file, change the uri constant to this:
``` bash
const uri = process.env.MONGODB_URI;
```
That's it! Since we've added our Atlas cluster connection string as a Heroku config var, our application will be able to access it securely once it's deployed.
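For context, here is a hedged sketch (not the demo app's exact code) of how a Node.js application typically uses that environment variable with the MongoDB driver:
``` javascript
// Read the connection string from the Heroku config var and connect.
const { MongoClient } = require("mongodb");

const uri = process.env.MONGODB_URI;
const client = new MongoClient(uri);

async function run() {
  await client.connect();
  const movies = client.db("sample_mflix").collection("movies");
  console.log(`Movies in sample_mflix: ${await movies.countDocuments()}`);
  await client.close();
}

run().catch(console.error);
```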
Save that file, commit that change, then deploy your code to Heroku.
``` bash
git commit -am "fix: refactor hard coded connection string to Heroku config var"
git push heroku master
```
Your app is now deployed! You can double check that at least one instance of Leaflix is running by using this command:
``` bash
heroku ps:scale web=1
```
If you see a message that says `Scaling dynos... done, now running web at 1:Free`, you'll know that at least one instance is up and running.
Finally, go visit your app. You can do so with this useful command:
``` bash
heroku open
```
If all is well, you'll see something like this:
*Image: Leaflix App*
When you click on the "Need a Laugh?" button, our app will randomly choose a movie that has the "Comedy" genre in its genres field. This comes straight from our Atlas cluster and uses the `sample_mflix` dataset.
## Configuring Heroku IP addresses in MongoDB Atlas
We have our cluster up and running and our app is deployed to Heroku!
To get us through the tutorial, we initially configured our cluster to accept connections from any IP address. Ideally you would like to restrict access to only your application, and there are a few ways to do this on Heroku.
The first way is to use an add-on to provide a static outbound IP address for your application that you can use to restrict access in Atlas. You can find several such add-ons in the Heroku Elements marketplace.
Another way would be to use Heroku Private Spaces and use the static outbound IPs for your space. This is a more expensive option, but does not require a separate add-on.
There are some documents and articles out there that suggest you can use IP ranges published by either AWS or Heroku to allow access to IPs originating in your AWS region or Heroku Dynos located in those regions. While this is possible, it is not recommended as those ranges are subject to change over time. Instead we recommend one of the two methods above.
Once you have the IP address(es) for your application, you can use them to configure your firewall in Atlas.
Head to your Atlas cluster, delete any existing IP ranges, then add them to your allow list:
Of course, at all times you will be communicating between your application and your Atlas database securely via TLS encryption.
## Conclusion
We've accomplished quite a bit in a relatively short time! As a recap:
- We set up and deployed an Atlas cluster in five steps or less.
- We created a Heroku config variable to securely store our Atlas connection string, enabling us to connect our Atlas cluster to our Heroku application.
- We learned that Heroku config variables are exposed to our application's code as environment variables.
- We refactored the hard-coded URI string in our code to point to a `process.env.MONGODB_URI` variable instead.
Have additional questions or a specific use case not covered here? Head over to MongoDB Developer's Community Forums and start a discussion! We look forward to hearing from you.
And to learn more about MongoDB Atlas, check out this great Intro to MongoDB Atlas in 10 Minutes by fellow developer advocate Jesse Hall!
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how to deploy MongoDB Atlas on Heroku for fully managed cloud applications.",
"contentType": "Tutorial"
} | How to Deploy MongoDB on Heroku | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/javascript/anonytexts | created | # Anonytexts
## Creators
Maryam Mudashiru and Idris Aweda Zubair contributed this project.
## About the Project
Anonytexts lets you message friends and family completely anonymously. Pull a prank with your friends or send your loved one a secret message.
## Inspiration
It's quite a popular way to have fun amongst students in Nigeria to create profiles on anonymous messaging platforms to be shared amongst their peers so they may speak their minds.
As students who have used a couple of these platforms ourselves, we found that most of them don't make it as easy and fun as it should be.
## Why MongoDB?
We wanted to stand out by adding giving users more flexibility and customization while also considering the effects in the long run. We needed a database with a flexible structure that allows for scalability with zero deployment issues. MongoDB Atlas was the best bet.
## How It Works
You create an account on the platform, with just your name, email and password. You choose to set a username or not. You get access to your dashboard where you can share your unique link to friends. People message you completely anonymously, leaving you to have to figure out which message is from which person. You may reply messages from users who have an account on the platform too. | md | {
"tags": [
"JavaScript"
],
"pageDescription": "A web application to help users message and be messaged completely anonymously.",
"contentType": "Code Example"
} | Anonytexts | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/build-offline-first-react-native-mobile-app-with-expo-and-realm | created | # Build an Offline-First React Native Mobile App with Expo and Realm React Native
* * *
> Atlas App Services (Formerly MongoDB Realm)
>
> Atlas Device Sync (Formerly Realm Sync)
>
* * *
## Introduction
Building Mobile Apps that work offline and sync between different devices is not an easy task. You have to write code to detect when you’re offline, save data locally, detect when you’re back online, compare your local copy of data with that in the server, send and receive data, parse JSON, etc.
It’s a time consuming process that’s needed, but that appears over and over in every single mobile app. You end up solving the same problem for each new project you write. And it’s worse if you want to run your app in iOS and Android. This means redoing everything twice, with two completely different code bases, different threading libraries, frameworks, databases, etc.
To help with offline data management and syncing between different devices, running different OSes, we can use MongoDB’s client-side datastore Realm and Atlas Device Sync. To create a single code base that works well in both platforms we can use React Native. And the simplest way to create React Native Apps is using Expo.
### React Native Apps
The React Native Project allows you to create iOS and Android apps using React, _“a best-in-class JavaScript library for building user interfaces.”_ So if you’re an experienced Web developer who already knows React, using React Native will be the natural next step to create native Mobile Apps.
But even if you’re a native mobile developer with some experience using SwiftUI in iOS or Compose in Android, you’ll find lots of similarities here.
### Expo and React Native
Expo is a set of tools built around React Native. Using Expo you can create React Native Apps quickly and easily. For that, we need to install Expo using Node.js package manager `npm`:
```
npm install --global expo-cli
```
This will install `expo-cli` globally so we can call it from anywhere in our system. In case we need to update Expo, we’ll use that very same command. __For this tutorial we’ll need the latest version of Expo, which has been updated to support the Realm React Native SDK__. You can find all the new features and changes in the Expo SDK 44 announcement blog post.
To ensure you have the latest Expo version run:
```
expo --version
```
Should return at least `5.0.1`. If not, run again `npm install --global expo-cli`
## Prerequisites
Now that we have the latest Expo installed, let’s check out that we have everything we need to develop our application:
* Xcode 13, including Command Line Tools, if we want to develop an iOS version. We’ll also need a macOS computer running at least macOS 11/Big Sur in order to run Xcode.
* Android Studio, to develop for Android and at least one Android Emulator ready to test our apps.
* Any code editor. I’ll be using Visual Studio Code as it has plugins to help with React Native Development, but you can use any other editor.
* Check that you have the latest version of yarn running `npm install -g yarn`
* Make sure you are NOT on the latest version of node, however, or you will see errors about unsupported digital envelope routines. You need the LTS version instead. Get the latest LTS version number from https://nodejs.org/ and then run:
```
nvm install 16.13.1 # swap for latest LTS version
```
If you don’t have Xcode or Android Studio, and need to build without installing anything locally you can also try Expo Application Services, a cloud-based building service that allows you to build your Expo Apps remotely.
### MongoDB Atlas and App Services App
Our App will store data in a cloud-backed MongoDB Atlas cluster. So we need to create a free MongoDB account and set up a cluster. For this tutorial, a Free-forever, M0 cluster will be enough.
Once we have our cluster created, we can go ahead and create an app in Atlas Application Services. The app will sync our data from a mobile device into a MongoDB Atlas database, although it has many other uses: it can manage authentication, run serverless functions, host static sites, etc. Just follow this quick tutorial (select the React Native template) but don’t download any code, as we’re going to use Expo to create our app from scratch. That will configure our app correctly to use Sync and set it into Development Mode.
## Read It Later - Maybe
Now we can go ahead and create our app, a small “read it later” kind of app to store web links we save for later reading. As sometimes we never get back to those links I’ll call it Read It Later - _Maybe_.
You can always clone the repo and follow along.
| Login | Adding a Link |
| :-------------: | :----------: |
| | |
| All Links | Deleting a Link |
| :-------------: | :----------: |
| | |
### Install Expo and create the App
We’ll use Expo to create our app using `expo init read-later-maybe`. This will ask us which template we want to use for our app. Using up and down cursors we can select the desired template, in this case, from the Managed Workflows we will choose the `blank` one, that uses JavaScript. This will create a `read-later-maybe` directory for us containing all the files we need to get started.
To start our app, just enter that directory and start the React Native Metro Server using `yarn start`. This will tell Expo to install any dependencies and start the Metro Server.
```bash
cd read-later-maybe
yarn start
```
This will open our default browser, with the Expo Developer Tools at http://localhost:19002/. If your browser doesn't automatically open, press `d` to open Developer Tools in the browser. From this web page we can:
* Start our app in the iOS Simulator
* Start our app in the Android Emulator
* Run it in a Web browser (if our app is designed to do that)
* Change the connection method to the Developer Tools Server
* Get a link to our app. (More on this later when we talk about Expo Go)
We can also do the same using the developer menu that’s opened in the console, so it’s up to you to use the browser and your mouse or your Terminal and the keyboard.
## Running our iOS App
To start the iOS App in the Simulator, we can either click “Start our app in the iOS Simulator” on Expo Developer Tools or type `i` in the console, as starting expo leaves us with the same interface we have in the browser, replicated in the console. We can also directly run the iOS app in Simulator by typing `yarn ios` if we don’t want to open the development server.
### Expo Go
The first time we run our app Expo will install Expo Go. This is a native application (both for iOS and Android) that will take our JavaScript and other resources bundled by Metro and run it in our devices (real or simulated/emulated). Once run in Expo Go, we can make changes to our JavaScript code and Expo will take care of updating our app on the fly, no reload needed.
| Open Expo Go | 1st time Expo Go greeting | Debug menu |
| :-------------: | :----------: | :----------: |
| | | |
Expo Go apps have a nice debugging menu that can be opened pressing “m” in the Expo Developer console.
### Structure of our App
Now our app is working, but it only shows a simple message: “Open up App.js to start working on your app!”. So we’ll open the app using our code editor. These are the main files and folders we have so far:
```
.
├── .expo-shared
│ └── assets.json
├── assets
│ ├── adaptive-icon.png
│ ├── favicon.png
│ ├── icon.png
│ └── splash.png
├── .gitignore
├── App.js
├── app.json
├── babel.config.js
├── package.json
└── yarn.lock
```
The main three files here are:
* `package.json`, where we can check / add / delete our app’s dependencies
* `app.json`: configuration file for our app
* `App.js`: the starting point for our JavaScript code
These changes can be found in tag `step-0` of the repo.
## Let’s add some navigation
Our App will have a Login / Register Screen and then will show the list of Links for that particular User. We’ll navigate from the Login Screen to the list of Links and when we decide to Log Out our app we’ll navigate back to the Login / Register Screen. So first we need to add the React Native Navigation Libraries, and the gesture handler (for swipe & touch detection, etc). Enter the following commands in the Terminal:
```bash
expo install @react-navigation/native
expo install @react-navigation/stack
expo install react-native-gesture-handler
expo install react-native-safe-area-context
expo install react-native-elements
```
These changes can be found in tag `step-1` of the repo.
Now, we’ll create a mostly empty LoginView in `views/LoginView.js` (the `views` directory does not exist yet, we need to create it first) containing:
```javascript
import React from "react";
import { View, Text, TextInput, Button, Alert } from "react-native";
export function LoginView({ navigation }) {
  return (
    <View>
      <Text>Sign Up or Sign In:</Text>
    </View>
  );
}
```
This is just the placeholder for our Login screen. We open it from App.js. Change the `App` function to:
```javascript
export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        {/* The screen name here is illustrative */}
        <Stack.Screen name="Login" component={LoginView} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```
And add required `imports` to the top of the file, below the existing `import` lines.
```javascript
import { NavigationContainer } from "@react-navigation/native";
import { createStackNavigator } from "@react-navigation/stack";
import { LoginView } from './views/LoginView';
const Stack = createStackNavigator();
```
All these changes can be found in tag `step-2` of the repo.
## Adding the Realm React Native SDK
### Installing Realm React Native
To add our Realm React Native SDK to the project we’ll type in the Terminal:
```bash
expo install realm
```
This will add Realm as a dependency in our React Native project. Now we can also create a file that will hold the Realm initialization code. We’ll call it `RealmApp.js` and place it in the root of the project, alongside `App.js`.
```javascript
import Realm from "realm";
const app = new Realm.App({id: "your-atlas-app-id-here"});
export default app;
```
We need to add an App ID to our code. Here are instructions on how to do so. In short, we will use a local database to save changes and will connect to MongoDB Atlas using an App Services application that we create in the cloud. We have Realm React Native as a library in our mobile app, doing all the heavy lifting (sync, offline, etc.) for our React Native app, and an App Services app in the cloud that connects to MongoDB Atlas, acting as our backend. This way, if we go offline, we’ll be using our local database on device, and when online, all changes will propagate in both directions.
All these changes can be found in tag `step-3` of the repo.
>
> __Update 24 January 2022__
>
> A simpler way to create a React Native App that uses Expo & Realm is just to create it using a template.
> For JavaScript based apps:
> `npx expo-cli init ReactRealmJsTemplateApp -t @realm/expo-template-js`
>
> For TypeScript based apps:
> `npx create-react-native-app ReactRealmTsTemplateApp -t with-realm`
>
## Auth Provider
All Realm-related code to register a new user, log in, and log out is inside a Provider. This way, we can provide all descendants of this Provider with a context that holds the logged-in user. All this code is in `providers/AuthProvider.js`. You’ll need to create the `providers` folder and then add `AuthProvider.js` to it.
With the Realm mobile database, you can store data offline, and with Atlas Device Sync, you can sync across multiple devices and store all your data in MongoDB Atlas. App Services can also run serverless functions, host static HTML sites, and authenticate using multiple providers. In this case, we’ll use the simpler email/password authentication.
We create the context with:
```javascript
const AuthContext = React.createContext(null);
```
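Before we look at the individual functions, it helps to see how the pieces of `AuthProvider.js` fit together. The following is only a sketch (the exact value object in the repo may differ slightly), assuming the `app` object exported from `RealmApp.js`:

```javascript
import React, { useContext, useState } from "react";
import app from "../RealmApp";

const AuthContext = React.createContext(null);

const AuthProvider = ({ children }) => {
  // Holds the currently logged-in Realm user (or null when logged out).
  const [user, setUser] = useState(app.currentUser);

  // The signIn, signUp and signOut functions described below are defined here.

  return (
    // Everything rendered inside <AuthProvider> can read this value.
    <AuthContext.Provider value={{ user, setUser }}>
      {children}
    </AuthContext.Provider>
  );
};

// Convenience hook so descendants can call useAuth() instead of useContext(AuthContext).
const useAuth = () => useContext(AuthContext);

export { AuthProvider, useAuth };
```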
The SignIn code is asynchronous:
```javascript
const signIn = async (email, password) => {
const creds = Realm.Credentials.emailPassword(email, password);
const newUser = await app.logIn(creds);
setUser(newUser);
};
```
As is the code to register a new user:
```javascript
const signUp = async (email, password) => {
await app.emailPasswordAuth.registerUser({ email, password });
};
```
To log out, we simply check if we’re already logged in and, if so, call `logOut`:
```javascript
const signOut = () => {
if (user == null) {
console.warn("Not logged in, can't log out!");
return;
}
user.logOut();
setUser(null);
};
```
All these changes can be found in tag `step-4` of the repo.
### Login / Register code
Take a moment to have a look at the styles we have for the app in the `stylesheet.js` file, then modify the styles to your heart’s content.
Now, for Login and Logout, we’ll add a couple of `state` variables to our `LoginView` in `views/LoginView.js`. We’ll use these to read both the email and password from our interface.
Place the following code inside `export function LoginView({ navigation }) {` (you’ll also need to import `useState` from React):
```javascript
  const [email, setEmail] = useState("");
const [password, setPassword] = useState("");
```
Then, we’ll add the UI code for Login and Sign up. Here we use `signIn` and `signUp` from our `AuthProvider`.
```javascript
const onPressSignIn = async () => {
console.log("Trying sign in with user: " + email);
try {
await signIn(email, password);
} catch (error) {
const errorMessage = `Failed to sign in: ${error.message}`;
console.error(errorMessage);
Alert.alert(errorMessage);
}
};
const onPressSignUp = async () => {
console.log("Trying signup with user: " + email);
try {
await signUp(email, password);
signIn(email, password);
} catch (error) {
const errorMessage = `Failed to sign up: ${error.message}`;
console.error(errorMessage);
Alert.alert(errorMessage);
}
};
```
All changes can be found in `step-5` of the repo.
## Prebuilding our Expo App
On save we’ll find this error:
```
Error: Missing Realm constructor. Did you run "pod install"? Please see https://realm.io/docs/react-native/latest/#missing-realm-constructor for troubleshooting
```
Right now, Realm React Native is not compatible with Expo managed workflows. In a managed workflow, Expo hides all iOS and Android native details from the JavaScript/React developer so they can concentrate on writing React code. Here, we need to prebuild our app, which means we lose the nice Expo Go app that allows us to load our app using a QR code.
The Expo team is working hard on improving compatibility with Realm React Native, as is our React Native SDK team, which is working on supporting Expo, the Hermes JavaScript engine, and expo-dev-client. Watch this space for all these exciting announcements!
So to run our app in iOS we’ll do:
```
expo run:ios
```
We need to provide a Bundle Identifier for our iOS app. In this case, we’ll use `com.realm.read-later-maybe`.
This will install all needed JavaScript libraries using `yarn`, then install all native libraries using CocoaPods, and finally compile and run our app. To run on Android, we’ll do:
```
expo run:android
```
## Navigation completed
Now we can register and login in our App. Our `App.js` file now looks like:
```javascript
export default function App() {
  return (
    <AuthProvider>
      <NavigationContainer>
        <Stack.Navigator>
          <Stack.Screen name="Login View" component={LoginView} />
        </Stack.Navigator>
      </NavigationContainer>
    </AuthProvider>
  );
}
```
We have an AuthProvider that will provide the logged-in user to all descendants. Inside is a Navigation Container with one screen: Login View. But we need two screens: our “Login View” with the UI to log in/register and “Links Screen”, which will show all our links.
So let’s create our LinksView screen:
```javascript
import React, { useState, useEffect } from "react";
import { Text } from "react-native";
export function LinksView() {
  return (
    <Text>Links go here</Text>
  );
}
```
Right now, it only shows a simple “Links go here” message, as you can check in `step-6` of the repo.
## Log out
We can register and log in, but we also need to be able to log out of our app. To do so, we’ll add a nav bar item to our Links screen: instead of “Back”, we’ll have a logout button that closes our Realm, calls our sign-out code, and pops our screen off the navigation stack, so we go back to the Welcome Screen.
In our LinksView screen, we’ll add:
```javascript
React.useLayoutEffect(() => {
navigation.setOptions({
headerBackTitle: "Log out",
    headerLeft: () => <Logout closeRealm={closeRealm} />,
  });
}, [navigation]);
```
Here we use a `components/Logout` component that has a button. This button will call `signOut` from our `AuthProvider`. You’ll need to add the `components` folder.
```javascript
return (
    <Button
      title="Log Out"
      onPress={() => {
        Alert.alert("Log Out", null, [
          {
            text: "Yes, Log Out",
            style: "destructive",
            onPress: () => {
              navigation.popToTop();
              closeRealm();
              signOut();
            },
          },
          { text: "Cancel", style: "cancel" },
        ]);
      }}
    />
);
```
Nice! Now we have Login, Logout, and Register! You can follow along in `step-7` of the repo.
## Links
### CRUD
We want to store Links to read later, so we’ll start by defining what our Link class will look like. We’ll store a name and a URL for each link. We also need an `id` and a `partition` field so we don’t pull all Links for all users; instead, we’ll just sync the Links for the logged-in user. These changes are in `schemas.js`:
```javascript
class Link {
constructor({
name,
url,
partition,
id = new ObjectId(),
}) {
this._partition = partition;
this._id = id;
this.name = name;
this.url = url;
}
static schema = {
name: 'Link',
properties: {
_id: 'objectId',
_partition: 'string',
name: 'string',
url: 'string',
},
primaryKey: '_id',
};
}
```
You can get these changes in `step-8` of the repo.
And now, we need to code all the CRUD methods. For that, we’ll go ahead and create a `LinksProvider` that will fetch Links and delete them. But first, we need to open a Realm to read the Links for this particular user:
```javascript
realm.open(config).then((realm) => {
realmRef.current = realm;
const syncLinks = realm.objects("Link");
let sortedLinks = syncLinks.sorted("name");
  setLinks([...sortedLinks]);
// we observe changes on the Links, in case Sync informs us of changes
// started in other devices (or the cloud)
sortedLinks.addListener(() => {
console.log("Got new data!");
setLinks([...sortedLinks]);
});
});
```
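The `config` object passed to `realm.open()` is not shown in this snippet. For partition-based sync, it is essentially just the `Link` schema plus the logged-in user, along these lines (a sketch, assuming the `user` object provided by our `AuthProvider`):

```javascript
// Only documents whose _partition equals the user's id are synced to this device.
const config = {
  schema: [Link.schema],
  sync: {
    user,
    partitionValue: user.id,
  },
};
```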
To add a new Link, we’ll have this function that uses `realm.write` to create it. This change will also be observed by the listener above, triggering a UI refresh.
```javascript
const createLink = (newLinkName, newLinkURL) => {
const realm = realmRef.current;
realm.write(() => {
// Create a new link in the same partition -- that is, using the same user id.
realm.create(
"Link",
new Link({
name: newLinkName || "New Link",
url: newLinkURL || "http://",
partition: user.id,
})
);
});
};
```
Finally to delete Links we’ll use `realm.delete`.
```javascript
const deleteLink = (link) => {
const realm = realmRef.current;
realm.write(() => {
realm.delete(link);
// after deleting, we get the Links again and update them
    setLinks([...realm.objects("Link").sorted("name")]);
});
};
```
### Showing Links
Our `LinksView` will `map` the contents of the `links` array of `Link` objects we get from `LinksProvider` and show a simple list of views with the name and URL of each Link. We do that using:
```javascript
{links.map((link, index) => (
  <View key={index}>
    <Text>{link.name}</Text>
    <Text>{link.url}</Text>
  </View>
))}
```
### UI for deleting Links
As we want to delete links, we’ll use a right-to-left swipe gesture to reveal a button that deletes that Link:
```javascript
<ListItem.Swipeable
  onPress={() => onClickLink(link)}
  bottomDivider
  key={index}
  rightContent={
    <Button
      title="Delete"
      onPress={() => deleteLink(link)}
    />
  }
>
```
We get `deleteLink` from the `useLinks` hook in `LinksProvider`:
```javascript
const { links, createLink, deleteLink } = useLinks();
```
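`useLinks` itself is a small convenience hook exported from `LinksProvider.js`. A sketch of it, assuming the provider stores its value in a `LinksContext` and that `useContext` is imported from React:

```javascript
const LinksContext = React.createContext(null);

// Read the links context and fail loudly if used outside of a <LinksProvider>.
const useLinks = () => {
  const value = useContext(LinksContext);
  if (value == null) {
    throw new Error("useLinks() called outside of a LinksProvider?");
  }
  return value;
};
```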
### UI for adding Links
We’ll have a TextInput for entering the name and URL, and a button to add a new Link directly at the top of the list of Links. We’ll use an accordion to show/hide this part of the UI:
```javascript
<ListItem.Accordion
  content={
    <ListItem.Content>
      <ListItem.Title>Create new Link</ListItem.Title>
    </ListItem.Content>
  }
  isExpanded={expanded}
  onPress={() => {
    setExpanded(!expanded);
  }}
>
  {
    <>
      {/* TextInputs bound to local state holding the new link's name and URL */}
      <TextInput placeholder="Name" onChangeText={setLinkDescription} />
      <TextInput placeholder="URL" onChangeText={setLinkURL} />
      <Button
        title="Create"
        onPress={() => { createLink(linkDescription, linkURL); }}
      />
    </>
  }
</ListItem.Accordion>
```
## Adding Links in the main App
Finally, we’ll integrate the new `LinksView` inside our `LinksProvider`, using the children render prop of the Links screen in `App.js`:
```javascript
{() => {
  return (
    <LinksProvider>
      <LinksView />
    </LinksProvider>
  );
}}
```
## The final App
Wow! That was a lot, but now we have a React Native app that works with the same code base on both iOS and Android, storing data in a MongoDB Atlas database in the cloud thanks to Atlas Device Sync. What’s more, any change made on one device syncs to all other devices with the same user logged in. But the best part is that Atlas Device Sync works even when offline!
| Syncing iOS and Android | Offline Syncing! |
| :-------------: | :----------: |
| | |
## Recap
In this tutorial, we’ve seen how to build a simple React Native application using Expo that takes advantage of Atlas Device Sync for its offline and syncing capabilities. This is a prebuilt app, as right now managed Expo workflows don’t work with Realm React Native (yet; read more below). But you still get all the simplicity of use that Expo gives you, all the Expo libraries, and EAS: build your app in the cloud without having to install Xcode or Android Studio.
The Realm React Native team is working hard to make the SDK fully compatible with Hermes. Once we release an update to the Realm React Native SDK compatible with Hermes, we’ll publish a new post updating this app. Also, we’re working to finish an Expo custom development client, our own Expo development client that will substitute for Expo Go while developing with Realm React Native. Expect an announcement when that is approved, too!
All the code for this tutorial can be found in this repo.
| md | {
"tags": [
"Realm",
"JavaScript",
"React Native"
],
"pageDescription": "In this post we'll build, step by step, a simple React Native Mobile App for iOS and Android using Expo and Realm React Native. The App will use Atlas Device Sync to store data in MongoDB Atlas, will Sync automatically between devices and will work offline.",
"contentType": "Tutorial"
} | Build an Offline-First React Native Mobile App with Expo and Realm React Native | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/saving-data-in-unity3d-using-files | created | # Saving Data in Unity3D Using Files
*(Part 2 of the Persistence Comparison Series)*
Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.
In Part 1 of this series, we explored Unity's own solution: `PlayerPrefs`. This time, we look into one of the ways we can use the underlying .NET framework by saving files. Here is an overview of the complete series:
- Part 1: PlayerPrefs
- Part 2: Files *(this tutorial)*
- Part 3: BinaryReader and BinaryWriter *(coming soon)*
- Part 4: SQL
- Part 5: Realm Unity SDK
- Part 6: Comparison of all those options
Like Part 1, this tutorial can also be found in the https://github.com/realm/unity-examples repository on the persistence-comparison branch.
Each part is sorted into a folder. The three scripts we will be looking at are in the `File` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.
## Example game
*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.*
The goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.
Therefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.
A simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.
When you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.
You can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.
The scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.
```cs
using UnityEngine;
///
/// This script shows the basic structure of all other scripts.
///
public class HitCountExample : MonoBehaviour
{
// Keep count of the clicks.
    [SerializeField] private int hitCount; // 1
private void Start() // 2
{
// Read the persisted data and set the initial hit count.
hitCount = 0; // 3
}
private void OnMouseDown() // 4
{
// Increment the hit count on each click and save the data.
hitCount++; // 5
}
}
```
The first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerializeField]` here so that you can observe it while clicking on the capsule in the Unity editor.
Whenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.
The second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.
## File
(See `FileExampleSimple.cs` in the repository for the finished version.)
One of the ways the .NET framework offers us to save data is the `File` class:
> Provides static methods for the creation, copying, deletion, moving, and opening of a single file, and aids in the creation of FileStream objects.
Besides that, the `File` class is also used to manipulate the file itself, reading and writing data. On top of that, it offers ways to read meta data of a file, like time of creation.
When working with a file, you can also make use of several options to change the `FileMode` or `FileAccess`.
The `FileStream` mentioned in the documentation is another approach to work with those files, providing additional options. In this tutorial, we will just use the plain `File` class.
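Just to illustrate those options (we won't need this for our example), opening a `FileStream` with an explicit `FileMode` and `FileAccess` could look something like this, assuming a `using System.IO;` directive at the top of the file:

```cs
// Open (or create) the file for reading and writing, write one line, and close it again.
using (var stream = new FileStream("someFile.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite))
using (var writer = new StreamWriter(stream))
{
    writer.WriteLine(42);
}
```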
Let's have a look at what we have to change in the example presented in the previous section to save the data using `File`:
```cs
using System;
using System.IO;
using UnityEngine;
public class FileExampleSimple : MonoBehaviour
{
// Resources:
// https://docs.microsoft.com/en-us/dotnet/api/system.io.file?view=net-5.0
    [SerializeField] private int hitCount = 0;
    private const string HitCountFile = "hitCountFile.txt"; // 1
    private void Start() // 5
    {
        if (File.Exists(HitCountFile)) // 6
        {
            var fileContent = File.ReadAllText(HitCountFile); // 7
            hitCount = Int32.Parse(fileContent); // 8
        }
    }
    private void OnMouseDown() // 2
    {
        hitCount++; // 3
        // The easiest way when working with Files is to use them directly.
        // This writes all input at once and overwrites a file if executed again.
        // The File is opened and closed right away.
        File.WriteAllText(HitCountFile, hitCount.ToString()); // 4
    }
}
```
First we define a name for the file that will hold the data (1). If no additional path is provided, the file will just be saved in the project folder when running the game in the Unity editor or the game folder when running a build. This is fine for the example.
Whenever we click on the capsule (2) and increment the hit count (3), we need to save that change. Using `File.WriteAllText()` (4), the file will be opened, data will be saved, and it will be closed right away. Besides the file name, this function expects the contents as a string. Therefore, we have to transform the `hitCount` by calling `ToString()` before passing it on.
The next time we start the game (5), we want to load the previously saved data. First we check if the file already exists (6). If it does not exist, we never saved before and can just keep the default value for `hitCount`. If the file exists, we use `ReadAllText()` to get that data (7). Since this is a string again, we need to convert here as well using `Int32.Parse()` (8). Note that this means we have to be sure about what we read. If the structure of the file changes or the player edits it, this might lead to problems during the parsing of the file.
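If you want to guard against a malformed or edited file, a slightly more defensive variant of that last step could use `Int32.TryParse()` instead. This is not part of the script in the repository, just an option:

```cs
// Fall back to 0 instead of throwing a FormatException if the file content is not a valid number.
if (!Int32.TryParse(fileContent, out hitCount))
{
    hitCount = 0;
}
```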
Let's look into extending this simple example in the next section.
## Extended example
(See `FileExampleExtended.cs` in the repository for the finished version.)
The previous section showed the most simple example, using just one variable that needs to be saved. What if we want to save more than that?
Depending on what needs to saved, there are several different approaches. You could use multiple files or you can write multiple lines inside the same file. The latter shall be shown in this section by extending the game to recognize modifier keys. We want to detect normal clicks, Shift+Click, and Control+Click.
First, update the hit counts so that we can save three of them:
```cs
[SerializeField] private int hitCountUnmodified = 0;
[SerializeField] private int hitCountShift = 0;
[SerializeField] private int hitCountControl = 0;
```
We also want to use a different file name so we can look at both versions next to each other:
```cs
private const string HitCountFileUnmodified = "hitCountFileExtended.txt";
```
The last field we need to define is the key that is pressed:
```cs
private KeyCode modifier = default;
```
The first thing we need to do is check if a key was pressed and which key it was. Unity offers an easy way to achieve this using the `Input` class's `GetKey()` function. It checks if the given key was pressed or not. You can pass in the string for the key or, to be a bit safer, just use the `KeyCode` enum. We cannot use this in `OnMouseDown()` when detecting the mouse click, though:
> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.
Add a new method called `Update()` (1) which is called in every frame. Here we need to check if the `Shift` or `Control` key was pressed (2) and if so, save the corresponding key in `modifier` (3). In case none of those keys was pressed (4), we consider it unmodified and reset `modifier` to its `default` (5).
```cs
private void Update() // 1
{
// Check if a key was pressed.
if (Input.GetKey(KeyCode.LeftShift)) // 2
{
// Set the LeftShift key.
modifier = KeyCode.LeftShift; // 3
}
else if (Input.GetKey(KeyCode.LeftControl)) // 2
{
// Set the LeftControl key.
modifier = KeyCode.LeftControl; // 3
}
else // 4
{
// In any other case reset to default and consider it unmodified.
modifier = default; // 5
}
}
```
Now to saving the data when a click happens:
```cs
private void OnMouseDown() // 6
{
// Check if a key was pressed.
switch (modifier)
{
case KeyCode.LeftShift: // 7
// Increment the Shift hit count.
hitCountShift++; // 8
break;
            case KeyCode.LeftControl: // 7
// Increment the Control hit count.
hitCountControl++; // 8
break;
default: // 9
// If neither Shift nor Control was held, we increment the unmodified hit count.
hitCountUnmodified++; // 10
break;
}
// 11
// Create a string array with the three hit counts.
        string[] stringArray = {
hitCountUnmodified.ToString(),
hitCountShift.ToString(),
hitCountControl.ToString()
};
// 12
// Save the entries, line by line.
File.WriteAllLines(HitCountFileUnmodified, stringArray);
}
```
Whenever a mouse click is detected on the capsule (6), we can then perform a similar check to what happened in `Update()`, only we use `modifier` instead of `Input.GetKey()` here.
Check if `modifier` was set to `KeyCode.LeftShift` or `KeyCode.LeftControl` (7) and if so, increment the corresponding hit count (8). If no modifier was used (9), we increment the `hitCountUnmodified` (10).
As seen in the last section, we need to create a string that can be saved in the file. There is a second function on `File` that accepts a string array and saves each entry on its own line: `WriteAllLines()`.
Knowing this, we create an array containing the three hit counts (11) and pass it on to `File.WriteAllLines()` (12).
Start the game, and click the capsule using Shift and Control. You should see the three counters in the Inspector.
After clicking the capsule a couple of times (and therefore saving the data), a new file `hitCountFileExtended.txt` should exist in your project folder. Have a look at it. It should look something like this:
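Each line holds one counter, in the order we added them to the array: unmodified, Shift, Control. The exact numbers depend on how often you clicked with each modifier, for example:

```
3
1
2
```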
Last but not least, let's look at how to load the file again when starting the game:
```cs
private void Start()
{
// 12
// Check if the file exists. If not, we never saved before.
if (File.Exists(HitCountFileUnmodified))
{
// 13
// Read all lines.
            string[] textFileWriteAllLines = File.ReadAllLines(HitCountFileUnmodified);
// 14
// For this extended example we would expect to find three lines, one per counter.
if (textFileWriteAllLines.Length == 3)
{
// 15
                // Set the counters corresponding to the entries in the array.
hitCountUnmodified = Int32.Parse(textFileWriteAllLines[0]);
hitCountShift = Int32.Parse(textFileWriteAllLines[1]);
hitCountControl = Int32.Parse(textFileWriteAllLines[2]);
}
}
}
```
First, we check if the file even exists (12). If we ever saved data before, this should be the case. If it exists, we read the data. Similar to writing with `WriteAllLines()`, we use `ReadAllLines` (13) to create a string array where each entry represents one line in the file.
We do expect there to be three lines, so we should expect the string array to have three entries (14).
Using this knowledge, we can then assign the three entries from the array to the corresponding hit counts (15).
As long as all the data saved to those lines belongs together, a single file can be one option. If you have several different properties, you might create multiple files. Alternatively, you can save all the data into the same file using a bit of structure. Note, though, that the numbers are only associated with their properties by position, not by name. If the structure of the object changes, we would need to migrate the file as well and take this into account the next time we open and read the file.
Another possible approach to structuring your data will be shown in the next section using JSON.
## More complex data
(See `FileExampleJson.cs` in the repository for the finished version.)
JSON is a very common approach when saving structured data. It's easy to use and there are frameworks for almost every language. The .NET framework provides a `JsonSerializer`. Unity has its own version of it: `JsonUtility`.
As you can see in the documentation, the functionality boils down to these three methods:
- *FromJson*: Create an object from its JSON representation.
- *FromJsonOverwrite*: Overwrite data in an object by reading from its JSON representation.
- *ToJson*: Generate a JSON representation of the public fields of an object.
The `JsonUtility` transforms JSON into objects and back. Therefore, our first change to the previous section is to define such an object with public fields:
```cs
private class HitCount
{
public int Unmodified;
public int Shift;
public int Control;
}
```
The class itself can be `private` and just be added inside the `FileExampleJson` class, but its fields need to be public.
As before, we use a different file to save this data. Update the filename to:
```cs
private const string HitCountFileJson = "hitCountFileJson.txt";
```
When saving the data, we will use the same `Update()` method as before to detect which key was pressed.
The first part of `OnMouseDown()` (1) can stay the same as well, since this part only increments the hit count depending on the modifier used.
```cs
private void OnMouseDown()
{
// 1
// Check if a key was pressed.
switch (modifier)
{
case KeyCode.LeftShift:
// Increment the Shift hit count.
hitCountShift++;
break;
            case KeyCode.LeftControl:
// Increment the Control hit count.
hitCountControl++;
break;
default:
// If neither Shift nor Control was held, we increment the unmodified hit count.
hitCountUnmodified++;
break;
}
// 2
// Create a new HitCount object to hold this data.
var updatedCount = new HitCount
{
Unmodified = hitCountUnmodified,
Shift = hitCountShift,
Control = hitCountControl,
};
// 3
// Create a JSON using the HitCount object.
var jsonString = JsonUtility.ToJson(updatedCount, true);
// 4
// Save the json to the file.
File.WriteAllText(HitCountFileJson, jsonString);
}
```
However, we need to update the second part. Instead of a string array, we create a new `HitCount` object and set the three public fields to the values of the hit counters (2).
Using `JsonUtility.ToJson()`, we can transform this object to a string (3). If you pass in `true` for the second, optional parameter, `prettyPrint`, the string will be formatted in a nicely readable way.
Finally, as in `FileExampleSimple.cs`, we just use `WriteAllText()` since we're only saving one string, not an array (4).
Then, when the game starts, we need to read the data back into the hit count:
```cs
private void Start()
{
// Check if the file exists to avoid errors when opening a non-existing file.
if (File.Exists(HitCountFileJson)) // 5
{
// 6
var jsonString = File.ReadAllText(HitCountFileJson);
            var hitCount = JsonUtility.FromJson<HitCount>(jsonString);
// 7
if (hitCount != null)
{
// 8
hitCountUnmodified = hitCount.Unmodified;
hitCountShift = hitCount.Shift;
hitCountControl = hitCount.Control;
}
}
}
```
We check if the file exists first (5). In case it does, we saved data before and can proceed reading it.
Using `ReadAllText`, we read the string from the file and transform it via `JsonUtility.FromJson<>()` into an object of type `HitCount` (6).
If this happened successfully (7), we can then assign the three properties to their corresponding hit count (8).
When you run the game, you will see that in the editor, it looks identical to the previous section since we are using the same three counters. If you open the file `hitCountFileJson.txt`, you should then see the three counters in a nicely formatted JSON.
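Depending on how often you clicked with each modifier, the contents will look roughly like this:

```json
{
    "Unmodified": 3,
    "Shift": 1,
    "Control": 2
}
```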
Note that the data is saved in plain text. In a future tutorial, we will look at encryption and how to improve safety of your data.
## Conclusion
In this tutorial, we learned how to utilize `File` to save data. `JsonUtility` helps structure this data. They are simple and easy to use, and not much code is required.
What are the downsides, though?
First of all, we open, write to, and save the file every single time the capsule is clicked. While not a problem in this case and certainly applicable for some games, this will not perform very well when many save operations are made.
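One way to mitigate this (not used in this tutorial) is to keep the counters in memory while playing and only write them out when the game is about to close, for example in Unity's `OnApplicationQuit()` message:

```cs
// Persist the hit count once, when the application is shutting down.
private void OnApplicationQuit()
{
    File.WriteAllText(HitCountFile, hitCount.ToString());
}
```

Keep in mind that on mobile platforms, `OnApplicationQuit()` is not guaranteed to be called, so you might still want to save at other checkpoints as well.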
Also, the data is saved in plain text and can easily be edited by the player.
The more complex your data is, the more complex it will be to actually maintain this approach. What if the structure of the `HitCount` object changes? You have to account for that when loading an older version of the JSON. Migrations are necessary.
In the following tutorials, we will (among other things) have a look at how databases can make this job a lot easier and take care of the problems we face here.
Please provide feedback and ask any questions in the Realm Community Forum. | md | {
"tags": [
"C#",
"Realm",
"Unity"
],
"pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In this tutorial series, we will explore the options given to us by Unity and third-party libraries.",
"contentType": "Tutorial"
} | Saving Data in Unity3D Using Files | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/can-you-keep-a-secret | created | # Can You Keep a Secret?
The median time to discovery for a secret key leaked to GitHub is 20 seconds. By the time you realise your mistake and rotate your secrets, it could be too late. In this talk, we'll look at some techniques for secret management which won't disrupt your workflow, while keeping your services safe.
>:youtube[]{vid=2XNIbOMYr_Q}
>
>This is a complete transcript of the 2020 PyCon Australia conference talk "Can you keep a secret?" Slides are also available to download on Notist.
Hey, everyone. Thank you for joining me today.
Before we get started, I would just like to take a moment to express my heartfelt thanks, gratitude, and admiration to everyone involved with this year's PyCon Australia. They have done such an amazing job, in a really very difficult time.
It would have been so easy for them to have skipped putting on a conference at all this year, and no one would have blamed them if they did, but they didn't, and what they achieved should really be celebrated. So a big thank you to them.
With that said, let's get started!
So, I'm Aaron Bassett.
You can find me pretty much everywhere as Aaron Bassett, because I have zero imagination. Twitter, GitHub, LinkedIn, there's probably an old MySpace and Bebo account out there somewhere too. You can find me on them all as Aaron Bassett.
I am a Senior Developer Advocate at MongoDB.
For anyone who hasn't heard of MongoDB before, it is a general purpose, document-based, distributed database, often referred to as a No-SQL database. We have a fully managed cloud database service called Atlas, an on-premise Enterprise Server, an on device database called Realm, but we're probably most well known for our free and open source Community Server.
In fact, much of what we do at MongoDB is open source, and as a developer advocate, almost the entirety of what I produce is open source and publicly available. Whether it is a tutorial, demo app, conference talk, Twitch stream, and so on. It's all out there to use.
Here's an example of the type of code I write regularly. This is a small snippet to perform a geospatial query.
``` python
import pprint
from pymongo import MongoClient
client = MongoClient(
"C01.5tsil.mongodb.net",
username="admin", password="hunter2"
)
db = client.geo_example
query = {"loc": {"$within": {"$center": [0, 0], 6]}}}
for doc in db.places.find(query).sort("_id"):
pprint.pprint(doc)
```
First, we import our MongoDB Python Driver. Then, we instantiate our database client. And finally, we execute our query. Here, we're trying to find all documents whose location is within a defined radius of a chosen point.
But even in this short example, we have some secrets that we really shouldn't be sharing. The first line highlighted here is the URI. This isn't so much a secret as a configuration variable.
Something that's likely to change between your development, staging, and production environments. So, you probably don't want this hard coded either. The next line, however, is the real secrets. Our database username and password. These are the types of secrets you never want to hard code in your scripts, not even for a moment.
``` python
import pprint
from pymongo import MongoClient
DB_HOST = "C01.5tsil.mongodb.net"
DB_USERNAME = "admin"
DB_PASSWORD = "hunter2"
client = MongoClient(DB_HOST, username=DB_USERNAME, password=DB_PASSWORD)
db = client.geo_example
query = {"loc": {"$within": {"$center": [[0, 0], 6]}}}
for doc in db.places.find(query).sort("_id"):
pprint.pprint(doc)
```
So often I see it where someone has pulled out their secrets into variables, either at the top of their script, or sometimes they'll hard code them in a settings.py or similar. I've been guilty of this as well.
You have every intention of removing the secrets before you publish your code, but then it's a couple of days later, the kids are trying to get your attention, you **need** to go make your morning coffee, or there's one of the million other things that happen in our day-to-day lives distracting you, and as you get up, you decide to save your working draft, muscle memory kicks in...
``` shell
git add .
git commit -m "wip"
git push
```
And... well... that's all it takes.
All it takes is that momentary lapse and now your secrets are public, and as soon as those secrets hit GitHub or another public repository, you have to assume they're immediately breached.
Michael Meli, Matthew R. McNiece, and Bradley Reaves from North Carolina State University published a research paper titled "How Bad Can It Git? Characterizing Secret Leakage in Public GitHub Repositories".
This research showed that the median time for discovery for a secret published to GitHub was 20 seconds, and it could be as low as half a second. It appeared to them that the only limiting factor on how fast you could discover secrets on GitHub was how fast GitHub was able to index new code as it was pushed up.
The longest time in their testing from secrets being pushed until they could potentially be compromised was four minutes. There was no correlation between time of day, etc. It most likely would just depend on how many other people were pushing code at the same time. But once the code was indexed, then they were able to locate the secrets using some well-crafted search queries.
But this is probably not news to most developers. Okay, the speed of which secrets can be compromised might be surprising, but most developers will know the perils of publishing their secrets publicly.
Many of us have likely heard or read horror stories of developers accidentally committing their AWS keys and waking up to a huge bill as someone has been spinning up EC2 instances on their account. So why do we, and I'm including myself in that we, why do we keep doing it?
Because it is easy. We know it's not safe. We know it is likely going to bite us in the ass at some point. But it is so very, very easy. And this is the case in most software.
This is the security triangle. It represents the balance between security, functionality, and usability. It's a trade-off. As two points increase, one will always decrease. If we have an app that is very, very secure and has a lot of functionality, it's probably going to feel pretty restrictive to use. If our app is very secure and very usable, it probably doesn't have to do much.
A good example of where a company has traded some security for additional functionality and usability is Amazon's One Click Buy button.
It functions very much as the name implies. When you want to order a product, you can click a single button and Amazon will place your order using your default credit card and shipping address from their records. What you might not be aware of is that Amazon cannot send the CVV with that order. The CVV is the normally three numbers on the back of your card above the signature strip.
Card issuers say that you should send the CVV for each Card Not Present transaction. Card Not Present means that the retailer cannot see that you have the physical card in your possession, so every online transaction is a Card Not Present transaction.
Okay, so the issuers say that you should send the CVV each time, but they also say that you MUST not store it. This is why for almost all retailers, even if they have your credit card stored, you will still need to enter the CVV during checkout, but not Amazon. Amazon simply does not send the CVV. They know that decreases their security, but for them, the trade-off for additional functionality and ease of use is worth it.
A bad example of where a company traded sanity—sorry, I mean security—for usability happened at a, thankfully now-defunct, agency I worked at many, many years ago. They decided that while storing customer's passwords in plaintext lowered their security, being able to TELL THE CUSTOMER THEIR PASSWORD OVER THE TELEPHONE WHEN THEY CALLED was worth it in usability.
It really was the wild wild west of the web in those days...
So a key tenant of everything I'm suggesting here is that it has to be as low friction as possible. If it is too hard, or if it reduces the usability side of our triangle too much, then people will not adopt it.
It also has to be easy to implement. I want these to be techniques which you can start using personally today, and have them rolled out across your team by this time next week.
It can't have any high costs or difficult infrastructure to set up and manage. Because again, we are competing with hard code variables, without a doubt the easiest method of storing secrets.
So how do we know when we're done? How do we measure success for this project? Well, for that, I'm going to borrow from the 12 factor apps methodology.
The 12 factor apps methodology is designed to enable web applications to be built with portability and resilience when deployed to the web. And it covers 12 different factors.
Codebase, dependencies, config, backing services, build, release, run, and so on. We're only interested in number 3: Config.
Here's what 12 factor apps has to say about config;
"A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials"
And this is super important even for those of you who may never publish your code publicly. What would happen if your source code were to leak right now? In 2015, researchers at internetwache found that 9700 websites in Alexa's top one million had their .git folder publicly available in their site root. This included government websites, NGOs, banks, crypto exchanges, large online communities, a few porn sites, oh, and MTV.
Deploying websites via Git pulls isn't as uncommon as you might think, and for those websites, they're just one server misconfiguration away from leaking their source code. So even if your application is closed source, with source that will never be intentionally published publicly, it is still imperative that you do not hard code secrets.
Leaking your source code would be horrible. Leaking all the keys to your kingdom would be devastating.
So if we can't store our secrets in our code, where do we put them? Environment variables are probably the most common place.
Now remember, we're going for ease of use and low barrier to entry. There are better ways for managing secrets in production. And I would highly encourage you to look at products like HashiCorp's Vault. It will give you things like identity-based access, audit logs, automatic key rotation, encryption, and so much more. But for most people, this is going to be overkill for development, so we're going to stick to environment variables.
But what is an environment variable? It is a variable whose value is set outside of your script, typically through functionality built into your operating system and are part of the environment in which a process runs. And we have a few different ways these can be accessed in Python.
``` python
import os
import pprint
from pymongo import MongoClient
client = MongoClient(
os.environ"DB_HOST"],
username=os.environ["DB_USERNAME"],
password=os.environ["DB_PASSWORD"],
)
db = client.geo_example
query = {"loc": {"$within": {"$center": [[0, 0], 6]}}}
for doc in db.places.find(query).sort("_id"):
pprint.pprint(doc)
```
Here we have the same code as earlier, but now we've removed our hard coded values and instead we're using environment variables in their place. Environ is a mapping object representing the environment variables. It is worth noting that this mapping is captured the first time the os module is imported, and changes made to the environment after this time will not be reflected in environ. Environ behaves just like a Python dict. We can reference a value by providing the corresponding key. Or we can use get.
``` python
import os
import pprint
from pymongo import MongoClient
client = MongoClient(
os.environ.get("DB_HOST"),
username=os.environ.get("DB_USERNAME"),
password=os.environ.get("DB_PASSWORD"),
)
db = client.geo_example
query = {"loc": {"$within": {"$center": [[0, 0], 6]}}}
for doc in db.places.find(query).sort("_id"):
pprint.pprint(doc)
```
The main difference between the two approaches is when using get, if an environment variable does not exist, it will return None, whereas if you are attempting to access it via its key, then it will raise a KeyError exception. Also, get allows you to provide a second argument to be used as a default value if the key does not exist. There is a third way you can access environment variables: getenv.
``` python
import os
import pprint
from pymongo import MongoClient
client = MongoClient(
os.getenv("DB_HOST"),
username=os.getenv("DB_USERNAME"),
password=os.getenv("DB_PASSWORD"),
)
db = client.geo_example
query = {"loc": {"$within": {"$center": [[0, 0], 6]}}}
for doc in db.places.find(query).sort("_id"):
pprint.pprint(doc)
```
getenv behaves just like environ.get. In fact, it behaves so much like it I dug through the source to try and figure out what the difference was between the two and the benefits of each. But what I found is that there is no difference. None.
``` python
def getenv(key, default=None):
"""Get an environment variable, return None if it doesn't exist.
The optional second argument can specify an alternate default.
key, default and the result are str."""
return environ.get(key, default)
```
getenv is simply a wrapper around environ.get. I'm sure there is a reason for this beyond saving a few key strokes, but I did not uncover it during my research. If you know the reasoning behind why getenv exists, I would love to hear it.
>Joe Drumgoole has put forward a potential reason for why `getenv` might exist: "I think it exists because the C library has an identical function called getenv() and it removed some friction for C programmers (like me, back in the day) who were moving to Python."
Now we know how to access environment variables, how do we create them? They have to be available in the environment whenever we run our script, so most of the time, this will mean within our terminal. We could manually create them each time we open a new terminal window, but that seems like way too much work, and very error-prone. So, where else can we put them?
If you are using virtualenv, you can manage your environment variables within your activate script.
``` shell
# This file must be used with "source bin/activate" *from bash*
# you cannot run it directly
deactivate () {
...
# Unset variables
unset NEXMO_KEY
unset NEXMO_SECRET
unset MY_NUMBER
}
...
export NEXMO_KEY="a925db1ar392"
export NEXMO_SECRET="01nd637fn29oe31mc721"
export MY_NUMBER="447700900981"
```
It's a little back to front in that you'll find the deactivate function at the top, but this is where you can unset any environment variables and do your housekeeping. Then at the bottom of the script is where you can set your variables. This way, when you activate your virtual environment, your variables will be automatically set and available to your scripts. And when you deactivate your virtual environment, it'll tidy up after you and unset those same variables.
Personally, I am not a fan of this approach.
I never manually alter files within my virtual environment. I do not keep them under source control. I treat them as wholly disposable. At any point, I should be able to delete my entire environment and create a new one without fear of losing anything. So, modifying the activate script is not a viable option for me.
Instead, I use direnv. direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory. What that means is when I cd into a directory containing an .envrc file, direnv will automatically set the environment variables contained within for me.
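An .envrc file is just a list of export statements. For the example we have been using, it might contain nothing more than:

``` shell
export DB_HOST="C01.5tsil.mongodb.net"
export DB_USERNAME="admin"
export DB_PASSWORD="hunter2"
```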
Let's look at a typical direnv workflow. First, we create an .envrc file and add some export statements, and we get an error. For security reasons, direnv will not load an .envrc file until you have allowed it. Otherwise, you might end up executing malicious code simply by cd'ing into a directory. So, let's tell direnv to allow this directory.
Now that we've allowed the .envrc file, direnv has automatically loaded it for us and set the DB_PASSWORD environment variable. Then, if we leave the directory, direnv will unload and clean up after us by unsetting any environment variables it set.
Now, you should NEVER commit your envrc file. I advise adding it to your projects gitignore file and your global gitignore file. There should be no reason why you should ever commit an .envrc file.
You will, however, want to share a list of what environment variables are required with your team. The convention for this is to create a .envrc.example file which only includes the variable names, but no values. You could even automate this with grep or similar.
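For example, assuming the .envrc contains one export statement per line, a one-liner like this is enough to strip out the values:

``` shell
sed 's/=.*/=/' .envrc > .envrc.example
```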
We covered keeping simple secrets out of your source code, but what about if you need to share secret files with coworkers? Let's take an example of when you might need to share a file in your repo, but ensure that even if your repository becomes public, only those authorised to access the file can do so.
MongoDB supports Encryption at Rest and Client side field level encryption.
With encryption at rest, the encryption occurs transparently in the storage layer; i.e. all data files are fully encrypted from a filesystem perspective, and data only exists in an unencrypted state in memory and during transmission.
With client-side field level encryption, applications can encrypt fields in documents prior to transmitting data over the wire to the server.
Only applications with access to the correct encryption keys can decrypt and read the protected data. Deleting an encryption key renders all data encrypted using that key as permanently unreadable. So. with Encryption at Rest. each database has its own encryption key and then there is a master key for the server. But with client-side field level encryption. you can encrypt individual fields in documents with customer keys.
I should point out that in production, you really should use a key management service for either of these. Like, really use a KMS. But for development, you can use a local key.
These commands generate a keyfile to be used for encryption at rest, set its permissions, and then enable encryption on my server. Now, if multiple developers needed to access this encrypted server, we would need to share this keyfile with them.
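For reference, the commands look roughly like this (encryption at rest requires MongoDB Enterprise, and the filename here is just an example):

``` shell
# Generate a 32-byte base64 key and lock down its permissions.
openssl rand -base64 32 > mongodb-keyfile
chmod 600 mongodb-keyfile

# Start the server with encryption at rest enabled, using that keyfile.
mongod --enableEncryption --encryptionKeyFile mongodb-keyfile
```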
And really, no one is thinking, "Eh... just Slack it to them..." We're going to store the keyfile in our repo, but we'll encrypt it first.
git-secret encrypts files and stores them inside the git repository. Perfect. Exactly what we need. With one little caveat...
Remember these processes all need to be safe and EASY. Well, git-secret is easy... ish.
Git-secret itself is very straightforward to use. But it does rely upon PGP. PGP, or pretty good privacy, is an encryption program that provides cryptographic privacy and authentication via public and private key pairs. And it is notoriously fiddly to set up.
There's also the problem of validating a public key belongs to who you think it does. Then there are key signing parties, then web of trust, and lots of other things that are way out of scope of this talk.
Thankfully, there are pretty comprehensive guides for setting up PGP on every OS you can imagine, so for the sake of this talk, I'm going to assume you already have PGP installed and you have your colleagues' public keys.
So let's dive into git-secret. First we initiate it, much the same as we would a git repository. This will create a hidden folder .gitsecret. Now we need to add some users who should know our secrets. This is done with git secret tell followed by the email address associated with their public key.
When we add a file to git-secret, it creates a new file. It does not change the file in place. So, our unencrypted file is still within our repository! We must ensure that it is not accidentally committed. Git-secret tries to help us with this. If you add a file to git-secret, it'll automatically add it to your .gitignore, if it's not already there.
If we take a look at our gitignore file after adding our keyfile to our list of secrets, we can see that it has been added, along with some files which .gitsecret needs to function but which should not be shared.
At this point, if we look at the contents of our directory, we can see our unencrypted file, but no encrypted version. First, we have to tell git-secret to hide all the files we've added. `ls` again, and now we can see the encrypted version of the file has been created. We can now safely add that encrypted version to our repository and push it up.
When one of our colleagues pulls down our encrypted file, they run reveal and it will use their private key to decrypt it.
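Condensed into commands, that workflow looks something like this (the email address here is just a stand-in for whichever public key you imported):

``` shell
git secret init
git secret tell colleague@example.com  # hypothetical email tied to their public key
git secret add mongodb-keyfile
git secret hide

# ...and after pulling the repo on a machine with the matching private key:
git secret reveal
```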
Git-secret comes with a few commands to make managing secrets and access easier.
- Whoknows will list all users who are able to decrypt secrets in a repository. Handy if someone leaves your team and you need to figure out which secrets need to be rotated.
- List will tell you which files in a repository are secret.
- And if someone does leave and you need to remove their access, there is the rather morbidly named killperson.
The killperson command will ensure that the person cannot decrypt any new secrets which are created, but it does not re-encrypt any existing secrets, so even though the person has been removed, they will still be able to decrypt any existing secrets.
There is little point in re-encrypting the existing files as they will need to be rotated anyways. Then, once the secret has been rotated, when you run hide on the new secret, the removed user will not be able to access the new version.
Another tool I want to look at is confusingly called git secrets, because the developers behind git tools have apparently even less imagination than I do.
git-secrets scans commits, commit messages, and --no-ff merges to prevent adding secrets into your git repositories
All the tools and processes we've looked at so far have attempted to make it easier to safely manage secrets. This tool, however, attacks the problem in a different way. Now we're going to make it more difficult to hard code secrets in your scripts.
Git-secrets uses regexes to attempt to detect secrets within your commits. It does this by using git hooks. Git secrets install will generate some Git templates with hooks already configured to check each commit. We can then specify these templates as the defaults for any new git repositories.
``` shell
$ git secrets --register-aws --global
OK
$ git secrets --install ~/.git-templates/git-secrets
✓ Installed commit-msg hook to /Users/aaronbassett/.git-templates/git-secrets/hooks/commit-msg
✓ Installed pre-commit hook to /Users/aaronbassett/.git-templates/git-secrets/hooks/pre-commit
✓ Installed prepare-commit-msg hook to /Users/aaronbassett/.git-templates/git-secrets/hooks/prepare-commit-msg
$ git config --global init.templateDir ~/.git-templates/git-secrets
```
Git-secrets is from AWS labs, so it comes with providers to detect AWS access keys, but you can also add your own. A provider is simply a list of regexes, one per line. Their recommended method is to store them all in a file and then cat them. But this has some drawbacks.
``` shell
$ git secrets --add-provider -- cat /secret/file/patterns
```
So some regexes are easy to recognise. This is the regex for an RSA key. Straight forward. But what about this one? I'd love to know if anyone recognises this right away. It's a regex for detecting Google oAuth access tokens. This one? Facebook access tokens.
So as you can see, having a single large file with undocumented regexes could quickly become very difficult to maintain. Instead, I place mine in a directory, neatly organised. Seperate files depending on the type of secret I want to detect. Then in each file, I have comments and whitespace to help me group regexes together and document what secret they're going to detect.
But, git-secrets will not accept these as a provider, so we need to get a little creative with egrep.
``` shell
git secrets --add-provider -- egrep -rhv "(^#|^$)" /secret/file/patterns
```
We collect all the files in our directory, strip out any lines which start with a hash or which are empty, and then return the result of this transformation to git-secrets. Which is exactly the input we had before, but now much more maintainable than one long undocumented list!
With git-secrets and our custom providers installed, if we try to commit a private key, it will throw an error. Now, git-secrets can produce false positives. The error message gives you some examples of how you can force your commit through. So if you are totally committed to shooting yourself in the foot, you still can. But hopefully, it introduces just enough friction to make hardcoding secrets more of a hassle than just using environment variables.
Finally, we're going to look at a tool for when all else fails. Gitleaks
Audit git repos for secrets. Gitleaks provides a way for you to find unencrypted secrets and other unwanted data types in git source code repositories. Git leaks is for when even with all of your best intentions, a secret has made it into your repo. Because the only thing worse than leaking a secret is not knowing you've leaked a secret.
It works in much the same way as git-secrets, but rather than inspecting individual commits you can inspect a multitude of things.
- A single repo
- All repos by a single user
- All repos under an organisation
- All code in a GitHub PR
- And it'll also inspect Gitlab users and groups, too
I recommend using it in a couple of different ways.
1. Have it configured to run as part of your PR process. Any leaks block the merge.
2. Run it against your entire organisation every hour/day/week, or at whatever frequency you feel is sufficient. Whenever it detects a leak, you'll get a nice report showing which rule was triggered, by which commit, in which file, and who authored it.
In closing...
- Keep secrets and code separate.
- If you must share secrets, encrypt them first. Yes, PGP can be fiddly, but it's worth it in the long run.
- Automate, automate, automate. If your secret management requires lots of manual work for developers, they will skip it. I know I would. It's so easy to justify to yourself. It's just this once. It's just a little proof of concept. You'll totally remember to remove them before you push. I've made all the same excuses to myself, too. So, keep it easy. Automate where possible.
- And late is better than never. Finding out you've accidentally leaked a secret is a stomach-dropping, heart-racing, breath-catching experience. But leaking a secret and not knowing until after it has been compromised is even worse. So, run your gitleak scans. Run them as often as you can. And have a plan in place for when you do inevitably leak a secret so you can deal with it quickly.
Thank you very much for your attention.
Please do add me on Twitter at aaron bassett. I would love to hear any feedback or questions you might have! If you would like to revisit any of my slides later, they will all be published at Notist shortly after this talk.
I'm not sure how much time we have left for questions, but I will be available in the hallway chat if anyone would like to speak to me there. I know I've been sorely missing seeing everyone at conferences this year, so it will be nice to catch up.
Thanks again to everyone who attended my talk and to the PyCon Australia organisers. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Can you keep a secret? Here are some techniques that you can use to properly store, share, and manage your secrets.",
"contentType": "Article"
} | Can You Keep a Secret? | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/non-root-user-mongod-process | created | # Procedure to Allow Non-Root Users to Stop/Start/Restart "mongod" Process
## Introduction
System security plays a fundamental role in today's modern
applications. It is very important to restrict unauthorized users'
access to root capabilities. With this blog post, we intend to document
how to avoid jeopardizing root system resources while still allowing
authorized, non-root users to perform administrative operations on
`mongod` processes, such as starting or stopping the daemon.
The methodology is easily extensible to other administrative operations
such as preventing non-authorized users from modifying `mongod` audit
logs.
Use this procedure on Linux-based systems to allow users with
restricted permissions to stop/start/restart `mongod` processes. These
users are set up under a non-root Linux group. Further, the Linux group
of these users is different from the Linux group under which the
`mongod` process runs.
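If you want to confirm which user and group an already-running `mongod` process uses, a quick check (assuming a procps-style `ps`) looks like this:
``` bash
# Show the effective user, group, and command name of the running mongod process
ps -o user,group,comm -C mongod
```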
## Considerations
>
>
>WARNING: The procedure requires root access for the setup. Incorrect
>settings can lead to an unresponsive system, so always test on a
>development environment before implementing in production. Ensure you
>have a current backup of your data.
>
>
It's recommended to perform this procedure while setting up a new
system. If that is not possible, perform it during a maintenance window.
The settings impact only one local system, so in the case of a replica
set or a sharded cluster, perform the procedure in a rolling manner and
never change all nodes at once.
## Tested Linux flavors
- CentOS 6\|7
- RHEL 6\|7
- Ubuntu 18.04
- Amazon Linux 2
>
>
>Disclaimer: For other Linux distributions, the procedure should work in a
>similar way; however, only the above versions were tested while writing
>this article.
>
>
## Procedure
- Add the user with limited permissions (replace testuser with your
user):
``` bash
$ adduser testuser
$ groupadd testgroup
```
- Install MongoDB
Community
\|
Enterprise
following our recommended procedures.
- Change the ownership and permissions of the MongoDB configuration file `/etc/mongod.conf`:
``` none
$ sudo chown mongod:mongod /etc/mongod.conf
$ sudo chmod 600 /etc/mongod.conf
$ ls -l /etc/mongod.conf
-rw-------. 1 mongod mongod 330 Feb 27 18:43 /etc/mongod.conf
```
With this configuration, only the mongod user (and root) has
permission to access and edit the `mongod.conf` file. No other user
is allowed to read or write its content.
### Systems running with systemd
This procedure works for CentOS 7 and RHEL 7.
- Add the following configuration lines to the
sudoers file with
visudo:
``` bash
%mongod ALL =(ALL) NOPASSWD: /bin/systemctl start mongod.service, /bin/systemctl stop mongod.service, /bin/systemctl restart mongod.service
%testuser ALL =(ALL) NOPASSWD: /bin/systemctl start mongod.service, /bin/systemctl stop mongod.service, /bin/systemctl restart mongod.service
```
>
>
>Note: The root user account may become non-functional if a syntax error
>is introduced in the sudoers file.
>
>
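Before relying on the new rules, it can help to validate the sudoers syntax and confirm which commands the restricted user is actually allowed to run. For example:
``` bash
# Check the sudoers file (and included files) for syntax errors -- run as root
sudo visudo -c
# List the sudo rules that apply to the restricted user
sudo -l -U testuser
```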
### Systems running with System V Init
This procedure works for CentOS 6, RHEL 6, Amazon Linux 2 and Ubuntu
18.04.
- The MongoDB init.d mongod script is available in our repository
here
in case manual download is required (make sure you save it in the
/etc/init.d/ directory with permissions set to 755).
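As a sketch, assuming you downloaded the script to a local file named `mongod.init` (a placeholder name), you could install it with the expected permissions like this:
``` bash
# Copy the init script into place with mode 755 -- run as root
sudo install -m 755 mongod.init /etc/init.d/mongod
# On CentOS 6 / RHEL 6, you may also want to register the service
sudo chkconfig --add mongod
```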
- Add the following configuration lines to the
sudoers file with
visudo:
For CentOS 6, RHEL 6 and Amazon Linux 2:
``` bash
%mongod ALL =(ALL) NOPASSWD: /sbin/service mongod start, /sbin/service mongod stop, /sbin/service mongod restart
%testuser ALL =(ALL) NOPASSWD: /sbin/service mongod start, /sbin/service mongod stop, /sbin/service mongod restart
```
For Ubuntu 18.04:
``` bash
%mongod ALL =(ALL) NOPASSWD: /usr/sbin/service mongod start, /usr/sbin/service mongod stop, /usr/sbin/service mongod restart
%testuser ALL =(ALL) NOPASSWD: /usr/sbin/service mongod start, /usr/sbin/service mongod stop, /usr/sbin/service mongod restart
```
>
>
>Note: The root may become non-functional if a syntax error is introduced
>in the sudoers file.
>
>
## Testing procedure
### Systems running with systemd (systemctl service)
With these settings, testuser has no permission to read
/etc/mongod.conf but can start and stop the mongod service:
``` none
[testuser@localhost ~]$ sudo /bin/systemctl start mongod.service
[testuser@localhost ~]$ sudo /bin/systemctl stop mongod.service
[testuser@localhost ~]$ vi /etc/mongod.conf
"/etc/mongod.conf" [Permission Denied]
[testuser@localhost ~]$ sudo vi /etc/mongod.conf
"/etc/mongod.conf" [Permission Denied]
```
>
>
>Note: Passwordless authorization applies only to the exact `/bin/systemctl start|stop|restart mongod.service`
>commands listed in the sudoers file. With this procedure, running `sudo systemctl start mongod`
>instead will prompt testuser for the sudo password.
>
>
### Systems running with System V Init
Use sudo service mongod \[start|stop|restart\]:
``` none
[testuser@localhost ~]$ sudo service mongod start
Starting mongod: [ OK ]
[testuser@localhost ~]$ sudo service mongod stop
Stopping mongod: [ OK ]
[testuser@localhost ~]$ vi /etc/mongod.conf
"/etc/mongod.conf" [Permission Denied]
[testuser@localhost ~]$ sudo vi /etc/mongod.conf
[sudo] password for testuser:
Sorry, user testuser is not allowed to execute '/bin/vi /etc/mongod.conf' as root on localhost.
```
>
>
>Note: Additionally, test restarting other services with the testuser
>with (and without) the required permissions.
>
>
## Wrap Up
It is one of the critical security requirements, not to give
unauthorized users full root privileges. With that requirement in mind,
it is important for system administrators to know that it is possible to
give access to actions like restart/stop/start for a `mongod` process
(or any other process) without giving root privileges, using Linux
systems capabilities.
>
>
>If you have questions, please head to our [developer community
>website where the MongoDB engineers and
>the MongoDB community will help you build your next big idea with
>MongoDB.
>
>
| md | {
"tags": [
"MongoDB",
"Bash"
],
"pageDescription": "Secure your MongoDB installation by allowing non-root users to stop/start/restart your mongod process.",
"contentType": "Tutorial"
} | Procedure to Allow Non-Root Users to Stop/Start/Restart "mongod" Process | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/bash/wordle-bash-data-api | created | # Build Your Own Wordle in Bash with the Data API
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
By now, you have most certainly heard about Wordle, the new word game that was created in October 2021 by a former Reddit engineer, Josh Wardle. It gained so much traction at the beginning of the year that even Google added a secret easter egg that shows up when you search for the game.
I wanted to brush up on my Bash scripting skills, so I thought, “Why not create the Wordle game in Bash?” I figured this would be a good exercise that would include some `if` statements and loops. However, the word list I have available for the possible Wordles is in a MongoDB collection. Well, thanks to the new Atlas Data API, I can now connect to my MongoDB database directly from a Bash script.
Let’s get to work!
## Requirements
You can find the complete source code for this repository on Github. You can use any MongoDB Atlas cluster for the data API part; a free tier would work perfectly.
You will need Bash Version 3 or more to run the Bash script.
```bash
$ bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)
```
You will need curl to access the Data API.
```bash
$ curl --version
curl 7.64.1 (x86_64-apple-darwin20.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.41.0
```
Finally, you will use jq to manipulate JSON objects directly in the command line.
```bash
jq --version
jq-1.6
```
## Writing the game
The game will run inside a while loop that will accept user inputs. The loop will go on until either the user finds the right word or has reached five tries without finding the right word.
First, we’ll start by creating a variable that will hold the word that needs to be guessed by the user. In Bash, you don’t need to initialize variables; you can simply assign a value to it. To access the variable, you use the dollar sign followed by the variable's name.
```bash
WORD=MONGO
echo Hello $WORD
# Hello MONGO
```
Next up, we will need a game loop. In Bash, a `while` loop uses the following syntax.
```bash
while [ condition ]
do
# Stuff
done
```
Finally, we will also need an if statement to compare the word. The syntax for `if` in Bash is as follows.
```bash
if [ condition ]
then
# Stuff
elif [ condition ]
then
# Optional else-if block
else
# Else block
fi
```
To get started with the game, we will create a variable for the while condition, ask the user for input with the `read` command, and exit if the user input matches the word we have hard-coded.
```bash
WORD=MONGO
GO_ON=1
while [ $GO_ON -eq 1 ]
do
read -n 5 -p "What is your guess: " USER_GUESS
if [ $USER_GUESS == $WORD ]
then
echo -e "You won!"
GO_ON=0
fi
done
```
Save this code in a file called `wordle.sh`, set the execute permission on the file, and then run it.
```bash
$ chmod +x ./wordle.sh
$ ./wordle.sh
```
So far, so good; we now have a loop that users can only exit if they find the right word. Let’s now make sure that they can only have five guesses. To do so, we will use a variable called TRIES, which will be incremented using `expr` at every guess. If it reaches five, then we change the value of the GO_ON variable to stop the main loop.
```bash
GO_ON=1
TRIES=0
while [ $GO_ON -eq 1 ]
do
TRIES=$(expr $TRIES + 1)
read -n 5 -p "What is your guess: " USER_GUESS
if [ $USER_GUESS == $WORD ]
then
echo -e "You won!"
GO_ON=0
elif [ $TRIES == 5 ]
then
echo -e "You failed.\nThe word was "$WORD
GO_ON=0
fi
done
```
Let’s now compare the value that we got from the user and compare it with the word. Because we want the coloured squares, we will need to compare the two words letter by letter. We will use a for loop and use the index `i` of the character we want to compare. For loops in Bash have the following syntax.
```bash
for i in {0..10}
do
# stuff
done
```
We will start with an empty `STATE` variable for our round result. We will add a green square for each letter if it’s a match, a yellow square if the letter exists elsewhere, or a black square if it’s not part of the solution. Add the following block after the `read` line and before the `if` statement.
```bash
STATE=""
for i in {0..4}
do
if [ "${WORD:i:1}" == "${USER_GUESS:i:1}" ]
then
STATE=$STATE"🟩"
elif [[ $WORD =~ "${USER_GUESS:i:1}" ]]
then
STATE=$STATE"🟨"
else
STATE=$STATE"⬛️"
fi
done
echo " "$STATE
```
Note how we then output the five squares using the `echo` command. This output will tell the user how close they are to finding the solution.
We have a largely working game already, and you can run it to see it in action. The only major problem left now is that the comparison is case-sensitive. To fix this issue, we can transform the user input into uppercase before starting the comparison. We can achieve this with a tool called `awk` that is frequently used to manipulate text in Bash. Right after the `read` line, and before we initialize the empty STATE variable, add the following line to uppercase the user input.
```bash
USER_GUESS=$(echo "$USER_GUESS" | awk '{print toupper($0)}')
```
That’s it; we now have a fully working Wordle clone.
## Connecting to MongoDB
We now have a fully working game, but it always uses the same start word. In order for our application to use a random word, we will start by populating our database with a list of words, and then pick one randomly from that collection.
When working with MongoDB Atlas, I usually use the native driver available for the programming language I’m using. Unfortunately, no native drivers exist for Bash. That does not mean we can’t access the data, though. We can use curl (or another command-line tool to transfer data) to access a MongoDB collection using the new Data API.
To enable the data API on your MongoDB Atlas cluster, you can follow the instructions from the [Getting Started with the Data API article.
Let’s start with adding a single word to our `words` collection, in the `wordle` database. Each document will have a single field named `word`, which will contain one of the possible Wordles. To add this document, we will use the `insertOne` endpoint of the Data API.
Create a file called `insert_words.sh`. In that file, create three variables that will hold the URL endpoint, the API key to access the data API, and the cluster name.
```bash
API_KEY=""
URL=""
CLUSTER=""
```
Next, use a curl command to insert a single document. As part of the payload for this request, you will add your document, which, in this case, is a JSON object with the word “MONGO.” Add the following to the `insert_words.sh` file.
```bash
curl --location --request POST $URL'/action/insertOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header 'api-key: '$API_KEY \
--data-raw '{
"collection":"words",
"database":"wordle",
"dataSource":"'$CLUSTER'",
"document": { "word": "MONGO" }
}'
```
Running this file, you should see a result similar to
```
{"insertedId":"620275c014c4be86ede1e4e7"}
```
This tells you that the insert was successful, and that the new document has this `_id`.
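If you want to double-check that the document really landed in the collection, you can read it back with the `findOne` endpoint, reusing the same variables. This is just a quick sanity check and isn't required for the game.
```bash
curl --location --request POST $URL'/action/findOne' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header 'api-key: '$API_KEY \
--data-raw '{
"collection":"words",
"database":"wordle",
"dataSource":"'$CLUSTER'",
"filter": { "word": "MONGO" }
}'
```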
You can add more words to the list, or you can import the official list of words to your MongoDB cluster. You can find that list in the `words.json` file in this project’s repository. You can change the `insert_words.sh` script to use the raw content from Github to import all the possible Wordles at once with the following curl command. This command will use the `insertMany` endpoint to insert the array of documents from Github.
```bash
curl --location --request POST $URL'/action/insertMany' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header 'api-key: '$API_KEY \
--data-raw '{
"collection":"words",
"database":"wordle",
"dataSource":"'$CLUSTER'",
"documents": '$(curl -s https://raw.githubusercontent.com/mongodb-developer/bash-wordle/main/words.json)'
}'
```
Now back to the `wordle.sh` file: add three variables at the top of the file that will hold the URL endpoint, the API key to access the Data API, and the cluster name.
```bash
API_KEY=""
URL=""
CLUSTER=""
```
Next, we’ll use a curl command to run an aggregation pipeline on our Wordle database. This aggregation pipeline will use the `$sample` stage to return one random word. The curl result will then be piped into `jq`, a tool to extract JSON data from the command line. Jq will return the actual value for the `word` field in the document we get from the aggregation pipeline. All of this is then assigned to the WORD variable.
Right after the two new variables, you can add this code.
```bash
WORD=$(curl --location --request POST -s $URL'/action/aggregate' \
--header 'Content-Type: application/json' \
--header 'Access-Control-Request-Headers: *' \
--header 'api-key: '$API_KEY \
--data-raw '{
"collection":"words",
"database":"wordle",
"dataSource":"Cluster0",
"pipeline": [{"$sample": {"size": 1}}]
}' | jq -r .documents[0].word)
```
And that’s it! Now, each time you run the `wordle.sh` file, you will get to try out a new word.
```
What is your guess: mongo ⬛️🟨🟨⬛️🟨
What is your guess: often 🟨⬛️⬛️⬛️🟩
What is your guess: adorn 🟨⬛️🟨🟨🟩
What is your guess: baron ⬛️🟩🟨🟩🟩
What is your guess: rayon 🟩🟩🟩🟩🟩
You won!
```
## Summary
That’s it! You now have your very own version of Wordle so that you can practice over and over directly in your favorite terminal. If you’re up for a challenge, this version is only missing one feature: at the moment, any five letters are accepted as input. Why don’t you add a validation step so that any word input by the user must match a word in the collection of valid words? You could do this with the help of the Data API again. Don’t forget to submit a pull request to the repository if you manage to do it!
| md | {
"tags": [
"Bash",
"Atlas"
],
"pageDescription": "Learn how to build a Wordle clone using bash and the MongoDB Data API.",
"contentType": "Code Example"
} | Build Your Own Wordle in Bash with the Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/php/exploring-php-driver-jeremy-mikola | created | # Exploring the PHP Driver with Jeremy Mikola - Podcast Episode
Jeremy Mikola is a Staff Engineer at MongoDB and helps maintain the MongoDB PHP Driver and Extension. In this episode of the podcast, Jesse Hall and Michael Lynn sit down with Jeremy to talk about the PHP Driver and some of the history of PHP and MongoDB.
:youtube]{vid=qOuGM6dNDm8}
Michael: [00:00:00] Hey, Jesse, how are you doing today?
Jesse: [00:00:02] Good. How are you?
Michael: [00:00:02] Fantastic. It's good to have you back on the podcast. Hey, what's your experience with PHP?
Jesse: I've done a little bit of PHP in the past. Mostly JavaScript though, so not too much, but today we do have a special guest. Jeremy Mikola is a staff engineer with Mongo DB, and he knows all about the PHP driver. Why don't you give us a little bit of background on how long have you been with MongoDB?
Jeremy: [00:00:26] Hi, nice to be here. So I joined MongoDB just over nine years. So in the middle of May was my nine-year anniversary. And the entire time of year, a lot of employees been here that long. They tend to shuffle around in different departments and get new experiences. I've been on the drivers team the entire time.
So when I find a place that you're comfortable with, you stick there. So when I came on board team was maybe 10 or 12 people, maybe one or two people per language. We didn't have nearly as many officially supported languages as we do today. But the PHP driver was one of the first ones.
It was developed actually by some of the server engineers. Christina, she was one of the early employees, no longer at MongoDB now, but. So yeah, back then it was PHP, Python, Ruby, C# Java, and I think Node. And we've kind of grown out since then.
Michael: [00:01:05] Fantastic. And what's your personal experience with PHP? How did you get involved in PHP?
Jeremy: [00:01:11] So I picked up PHP as a hobby in high school. Date myself here in high school graduation was around 2001. It's kind of the mid nineties getting home from school, load up Napster work on a personal, had a personal SimCity website. We started off around this time of PHP. Nuke was one of the early CMS frameworks back then.
And a lot of it was just tinkering, copy/pasting and finding out how stuff works, kind of self-taught until you get to college and then actually have real computer science classes and you understand there's math behind programming and all these other things, concepts. So it's definitely, it was a hobby through most of college.
My college curriculum was not PHP at all. And then afterwards I was able to, ended up getting a full-time job I working on, and that was with a Symfony 1.0 at the time around like 2007 and followed a couple of companies in the role after that. Ended up being the Symfony 2.0 framework, I was just coming out and that was around the time that PHP really started maturing with like package managers and much more object oriented, kind of shedding the some of the old
bad publicity had had of the early years. And from there, that was also the that second PHP job was where I got started with MongoDB. So we were actually across the street from MongoDB's office in Midtown, New York on the flat iron district and customer support back then used to be go downstairs, go across the street and go up to Elliot's desk and the ShopWiki offices and the Mongo old 10gen offices.
And you'd go ask your question. That kind of works when you only have a few official customers.
Michael: [00:02:36] Talking about Elliot Horowitz.
Jeremy: [00:02:37] Yes, as Elliot Horowitz, the co-founder was much more accessible then when the company was a lot smaller. And from that role ended up jumping to a second PHP company kind of the same framework, also using MongoDB.
It was the same tech stack. And after that role, I was approached by an old coworker from the first company that used MongoDB. He had ended up at the drivers team, Steve Francia. He was one of the first engineering managers of the drivers team and helped build the initial team; a lot of the employees that are still on the drivers team
now, a lot of the folks leading the teams were hired by him or came around the same time. So the early developers of the Python, the Java driver and so he, we had a interview came back, wasn't allowed to recruit me out of the first job whatever paperwork you signed, you can't recruit your old coworkers.
But after I spent some time somewhere else, he was happy to bring me on. I learned about the opportunity to come on the drivers team. And I was really excited to go from working on applications, to going and developing libraries suited for other developers instead of like a customer facing product. And so that's kind of been the story since then, just really enjoyed working on APIs as well as it was working on the ODM library at the time, which we can talk about a little bit later. So kind of was already involved in a lot of MongoDB PHP ecosystem.
Jesse: [00:03:46] Cool. So let's, let's talk more about that, that PHP driver. So, what is it, why is it useful to our listeners? How does it work?
Jeremy: [00:03:54] okay. Yep. So level set for the basic explanation. So every language since MongoDB to be doesn't expose a ... it's. Not like some databases, that might have a REST protocol or you just have a web client accessing it. So you do need a driver to speak the wire protocol language, and the MongoDB drivers are particularly different from some other database drivers.
We do a lot of the monitoring of servers and kind of a lot more heavy than you might find in an equivalent like SQL driver especially like the PHP SQL drivers. So the drivers much more intensive library with a lot of the network programming. We're also responsible for converting, how MongoDB stores documents in a it's binary JSON format BSON converting that to whatever the language's
natural representation is. So I can Java that may be just be mapping at the Java classes with PHP. The original driver would turn everything into associative arrays. Make sure that a MongoDB string becomes a PHP string, vice versa. And so the original PHP driver you had familiar concepts across all drivers.
You have your client object that you connect to the database with, and then you have your database, your collection. And the goal is to make whatever language to users. Running their application, make the drivers as idiomatic as possible. And this kind of bit us early on because the drivers may be too idiomatic and they're inconsistent with each other, which becomes a struggle with someone that's writing a MongoDB application, in say C# and PHP.
There might be two very different experiences over Python and NodeJS. And the one thing that we hadn't since then was writing specifications to kind of codify what are the areas that we want to be idiomatic, but we also want to have consistent APIs. And this has also been a boon to our support team because if the drivers can behave predictably, both kind of have a familiar API in the outside that our users develop with.
And then also internally, how do they behave when they connect to MongoDO, so kind of being able to enforce that and having internal tests that are shared across all the different drivers has been a huge plus to our support team.
Michael: [00:05:38] So talk, talk a little bit about that, the balance between a standards-based approach and the idiomatic approach, how does that come together?
Jeremy: [00:05:48] Right. So this has definitely been a learning process from the, some of the early specifications. One of the first specifications we had was for the CRUD API which stands acronym for create, read, update, delete. And that was one of the, that's an essential component of every API. Like how do you insert data into MongoDB and read it back? And having that API let's us standardize on a this is a fine method. What are the options that should take how does this map to the servers? And the MongoDB shell API as well.
That was another project that exists outside of the driver's team's control. But from our customer standpoint, the Mongo shell is also something that they're common to use. So we try to enforce some consistency with that as well. And the specifications we want to, at a functional level provide a consistent experience.
But in terms of honoring that every language should be idiomatic. We're going to make allowances that say in C# you have special types to represent time units of time. Whereas other languages like C or Python, you might just use integers or numeric types. So having the specifications say if you're going to express
like the query time or a time limit on the query will allow like C# driver will say, if you have a time object, you can certainly make use of that type. And another language or students providing guidance and also consistent naming. So we'll say this method should be called find or findOne in your language, if you use camel case or you use snake case like Python with underscores, we're going to let you use that variation.
And that'll keep things idiomatic, so that a user using a Python library doesn't expect to see Pascal style method names in their application. They're going to want it to blend in with other libraries in that languages ecosystem. But the behaviors should be predictable. And there should be a common sense of what functionality is supported across all the different the drivers.
Michael: [00:07:26] Is that supported through synonyms in the language itself? So for, you mentioned, find and find one and maybe some people are used to other, other words to that stand for the, the read functionality in CRUD.
Jeremy: [00:07:41] So, this is, that's a point where we do need to be opinionated about, because this overlaps with also the MongoDB documentation. So if you go to the MongoDB server manual that the driver's team doesn't maintain you'll find language examples in there. An initiative we started a few years ago and that's code that we keep in the driver project that the docs team will then parse out and be able to embed in them are going to be manual.
So the benefit of a, we get to test it in C.I. Environments. And then the MongoDB manual you're browsing. You can say, I use this language and then all the code examples, instead of the MongoDB shell might be in your, in C# or Java or PHP. And so having consistent having, being able to enforce the actual names, we have to be opinionated that we want a method that reads the database instead of calling it query or select.
We want that to be called find. So we want that to be consistently named and we'll just leave flexibility in terms of the, the casing or if you need prefixing or something like that, but there's certain common or certain core words. We want users to think, oh, this is a find, this is a find operation.
It also maps to the find command in the database. That's the same thing with inserts and updates. One of the other changes with the old drivers. We would have an update method and in MongoDB different ways that you work with documents, you can update them in place, or you can replace the document.
And both of those in the server's perspective happened to be called an update command. So you had original drivers that would just have an update method with a bunch of options. And depending what options you pass in, they could do myriad different behaviors. You might be overwriting the entire document.
You might be incrementing a value inside of it. So one of the things that CRUD API implemented was saying, we're going to kind of, it's a kind of a poor design pattern to have an overloaded method name that changes behavior wildly based on the arguments. So let's create an updateOne method I replaced one method and updateMany method.
> For more information about the PHP Driver's implementation of CRUD, refer to the [PHP Quickstart series.
So now that when the users write their applications instead of having to infer, what are the options that I'm passing into this method? The method name itself leads to more by self-documenting code in that user's application.
Jesse: [00:09:31] Awesome. So how do users get started using the driver?
Jeremy: [00:09:35] Yeah, so I think a lot of users some, maybe their first interaction might be through the online education courses that we have through MongoDB university. Not every driver, I don't believe there's a PHP class for that. There's definitely a Python Java node, a few others and just kind of a priority list of limited resources to produce that content.
But a lot of users are introduced, I would say through MongoDB University. Probably also through going back nine years early on in the company. MongoDB had a huge presence at like college hackathons going to conferences and doing booths, try out MongoDB and that definitely more appropriate when we were sa maller company, less people had heard about MongoDB now where it's kind of a different approach to capturing the developers.
I think in this case, a lot of developers already heard about MongoDB and maybe it's less of a. Maybe the focus has shifted towards find out how this database works to changing maybe misconceptions they might have about it, or getting them to learn about some new features that we're implementing. I think another way that users pick up using databases sometimes through projects that have MongoDB integrations.
So at the first company where I was using MongoDB and Symfony to in both of them were, it was like a really early time to be using both of those technologies much less together. There was the concept of ORM libraries for PHP, which would kind of map your PHP classes to relational database.
And at the time I don't know who made this decision, but early startup, the worst thing you can possibly do is use two very new technologies that are changing constantly and are arguably unproven. Someone higher up than me decided let's use MongoDB with this new web framework. It was still being actively developed and not formally released yet.
And we need an ORM library for MongoDB cause we don't want to just write raw database queries back and forth. And so we developed a ODM library, object document mapper instead of object relational mapper. And that was based on the same common interfaces as the corresponding ORM library. So that was the doctrine ODM.
And so this was really early time to be writing that. But it integrated so well. It was into the framework and from a such an early point that a lot of users when picking up the Symphony two framework, they realized, oh, we have this ORM library that's integrated in an ODM library. They both have
basically the same kind of support for all the common features, both in terms of integrating with the web forms all the bundles for like storing user accounts and user sessions and things like that. So in all those fleet or functionalities is kind of a drop-in replacement. And maybe those users said, oh MongoDB's new.
I want to try this out. And so that being able to. Have a very low barrier of entry to switch into it. Probably drove some users to to certainly try it out and stick with it. We definitely that's. The second company was at was kind of using it in the same vein. It was available as a drop-in replacement and they were excited about the not being bound to a relational schema.
So definitely had its use as a first company. It was an e-commerce product. So it definitely made use of storing like flexible the flexible schema design for storing like product information and stuff. And then the, we actually used SQL database side by side there just to do all the order, transactional stuff.
Because certainly at the time MongoDB did not have the same kind of level of transactions and stuff that it does today. So that was, I credit that experience of using the right tool for the job and the different part of the company like using MongoDB to represent products and using the relational database to do the order processing and transactions with time.
Definitely left me with a positive experience of using MongoDB versus like trying to shoehorn everything into the database at the time and realizing, oh, it doesn't work for, for this use case. I'm gonna write an angry blog post about it.
Michael: [00:12:53] Yeah, I can relate. So if listeners are looking to get started today, you mentioned the ODM, you mentioned the driver what's the best way to get started today?
Jeremy: [00:13:04] So I definitely would suggest users not jump right in with an ODM library. Because while that's going to help you ramp up and quickly develop an application, it's also going to extract a lot of the components of the database away from you. So you're not going to get an understanding of how the query language works completely, or maybe how to interact with aggregation pipelines, which are some of the richer features of MongoDB.
That said there's going to be some users that like, when you need to you're rapidly developing something, you don't want to think about that. Like you're deciding like uncomfortable and maybe I want to use Atlas and use all the infrastructure behind it with the scaling and being able to easily set up backups and all that functionality.
And so I just want to get down and start writing my application, crank out these model classes and just tap them, store to MongoDB. So different use cases, I would say, but if you really want to learn MongoDB, install the PHP driver comes in two parts. There's the PHP extension, which is implemented in C.
So that's gonna be the first thing you're gonna install. And that's published as a pickle package, like a lot of third-party PHP extensions. So you will install that and that's going to provide a very basic API on top of that. We have a higher level package written in PHP code itself. And that's kind of the offload, like what is the essential heavy lifting code that we have to do in C and what is the high level API that we can implement in PHP? It's more maintainable for us. And then also users can read the code or easily contribute to it if they wish. And so those two components collectively form what we call it, the PHP driver. And so using once those are both installed getting familiar with the API in terms of our documentation for that high-level library kind of goes through all the methods.
We don't, I would say where there's never nearly enough tutorials, but there's a bunch of tutorials in there to introduce the CRUD methods. Kind of explain the basics of inserting and reading and writing documents. MongoDB writing queries at patient pipelines iterating cursors. When you do a query, you get this cursor object back, how you read your results back.
So that would hopefully give users enough of a kind of a launchpad to get started. And I was certainly biased from having been exposed to MongoDB so long, but I think the driver APIs are mostly intuitive. And that's been, certainly been the goal with a lot of the specifications we write. And I'll say this, this does fall apart
when we get into things like say client-side encryption, these advanced features we're even being a long-term employee. Some of these features don't make complete sense to me because I'm not writing applications with them the same way our users are. We would kind of, a driver engineer, we might have a portion of the, the average team work on a, on a new feature, a new specification for it.
So not every driver engineer has the same benefit of being, having the same holistic experience of the database platform as is was easy to do so not years ago where going to say oh, I came in, I was familiar with all these aspects of MongoDB, and now there's like components of MongoDB that I've never interacted with.
Like some of the authentication mechanisms. Some of that, like the Atlas, a full text search features there's just like way too much for us to wrap our heads around.
Jesse: [00:15:49] Awesome. Yeah. And if the users want to get started, be sure to check the show notes. We'll include links to everything there. Let's talk about the development process. So, how does that work? And is there any community participation there?
Jeremy: [00:16:02] Yep. So the drivers spec process That's something that's definitely that's changed over the time is that I mentioned the specifications. So all the work that I mean kind of divide the drivers workload into two different things. We have the downstream work that comes from the server or other teams like Atlas has a new feature.
The server has a new feature, something like client side encryption or the full text search. And so the, for that to be used by our community, we need support for that in the driver. Right? So we're going to have downstream tickets be created and a driver engineer or two, a small team is going to spec out what the driver API for that feature should be.
And that's going to come on our plate for the next so if you consider like MongoDB 5.0, I was coming out soon. Or so if we look at them, MongoDB 5.0, which should be out within the summer that's going to have a bunch of new features that need to end up in the driver API. And we're in the process of designing those and writing our tests for those.
And then there's going to be another handful of features that are maybe fully contained within the driver, or maybe a single language has a new feature we want to write. To give you an example, in PHP, we have a desire to improve the APIs around mapping BSON onto PHP classes and back and forth.
So that's something that ties back to the Doctrine ODM library. The heavy lifting there was done entirely in PHP, and there are ways that we can use the C extension to do that. And it's a matter of writing enough C code to get the job done so that Doctrine can fully rely on it instead of having to do a lot of it still on its own.
So the two of us working on the PHP driver now, myself and Andreas Braun, both have a history of working on the Doctrine ODM project. So we know what the needs of that library are.
And we're a good position to spec out the kind of features. And more importantly, in this case, it involves a lot of prototyping to find out the right balance of how much code we want to write. And what's the performance improvement that we'll be able to give the third, the higher level libraries that can use the driver.
That's something that we're going to be. Another example for other drivers is implementing a client side operations timeout. So that's, this is an example of a cross driver project that is basically entirely on the language drivers. And this is to give users a better API. Then so right now MongoDB
has a whole bunch of options. If you want to use socket timeout. So we can say run this operation X amount of time, but in terms of what we want to give our users and the driver is just think about a logical amount of time that you want something to complete in and not have to set five different timeout options at various low levels.
And so this is something that's being developed inside. We're specing out a common driver API to provide this and this feature really kind of depends entirely on the drivers and money and it's not really a server feature or an Atlas feature. So those are two examples of the tickets that aren't downstream changes at all.
We are the originators of that feature. And so you've got, we have a mix of both, and it's always a lack of, not enough people to get all the work done. And so what do we prioritize? What gets punted? And fortunately, it's usually the organic drivers projects that have to take a back seat to the downstream stuff coming from other departments, because there's a, we have to think in terms of the global MongoDB ecosystem.
And so if an Atlas team is going to develop a new feature and folks can't use that from drivers, no one's going to be writing their application with the MongoDB shell directly. So if we need, there are certain things we need to have and drivers, and then we've just kind of solved this by finding enough resources and staff to get the job done.
Michael: [00:19:12] I'm curious about the community involvement, are there a lot of developers contributing code?
Jeremy: [00:19:19] So I can say definitely on the PHP driver, there's looking at the extension side and see there's a high barrier of entry in terms of like, when I joined the company, I didn't know how to write C extensions and see, it's not just a matter of even knowing C. It's knowing all the macros that PHP itself uses.
We've definitely had a few smaller contributions for the library that's written in PHP. But I would say even then it's not the same as if we compare it to like the Symfony project or other web frameworks, like Laravel where there's a lot of community involvement. Like people aren't running an application, they want a particular feature.
Or there's a huge list of bugs. That there's not enough time for the core developers to work on. And so users pick up the low-hanging fruit and or the bigger projects, depending on what time. And they make a contribution back to the framework and that's what I was doing. And that for the first company, when you use Symphone and Mongo. But I'd say in terms of the drivers speaking for PHP, there's not a lot of community involvement in terms of us.
Definitely for, we get issues reported, but in terms of submitting patches or requesting new features, I don't kind of see that same activity. And I don't remember that. I'd say what the PHP driver, I don't see the same kind of user contribution activity that you'd see in popular web frameworks and things.
I don't know if that's a factor of the driver does what it needs to do or people are just kind of considered a black box. It's this is the API I'm going to do its functionally here and not try and add in new features. Every now and then we do get feature requests, but I don't think they materialize in, into code contributions.
It might be like someone wants this functionality. They're not sure how we would design it. Or they're not sure, like what, what internal refactorings or what, what is it? What is the full scope of work required to get this feature done? But they've voiced to us that oh, it'd be nice if maybe going like MongoDB's date type was more usable with, with time zones or something like that.
So can you provide us with a better way to this is identifiable identify a pain point for us, and that will point us to say, develop some resources into thinking it through. And maybe that becomes a general drivers spec. Maybe that just becomes a project for the PHP driver. Could say a little bit of both.
I do want to point out with community participation in drivers versus existing drivers. We definitely have a lot of community developed drivers, so that MongoDB as a company limited staffing. We have maybe a dozen or so languages that we actively support with drivers. There's many more than that in terms of community developed drivers.
And so that's one of the benefits of us publishing specifications to develop our drivers kind of like open sourcing our development process. Is also a boon for community drivers, whether they have the resources to follow along with every feature or not, they might decide some of these features like the more enterprise features, maybe a community driver doesn't doesn't care about that.
But if we're updating the CRUD API or one of the more essential and generally useful features, they can follow along the development processes and see what changes are coming for new server versions and implement that into the community driver. And so that's kind of in the most efficient way that we've come up with to both support them without having the resources to actually contribute on all those community projects.
Cause I think if we could, it would be great to have MongoDB employees working on a driver for every possible language just isn't feasible. So it's the second best thing we can do. And maybe in lieu of throwing venture capital money at them and sponsoring the work, which we've done in the past with some drivers at different degrees.
But is this open sourcing the design process, keeping that as much the, not just the finished product, but also the communication, the review process and keeping that in it to give up yards as much as possible so people can follow the design rationale that goes into the specifications and keep up to date with the driver changes.
Michael: [00:22:46] I'm curious about the about the decline in the PHP community, there's been obviously a number of factors around that, right? The advent of Node JS and the popularity of frameworks around JavaScript, it's probably contributing to it. But I'm curious as someone who works in the PHP space, what are your thoughts around the, the general decline
of, or also I say the decrease in the number of new programmers leveraging PHP, do you see that continuing or do you think that maybe PHP has some life left?
Jeremy: [00:23:24] so I think the it's hard for me to truly identify this cause I've been disconnected from developing PHP applications for a long time. But in my time at MongoDB, I'd say maybe with the first seven or eight years of my time here, COVID kind of disrupted everything, but I was reasonably active in attending conferences in the community and watching the changes in the PHP ecosystem with the frameworks like Symfony and Laravel I think Laravel particularly. And some of these are kind of focused on region where you might say so Symfony is definitely like more active in, in Europe. Laravel I think if you look at like USB HP users and they may be there versus if they didn't catch on in the US quite the same way that Laravel did, I'm like, excuse me where the Symfony community, maybe didn't develop in
at the same pace that laravel did it in the United States. The, if you go to these conferences, you'll see there's huge amounts of people excited about the language and actively people still giving testimonies that they like taught themselves programming, wrote their first application in one of these frameworks and our supporting their families, or transitioned from a non-tech job into those.
So you definitely still have people learning PHP. I'd say it doesn't have the same story that we get from thinking about Node JS where there's like these bootcamps that exists. I don't think you kind of have that same experience for PHP. But there's definitely still a lot of people learning PHP and then making careers out of it.
And even in the shift of, in terms of the language maturity, you could say. Maybe it's a bit of a stereotype that you'd say PHP is a relic of the early nineties. And when people think about the older CMS platforms and maybe projects like a WordPress or Drupal which if we focused on the numbers are still in like using an incredible numbers in terms of the number of websites they power.
But it's also, I don't think people necessarily, they look at WordPress deployments and things like, oh, this is the they might look at that as a more data platform and that's a WordPress. It was more of a software that you deploy as well as a web framework. But like in terms of them supporting older PHP installations and things, and then looking at the newer frameworks where they can do cutting edge, like we're only going to support PHP.
The latest three-year releases of PHP, which is not a luxury that an established platform like WordPress or Drupal might have. But even if we consider Drupal, in the time I've been at MongoDB, they went from being a kind of roll-their-own framework to redeveloping themselves on top of the Symfony framework and kind of modernizing their innards.
And that brought a lot of that. We could say the siloed communities where someone might identify as a Drupal developer and just only work in the Drupal ecosystem. And then having that framework change now be developed upon a Symfony and had more interoperability with other web frameworks and PHP packages.
Some of those only triple developers transitioned to becoming a kind of a Jack of all trades, PHP developer and more of a, kind of a well-balanced software engineer in that respect. And I think you'll find people in both camps, like you could certainly be incredibly successful, writing WordPress plugins.
So you could be incredibly successful writing pumping out websites for clients on web frameworks. The same way that you can join a full-time company that signs this entire platform is going to be built on a particular web framework.
Jesse: [00:26:28] Yeah, that's kind of a loaded question there. I don't think that PHP is a, is going to go anywhere. I think JavaScript gets a lot of publicity. But PHP has a strong foothold in the community and that's where I have some experience there with WordPress. That's kind of where I got introduced to PHP as well.
But PHP is, yeah, it's not going to go anywhere.
Jeremy: [00:26:49] I think from our perspective on the drivers, it's also, we get to look longingly at a lot of the new PHP versions that come out. So like right now they're working on kind of an API for async support, a lot of the new we have typing a lot more strictly type systems, which as a software engineer, you appreciate, you realize in terms of the flexibility of a scripting language, you don't want typing, but depending which way you're approaching it, as it says, it.
Working on the MongoDB driver, there's a lot of new features we want to use. And we're kind of limited in terms of we have customers that are still on earlier versions of PHP seven are definitely still some customers maybe on PHP five. So we have to do the dance in terms of when did we cut off support for older PHP versions or even older MongoDB versions?
So it's not quite as not quite the same struggle that maybe WordPress has to do with being able to be deployed everywhere. But I think when you're developing a project for your own company and you have full control of the tech stack, you can use the latest new features and like some new technology comes off.
You want to integrate it, you control your full tech stack. When you're writing a library, you kind of have to walk the balance of what is the lowest common denominator reasonably that we're going to support? Because we still have a user base. And so that's where the driver's team we make use of our, our product managers to kind of help us do that research.
We collect stats on Atlas users to find out what PHP versions they're using, what MongoDB versions are using as well. And so that gives us some kind of intelligence to say should we still be supporting this old PHP version while we have one, one or 2% of users? Is that, is that worth the amount of time or the sacrifice of features?
That we're not being able to take advantage of.
Jesse: [00:28:18] Sure. So I think you talked a little bit about this already, but what's on the roadmap? What's coming up?
Jeremy: [00:28:24] So Andreas and I are definitely looking forward when we have time to focus on just the PHP project development revisiting some of the BSON integration on coming up with better API is to not just benefit doctrine, but I'd say any library that integrates step provides Object mapper on top of the driver.
Find something generally useful. There's also framework integrations that so I mentioned I alluded to Laravel previously. So for Laravel, as a framework is kind of a PDA around there or on that ships with the framework is based around relational databases. And so there is a MongoDB integration for laravel that's Kind of community developed and that kind of deals with the least common denominator problem over well, we can't take advantage of all the MongoDB features because we have to provide a consistent API with the relational ORM that ships with Laravel. And this is a similar challenge when in the past, within the drivers team or people outside, the other departments in MongoDB have said, oh, why don't we get WordPress working on them? MongoDB around, we get Drupal running on MongoDB, and it's not as easy as it seems, because if the entire platform assumes because... same thing has come up before with a very long time ago with the Django Python framework. It was like, oh, let's get Django running on MongoDB. And this was like 10 years ago. And I think it's certainly a challenge when the framework itself has you can't fight the inertia of the opinionated decisions of the framework.
So in Laravel's case they have this community supported MongoDB integration and it struggles with implementing a lot of MongoDB features that just kind of can't be shoehorned into that. And so that's a project that is no longer in the original developers' hands. It kind of as a team behind it, of people in the community that have varying levels of amount of time to focus on these features.
So that project is now in the hands of a team, not the original maintainer. And there, I think, I mean, they all have jobs. They all have other things that they're doing this in their spare time offer free. So is this something that we can provide some guidance on in the past, like we've chipped in on code reviews and try to answer some difficult questions about MongoDB.
I think the direction they're going now is kind of, they want to remove features for next future version and kind of simplify things and get what they have really stable. But if that's something when, if we can build up our staff here and devote more time to, because. We look at our internal stats.
We definitely have a lot of MongoDB customers happen to be using Laravel with PHP or the Symfony framework. So I think a lot of our, given how many PHP users use things like Drupal and WordPress, we're not seeing them on MongoDB the same way that people using the raw frameworks and developing applications themselves might choose that in that case, they're in full control of what they deploy on.
And when they choose to use MongoDB, we want to make sure that they have as. It may not be the first class because it's that can't be the same experience as the aura that ships with the framework. But I think it's definitely there's if we strategize and think about what are the features that we can support.
But that, and that's definitely gonna require us familiarizing ourselves with the framework, because I'd say that the longer we spend at MongoDB working on the driver directly. We become more disconnected from the time when we were application developers. And so we can approach this two ways.
We can devote our time to spending time running example applications and finding those pain points for ourselves. We can try and hire someone familiar with the library, which is like the benefit of when I was hired or Andreas was hired coming out of a PHP application job. And then you get to bring that experience and then it's a matter of time before they become disconnected over the next 10 years.
Either, yeah. Either recruiting someone with the experience or spending time to experiment the framework and find out the pain points or interview users is another thing that our product managers do. And that'll give us some direction in terms of one of the things we want to focus on time permitting and where can we have the most impact to give our users a better experience?
Michael: [00:31:59] Folks listening that want to give feedback. What's the best way to do that? Are are you involved in the forums, the community.MongoDB.com forums?
Jeremy: [00:32:09] So we do monitor those. A lot of the support questions there get handled before they reach us, because the drivers team itself is just a few people per language versus the entirety of our community support team and the technical services department. So I'm certainly not going on there every day to check for new user questions.
And to give credit to our community support team, they're able to answer a lot of the language questions themselves, and then they escalate stuff to us if there's a bigger question, just like our paid commercial support team does; they field so many things themselves.
And then maybe once or twice a month, we'll get a language question that comes to us along the lines of, we're kind of stumped here, can you explain what the driver's doing here, or tell us whether this is a bug? But I would say the community forums are the best way: if you're posting there, the information will definitely reach us, because our product managers, the people whose full-time focus is dealing with the community, are going to see it first. As for the drivers, we're definitely active on JIRA and our various GitHub projects.
And I think those are best used for reporting actual bugs instead of general support inquiries. I know some other open source projects will use GitHub to track ideas and the like, not just bug reports and things like that. In our case, to make the best use of our time, we silo it: we want to keep JIRA and GitHub for bugs and customer support issues.
If there's an open discussion to have, we have these community forums, and that helps us efficiently keep the information in the best forum, no pun intended, to discuss it.
Michael: [00:33:32] Yeah, this has been a great discussion. Thank you so much for sharing all of the details about the PHP driver and the extension. Is there anything we missed? Anything you want to make sure that the listeners know about PHP and MongoDB?
Jeremy: [00:33:46] I guess an encouragement to share feedback. If there are pain points, we definitely want to hear them. I'd say other languages have more vocal people, so we're always unsure: is it that people just haven't been talking to us, or is it that users don't think they should be raising their concerns? So I'll just reiterate and encourage people to share their feedback.
Jesse: [00:34:10] Or there's no concerns.
Jeremy: [00:34:12] Yeah. Or maybe that's actually it. In terms of our bug reports, we have very few relative to some other drivers.
Michael: [00:34:19] That's a good thing. Yeah.
Awesome. Jeremy, thank you so much once again, truly appreciate your time. Jesse, thanks for helping out with the interview.
Jesse: [00:34:28] Thanks for having me.
Jeremy: [00:34:29] Great talking to you guys. Thanks.
We hope you enjoyed this podcast episode. If you're interested in learning more about the PHP Driver, please visit our documentation page and the GitHub repository. I would also encourage you to visit our forums, where we have a category specifically for PHP.
I would also encourage you to check out the PHP Quickstart articles I wrote recently on our Developer Hub. Feedback is always welcome! | md | {
"tags": [
"PHP"
],
"pageDescription": "Jeremy Mikola is a Senior Software Engineer at MongoDB and helps maintain the MongoDB PHP Driver and Extension. In this episode of the podcast, Jesse Hall and Michael Lynn sit down with Jeremy to talk about the PHP Driver and some of the history of PHP and MongoDB.",
"contentType": "Podcast"
} | Exploring the PHP Driver with Jeremy Mikola - Podcast Episode | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-demo-restaurant-app | created | # Atlas Search from Soup to Nuts: The Restaurant Finder Demo App
Hey! Have you heard about the trendy, new restaurant in Manhattan named Karma? No need to order off the menu. You just get what you deserve. 😋 🤣 And with MongoDB Atlas Search, you also get exactly what you deserve by using a modern application development platform. You get the lightning-fast, relevance-based search capabilities that come with Apache Lucene on top of the developer productivity, resilience, and scale of MongoDB Atlas. Apache Lucene is the world’s most popular search engine library. Now together with Atlas, making sophisticated, fine-grained search queries is a piece of cake.
In this video tutorial, I am going to show you how to build out Atlas Search queries quickly with our Atlas Search Restaurant Finder demo application, which you will find at www.atlassearchrestaurants.com. This app search demo is based on a partially mocked dataset of over 25,000 restaurants in the New York City area. In it, you can search for restaurants based on a wide variety of search criteria, such as name, menu items, location, and cuisine.
This sample search app serves up all the Atlas Search features and also gives away the recipe by providing live code examples. As you interact with the What’s Cooking Restaurant Finder, see how your search parameters are blended together with varying operators within the $search stage of a MongoDB aggregation pipeline. Like combining the freshest ingredients for your favorite dish, Atlas Search lets you easily mix simple searches together using the compound operator.
I named this application “What’s Cooking,” but I should have called it “The Kitchen Sink” because it offers a smorgasbord of so many popular Atlas Search features:
* Fuzzy Search - to tolerate typos and misspellings. Desert, anyone?
* Autocomplete - to search-as-you-type
* Highlighting - to extract document snippets that display search terms in their original context
* Geospatial search - to search within a location’s radius or shape
* Synonyms - Wanna Coke or a Pop? Search for either one with defined synonyms
* Custom Scoring - that extra added flavor to modify search results rankings or to boost promoted content
* Facets and Counts - slice and dice your returned data into different categories
Looking for some killer New York pizza within a few blocks of MongoDB’s New York office in Midtown? How about some savory search synonyms! Any special restaurants with promotions? We have search capabilities for every appetite. And to kick it up a notch, we let you sample the speed of fast, native Lucene facets and counts - currently in public preview.
Feast your eyes!
>
>
>Atlas Search queries are built using $search in a MongoDB aggregation pipeline
>
Notice that even though searches are based in Lucene, Atlas Search queries look like any other aggregation stage, easily integrated into whatever programming language without any extra transformation code. No more half-baked context switching as needed with any other stand-alone search engine. This boils down to an instant productivity boost!
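For example, a compound query that mixes a fuzzy text search with a geo filter might look like the sketch below. It is purely illustrative: the field names (`menu`, `location`) and the index name `default` are assumptions that would need to match your own Atlas Search index on the `whatscooking.restaurants` collection.

``` javascript
// Illustrative sketch only: field names like "menu" and "location" and the
// index name "default" are assumptions and must match your own search index.
db.restaurants.aggregate([
  {
    $search: {
      index: "default",
      compound: {
        must: [
          { text: { query: "pizza", path: "menu", fuzzy: { maxEdits: 1 } } }
        ],
        should: [
          {
            geoWithin: {
              path: "location",
              circle: {
                center: { type: "Point", coordinates: [-73.9857, 40.7484] },
                radius: 1609 // meters, roughly one mile around Midtown
              }
            }
          }
        ]
      }
    }
  },
  { $limit: 10 }
]);
```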
What is in our secret sauce, you ask? We have embedded an Apache Lucene search engine alongside your Atlas database. This synchronizes your data between the database and search index automatically. This also takes off your plate the operational burden and additional cost of setting-up, maintaining, and scaling a separate search platform. Now not only is your data architecture simplified, but also your developer workload, as now developers can work with a single API. Simply stated, you can now have champagne taste on a beer budget. 🥂🍾
If this application has whetted your appetite for development, the code for the What’s Cooking Restaurant Finder application can be found here: https://github.com/mongodb-developer/whatscooking
This repo has everything from soup to nuts to recreate the What’s Cooking Restaurant Finder:
* React and Tailwind CSS on the front-end
* The code for the backend APIs
* The whatscooking.restaurants dataset mongodb+srv://mongodb:[email protected]/whatscooking
This recipe - like all recipes - is merely a starting point. Like any chef, experiment. Use different operators in different combinations to see how they affect your scores and search results. Try different things to suit your tastes. We’ll even let you eat for free - forever! Most of these Atlas Search capabilities are available on free clusters in Atlas.
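As one example of that kind of experimentation, here is a hedged sketch of boosting promoted content with a custom score. The `promoted` flag is a hypothetical field used only for illustration, not part of the published schema.

``` javascript
// Sketch of boosting promoted listings: the "promoted" boolean field is a
// hypothetical example, not necessarily part of the published dataset.
db.restaurants.aggregate([
  {
    $search: {
      compound: {
        must: [{ text: { query: "tacos", path: "menu" } }],
        should: [
          {
            equals: {
              path: "promoted",
              value: true,
              score: { boost: { value: 3 } }
            }
          }
        ]
      }
    }
  },
  { $limit: 10 }
]);
```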
Bon appetit and happy coding! | md | {
"tags": [
"Atlas"
],
"pageDescription": "In this video tutorial, I'm going to show you how to build out Atlas Search queries with our Atlas Search Restaurant Finder demo application.",
"contentType": "Article"
} | Atlas Search from Soup to Nuts: The Restaurant Finder Demo App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-lake-online-archive | created | # How to Archive Data to Cloud Object Storage with MongoDB Online Archive
MongoDB Atlas Online Archive is a new feature of the MongoDB Cloud Data Platform. It allows you to set a rule to automatically archive data off of your Atlas cluster to fully-managed cloud object storage. In this blog post, I'll demonstrate how you can use Online Archive to tier your data for a cost-effective data management strategy.
The MongoDB Cloud data platform with Atlas Data Federation provides a serverless and scalable Federated Database Instance which allows you to natively query your data across cloud object storage and MongoDB Atlas clusters in-place.
In this blog post, I will use one of the MongoDB Open Data COVID-19 time series collections to demonstrate how you can combine Online Archive and Atlas Data Federation to save on storage costs while retaining easy access to query all of your data.
## Prerequisites
For this tutorial, you will need:
- a MongoDB Atlas M10 cluster or higher as Online Archive is currently not available for the shared tiers,
- MongoDB Compass or MongoDB Shell to access your cluster.
## Let's get some data
To begin with, let's retrieve a time series collection. For this tutorial, I will use one of the time series collections that I built for the MongoDB Open Data COVID19 project.
The `covid19.global_and_us` collection is the most complete COVID-19 time series in our open data cluster, as it combines all the data that JHU keeps in separate CSV files.
As I would like to retrieve the entire collection and its indexes, I will use `mongodump`.
``` shell
mongodump --uri="mongodb+srv://readonly:[email protected]/covid19" --collection='global_and_us'
```
This will create a `dump` folder in your current directory. Let's now import this collection in our cluster.
``` shell
mongorestore --uri="mongodb+srv://<USER>:<PASSWORD>@<YOUR_CLUSTER_URI>" dump/
```
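If the restore went well, the documents in `covid19.global_and_us` look roughly like the one below. The numbers are illustrative placeholders, but the fields match the ones we will use throughout this post.

``` json
{
  "_id": { "$oid": "5f0f0e5b1c9d440000a1b2c3" },
  "uid": 250,
  "country": "France",
  "date": { "$date": "2020-06-03T00:00:00Z" },
  "confirmed": 480,
  "deaths": 1,
  "recovered": 412
}
```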
>
>Note here that the **date** field is an IsoDate in extended JSON relaxed notation.
>
>
This time series collection is fairly simple. For each day and each place, we have a measurement of the number of `confirmed`, `deaths` and `recovered` if it's available. More details in our documentation.
## What's the problem?
Problem is, it's a time series! So each day, we add a new entry for each place in the world and our collection will get bigger and bigger every single day. But as time goes on, it's likely that the older data is less important and less frequently accessed so we could benefit from archiving it off of our Atlas cluster.
Today, July 10th 2020, this collection contains 599760 documents which correspond to 3528 places times 170 days, and it's only 181.5 MB thanks to the WiredTiger compression algorithm.
While this would not really be an issue with this trivial example, it will definitely force you to upgrade your MongoDB Atlas cluster to a higher tier if an extra GB of data was going in your cluster each day.
Upgrading to a higher tier would cost more money and maybe you don't need to keep all this cold data in your cluster.
## Online Archive to the Rescue!
Manually archiving a subset of this dataset is tedious. I actually wrote a blog post about this.
It works, but you will need to extract and remove the documents from your MongoDB Atlas cluster yourself and then use the new $out operator or the s3.PutObject MongoDB Realm function to write your documents to cloud object storage - Amazon S3 or Microsoft Azure Blob Storage.
Lucky for you, MongoDB Atlas Online Archive does this for you automatically!
Let's head to MongoDB Atlas and click on our cluster to access our cluster details. Currently, Online Archive is not set up on this cluster.
Now let's click on **Online Archive** then **Configure Online Archive**.
The next page will give you some information and documentation about MongoDB Atlas Online Archive and in the next step you will have to configure your archiving rule.
In our case, it will look like this:
As you can see, I'm using the **date** field I mentioned above and if this document is more than 60 days old, it will be automatically moved to my cloud object storage for me.
Now, for the next step, I need to think about my access pattern. Currently, I'm using this dataset to create awesome COVID-19 charts.
And each time, I have to first filter by date to reduce the size of my chart and then optionally I filter by country then state if I want to zoom on a particular country or region.
As these fields will convert into folder names into my cloud object storage, they need to exist in all the documents. It's not the case for the field "state" because some countries don't have sub-divisions in this dataset.
As the date is always my first filter, I make sure it's at the top. Folders will be named and organised this way in my cloud object storage and folders that don't need to be explored will be eliminated automatically to speed up the data retrieval process.
Finally, before starting the archiving process, there is a final step: making sure Online Archive can efficiently find and remove the documents that need to be archived.
I already have a few indexes on this collection, let's see if this is really needed. Here are the current indexes:
As we can see, I don't have the recommended index. I have its opposite: `{country: 1, date: 1}` but they are **not** equivalent. Let's see how this query behaves in MongoDB Compass.
We can note several things in here:
- We are using the **date** index. Which is a good news, at least it's not a collection scan!
- The final sort operation is `{ date: 1, country: 1}`
- Our index `{date:1}` doesn't contain the information about country so an in-memory sort is required.
- Wait a minute... Why do I have 0 documents returned?!
I have 170 days of data. I'm filtering all the documents older than 60 days so I should match `3528 places * 111 days = 391608` documents.
>
>
>111 days (not 170-60=110) because we are July 10th when I'm writing this
>and I don't have today's data yet.
>
>
When I check the raw json output in Compass, I actually see that an
error has occurred.
Sadly, it's trimmed. Let's run this again in the new
mongosh to see the complete
error:
``` none
errorMessage: 'Exec error resulting in state FAILURE :: caused by :: Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit.'
```
I ran out of RAM...oops! I have a few other collections in my cluster
and the 2GB of RAM of my M10 cluster are almost maxed out.
In-memory
sorts
actually use a lot of RAM and if you can avoid these, I would definitely
recommend that you get rid of them. They are forcing some data from your
working set out of your cache and that will result in cache pressure and
more IOPS.
Let's create the recommended index and see how the situation improves:
``` javascript
db.global_and_us.createIndex({ date: 1, country: 1})
```
Let's run our query again in the Compass explain plan:
This time, in-memory sort is no longer used, as we can return documents
in the same order they appear in our index. 391608 documents are
returned and we are using the correct index. This query is **MUCH** more
memory efficient than the previous one.
Now that our index is created, we can finally start the archiving
process.
Just before we start our archiving process, let's run an aggregation
pipeline in MongoDB Compass to check the content of our collection.
``` javascript
[
{
'$sort': {
'date': 1
}
}, {
'$group': {
'_id': {
'country': '$country',
'state': '$state',
'county': '$county'
},
'count': {
'$sum': 1
},
'first_date': {
'$first': '$date'
},
'last_date': {
'$last': '$date'
}
}
}, {
'$count': 'number_places'
}
]
```
As you can see, by grouping the documents by country, state and county,
we can see:
- how many days are reported: `170`,
- the first date: `2020-01-22T00:00:00.000+00:00`,
- the last date: `2020-07-09T00:00:00.000+00:00`,
- the number of places being monitored: `3528`.
Once started, your Online Archive will look like this:
When the initialisation is done, it will look like this:
After some time, all your documents will be migrated to the underlying cloud object storage.
In my case, as I had 599760 documents in my collection and 111 days have been
moved to my cloud object storage, I have `599760 - 111 * 3528 = 208152`
documents left in my collection in MongoDB Atlas.
``` none
PRIMARY> db.global_and_us.count()
208152
```
Good. Our data is now archived and we don't need to upgrade our cluster
to a higher cluster tier!
## How to access my archived data?
Usually, archiving data means "bye bye data." The minute you decide to archive it, it's gone forever, and you only dig it out of the old, dusty storage system when the actual production system has burnt to the ground.
Let me show you how you can keep access to the **ENTIRE** dataset we
just archived on my cloud object storage using MongoDB Atlas Data
Federation.
First, let's click on the **CONNECT** button. Either directly in the
Online Archive tab:
Or head to the Data Federation menu on the left to find your automatically
configured Data Lake environment.
Retrieve the connection command line for the Mongo Shell:
Make sure you replace the database and the password in the command. Once
you are connected, you can run the following aggregation pipeline:
``` javascript
[
{
'$match': {
'country': 'France'
}
}, {
'$sort': {
'date': 1
}
}, {
'$group': {
'_id': '$uid',
'first_date': {
'$first': '$date'
},
'last_date': {
'$last': '$date'
},
'count': {
'$sum': 1
}
}
}
]
```
And here is the same query in command line - easier for a quick copy &
paste.
``` shell
db.global_and_us.aggregate([ { '$match': { 'country': 'France' } }, { '$sort': { 'date': 1 } }, { '$group': { '_id': { 'country': '$country', 'state': '$state', 'county': '$county' }, 'first_date': { '$first': '$date' }, 'last_date': { '$last': '$date' }, 'count': { '$sum': 1 } } } ])
```
Here is the result I get:
``` json
{ "_id" : { "country" : "France", "state" : "Reunion" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "Saint Barthelemy" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "Martinique" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "Mayotte" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "French Guiana" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "Guadeloupe" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "New Caledonia" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "St Martin" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "Saint Pierre and Miquelon" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
{ "_id" : { "country" : "France", "state" : "French Polynesia" }, "first_date" : ISODate("2020-01-22T00:00:00Z"), "last_date" : ISODate("2020-07-09T00:00:00Z"), "count" : 170 }
```
As you can see, even if our cold data is archived, we can still access
our **ENTIRE** dataset even though it was partially archived. The first
date is still January 22nd and the last date is still July 9th for a
total of 170 days.
## Wrap Up
MongoDB Atlas Online Archive is your new best friend to retire and store
your cold data safely in cloud object storage with just a few clicks.
In this tutorial, I showed you how to set up an Online Archive to automatically archive your data to fully-managed cloud object storage while retaining easy access to query the entirety of the dataset in-place, across sources, using Atlas Data Federation.
Just in case this blog post didn't make it clear, Online Archive is
**NOT** a replacement for backups or a [backup
strategy. These are 2
completely different topics and they should not be confused.
If you have questions, please head to our developer community
website where the MongoDB engineers and
the MongoDB community will help you build your next big idea with
MongoDB.
To learn more about MongoDB Atlas Data
Federation, read the other blog
posts in this series below, or check out the
documentation.
| md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "Automatically tier your data across Atlas clusters and cloud object storage while retaining access to query it all with Atlas Data Federation.",
"contentType": "Tutorial"
} | How to Archive Data to Cloud Object Storage with MongoDB Online Archive | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/connectors/go-kafka-confluent-atlas | created | # Go to MongoDB Using Kafka Connectors - Ultimate Agent Guide
Go is a modern language built on typed and native code compiling concepts while feeling and utilizing some benefits of dynamic languages. It is fairly simple to install and use, as it provides readable and robust code for many application use cases.
One of those use cases is building agents that report to a centralized data platform via streaming. A widely accepted approach is to communicate the agent data through subscription of distributed queues like Kafka. The Kafka topics can then propagate the data to many different sources, such as a MongoDB Atlas cluster.
Having a Go agent allows us to utilize the same code base for various operating systems, and the fact that it has good integration with JSON data and packages such as a MongoDB driver and Confluent Go Kafka Client makes it a compelling candidate for the presented use case.
This article will demo how file size data on a host is monitored from a cross-platform agent written in Golang via a Kafka cluster using a Confluent hosted sink connector to MongoDB Atlas. MongoDB Atlas stores the data in a time series collection. The MongoDB Charts product is a convenient way to show the gathered data to the user.
## Preparing the Golang project, Kafka cluster, and MongoDB Atlas
### Configuring a Go project
Our agent is going to run Go. Therefore, you will need to install the Go language software on your host.
Once this step is done, we will create a Go module to begin our project in our working directory:
``` shell
go mod init example/main
```
Now we will need to add the Confluent Kafka dependency to our Golang project:
``` shell
go get -u gopkg.in/confluentinc/confluent-kafka-go.v1/kafka
```
### Configuring a Kafka cluster
Creating a Confluent Kafka Cluster is done via the Confluent UI. Start by creating a basic Kafka cluster in the Confluent Cloud. Once ready, create a topic to be used in the Kafka cluster. I created one named “files.”
Generate an api-key and api-secret to interact with this Kafka cluster. For the simplicity of this tutorial, I have selected the “Global Access” api-key. For production, it is recommended to give as minimum permissions as possible for the api-key used. Get a hold of the generated keys for future use.
Obtain the Kafka cluster connection string via Cluster Overview > Cluster Settings > Identification > Bootstrap server for future use. Basic clusters are open to the internet and in production, you will need to amend the access list for your specific hosts to connect to your cluster via advanced cluster ACLs.
> **Important:** The Confluent connector requires that the Kafka cluster and the Atlas cluster are deployed in the same region.
>
### Configuring Atlas project and cluster
Create a project and cluster or use an existing Atlas cluster in your project.
Since we are using a time series collection, the clusters must use a 5.0+ version. Prepare your Atlas cluster for a Confluent sink Atlas connection. Inside your project’s access list, enable user and relevant IP addresses of your connector IPs. The access list IPs should be associated to the Atlas Sink Connector, which we will configure in a following section. Finally, get a hold of the Atlas connection string and the main cluster DNS. For more information about best securing and getting the relevant IPs from your Confluent connector, please read the following article: MongoDB Atlas Sink Connector for Confluent Cloud.
## Adding agent main logic
Now that we have our Kafka cluster and Atlas clusters created and prepared, we can initialize our agent code by building a small main file that will monitor my `./files` directory and capture the file names and sizes. I’ve added a file called `test.txt` with some data in it to bring it to ~200MB.
Let’s create a file named `main.go` and write a small logic that performs a constant loop with a 1 min sleep to walk through the files in the `files` folder:
``` go
package main
import (
"fmt"
"encoding/json"
"time"
"os"
"path/filepath"
)
type Message struct {
Name string
Size float64
Time int64
}
func samplePath (startPath string) error {
err := filepath.Walk(startPath,
func(path string, info os.FileInfo, err error) error {
var bytes int64
bytes = info.Size()
var kilobytes int64
kilobytes = (bytes / 1024)
var megabytes float64
megabytes = (float64)(kilobytes / 1024) // cast to type float64
var gigabytes float64
gigabytes = (megabytes / 1024)
now := time.Now().Unix()*1000
m := Message{info.Name(), gigabytes, now}
value, err := json.Marshal(m)
if err != nil {
panic(fmt.Sprintf("Failed to parse JSON: %s", err))
}
fmt.Printf("value: %v\n", string(value))
return nil;
})
if err != nil {
return err
}
return nil;
}
func main() {
for {
err := samplePath("./files");
if err != nil {
panic(fmt.Sprintf("Failed to run sample : %s", err))
}
time.Sleep(time.Minute)
}
}
```
The above code simply imports helper modules to traverse the directories and for JSON documents out of the files found.
Since we need the data to be marked with the time of the sample, it is a great fit for time series data and therefore should eventually be stored in a time series collection on Atlas. If you want to learn more about time series collection and data, please read our article, MongoDB Time Series Data.
We can test this agent by running the following command:
``` shell
go run main.go
```
The agent will produce JSON documents similar to the following format:
``` shell
value: {"Name":"files","Size":0,"Time":1643881924000}
value: {"Name":"test.txt","Size":0.185546875,"Time":1643881924000}
```
## Creating a Confluent MongoDB connector for Kafka
Now we are going to create a Kafka Sink connector to write the data coming into the “files” topic to our Atlas Cluster’s time series collection.
Confluent Cloud has a very popular integration running MongoDB’s Kafka connector as a hosted solution integrated with their Kafka clusters. Follow these steps to initiate a connector deployment.
The following are the inputs provided to the connector:
Once you set it up, following the guide, you will eventually have a similar launch summary page:
After provisioning every populated document into the `files` queue will be pushed to a time series collection `hostMonitor.files` where the date field is `Time` and metadata field is `Name`.
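If you would rather create the target time series collection yourself before starting the connector, a minimal sketch from `mongosh` could look like this; the `granularity` value is an assumption, and the field names simply mirror the agent's JSON payload.

``` javascript
// Optional: pre-create the time series collection the sink connector writes to.
// Field names follow the agent's JSON payload; granularity is an assumption.
const hostMonitor = db.getSiblingDB("hostMonitor");
hostMonitor.createCollection("files", {
  timeseries: {
    timeField: "Time",
    metaField: "Name",
    granularity: "minutes"
  }
});
```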
## Pushing data to Kafka
Now let’s edit the `main.go` file to use a Kafka client and push each file measurement into the “files” queue.
Add the client library as an imported module:
``` go
import (
"fmt"
"encoding/json"
"time"
"os"
"path/filepath"
"github.com/confluentinc/confluent-kafka-go/kafka"
)
```
Add the Confluent Cloud credentials and cluster DNS information. Replace `<BOOTSTRAP_SERVER>:<PORT>` with the bootstrap server value found on the Kafka cluster details page, and `<CCLOUD_API_KEY>`, `<CCLOUD_API_SECRET>` with the api-key and api-secret generated for the Kafka cluster:
``` go
const (
bootstrapServers = "<BOOTSTRAP_SERVER>:<PORT>"
ccloudAPIKey = "<CCLOUD_API_KEY>"
ccloudAPISecret = "<CCLOUD_API_SECRET>"
)
```
The following code will initiate the producer and produce a message out of the marshaled JSON document:
``` go
topic := "files"
// Produce a new record to the topic...
producer, err := kafka.NewProducer(&kafka.ConfigMap{
"bootstrap.servers": bootstrapServers,
"sasl.mechanisms": "PLAIN",
"security.protocol": "SASL_SSL",
"sasl.username": ccloudAPIKey,
"sasl.password": ccloudAPISecret})
if err != nil {
panic(fmt.Sprintf("Failed to create producer: %s", err))
}
producer.Produce(&kafka.Message{
TopicPartition: kafka.TopicPartition{Topic: &topic,
Partition: kafka.PartitionAny},
Value: []byte(value)}, nil)
// Wait for delivery report
e := <-producer.Events()
message := e.(*kafka.Message)
if message.TopicPartition.Error != nil {
fmt.Printf("failed to deliver message: %v\n",
message.TopicPartition)
} else {
fmt.Printf("delivered to topic %s [%d] at offset %v\n",
*message.TopicPartition.Topic,
message.TopicPartition.Partition,
message.TopicPartition.Offset)
}
producer.Close()
```
The entire `main.go` file will look as follows:
``` go
package main
import (
"fmt"
"encoding/json"
"time"
"os"
"path/filepath"
"github.com/confluentinc/confluent-kafka-go/kafka")
type Message struct {
Name string
Size float64
Time int64
}
const (
bootstrapServers = "<BOOTSTRAP_SERVER>:<PORT>"
ccloudAPIKey = "<CCLOUD_API_KEY>"
ccloudAPISecret = "<CCLOUD_API_SECRET>"
)
func samplePath (startPath string) error {
err := filepath.Walk(startPath,
func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
fmt.Println(path, info.Size())
var bytes int64
bytes = info.Size()
var kilobytes int64
kilobytes = (bytes / 1024)
var megabytes float64
megabytes = (float64)(kilobytes / 1024) // cast to type float64
var gigabytes float64
gigabytes = (megabytes / 1024)
now := time.Now().Unix()*1000
m := Message{info.Name(), gigabytes, now}
value, err := json.Marshal(m)
if err != nil {
panic(fmt.Sprintf("Failed to parse JSON: %s", err))
}
fmt.Printf("value: %v\n", string(value))
topic := "files"
// Produce a new record to the topic...
producer, err := kafka.NewProducer(&kafka.ConfigMap{
"bootstrap.servers": bootstrapServers,
"sasl.mechanisms": "PLAIN",
"security.protocol": "SASL_SSL",
"sasl.username": ccloudAPIKey,
"sasl.password": ccloudAPISecret})
if err != nil {
panic(fmt.Sprintf("Failed to create producer: %s", err))
}
producer.Produce(&kafka.Message{
TopicPartition: kafka.TopicPartition{Topic: &topic,
Partition: kafka.PartitionAny},
Value: []byte(value)}, nil)
// Wait for delivery report
e := <-producer.Events()
message := e.(*kafka.Message)
if message.TopicPartition.Error != nil {
fmt.Printf("failed to deliver message: %v\n",
message.TopicPartition)
} else {
fmt.Printf("delivered to topic %s [%d] at offset %v\n",
*message.TopicPartition.Topic,
message.TopicPartition.Partition,
message.TopicPartition.Offset)
}
producer.Close()
return nil;
})
if err != nil {
return err
}
return nil;
}
func main() {
for {
err := samplePath("./files");
if err != nil {
panic(fmt.Sprintf("Failed to run sample : %s", err))
}
time.Sleep(time.Minute)
}
}
```
Now when we run the agent while the Confluent Atlas sink connector is fully provisioned, we will see messages produced into the `hostMonitor.files` time series collection:
## Analyzing the data using MongoDB Charts
To put our data into use, we can create some beautiful charts on top of the time series data. In a line graph, we configure the X axis to use the Time field, the Y axis to use the Size field, and the series to use the Name field. The following graph shows the colored lines represented as the evolution of the different file sizes over time.
Now we have an agent and a fully functioning Charts dashboard to analyze growing files trends. This architecture allows big room for extensibility as the Go agent can have further functionalities, more subscribers can consume the monitored data and act upon it, and finally, MongoDB Atlas and Charts can be used by various applications and embedded to different platforms.
## Wrap Up
Building Go applications is simple yet has big benefits in terms of performance, cross platform code, and a large number of supported libraries and clients. Adding MongoDB Atlas via a Confluent Cloud Kafka service makes the implementation a robust and extensible stack, streaming data and efficiently storing and presenting it to the end user via Charts.
In this tutorial, we have covered all the basics you need to know in order to start using Go, Kafka, and MongoDB Atlas in your next streaming projects.
Try MongoDB Atlas and Go today! | md | {
"tags": [
"Connectors",
"Go",
"Kafka"
],
"pageDescription": "Go is a cross-platform language. When combined with the power of Confluent Kafka streaming to MongoDB Atlas, you’ll be able to form tools and applications with real-time data streaming and analytics. Here’s a step-by-step tutorial to get you started.",
"contentType": "Tutorial"
} | Go to MongoDB Using Kafka Connectors - Ultimate Agent Guide | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/announcing-realm-kotlin-beta | created | # Announcing the Realm Kotlin Beta: A Database for Multiplatform Apps
The Realm team is happy to announce the beta release of our Realm Kotlin SDK—with support for both Kotlin for Android and Kotlin Multiplatform apps. With this release, you can deploy and maintain a data layer across iOS, Android, and desktop platforms from a single Kotlin codebase.
:youtube]{vid=6vL5h8pbt5g}
Realm is a super fast local data store that makes storing, querying, and syncing data simple for modern applications. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects, enabling you to refresh the UI with first-class support for Kotlin’s programming primitives such as Coroutines, Channels, and Flows.
## Introduction
Our goal with Realm has always been to provide developers with the tools they need to easily build data-driven, reactive mobile applications. Back in 2014, this meant providing Android developers with a first-class Java SDK. But the Android community is changing. With the growing importance of Kotlin, our team had two options: refactor the existing Realm Java SDK to make it Kotlin-friendly or use this as an opportunity to build a new SDK from the ground up that is specifically tailored to the needs of the Kotlin community. After collecting seven years of feedback from Android developers, we chose to build a new Kotlin-first SDK that pursued the following directives:
* Expose Realm APIs that are idiomatic and directly integrate with Kotlin design patterns such as Coroutines and Flows, eliminating the need to write glue code between the data layer and the UI.
* Remove Realm Java’s thread confinement for Realms and instead emit data objects as immutable structs, conforming to the prevalent design pattern on Android and other platforms.
* Expose a single Realm singleton instance that integrates into Android’s lifecycle management automatically, removing the custom code needed to spin up and tear down Realm instances on a per-activity or fragment basis.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
## What is Realm?
Realm is a fast, easy-to-use alternative to SQLite + Room with built-in cloud capabilities, including a real-time edge-to-cloud sync solution. Written from the ground up in C++, it is not a wrapper around SQLite or any other relational data store. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages of disk space into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. Simply put, Realm makes it easy to store, query, and sync your mobile data across devices and the back end.
## Realm for Kotlin developers
Realm is an object database, so your schema is defined in the same way you define your object classes. Under the hood, Realm uses a Kotlin compiler plugin to generate the necessary getters, setters, and schema for your database, freeing Android developers from the monotony of Room’s DAOs and the pain of investigating inaccurate SQL query responses from SQLite.
Realm also brings true relationships to your object class definitions, enabling you to have one-to-one, one-to-many, many-to-many, and even inverse relationships. And because Realm objects are memory-mapped, traversing the object graph across relationships is done in the blink of an eye.
Additionally, Realm delivers a simple and intuitive query system that will feel natural to Kotlin developers. No more context switching to SQL to instantiate your schema or looking behind the curtain when an ORM fails to translate your calls into SQL.
```kotlin
// Define your schema - Notice Project has a one-to-many relationship to Task
class Project : RealmObject {
var name: String = ""
var tasks: RealmList<Task> = realmListOf()
}
class Task : RealmObject {
var name: String = ""
var status: String = "Open"
var owner: String = ""
}
// Set the config and open the realm instance
val easyConfig = RealmConfiguration.with(schema = setOf(Task::class, Project::class))
val realm: Realm = Realm.open(easyConfig)
// Write asynchronously using suspend functions
realm.write { // this: MutableRealm
val project = Project().apply {
name = "Kotlin Beta"
}
val task = Task().apply {
name = "Ship It"
status = "InProgress"
owner = "Christian"
}
project.tasks.add(task)
copyToRealm(project)
}
// Get a reference to the Project object
val currentProject: Project =
realm.query<Project>(
"name == $0", "Kotlin Beta"
).first().find()!!
// Or query multiple objects
val allTasks: RealmResults<Task> =
realm.query<Task>().find()
// Get notified when data changes using Flows
currentProject.tasks.asFlow().collect { change: ListChange<Task> ->
when (change) {
is InitialList -> {
// Display initial data on UI
updateUI(change.list)
}
is UpdatedList -> {
// Get information about changes compared
// to last version.
Log.debug("Changes: ${change.changeRanges}")
Log.debug("Insertions: ${change.insertionRanges}")
Log.debug("Deletions: ${change.deletionRanges}")
updateUI(change.list)
}
is DeletedList -> {
updateUI(change.list) // Empty list
}
}
}
// Write synchronously - this blocks execution on the caller thread
// until the transaction is complete
realm.writeBlocking { // this: MutableRealm
val newTask = Task().apply {
name = "Write Blog"
status = "InProgress"
owner = "Ian"
}
findLatest(currentProject)?.apply {
tasks.add(newTask)
}
}
// The UI will now automatically display two tasks because
// of the above defined Flow on currentProject
```
Finally, one of Realm’s main benefits is its out-of-the-box data synchronization solution with MongoDB Atlas. Realm Sync makes it easy for developers to build reactive mobile apps that stay up to date in real-time.
## Looking ahead
The Realm Kotlin SDK is a free and open source database available for you to get started building applications with today! With this beta release, we believe that all of the critical components for building a production-grade data layer in an application are in place. In the future, we will look to add embedded objects; data types such as Maps, Sets, and the Mixed type; ancillary sync APIs as well as support for Flexible Sync; and finally, some highly optimized write and query helper APIs with the eventual goal of going GA later this year.
Give it a try today and let us know what you think! Check out our samples, read our docs, and follow our repo.
>For users already familiar with Realm Java, check out the video at the top of this article if you haven't already, or our migration blog post and documentation to see what is needed to port over your existing implementation.
| md | {
"tags": [
"Realm",
"Kotlin"
],
"pageDescription": "Announcing the Realm Kotlin Beta—making it easy to store, query, and sync data in your Kotlin for Android and Kotlin Multiplatform apps.",
"contentType": "Article"
} | Announcing the Realm Kotlin Beta: A Database for Multiplatform Apps | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/php/fifa-php-app | created | # Go-FIFA
## Creators
Dhiren and Nirbhay contributed this project.
## About the project
GoFifa is a PHP-Mongo based Football Stats Client. GoFifa's data has been sourced from Kaggle, Sofifa and FifaIndex. Data has been stored in MongoDB on AWS. The application is hosted on Heroku and deployed using GitHub as a VCS.
## Inspiration
The project was part of my course called ‘knowledge representation techniques.’ The assignment was to create a project that used the basic features of MongoDB. Together with my teammate, we decided to make the best use of this opportunity. We thought about what we could do; we needed properly structured, useful, quality data that also fit the idea behind MongoDB, so we went through a lot of sample data sites.
We went for soccer because my teammate is a huge soccer fan. We found this sample data, and we decided to use it for our project.
:youtube]{vid=YGNDGTnQdNQ}
## Why MongoDB?
As mentioned briefly, we were asked to use MongoDB in our project. We decided to dive deeper into everything that MongoDB has to offer. This project uses the most common querying techniques with MongoDB, along with other features like geodata, Leaflet.js, GridFS, depth filtering, mapping crawled data to MongoDB, references, deployment, etc. It can prove to be an excellent start for someone who wants to learn how to use MongoDB effectively and build a REST client on top of it.
## How it works
GoFifa is a web application where you can find soccer players and learn more about them.
All in all, we created a full-stack project.
First, we started crawling the data, so we created a crawler that feeds the data into the database in chunks, and we also used references. When querying, we made sure to support wildcard queries alongside normal querying.
We wanted to create features that can be used in the real world. That’s why we also decided to use geo queries and GridFS. It turned into a nice full-stack app. And to top it off, the best part has been that since that project, we’ve used MongoDB in so many places.
## Challenges and learnings
I (Nirbhay) learned a lot from this project. I was a more PHP-centric person. Now that's not the case anymore, but I was, and it was a little difficult to integrate the PHP driver at the time. Now it has all become very easy; more and more articles are being written about it. Other than that, I would say the documentation provided by MongoDB was pretty good. It helps you understand things to a certain level. I don't think that I've used all the features yet, but I'll try to use them more in the future.
| md | {
"tags": [
"PHP"
],
"pageDescription": "GoFifa - A comprehensive soccer stats tracker.",
"contentType": "Code Example"
} | Go-FIFA | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/mongoose-versus-nodejs-driver | created | # MongoDB & Mongoose: Compatibility and Comparison
In this article, we’ll explore the Mongoose library for MongoDB. Mongoose is an Object Data Modeling (ODM) library for MongoDB distributed as an npm package. We'll compare and contrast Mongoose to using the native MongoDB Node.js driver together with MongoDB Schema Validation.
We’ll see how the MongoDB Schema Validation helps us enforce a database schema, while still allowing for great flexibility when needed. Finally, we’ll see if the additional features that Mongoose provides are worth the overhead of introducing a third-party library into our applications.
## What is Mongoose?
Mongoose is a Node.js-based Object Data Modeling (ODM) library for MongoDB. It is akin to an Object Relational Mapper (ORM) such as SQLAlchemy for traditional SQL databases. The problem that Mongoose aims to solve is allowing developers to enforce a specific schema at the application layer. In addition to enforcing a schema, Mongoose also offers a variety of hooks, model validation, and other features aimed at making it easier to work with MongoDB.
## What is MongoDB Schema Validation?
MongoDB Schema Validation makes it possible to easily enforce a schema against your MongoDB database, while maintaining a high degree of flexibility, giving you the best of both worlds. In the past, the only way to enforce a schema against a MongoDB collection was to do it at the application level using an ODM like Mongoose, but that posed significant challenges for developers.
## Getting Started
If you want to follow along with this tutorial and play around with schema validations but don't have a MongoDB instance set up, you can set up a free MongoDB Atlas cluster here.
## Object Data Modeling in MongoDB
A huge benefit of using a NoSQL database like MongoDB is that you are not constrained to a rigid data model. You can add or remove fields, nest data multiple layers deep, and have a truly flexible data model that meets your needs today and can adapt to your ever-changing needs tomorrow. But being too flexible can also be a challenge. If there is no consensus on what the data model should look like, and every document in a collection contains vastly different fields, you're going to have a bad time.
### Mongoose Schema and Model
On one end of the spectrum, we have ODM's like Mongoose, which from the get-go force us into a semi-rigid schema. With Mongoose, you would define a `Schema` object in your application code that maps to a collection in your MongoDB database. The `Schema` object defines the structure of the documents in your collection. Then, you need to create a `Model` object out of the schema. The model is used to interact with the collection.
For example, let's say we're building a blog and want to represent a blog post. We would first define a schema and then create an accompanying Mongoose model:
``` javascript
const blog = new Schema({
title: String,
slug: String,
published: Boolean,
content: String,
tags: [String],
comments: [{
user: String,
content: String,
votes: Number
}]
});
const Blog = mongoose.model('Blog', blog);
```
### Executing Operations on MongoDB with Mongoose
Once we have a Mongoose model defined, we could run queries for fetching, updating, and deleting data against a MongoDB collection that aligns with the Mongoose model. With the above model, we could do things like:
``` javascript
// Create a new blog post
const article = new Blog({
title: 'Awesome Post!',
slug: 'awesome-post',
published: true,
content: 'This is the best post ever',
tags: ['featured', 'announcement'],
});
// Insert the article in our MongoDB database
article.save();
// Find a single blog post
Blog.findOne({}, (err, post) => {
console.log(post);
});
```
### Mongoose vs MongoDB Node.js Driver: A Comparison
The benefit of using Mongoose is that we have a schema to work against in our application code and an explicit relationship between our MongoDB documents and the Mongoose models within our application. The downside is that we can only create blog posts and they have to follow the above defined schema. If we change our Mongoose schema, we are changing the relationship completely, and if you're going through rapid development, this can greatly slow you down.
The other downside is that this relationship between the schema and model only exists within the confines of our Node.js application. Our MongoDB database is not aware of the relationship, it just inserts or retrieves data it is asked for without any sort of validation. In the event that we used a different programming language to interact with our database, all the constraints and models we defined in Mongoose would be worthless.
On the other hand, if we decided to use just the MongoDB Node.js driver, we could
run queries against any collection in our database, or create new ones on the fly. The MongoDB Node.js driver does not have concepts of object data modeling or mapping.
We simply write queries against the database and collection we wish to work with to accomplish the business goals. If we wanted to insert a new blog post in our collection, we could simply execute a command like so:
``` javascript
db.collection('posts').insertOne({
title: 'Better Post!',
slug: 'a-better-post',
published: true,
author: 'Ado Kukic',
content: 'This is an even better post',
tags: ['featured'],
});
```
This `insertOne()` operation would run just fine using the Node.js Driver. If we tried to save this data using our Mongoose `Blog` model, it would fail, because we don't have an `author` property defined in our Blog Mongoose model.
Just because the Node.js driver doesn't have the concept of a model, does not mean we couldn't create models to represent our MongoDB data at the application level. We could just as easily create a generic model or use a library such as objectmodel. We could create a `Blog` model like so:
``` javascript
function Blog(post) {
this.title = post.title;
this.slug = post.slug;
...
}
```
We could then use this model in conjunction with our MongoDB Node.js driver, giving us both the flexibility of using the model, but not being constrained by it.
``` javascript
db.collection('posts').findOne({}).then((err, post) => {
let article = new Blog(post);
});
```
In this scenario, our MongoDB database is still blissfully unaware of our Blog model at the application level, but our developers can work with it, add specific methods and helpers to the model, and would know that this model is only meant to be used within the confines of our Node.js application. Next, let's explore schema validation.
## Adding Schema Validation
We can choose between two different ways of adding schema validation to our MongoDB collections. The first is to use application-level validators, which are defined in the Mongoose schemas. The second is to use MongoDB schema validation, which is defined in the MongoDB collection itself. The huge difference is that native MongoDB schema validation is applied at the database level. Let's see why that matters by exploring both methods.
### Schema Validation with Mongoose
When it comes to schema validation, Mongoose enforces it at the application layer as we've seen in the previous section. It does this in two ways.
First, by defining our model, we are explicitly telling our Node.js application what fields and data types we'll allow to be inserted into a specific collection. For example, our Mongoose Blog schema defines a `title` property of type `String`. If we were to try and insert a blog post with a `title` property that was an array, it would fail. Anything outside of the defined fields, will also not be inserted in the database.
Second, we further validate that the data in the defined fields matches our defined set of criteria. For example, we can expand on our Blog model by adding specific validators such as requiring certain fields, ensuring a minimum or maximum length for a specific field, or coming up with our custom logic even. Let's see how this looks with Mongoose. In our code we would simply expand on the property and add our validators:
``` javascript
const blog = new Schema({
title: {
type: String,
required: true,
},
slug: {
type: String,
required: true,
},
published: Boolean,
content: {
type: String,
required: true,
minlength: 250
},
...
});
const Blog = mongoose.model('Blog', blog);
```
Mongoose takes care of model definition and schema validation in one fell swoop. The downside though is still the same. These rules only apply at the application layer and MongoDB itself is none the wiser.
The MongoDB Node.js driver itself does not have mechanisms for inserting or managing validations, and it shouldn't. We can define schema validation rules for our MongoDB database using the MongoDB Shell or Compass.
We can create a schema validation when creating our collection or after the fact on an existing collection. Since we've been working with this blog idea as our example, we'll add our schema validations to it. I will use Compass and MongoDB Atlas. For a great resource on how to programmatically add schema validations, check out this series.
> If you want to follow along with this tutorial and play around with
> schema validations but don't have a MongoDB instance set up, you can
> set up a free MongoDB Atlas cluster here.
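We will use the Compass UI in this tutorial, but for reference, a rough sketch of the programmatic equivalent with the Node.js driver, attaching a `$jsonSchema` validator while creating the collection, might look like this. The rule shown is just an example.

``` javascript
// Rough sketch: attach a $jsonSchema validator at collection-creation time
// with the Node.js driver instead of the Compass UI.
await db.createCollection('posts', {
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['author'],
    },
  },
});
```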
Create a collection called `posts` and let's insert our two documents that we've been working with. The documents are:
``` javascript
{"title":"Better Post!","slug":"a-better-post","published":true,"author":"Ado Kukic","content":"This is an even better post","tags":["featured"]}, {"_id":{"$oid":"5e714da7f3a665d9804e6506"},"title":"Awesome Post","slug":"awesome-post","published":true,"content":"This is an awesome post","tags":["featured","announcement"]}]
```
Now, within the Compass UI, I will navigate to the **Validation** tab. As expected, there are currently no validation rules in place, meaning our database will accept any document as long as it is valid BSON. Hit the **Add a Rule** button and you'll see a user interface for creating your own validation rules.
By default, there are no rules, so any document will be marked as passing. Let's add a rule to require the `author` property. It will look like this:
``` javascript
{
$jsonSchema: {
bsonType: "object",
required: "author" ]
}
}
```
Now we'll see that our initial post, which does not have an `author` field, has failed validation, while the post that does have the `author` field is good to go.
We can go further and add validations to individual fields as well. Say for SEO purposes we wanted all the titles of the blog posts to be a minimum of 20 characters and have a maximum length of 80 characters. We can represent that like this:
``` javascript
{
$jsonSchema: {
bsonType: "object",
required: "tags" ],
properties: {
title: {
type: "string",
minLength: 20,
maxLength: 80
}
}
}
}
```
Now if we try to insert a document into our `posts` collection either via the Node.js Driver or via Compass, we will get an error.
There are many more rules and validations you can add. Check out the full list here. For a more advanced guided approach, check out the articles on schema validation with arrays and dependencies.
### Expanding on Schema Validation
With Mongoose, our data model and schema are the basis for our interactions with MongoDB. MongoDB itself is not aware of any of these constraints; Mongoose takes the role of judge, jury, and executioner on what queries can be executed and what happens with them.
But with MongoDB native schema validation, we have additional flexibility. When we implement a schema, validation on existing documents does not happen automatically. Validation is only done on updates and inserts. If we wanted to leave existing documents alone though, we could change the `validationLevel` to only validate new documents inserted in the database.
Additionally, with schema validations done at the MongoDB database level, we can choose to still insert documents that fail validation. The `validationAction` option allows us to determine what happens if a query fails validation. By default, it is set to `error`, but we can change it to `warn` if we want the insert to still occur. Now instead of an insert or update erroring out, it would simply warn the user that the operation failed validation.
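As a rough illustration, both of these options can be changed on an existing collection with the `collMod` command; the values below are just one possible combination.

``` javascript
// Rough sketch: relax validation on an existing collection so that
// non-conforming legacy documents are tolerated and failed validations
// produce a warning instead of rejecting the write.
db.runCommand({
  collMod: 'posts',
  validationLevel: 'moderate',
  validationAction: 'warn',
});
```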
And finally, if we need to, we can bypass document validation altogether by passing the `bypassDocumentValidation` option with our query. To show you how this works, let's say we wanted to insert just a `title` in our `posts` collection and we didn't want any other data. If we tried to just do this...
``` javascript
db.collection('posts').insertOne({ title: 'Awesome' });
```
... we would get an error saying that document validation failed. But if we wanted to skip document validation for this insert, we would simply do this:
``` javascript
db.collection('posts').insertOne(
{ title: 'Awesome' },
{ bypassDocumentValidation: true }
);
```
This would not be possible with Mongoose. MongoDB schema validation is more in line with the entire philosophy of MongoDB, where the focus is on a flexible schema design that is quickly and easily adaptable to your use cases.
## Populate and Lookup
The final area where I would like to compare Mongoose and the Node.js MongoDB driver is their support for pseudo-joins. Both Mongoose and the native Node.js driver support the ability to combine documents from multiple collections in the same database, similar to a join in traditional relational databases.
The Mongoose approach is called **Populate**. It allows developers to create data models that can reference each other and then, with a simple API, request data from multiple collections. For our example, let's expand on the blog post and add a new collection for users.
``` javascript
const user = new Schema({
name: String,
email: String
});
const blog = new Schema({
title: String,
slug: String,
published: Boolean,
content: String,
tags: [String],
comments: [{
user: { type: Schema.Types.ObjectId, ref: 'User' },
content: String,
votes: Number
}]
});
const User = mongoose.model('User', user);
const Blog = mongoose.model('Blog', blog);
```
What we did above was create a new model and schema to represent users leaving comments on blog posts. When a user leaves a comment, instead of storing information on them, we would just store that user’s `_id`. So, an update operation to add a new comment to our post may look something like this:
``` javascript
Blog.updateOne(
  { slug: "a-better-post" },
  { $push: { comments: { user: "12345", content: "Great Post!!!" } } }
);
```
This is assuming that we have a user in our `User` collection with the `_id` of `12345`. Now, if we wanted to **populate** our `user` property when we do a query—and instead of just returning the `_id` return the entire document—we could do:
``` javascript
Blog.
findOne({}).
populate('comments.user').
exec(function (err, post) {
console.log(post.comments[0].user.name) // Name of user for 1st comment
});
```
Populate coupled with Mongoose data modeling can be very powerful, especially if you're coming from a relational database background. The drawback though is the amount of magic going on under the hood to make this happen. Mongoose would make two separate queries to accomplish this task and if you're joining multiple collections, operations can quickly slow down.
The other issue is that the populate concept only exists at the application layer. So while this does work, relying on it for your database management can come back to bite you in the future.
MongoDB, as of version 3.2, introduced a new operator called `$lookup` that allows developers to essentially do a left outer join on collections within a single MongoDB database. If we wanted to populate the user information using the Node.js driver, we could create an aggregation pipeline to do it. Our starting point using the `$lookup` operator could look like this:
``` javascript
db.collection('posts').aggregate([
  {
    '$lookup': {
      'from': 'users',
      'localField': 'comments.user',
      'foreignField': '_id',
      'as': 'users'
    }
  }
]).toArray((err, posts) => {
  console.log(posts[0].users); // This would contain an array of users
});
```
We could further create an additional step in our aggregation pipeline to replace the user information in the `comments` field with the users data, but that's a bit out of the scope of this article. If you wish to learn more about how aggregation pipelines work with MongoDB, check out the aggregation docs.
## Final Thoughts: Do I Really Need Mongoose?
Both Mongoose and the MongoDB Node.js driver support similar functionality. While Mongoose does make MongoDB development familiar to someone who may be completely new, it does perform a lot of magic under the hood that could have unintended consequences in the future.
I personally believe that you don't need an ODM to be successful with MongoDB. I am also not a huge fan of ORMs in the relational database world. While they make the initial dive into a technology feel familiar, they abstract away a lot of the power of a database.
Developers have a lot of choices to make when it comes to building applications. In this article, we looked at the differences between using an ODM versus the native driver and showed that the difference between the two is not that big. Using an ODM like Mongoose can make development feel familiar but forces you into a rigid design, which is an anti-pattern when considering building with MongoDB.
The MongoDB Node.js driver works natively with your MongoDB database to give you the best and most flexible development experience. It allows the database to do what it's best at while allowing your application to focus on what it's best at, and that's probably not managing data models. | md | {
"tags": [
"JavaScript",
"MongoDB",
"Node.js"
],
"pageDescription": "Learn why using an Object Data Modeling library may not be the best choice when building MongoDB apps with Node.js.",
"contentType": "Article"
} | MongoDB & Mongoose: Compatibility and Comparison | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-data-api-aws-gateway | created | # Creating an API with the AWS API Gateway and the Atlas Data API
## Introduction
> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.
This article will walk through creating an API using the Amazon API Gateway in front of the MongoDB Atlas Data API. When integrating with the Amazon API Gateway, it is possible but undesirable to use a driver, as drivers are designed to be long-lived and maintain connection pooling. Using serverless functions with a driver can result in either a performance hit – if the driver is instantiated on each call and must authenticate – or excessive connection numbers if the underlying mechanism persists between calls, as you have no control over when code containers are reused or created.
The MongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable, for example, when creating serverless microservices with MongoDB.
AWS (Amazon Web Services) describe their API Gateway as:
> "A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
> API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales."
## Prerequisites
A core requirement for this walkthrough is to have an Amazon Web Services account. The API Gateway is available as part of the AWS free tier, which allows up to 1 million API calls per month, at no charge, in your first 12 months with AWS.
We will also need an Atlas cluster for which we have enabled the Data API, along with our endpoint URL and API key. You can learn how to get these in this article or this video if you do not have them already.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
A common use of Atlas with the Amazon API Gateway might be to provide a managed API to a restricted subset of data in our cluster, which is a common need for a microservice architecture. To demonstrate this, we first need to have some data available in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing "Load Sample Dataset", or following instructions here.
## Creating an API with the Amazon API Gateway and the Atlas Data API
The instructions here are an extended variation of Amazon's own "Getting Started with the API Gateway" tutorial. I do not presume to teach you how best to use Amazon's API Gateway, as Amazon itself has many fine resources for this. What we will do here is use it to get a basic public API enabled that uses the Data API.
> The Data API itself is currently in an early preview with a flat security model allowing all users who have an API key to query or update any database or collection. Future versions will have more granular security. We would not want to simply expose the current data API as a 'Public' API but we can use it on the back-end to create more restricted and specific access to our data.
>
We are going to create an API which allows users to GET the ten films for any given year which received the most awards: a notional "Best Films of the Year." We will restrict this API to performing only that operation and supply the year as part of the URL.
We will first create the API, then analyze the code we used for it.
## Create an AWS Lambda Function to retrieve data with the Data API
1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
2. Choose **Create function**.
3. For **Function name**, enter top-movies-for-year.
4. Choose **Create function**.
You will then see the JavaScript code editor for the new function.
Replace the code with the following, changing the API-KEY and APP-ID to the values for your Atlas cluster. Save and click **Deploy**. (In a production application, you might look to store these in AWS Secrets Manager; I have simplified things by putting them in the code here.)
```
const https = require('https');
const atlasEndpoint = "/app/APP-ID/endpoint/data/beta/action/find";
const atlasAPIKey = "API-KEY";
exports.handler = async(event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = JSON.stringify({
dataSource: "Cluster0",
database: "sample_mflix",
collection: "movies",
filter: { year },
projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 },
limit: 10
});
const options = {
hostname: 'data.mongodb-api.com',
port: 443,
path: atlasEndpoint,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': payload.length,
'api-key': atlasAPIKey
}
};
let results = '';
const response = await new Promise((resolve, reject) => {
const req = https.request(options, res => {
res.on('data', d => {
results += d;
});
res.on('end', () => {
console.log(`end() status code = ${res.statusCode}`);
if (res.statusCode == 200) {
let resultsObj = JSON.parse(results)
resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });
}
else {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key
}
});
});
//Do not give the user clues about backend issues for security reasons
req.on('error', error => {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable
});
req.write(payload);
req.end();
});
return response;
};
```
Alternatively, if you are familiar with working with packages and Lambda, you could upload an HTTP package like Axios to Lambda as a zipfile, allowing you to use the following simplified code.
```
const axios = require('axios');
const atlasEndpoint = "https://data.mongodb-api.com/app/APP-ID/endpoint/data/beta/action/find";
const atlasAPIKey = "API-KEY";
exports.handler = async(event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = {
dataSource: "Cluster0",
database: "sample_mflix",
collection: "movies",
filter: { year },
projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 },
limit: 10
};
try {
const response = await axios.post(atlasEndpoint, payload, { headers: { 'api-key': atlasAPIKey } });
return response.data.documents;
}
catch (e) {
return { statusCode: 500, body: 'Unable to service request' }
}
};
```
## Create an HTTP endpoint for our custom API function
We now need to route an HTTP endpoint to our Lambda function using the HTTP API.
The HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes requests to your Lambda function, and then returns the function's response to clients.
1. Go to the API Gateway console at https://console.aws.amazon.com/apigateway.
2. Do one of the following:
To create your first API, for HTTP API, choose **Build**.
If you've created an API before, choose **Create API**, and then choose **Build** for HTTP API.
3. For Integrations, choose **Add integration**.
4. Choose **Lambda**.
5. For **Lambda function**, enter top-movies-for-year.
6. For **API name**, enter movie-api.
7. Choose **Next**.
8. Review the route that API Gateway creates for you, and then choose **Next**.
9. Review the stage that API Gateway creates for you, and then choose **Next**.
10. Choose **Create**.
Now you've created an HTTP API with a Lambda integration and the Atlas Data API that's ready to receive requests from clients.
## Test your API
You should now be looking at API Gateway details that look like this. If not, you can get to them by going to https://console.aws.amazon.com/apigateway and clicking on **movie-api**.
Take note of the **Invoke URL**; this is the base URL for your API.
Now, in a new browser tab, browse to `INVOKE-URL/top-movies-for-year?year=2001`, changing `INVOKE-URL` to the Invoke URL shown in AWS. You should see the results of your API call: JSON listing the top 10 "best" films of 2001.
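The same check can be done from a terminal. This is just a sketch, with `INVOKE-URL` standing in for your own Invoke URL:

```
curl "INVOKE-URL/top-movies-for-year?year=2001"
```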
## Reviewing our Function
We start by importing the standard Node.js `https` library; the Data API needs no special libraries to call it. We also define our API key and the path to our find endpoint. You get both of these from the Data API tab in Atlas.
```
const https = require('https');
const atlasEndpoint = "/app/data-amzuu/endpoint/data/beta/action/find";
const atlasAPIKey = "YOUR-API-KEY";
```
Now we check that the API call included a parameter for year and that it's a number. We need to convert it to a number because MongoDB is typed: "2001" and 2001 are different values, and searching for one will not find the other. The collection uses a number for the movie release year.
```
exports.handler = async (event) => {
if (!event.queryStringParameters || !event.queryStringParameters.year) {
return { statusCode: 400, body: 'Year not specified' };
}
//Year is a number but the argument is a string so we need to convert as MongoDB is typed
let year = parseInt(event.queryStringParameters.year, 10);
console.log(`Year = ${year}`)
if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }
const payload = JSON.stringify({
dataSource: "Cluster0", database: "sample_mflix", collection: "movies",
filter: { year }, projection: { _id: 0, title: 1, awards: "$awards.wins" }, sort: { "awards.wins": -1 }, limit: 10
});
```
Then we construct our payload: the parameters for the Atlas API call. We are querying for year = year, projecting just the title and the number of awards, sorting by the number of awards descending, and limiting to 10.
```
const payload = JSON.stringify({
dataSource: "Cluster0", database: "sample_mflix", collection: "movies",
filter: { year }, projection: { _id: 0, title: 1, awards: "$awards.wins" },
sort: { "awards.wins": -1 }, limit: 10
});
```
We then construct the options for the HTTPS POST request to the Data API. Here, we pass the Data API key as a header.
```
const options = {
hostname: 'data.mongodb-api.com',
port: 443,
path: atlasEndpoint,
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': payload.length,
'api-key': atlasAPIKey
}
};
```
Finally, we use some fairly standard code to call the API and handle errors. We can get request errors, such as being unable to contact the server, or response errors, where we get any response code other than 200 OK. In both cases, we return a 500 internal error from our simplified API so as not to leak any details of the internals to a potential hacker.
```
let results = '';
const response = await new Promise((resolve, reject) => {
const req = https.request(options, res => {
res.on('data', d => {
results += d;
});
res.on('end', () => {
console.log(`end() status code = ${res.statusCode}`);
if (res.statusCode == 200) {
let resultsObj = JSON.parse(results)
resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });
} else {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key
}
});
});
//Do not give the user clues about backend issues for security reasons
req.on('error', error => {
reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable
});
req.write(payload);
req.end();
});
return response;
};
```
Our Axios version has the same functionality as above but is simplified by the use of a library.
## Conclusion
As we can see, calling the Atlas Data API from an AWS Lambda function is incredibly simple, especially if making use of a library like Axios. The Data API is also stateless, so there are no concerns about connection setup times or maintaining long-lived connections as there would be when using a driver.
| md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "In this article, we look at how the Atlas Data API is a great choice for accessing MongoDB Atlas from AWS Lambda Functions by creating a custom API with the AWS API Gateway. ",
"contentType": "Quickstart"
} | Creating an API with the AWS API Gateway and the Atlas Data API | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/javascript/kenya-hostels | created | # Hostels Kenya Example App
## Creators
Derrick Muteti and Felix Omuok from Kirinyaga University in Kenya contributed this project.
## About the project
Hostels Kenya is a website that provides students the opportunity to find any hostel of their choice by filtering by distance from school, university name, room type, and even the monthly rent. It also provides the students with directions in case they are new to the area. Once they find a hostel that they like, they have the option to make a booking request, after which the landlord/landlady is automatically notified via SMS by our system. The students can also request to receive a notification when the hostel of their choice is fully occupied. Students have the opportunity to review and rate the hostels available in our system, helping other students make better decisions when looking for hostels. We launched the website on 1st September 2020, and so far, we have registered 26 hostels around our university and we are expanding to cover other universities.
## Inspiration
I come from Nyanza Province in Kenya and I study at Kirinyaga University, the university in Kenya's central region, which is around 529km from my home. Most universities in Kenya do not offer student accommodation, and if any, a tiny percentage of the students are accommodated by the school. Because of this reason, most students stay in privately owned hostels outside the school. Therefore, getting a hostel is always challenging, especially for students who are new to the area. In my case, I had to travel from home to Kirinyaga University a month before the admission date to book a hostel. Thus, I decided to develop hostels Kenya to help students from different parts of the country find student hostels easily and make booking requests.
## Why MongoDB?
My journey of developing this project has had ups and downs. I started working on the project last year using PHP and MYSQL. After facing many challenges in storing my data and dealing with geospatial queries, I had to stop the project. The funny thing is that last year, I did not know MongoDB existed. But I saw that MongoDB was part of the GitHub Student Developer Pack. And now that I was faced with a problem, I had to take the time and learn MongoDB.
In April this year, I started the project from scratch using Node.js and MongoDB.
MongoDB made it very easy for me to deal with geospatial queries and the fact that I was able to embed documents made it very fast when reading questions. This was not possible with MYSQL, and that is why I opted for a NoSQL database.
Learning MongoDB was also straightforward, and it took me a short duration of time to set up my project. I love the fact that MongoDB handles most of the heavy tasks for me. To be sincere, I do not think I could have finished the project in time with all the functionalities had I not used MongoDB.
Since the site's launch on 1st October 2020, the site has helped over 1 thousand students from my university find hostels, and we hope this number will grow once we expand to other universities. With the government's current COVID-19 regulations on traveling, many students have opted to use this site instead of traveling for long distances as they wait to resume in-person learning come January 2021.
## How it works
Students can create an account on our website. Our search query uses the school they go to, the room type they're looking for, the monthly rent, and the distance from the school. Once students fill out this search, it will return the hostels that match their wishes. We use geodata, the longitude and latitude of both the school and the hostels, to come up with the closest hostels. Filtering and querying this is obviously where the MongoDB aggregation framework comes into play. We love it!
Hostel owners can register their hostel via the website. They will be added to our database, and students will be able to start booking a room via our website.
Students can also view all the hostels on a map and select one of their choices. It was beneficial that we could embed all of this data, and the best part was MongoDB's ability to deal with GeoData.
Today hostel owners can register their hostel via the website; they can log in to their account and change pictures. But we're looking forward to implementing more features like a dashboard and making it more user friendly.
We're currently using mongoose, but we're thinking of expanding and using MongoDB Atlas in the future. I've been watching talks about Atlas at MongoDB.live Asia, and I was amazed. I'm looking forward to implementing this. I've also been watching some MongoDB YouTube videos on design patterns, and I realize that this is something that we can add in the future.
## Challenges and learnings
Except for the whole change from PHP and SQL to MongoDB and Node.js, finding hostels has been our challenge. I underestimated the importance of marketing. I never knew how difficult it would be until I had to go out and talk to hostel owners, trying to convince them to come on board. But I am seeing that the students who are using the application are finding it very useful.
We decided to bring another person on board to help us with marketing. And we are also trying to reach the school to see how they can help us engage with the hostels.
For the future, we want to create a desktop application for hostel owners. Something that can be installed on their computer makes it easy for them to manage their students' bookings.
Most landlords are building many hostels around the school, so we're hoping to have them on board.
But first, we want to add more hostels into the system in December and create more data for our students. Especially now we might go back to school in January, it's essential to keep adding accommodations.
As for me, I’m also following courses on MongoDB University. I noticed that there is no MongoDB Certified Professional in my country, and I would like to become the first one.
| md | {
"tags": [
"JavaScript",
"Atlas",
"Node.js"
],
"pageDescription": "Find hostels and student apartments all over Kenya",
"contentType": "Code Example"
} | Hostels Kenya Example App | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/zap-tweet-repeat-how-to-use-zapier-mongodb | created | # Zap, Tweet, and Repeat! How to Use Zapier with MongoDB
I'm a huge fan of automation when the scenario allows for it. Maybe you need to keep track of guest information when they RSVP to your event, or maybe you need to monitor and react to feeds of data. These are two of many possible scenarios where you probably wouldn't want to do things manually.
There are quite a few tools that are designed to automate your life. Some of the popular tools include IFTTT, Zapier, and Automate. The idea behind these services is that given a trigger, you can do a
series of events.
In this tutorial, we're going to see how to collect Twitter data with Zapier, store it in MongoDB using a Realm webhook function, and then run aggregations on it using the MongoDB query language (MQL).
## The Requirements
There are a few requirements that must be met prior to starting this tutorial:
- A paid tier of Zapier with access to premium automations
- A properly configured MongoDB Atlas cluster
- A Twitter account
There is a Zapier free tier, but because we plan to use webhooks, which are premium in Zapier, a paid account is necessary. To consume data from Twitter in Zapier, a Twitter account is necessary, even if we plan to consume data that isn't related to our account. This data will be stored in MongoDB, so a cluster with properly configured IP access and user permissions is required.
>You can get started with MongoDB Atlas by launching a free M0 cluster, no credit card required.
While not necessary to create a database and collection prior to use, we'll be using a **zapier** database and a **tweets** collection throughout the scope of this tutorial.
## Understanding the Twitter Data Model Within Zapier
Since the plan is to store tweets from Twitter within MongoDB and then create queries to make sense of it, we should probably get an understanding of the data prior to trying to work with it.
We'll be using the "Search Mention" functionality within Zapier for Twitter. Essentially, it allows us to provide a Twitter query and trigger an automation when the data is found. More on that soon.
As a result, we'll end up with the following raw data:
``` json
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"id": "1356701917603238000",
"id_str": "1356701917603237888",
"full_text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript",
"truncated": false,
"display_text_range": 0, 188],
"metadata": {
"iso_language_code": "en",
"result_type": "recent"
},
"source": "TweetDeck",
"in_reply_to_status_id": null,
"in_reply_to_status_id_str": null,
"in_reply_to_user_id": null,
"in_reply_to_user_id_str": null,
"in_reply_to_screen_name": null,
"user": {
"id": "227546834",
"id_str": "227546834",
"name": "Nic Raboy",
"screen_name": "nraboy",
"location": "Tracy, CA",
"description": "Advocate of modern web and mobile development technologies. I write tutorials and speak at events to make app development easier to understand. I work @MongoDB.",
"url": "https://t.co/mRqzaKrmvm",
"entities": {
"url": {
"urls": [
{
"url": "https://t.co/mRqzaKrmvm",
"expanded_url": "https://www.thepolyglotdeveloper.com",
"display_url": "thepolyglotdeveloper.com",
"indices": [0, 23]
}
]
},
"description": {
"urls": ""
}
},
"protected": false,
"followers_count": 4599,
"friends_count": 551,
"listed_count": 265,
"created_at": "Fri Dec 17 03:33:03 +0000 2010",
"favourites_count": 4550,
"verified": false
},
"lang": "en",
"url": "https://twitter.com/227546834/status/1356701917603237888",
"text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
```
The data we have access to is probably more than we need. However, it really depends on what you're interested in. For this example, we'll be storing the following within MongoDB:
``` json
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"user": {
"screen_name": "nraboy",
"location": "Tracy, CA",
"followers_count": 4599,
"friends_count": 551
},
"text": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
```
Without getting too far ahead of ourselves, our analysis will be based off the `followers_count` and the `location` of the user. We want to be able to make sense of where our users are and give priority to users that meet a certain followers threshold.
## Developing a Webhook Function for Storing Tweet Information with MongoDB Realm and JavaScript
Before we start connecting Zapier and MongoDB, we need to develop the middleware that will be responsible for receiving tweet data from Zapier.
Remember, you'll need to have a properly configured MongoDB Atlas cluster.
We need to create a Realm application. Within the MongoDB Atlas dashboard, click the **Realm** tab.
*(Image: MongoDB Realm Applications)*
For simplicity, we're going to want to create a new application. Click the **Create a New App** button and proceed to fill in the information about your application.
From the Realm Dashboard, click the **3rd Party Services** tab.
We're going to want to create an **HTTP** service. The name doesn't matter, but it might make sense to name it **Twitter** based on what we're planning to do.
Because we plan to work with tweet data, it makes sense to call our webhook function **tweet**, but the name doesn't truly matter.
With the exception of the **HTTP Method**, the defaults are fine for this webhook. We want the method to be POST because we plan to create data with this particular webhook function. Make note of the **Webhook URL** because it will be used when we connect Zapier.
The next step is to open the **Function Editor** so we can add some logic behind this function. Add the following JavaScript code:
``` javascript
exports = function (payload, response) {
const tweet = EJSON.parse(payload.body.text());
const collection = context.services.get("mongodb-atlas").db("zapier").collection("tweets");
return collection.insertOne(tweet);
};
```
In the above code, we are taking the request payload, getting a handle to the **tweets** collection within the **zapier** database, and then doing an insert operation to store the data in the payload.
There are a few things to note in the above code:
1. We are not validating the data being sent in the request payload. In a realistic scenario, you'd probably want some kind of validation logic in place to be sure about what you're storing (see the sketch after this list).
2. We are not authenticating the user sending the data. In this example, we're trusting that only Zapier knows about our URL.
3. We aren't doing any error handling.
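As a minimal sketch of the first point only, basic validation could look like the following. The required field names are assumptions based on the payload we chose to send from Zapier, and the standard Realm webhook `response` object is used to report the failure:

``` javascript
exports = function (payload, response) {
  const tweet = EJSON.parse(payload.body.text());

  // Reject payloads that are missing the fields we expect from Zapier
  const requiredFields = ["created_at", "username", "message"];
  const missing = requiredFields.filter(field => !tweet[field]);
  if (missing.length > 0) {
    response.setStatusCode(400);
    return { error: `Missing fields: ${missing.join(", ")}` };
  }

  const collection = context.services.get("mongodb-atlas").db("zapier").collection("tweets");
  return collection.insertOne(tweet);
};
```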
When we call our function, a new document should be created within MongoDB.
By default, the function will not deploy when saving. After saving, make sure to review and deploy the changes through the notification at the top of the browser window.
## Creating a "Zap" in Zapier to Connect Twitter to MongoDB
So, we know the data we'll be working with and we have a MongoDB Realm webhook function that is ready for receiving data. Now, we need to bring everything together with Zapier.
For clarity, new Twitter matches will be our trigger in Zapier, and the webhook function will be our event.
Within Zapier, choose to create a new "Zap," which is an automation. The trigger needs to be a **Search Mention in Twitter**, which means that when a new Tweet is detected using a search query, our events happen.
For this example, we're going to use the following Twitter search query:
``` none
url:developer.mongodb.com -filter:retweets filter:safe lang:en -from:mongodb -from:realm
```
The above query says that we are looking for tweets that include a URL to developer.mongodb.com. The URL doesn't need to match exactly as long as the domain matches. The query also says that we aren't interested in retweets. We only want original tweets, they have to be in English, and they have to be detected as safe for work.
In addition to the mentioned search criteria, we are also excluding tweets that originate from one of the MongoDB accounts.
In theory, the above search query could be used to see what people are saying about the MongoDB Developer Hub.
With the trigger in place, we need to identify the next stage of the automation pipeline. The next stage is taking the data from the trigger and sending it to our Realm webhook function.
As the event, make sure to choose **Webhooks by Zapier** and specify a POST request. From here, you'll be prompted to enter your Realm webhook URL and the method, which should be POST. Realm is expecting the payload to be JSON, so it is important to select JSON within Zapier.
We have the option to choose which data from the previous automation stage to pass to our webhook. Select the fields you're interested in and save your automation.
The data I chose to send looks like this:
``` json
{
"created_at": "Tue Feb 02 20:31:58 +0000 2021",
"username": "nraboy",
"location": "Tracy, CA",
"follower_count": "4599",
"following_count": "551",
"message": "In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript"
}
```
The fields do not match the original fields brought in by Twitter. This is because I chose to map them to what made sense for me.
When deploying the Zap, anytime a tweet is found that matches our query, it will be saved into our MongoDB cluster.
## Analyzing the Twitter Data in MongoDB with an Aggregation Pipeline
With tweet data populating in MongoDB, it's time to start querying it to make sense of it. In this fictional example, we want to know what people are saying about our Developer Hub and how popular these individuals are.
To do this, we're going to want to make use of an aggregation pipeline within MongoDB.
Take the following, for example:
``` json
[
  {
"$addFields": {
"follower_count": {
"$toInt": "$follower_count"
},
"following_count": {
"$toInt": "$following_count"
}
}
}, {
"$match": {
"follower_count": {
"$gt": 1000
}
}
}, {
"$group": {
"_id": {
"location": "$location"
},
"location": {
"$sum": 1
}
}
}
]
```
There are three stages in the above aggregation pipeline.
We want to understand the follower data for the individual who made the tweet, but that data comes into MongoDB as a string rather than an integer. The first stage of the pipeline takes the `follower_count` and `following_count` fields and converts them from string to integer. In reality, we are using `$addFields` to create new fields, but because they have the same name as existing fields, the existing fields are replaced.
The next stage is where we want to identify people with more than 1,000 followers as a person of interest. While people with fewer followers might be saying great things, in this example, we don't care.
After we've filtered out people by their follower count, we do a group based on their location. It might be valuable for us to know where in the world people are talking about MongoDB. We might want to know where our target audience exists.
The aggregation pipeline we chose to use can be executed with any of the MongoDB drivers, through the MongoDB Atlas dashboard, or through the CLI.
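For instance, here is a sketch of running it with the Node.js driver; it assumes an already-connected `MongoClient` instance and the `zapier.tweets` namespace used earlier:

``` javascript
// Sketch: run the pipeline with the Node.js driver and print the per-location counts.
const pipeline = [
  { "$addFields": {
      "follower_count": { "$toInt": "$follower_count" },
      "following_count": { "$toInt": "$following_count" }
  } },
  { "$match": { "follower_count": { "$gt": 1000 } } },
  { "$group": { "_id": { "location": "$location" }, "location": { "$sum": 1 } } }
];

async function analyzeTweets(client) {
  const results = await client.db("zapier").collection("tweets").aggregate(pipeline).toArray();
  console.log(results);
}
```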
## Conclusion
You just saw how to use Zapier with MongoDB to automate certain tasks and store the results as documents within the NoSQL database. In this example, we chose to store Twitter data that matched certain criteria, later to be analyzed with an aggregation pipeline. The automations and analysis options that you can do are quite limitless.
If you enjoyed this tutorial and want to get engaged with more content and like-minded developers, check out the MongoDB Community. | md | {
"tags": [
"MongoDB",
"JavaScript",
"Node.js"
],
"pageDescription": "Learn how to create automated workflows with Zapier and MongoDB.",
"contentType": "Tutorial"
} | Zap, Tweet, and Repeat! How to Use Zapier with MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-on-raspberry-pi | created | # Install & Configure MongoDB on the Raspberry Pi
I've been a big fan of the Raspberry Pi since the first version was
released in 2012. The newer generations are wonderful home-automation
and IoT prototyping computers, with built in WiFi, and the most recent
versions (the Pi 3 and Pi 4) are 64-bit. This means they can run the
MongoDB server, mongod, locally! MongoDB even provides a pre-compiled
version for the Raspberry Pi processor, so it's relatively
straightforward to get it installed.
I'm currently building a home-automation service on a Raspberry Pi 4.
Its job is to run background tasks, such as periodically requesting data
from the internet, and then provide the data to a bunch of small devices
around my house, such as some smart displays, and (ahem) my coffee
grinder.
The service doesn't have super-complex data storage requirements, and I
could have used an embedded database, such as SQLite. But I've become
resistant to modelling tables and joins in a relational database and
working with flat rows. The ability to store rich data structures in a
single MongoDB database is a killer feature for me.
## Prerequisites
You will need:
- A Raspberry Pi 3 or 4
- A suitably sized Micro SD card (I used a 16 Gb card)
- A computer and SD card reader to write the SD card image. (This
*can* be another Raspberry Pi, but I'm using my desktop PC)
- A text editor on the host computer. (I recommend VS
Code)
## What This Tutorial Will Do
This tutorial will show you how to:
- Install the 64-bit version of Ubuntu Server on your Raspberry Pi.
- Configure it to connect to your WiFi.
- *Correctly* install MongoDB onto your Pi.
- Add a user account, so you can *safely* expose MongoDB on your home
network.
When you're done, you'll have a secured MongoDB instance available on
your home network.
>
>
>Before we get too far into this, please bear in mind that you don't want
>to run a production, web-scale database on a Raspberry Pi. Despite the
>processor improvements on the Pi 4, it's still a relatively low-powered
>machine, with a relatively low amount of RAM for a database server.
>Still! For a local, offline MongoDB instance, with the ease of
>development that MongoDB offers, a Raspberry Pi is a great low-cost
>solution. If you *do* wish to serve your data to the Internet, you
>should definitely check out
>Atlas, MongoDB's cloud hosting
>solution. MongoDB will host your database for you, and the service has a
>generous (and permanent) free tier!
>
>
## Things Not To Do
*Do not* run `apt install mongodb` on your Raspberry Pi, or indeed any
Linux computer! The versions of MongoDB shipped with Linux distributions
are *very* out of date. They won't run as well, and some of them are so
old they're no longer supported.
MongoDB provide versions of the database, pre-packaged for many
different operating systems, and Ubuntu Server on Raspberry Pi is one of
them.
## Installing Ubuntu
Download and install the Raspberry Pi
Imager for your host computer.
Run the Raspberry Pi Imager, and select Ubuntu Server 20.04, 64-bit for
Raspberry Pi 3/4.
Make sure you *don't* accidentally select Ubuntu Core, or a 32-bit
version.
Insert your Micro SD Card into your computer and select it in the
Raspberry Pi Imager window.
Click **Write** and wait for the image to be written to the SD Card.
This may take some time! When it's finished, close the Raspberry Pi
Imager. Then remove the Micro SD Card from your computer, and re-insert
it.
The Ubuntu image for Raspberry Pi uses
cloud-init to configure the system
at boot time. This means that in your SD card `system-boot` volume,
there should be a YAML file, called `network-config`. Open this file in
VS Code (or your favourite text editor).
Edit it so that it looks like the following. The indentation is
important, and it's the 'wifis' section that you're editing to match
your wifi configuration. Replace 'YOUR-WIFI-SSID' with your WiFi's name,
and 'YOUR-WIFI-PASSWORD' with your WiFi password.
``` yaml
version: 2
ethernets:
eth0:
dhcp4: true
optional: true
wifis:
wlan0:
dhcp4: true
optional: true
access-points:
"YOUR-WIFI-SSID":
password: "YOUR-WIFI-PASSWORD"
```
Now eject the SD card (safely!) from your computer, insert it into the
Pi, and power it up! It may take a few minutes to start up, at least the
first time. You'll need to monitor your network to wait for the Pi to
connect. When it does, ssh into the Pi with
`ssh ubuntu@PI-IP-ADDRESS` (replacing `PI-IP-ADDRESS` with the address your Pi was assigned). The password is also `ubuntu`.
You'll be prompted to change your password to something secret.
Once you've set your password update the operating system by running the
following commands:
``` bash
sudo apt update
sudo apt upgrade
```
## Install MongoDB
Now let's install MongoDB. This is done as follows:
``` bash
# Install the MongoDB 4.4 GPG key:
wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
# Add the source location for the MongoDB packages:
echo "deb arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
# Download the package details for the MongoDB packages:
sudo apt-get update
# Install MongoDB:
sudo apt-get install -y mongodb-org
```
The instructions above have mostly been taken from Install MongoDB Community Edition on Ubuntu.
## Run MongoDB
Ubuntu 20.04 uses Systemd to run background services, so to set up
mongod to run in the background, you need to enable and start the
service:
``` bash
# Ensure mongod config is picked up:
sudo systemctl daemon-reload
# Tell systemd to run mongod on reboot:
sudo systemctl enable mongod
# Start up mongod!
sudo systemctl start mongod
```
Now, you can check to see if the service is running correctly by
executing the following command. You should see something like the
output below it:
``` bash
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-08-09 08:09:07 UTC; 4s ago
Docs: https://docs.mongodb.org/manual
Main PID: 2366 (mongod)
CGroup: /system.slice/mongod.service
└─2366 /usr/bin/mongod --config /etc/mongod.conf
```
If your service is running correctly, you can run the MongoDB client,
`mongo`, from the command-line to connect:
``` bash
# Connect to the local mongod, on the default port:
$ mongo
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("576ec12b-6c1a-4382-8fae-8b6140e76d51") }
MongoDB server version: 4.4.0
---
The server generated these startup warnings when booting:
2020-08-09T08:09:08.697+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-08-09T08:09:10.712+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
```
First, check the warnings. You can ignore the recommendation to run the
XFS filesystem, as this is just a small, local install. The warning
about access control not being enabled for the database is important
though! You'll fix that in the next section. At this point, if you feel
like it, you can enable the free
monitoring
that MongoDB provides, by running `db.enableFreeMonitoring()` inside the
mongo shell.
## Securing MongoDB
Here are the next, essential steps that other tutorials miss out, for
some reason. Recent versions of mongod won't connect to the network
unless user authentication has been configured. Because of this, at the
moment your database is only accessible from the Raspberry Pi itself.
This may actually be fine if, like me, the services you're running with
MongoDB are running on the same device. It's still a good idea to set a
username and password on the database.
Here's how you do that, inside `mongo` (replace SUPERSECRETPASSWORD with
an *actual* secret password!):
``` javascript
use admin
db.createUser( { user: "admin",
pwd: "SUPERSECRETPASSWORD",
roles: "userAdminAnyDatabase",
"dbAdminAnyDatabase",
"readWriteAnyDatabase"] } )
exit
```
The three roles listed give the `admin` user the ability to administer
all user accounts and data in MongoDB. Make sure your password is
secure. You can use a random password
generator to be safe.
Now you need to reconfigure mongod to run with authentication enabled,
by adding a couple of lines to `/etc/mongod.conf`. If you're comfortable
with a terminal text editor, such as vi or emacs, use one of those. I
used nano, because it's a little simpler, with
`sudo nano /etc/mongod.conf`. Add the following two lines somewhere in
the file. Like the `network-config` file you edited earlier, it's a YAML
file, so the indentation is important!
``` yaml
# These two lines must be uncommented and in the file together:
security:
authorization: enabled
```
And finally, restart mongod:
``` bash
sudo systemctl restart mongod
```
Ensure that authentication is enforced by connecting `mongo` without
authentication:
``` bash
$ mongo
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4002052b-1a39-4158-8a99-234cfd818e30") }
MongoDB server version: 4.4.0
> db.adminCommand({listDatabases: 1})
{
"ok" : 0,
"errmsg" : "command listDatabases requires authentication",
"code" : 13,
"codeName" : "Unauthorized"
}
> exit
```
Ensure you've exited `mongo` and now test that you can connect and
authenticate with the user details you created:
``` bash
$ mongo -u "admin" -p "SUPERSECRETPASSWORD"
MongoDB shell version v4.4.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("3dee8ec3-6e7f-4203-a6ad-976b55ea3020") }
MongoDB server version: 4.4.0
> db.adminCommand({listDatabases: 1})
{
"databases" :
{
"name" : "admin",
"sizeOnDisk" : 151552,
"empty" : false
},
{
"name" : "config",
"sizeOnDisk" : 36864,
"empty" : false
},
{
"name" : "local",
"sizeOnDisk" : 73728,
"empty" : false
},
{
"name" : "test",
"sizeOnDisk" : 8192,
"empty" : false
}
],
"totalSize" : 270336,
"ok" : 1
}
> exit
```
## Make MongoDB Available to your Network
**This step is optional!** Now that you've configured authentication on
your server, if you want your database to be available to other
computers on your network, you need to:
- Bind MongoDb to the Raspberry Pi's public IP address
- Open up port `27017` on the Raspberry Pi's firewall.
>
>
>If you *don't* want to access your data from your network, *don't*
>follow these steps! It's always better to leave things more secure, if
>possible.
>
>
First, edit `/etc/mongod.conf` again, the same way as before. This time,
change the IP address to 0.0.0.0:
``` yaml
# Change the bindIp to '0.0.0.0':
net:
port: 27017
bindIp: 0.0.0.0
```
And restart `mongod` again:
``` bash
sudo systemctl restart mongod
```
Open up port 27017 on your Raspberry Pi's firewall:
``` bash
sudo ufw allow 27017/tcp
```
Now, on *another computer on your network*, with the MongoDB client
installed, run the following to ensure that `mongod` is available on
your network:
``` bash
# Replace YOUR-RPI-IP-ADDRESS with your Raspberry Pi's actual IP address:
mongo --host 'YOUR-RPI-IP-ADDRESS'
```
If it connects, then you've successfully installed and configured
MongoDB on your Raspberry Pi!
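Because authentication is now enforced, you will also want to pass your credentials when connecting remotely. This is a sketch; substitute your own IP address and password:

``` bash
# Authenticate as the admin user, whose account lives in the admin database:
mongo --host 'YOUR-RPI-IP-ADDRESS' -u "admin" -p "SUPERSECRETPASSWORD" --authenticationDatabase "admin"
```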
### Security Caveats
*This short section is extremely important. Don't skip it.*
- *Never* open up an instance of `mongod` to the internet without
authentication enabled.
- Configure your firewall to limit the IP addresses which can connect
to your MongoDB port. (Your Raspberry Pi has just been configured to
allow connections from *anywhere*, with the assumption that your
home network has a firewall blocking access from outside.)
- Ensure the database user password you created is secure!
- Set up different database users for each app that connects to your
database server, with *only* the permissions required by each app (see the sketch after this list).
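As a minimal sketch of that last point, here is how an app-specific user could be created inside `mongo`. The database name, username, and password are placeholders for illustration, not anything created earlier in this tutorial:

``` javascript
// Create a user that can only read and write the 'home_automation' database:
use home_automation
db.createUser({
  user: "home_app",
  pwd: "ANOTHER-SECRET-PASSWORD",
  roles: [ { role: "readWrite", db: "home_automation" } ]
})
```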
MongoDB comes with sensible security defaults. It uses TLS, SCRAM-based
password authentication, and won't bind to your network port without
authentication being set up. It's still up to you to understand how to
secure your Raspberry Pi and any data you store within it. Go and read
the [MongoDB Security
Checklist
for further information on keeping your data secure.
## Wrapping Up
As you can see, there are a few steps to properly installing and
configuring MongoDB yourself. I hadn't done it for a while, and I'd
forgotten how complicated it can be! For this reason, you should
definitely consider using MongoDB Atlas
where a lot of this is taken care of for you. Not only is the
free-forever tier quite generous for small use-cases, there are also a
bunch of extra services thrown in, such as serverless functions,
charting, free-text search, and more!
You're done! Go write some code in your favourite programming language,
and if you're proud of it (or even if you're just having some trouble
and would like some help) let us
know! Check out all the cool blog
posts on the MongoDB Developer Hub,
and make sure to bookmark MongoDB
Documentation.
| md | {
"tags": [
"MongoDB",
"RaspberryPi"
],
"pageDescription": "Install and correctly configure MongoDB on Raspberry Pi",
"contentType": "Tutorial"
} | Install & Configure MongoDB on the Raspberry Pi | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/connectors/measuring-mongodb-kafka-connector-performance | created | # Measuring MongoDB Kafka Connector Performance
With today’s need for flexible, event-driven architectures, companies across the globe choose best-of-breed technologies like MongoDB and Apache Kafka to help solve these challenges. While these two complementary technologies provide the power and flexibility to solve large-scale challenges, performance has always been at the forefront of concerns. In this blog, we will cover how to measure the performance of the MongoDB Connector for Apache Kafka in both a source and a sink configuration.
## Measuring Sink Performance
Recall that the MongoDB sink connector writes data from a Kafka topic into MongoDB. Writes by default use the ReplaceOneModel where the data is either updated if it's present on the destination cluster or created as a new document if it is not present. You are not limited to this upsert behavior. In fact, you can change the sink to perform deletes or inserts only. These write behaviors are defined by the Write Model Strategy setting in the sink configuration.
To determine the performance of the sink connector, we need a timestamp of when the document was written to MongoDB. Currently, the only write model strategies that write a timestamp field on behalf of the user are UpdateOneTimestampsStrategy and UpdateOneBusinessKeyTimestampStrategy. These two write models insert a new field named **_insertedTS**, which can be used to query the lag between Kafka and MongoDB.
In this example, we’ll use MongoDB Atlas. MongoDB Atlas is a public cloud MongoDB data platform providing out-of-the-box capabilities such as MongoDB Charts, a tool to create visual representations of your MongoDB data. If you wish to follow along, you can create a free forever tier.
### Generate Sample Data
We will generate sample data using the datagen Kafka Connector provided by Confluent. Datagen is a convenient way of creating test data in the Kafka ecosystem. There are a few quickstart schema specifications bundled with this connector. We will use a quickstart called **users**.
```
curl -X POST -H "Content-Type: application/json" --data '
{"name": "datagen-users",
"config": { "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
"kafka.topic": "topic333",
"quickstart": "users",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"max.interval": 50,
"iterations": 5000,
"tasks.max": "2"
}}' http://localhost:8083/connectors -w "\n"
```
### Configure Sink Connector
Now that the data is generated and written to the Kafka topic, “topic333,” let’s create our MongoDB sink connector to write this topic data into MongoDB Atlas. As stated earlier, we will add a field **_insertedTS** for use in calculating the lag between the message timestamp and this value. To perform the insert, let’s use the **UpdateOneTimestampsStrategy** write mode strategy.
```
curl -X POST -H "Content-Type: application/json" --data '
{"name": "kafkametadata3",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
"topics": "topic333",
"connection.uri": "MONGODB CONNECTION STRING GOES HERE",
"writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy",
"database": "kafka",
"collection": "datagen",
"errors.log.include.messages": true,
"errors.deadletterqueue.context.headers.enable": true,
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.KafkaMetaDataStrategy",
"tasks.max": 2,
"value.converter.schemas.enable":false,
"transforms": "InsertField",
"transforms.InsertField.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.InsertField.offset.field": "offsetColumn",
"transforms": "InsertField",
"transforms.InsertField.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.InsertField.timestamp.field": "timestampColumn"
}}' http://localhost:8083/connectors -w "\n"
```
Note: The field **_insertedTS** is populated with the time value of the Kafka connect server.
### Viewing Results with MongoDB Charts
Take a look at the MongoDB Atlas collection “datagen” and familiarize yourself with the added fields.
In this blog, we will use MongoDB Charts to display a performance graph. To make it easy to build the chart, we will create a view.
```
use kafka
db.createView("SinkView","datagen",
{
"$sort" : {
"_insertedTS" : 1,
"timestampColumn" : 1
}
},
{
"$project" : {
"_insertedTS" : 1,
"timestampColumn" : 1,
"_id" : 0
}
},
{
"$addFields" : {
"diff" : {
"$subtract" : [
"$_insertedTS",
{
"$convert" : {
"input" : "$timestampColumn",
"to" : "date"
}
}
]
}
}
}
])
```
To create a chart, click on the Charts tab in MongoDB Atlas:
Click on Datasources and “Add Data Source.” The dialog will show the view that was created.
Select the SinkView and click Finish.
Download the MongoDB Sink performance Chart from Gist.
```
curl https://gist.githubusercontent.com/RWaltersMA/555b5f17791ecb58e6e683c54bafd381/raw/748301bcb7ae725af4051d40b2e17a8882ef2631/sink-chart-performance.charts -o sink-performance.charts
```
Choose **Import Dashboard** from the Add Dashboard dropdown and select the downloaded file.
Load the sink-performance.charts file.
Select the kafka.SinkView as the data source at the destination then click Save.
Now the KafkaPerformance chart is ready to view. When you click on the chart, you will see something like the following:
This chart shows statistics on the difference between the timestamp of the message in the Kafka topic and the time the sink connector wrote the document to MongoDB. In the above example, the maximum time delta is approximately one second (997 ms) while inserting 40,000 documents.
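If you'd rather check these numbers without building a chart, a quick aggregation over the view produces the same statistics. The snippet below is a rough sketch using PyMongo; the connection string placeholder, the `kafka` database, and the `SinkView` view come from the steps above, and `diff` is the millisecond lag field the view calculates.
``` python
# Rough sketch (PyMongo assumed): summarize the sink lag computed by the SinkView view.
from pymongo import MongoClient

client = MongoClient("MONGODB CONNECTION STRING GOES HERE")  # placeholder, as in the connector configs

pipeline = [
    {"$group": {
        "_id": None,
        "documents": {"$sum": 1},
        "avgLagMs": {"$avg": "$diff"},  # diff = _insertedTS minus the Kafka message timestamp, in ms
        "maxLagMs": {"$max": "$diff"},
    }}
]

for stats in client["kafka"]["SinkView"].aggregate(pipeline):
    print(stats)
```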
## Measuring Source Performance
To measure the source, we will take a different approach using KSQL to create a stream of the clusterTime timestamp from the MongoDB change stream and the time the row was written in the Kafka topic. From here, we can push this data into a MongoDB sink and display the results in a MongoDB Chart.
### Configure Source Connector
The first step will be to create the MongoDB Source connector that will be used to push data onto the Kafka topic.
```
curl -X POST -H "Content-Type: application/json" --data '
{"name": "mongo-source-perf",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"connection.uri": "mongodb+srv://MONGODB CONNECTION STRING HERE",
"database": "kafka",
"collection": "source-perf-test",
"mongo.errors.log.enable": "true",
"topic.prefix":"mdb",
"output.json.formatter" : "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson",
"output.format.value":"schema",
"output.schema.infer.value":true,
"output.format.key":"json",
"publish.full.document.only": "false",
"change.stream.full.document": "updateLookup"
}}' http://localhost:8083/connectors -w "\n"
```
### Generate Sample Data
There are many ways to generate sample data on MongoDB. In this blog post, we will use the doc-gen tool (GitHub repo) to quickly create sample documents based upon the user’s schema, which is defined as follows:
```
{
"_id" : ObjectId("59b99db4cfa9a34dcd7885b6"),
"name" : "Ned Stark",
"email" : "[email protected]",
"password" : "$2b$12$UREFwsRUoyF0CRqGNK0LzO0HM/jLhgUCNNIJ9RJAqMUQ74crlJ1Vu"
}
```
To generate data in your MongoDB cluster, issue the following:
```
docker run robwma/doc-gen:1.0 python doc-gen.py -s '{"name":"string","email":"string","password":"string"}' -c "MONGODB CONNECTION STRING GOES HERE" -t 1000 -db "kafka" -col "source-perf-test"
```
### Create KSQL Queries
Launch KSQL and create a stream of the clusterTime within the message.
Note: If you do not have KSQL, you can run it as part of the Confluent Platform all in Docker using the following instructions.
If using Control Center, click ksqlDB, click Editor, and then paste in the following KSQL:
```
CREATE STREAM stats (
clusterTime BIGINT
) WITH (
KAFKA_TOPIC='kafka.source-perf-test',
VALUE_FORMAT='AVRO'
);
```
The only information that we need from the message is the clusterTime. This value is provided within the change stream event. For reference, this is a sample event from change streams.
```
{
_id: { },
"operationType": "",
"fullDocument": { },
"ns": {
"db": ,
"coll":
},
"to": {
"db": ,
"coll":
},
"documentKey": {
_id:
},
"updateDescription": {
"updatedFields": { },
"removedFields": , ... ]
},
"clusterTime": ,
"txnNumber": ,
"lsid": {
"id": ,
"uid":
}
}
```
**Step 3**
Next, we will create a KSQL stream that calculates the difference between the cluster time (the time when the document was created on MongoDB) and the time when it was inserted on the broker.
```
CREATE STREAM STATS2 AS
select ROWTIME - CLUSTERTIME as diff, 1 AS ROW from STATS EMIT CHANGES;
```
As stated previously, this diff value may not be completely accurate if the clocks on Kafka and MongoDB are different.
**Step 4**
To see how the values change over time, we can use a window function and write the results to a table which can then be written into MongoDB via a sink connector.
```
SET 'ksql.suppress.enabled' = 'true';
CREATE TABLE STATSWINDOW2 AS
SELECT AVG( DIFF ) AS AVG, MAX(DIFF) AS MAX, count(*) AS COUNT, ROW FROM STATS2
WINDOW TUMBLING (SIZE 10 SECONDS)
GROUP BY ROW
EMIT FINAL;
```
Windowing lets you group records that have the same key for stateful operations, such as aggregations or joins, into so-called windows. There are three ways to define time windows in ksqlDB: hopping windows, tumbling windows, and session windows. In this example, we will use a tumbling window as it is fixed-duration, non-overlapping, and gap-less.
### Configure Sink Connector
The final step is to create a sink connector to insert all this aggregate data on MongoDB.
```
curl -X POST -H "Content-Type: application/json" --data '
{
"name": "MongoSource-SinkPerf",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
"tasks.max": "1",
"errors.log.enable": true,
"errors.log.include.messages": true,
"topics": "STATSWINDOW2",
"errors.deadletterqueue.context.headers.enable": true,
"connection.uri": "MONGODB CONNECTION STRING GOES HERE",
"database": "kafka",
"collection": "sourceStats",
"mongo.errors.log.enable": true,
"transforms": "InsertField",
"transforms.InsertField.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.InsertField.timestamp.field": "timestampColumn"
}}' http://localhost:8083/connectors -w "\n"
```
### Viewing Results with MongoDB Charts
Download the MongoDB Source performance Chart from Gist.
```
curl https://gist.githubusercontent.com/RWaltersMA/011f1473cf937badc61b752a6ab769d4/raw/bc180b9c2db533536e6c65f34c30b2d2145872f9/mongodb-source-performance.chart -o source-performance.charts
```
Choose **Import Dashboard** from the Add Dashboard dropdown and select the downloaded file.
You will need to create a Datasource to the new sink collection, “kafka.sourceStats.”
Click on the Kafka Performance Source chart to view the statistics.
In the above example, you can see the 10-second tumbling window performance statistics for 1.5M documents. The average difference was 252s, with the maximum difference being 480s. Note that some of this delta could be due to differences in clocks between MongoDB and Kafka. While these numbers should not be taken as absolute, the technique is good enough to determine trends and whether performance is getting better or worse.
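You can also summarize the per-window statistics directly from the sink collection instead of (or in addition to) the chart. This sketch assumes PyMongo and that the ksqlDB columns arrive in MongoDB as uppercase field names (`AVG`, `MAX`, `COUNT`), matching the `STATSWINDOW2` table definition; adjust the names if your converters produce different keys.
``` python
# Rough sketch: roll up the windowed stats written by the sink connector.
# Field names are assumed to match the STATSWINDOW2 columns (AVG, MAX, COUNT).
from pymongo import MongoClient

client = MongoClient("MONGODB CONNECTION STRING GOES HERE")  # placeholder

pipeline = [
    {"$group": {
        "_id": None,
        "windows": {"$sum": 1},
        "documents": {"$sum": "$COUNT"},
        "avgDiffMs": {"$avg": "$AVG"},
        "worstDiffMs": {"$max": "$MAX"},
    }}
]

for summary in client["kafka"]["sourceStats"].aggregate(pipeline):
    print(summary)
```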
If you have any opinions on features or functionality enhancements that you would like to see with respect to monitoring performance or monitoring the MongoDB Connector for Apache Kafka in general, please add a comment to KAFKA-64.
Have any questions? Check out our Connectors and Integrations MongoDB community forum. | md | {
"tags": [
"Connectors"
],
"pageDescription": "Learn about measuring the performance of the MongoDB Connector for Apache Kafka in both a source and sink configuration.",
"contentType": "Article"
} | Measuring MongoDB Kafka Connector Performance | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/getting-started-realm-sdk-unity | created | # Getting Started with the Realm SDK for Unity
Did you know that MongoDB has a Realm
SDK for the
Unity game development framework that makes
working with game data effortless? The Realm SDK is currently an alpha
release, but you can already start using it to build persistence into
your cross platform gaming projects.
A few weeks ago I streamed about and
wrote about creating an infinite
runner
type game using Unity and the Realm SDK for Unity. Realm was used for
storing the score between scenes and sessions within the game.
There were a lot of deep topics in the infinite runner (think Temple Run
or Subway Surfer) example, so I wanted to take a step back. In this
tutorial, we're going to spend less time making an interesting game and
more time including and using Realm within a Unity
project.
To get an idea of what we're going to accomplish, take a look at the
following animated image:
In the above example, we have three rectangles, each of a different
color. When clicking our mouse on a rectangle, the numeric values
increase. If the game were to be closed and then opened again, the
numeric values would be retained.
## The Requirements
There aren't many requirements to using Realm with Unity, and once Realm
becomes production ready, those requirements will be even fewer. However,
for now you need the following:
- Unity 2020.2.4f1+
- Realm SDK for
Unity 10.1.1+
For now, the Realm SDK for Unity needs to be downloaded and imported
manually into a project. This will change when the SDK can be added
through the Unity Asset Store.
When you download Unity, you'll likely be using a different and
potentially older version by default. Within the Unity Hub software, pay
attention to the version you're using and either upgrade or downgrade as
necessary.
## Adding the Realm SDK for Unity to a Project
From GitHub, download the
latest Realm SDK for
Unity tarball. If given the option, choose the **bundle** file. For
example, **realm.unity.bundle-10.1.1.tgz** is what I'm using.
Create a new Unity project and use the 2D template when prompted.
Within a Unity project, choose **Window -> Package Manager** and then
click the plus icon to add a tarball.
The process of importing the tarball should only take a minute or two.
Once it has been added, it is ready for use within the game. Do note
that adding the tarball to your project only adds a reference based on
its current location on your disk. Moving or removing the tarball on
your filesystem will break the link.
## Designing a Data Model for the Realm Objects Within the Game
Before we can start persisting data to Realm and then accessing it
later, we need to define a model of what our data will look like. Since
Realm is an object-oriented database, we're going to define a class with
appropriate member variables and methods. This will represent what the
data looks like when persisted.
To align with the basic example that we're interested in, we essentially
want to store various score information.
Within the Unity project, create a new script file titled
**GameModel.cs** with the following C# code:
``` csharp
using Realms;
public class GameModel : RealmObject {
[PrimaryKey]
public string gamerTag { get; set; }
public int redScore { get; set; }
public int greenScore { get; set; }
public int whiteScore { get; set; }
public GameModel() { }
public GameModel(string gamerTag, int redScore, int greenScore, int whiteScore) {
this.gamerTag = gamerTag;
this.redScore = redScore;
this.greenScore = greenScore;
this.whiteScore = whiteScore;
}
}
```
The `redScore`, `greenScore`, and `whiteScore` variables will keep the
score for each square on the screen. Since a game is usually tied to a
person or a computer, we need to define a primary
key
for the associated data. The Realm primary key uniquely identifies an
object within a Realm. For this example, we're using a `gamerTag` variable
which represents a person or player.
To get an idea of what our model might look like as JSON, take the
following:
``` json
{
"gamerTag": "poketrainernic",
"redScore": 0,
"greenScore": 0,
"whiteScore": 0
}
```
For this example, and many Realm with Unity examples, we won't ever have
to worry about what it looks like as JSON since everything will be done
locally as objects.
With the `RealmObject` class configured, we can make use of it inside
the game.
## Interacting with Persisted Realm Data in the Game
The `RealmObject` only represents the storage model for our data. There
are extra steps when it comes to interacting with the data that is
modeled using it.
Within the Unity project, create a **GameController.cs** file with the
following C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Realms;
using UnityEngine.UI;
public class GameController : MonoBehaviour {
private Realm _realm;
private GameModel _gameModel;
public Text scoreText;
void OnEnable() {
_realm = Realm.GetInstance();
_gameModel = _realm.Find<GameModel>("poketrainernic");
if(_gameModel == null) {
_realm.Write(() => {
_gameModel = _realm.Add(new GameModel("poketrainernic", 0, 0, 0));
});
}
}
void OnDisable() {
_realm.Dispose();
}
public void SetButtonScore(string color, int inc) {
switch(color) {
case "RedSquare":
_realm.Write(() => {
_gameModel.redScore++;
});
break;
case "GreenSquare":
_realm.Write(() => {
_gameModel.greenScore++;
});
break;
case "WhiteSquare":
_realm.Write(() => {
_gameModel.whiteScore++;
});
break;
default:
Debug.Log("Color Not Found");
break;
}
}
void Update() {
scoreText.text = "Red: " + _gameModel.redScore + "\n" + "Green: " + _gameModel.greenScore + "\n" + "White: " + _gameModel.whiteScore;
}
}
```
In the above code, we have a few things going on, all related to
interacting with Realm.
In the `OnEnable` method, we are getting an instance of our Realm
database and we are finding an object based on our `GameModel` class.
The primary key is the `gamerTag` string variable, so we are providing a
value to query on. If the query returns a null value, it means that no
data exists based on the primary key used. In that circumstance, we
create a `Write` block and add a new object based on the constructor
within the `GameModel` class. By the end of the query or creation of our
data, we'll have a `_gameModel` object that we can work with in our
game.
We're hard coding the "poketrainernic" value because we don't plan to
use any kind of authentication in this example. Everyone who plays this
game is considered the "poketrainernic" player.
The `OnDisable` method is for cleanup. It is important to dispose of the
Realm instance when the game ends to prevent any unexpected behavior.
For this particular game example, most of our logic happens in the
`SetButtonScore` method. In the `SetButtonScore` method, we are checking
to see which color should be incremented and then we are doing so. The
amazing thing is that changing the `_gameModel` object changes what is
persisted, as long as the changes happen in a `Write` block. There's no need
to write queries or do anything out of the ordinary beyond just working
with your objects as you normally would.
While we don't have a `Text` object configured yet within our game, the
`Update` method will update the text on the screen every frame. If one
of the values in our Realm instance changes, it will be reflected on the
screen.
## Adding Basic Logic to the Game Objects Within the Scene
At this point, we have a `RealmObject` data model for our persisted data
and we have a class for interacting with that data. We don't have
anything to tie it together visually like you'd expect in a game. In
other words, we need to be able to click on a colored sprite and have it
persist something new.
Within the Unity project, create a **Button.cs** file with the following
C# code:
``` csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Button : MonoBehaviour {
public GameController game;
void OnMouseDown() {
game.SetButtonScore(gameObject.name, 1);
}
}
```
The `game` variable in the above code will eventually be from a game
object within the scene and configured through a series of dragging and
dropping, but we're not there yet. As of right now, we're focusing on
the code, and less on game objects.
The `OnMouseDown` method is where the magic happens for this script. The
game object that this script will eventually be attached to will have a
collider
which gives us access to the `OnMouseDown` method. When the game object
is clicked, we use the `SetButtonScore` method to send the name of the
current game object as well as a value to increase the score by.
Remember, inside the `SetButtonScore` method we are expecting a string
value for our switch statement. In the next few steps, naming the game
objects appropriately is critical based on our already applied logic.
If you're not sure where `gameObject` is coming from, it is inherited as
part of the `MonoBehaviour` class, and it represents the current game
object to which the script is currently attached.
## Game Objects, Colliders, and the Gaming Wrap-Up
The Unity project has a bunch of short scripts sitting out in the ether.
It's time to add game objects to the scene so we can attach the scripts
and do something interesting.
By the time we're done, our Unity editor should look something like the
following:
We need to add a few game objects, add the scripts to those game
objects, then reference a few other game objects. Yes, it sounds
complicated, but it really isn't!
Within the Unity editor, add the following game objects to the scene.
We'll walk through adding them and some of the specifics next:
- GameController
- RedSquare
- GreenSquare
- WhiteSquare
- Canvas
- Scores
- EventSystem
Now the `Scores` game object is for our text. You can add a `Text` game
object from the menu and it will add the `Canvas` and `EventSystem` for
you. You don't need to add a `Canvas` or `EventSystem` manually if Unity
created one for you. Just make sure you name and position the game
object for scores appropriately.
If the `Scores` text is too small or not visible, make sure the
rectangular boundaries for the text are large enough.
The `RedSquare`, `GreenSquare`, and `WhiteSquare` game objects are
`Square` sprites, each with a different color. These sprites can be
added using the **GameObject -> 2D Object -> Sprites -> Square** menu
item. You'll need to rename them to the desired name after adding them
to the scene. Finally, the `GameController` is nothing more than an
empty game object.
Drag the **Button.cs** script to the inspector panel of each of the
colored square sprites. The sprites depend on being able to access the
`SetButtonScore` method, so the `GameController` game object must be
dragged onto the **Game** field within the script area on each of
the squares as well. Drag the **GameController.cs** script to the
`GameController` game object. Next, drag the `Scores` game object into
the scripts section of the `GameController` game object so that the
`GameController` game object can control the score text.
We just did a lot of drag and drop on the game objects within the scene.
We're not quite done yet though. In order to use the `OnMouseDown`
method for our squares, they need to have a collider. Make sure to add a
**Box Collider 2D** to each of the squares. The **Box Collider 2D** is a
component that can be added to the game objects through the inspector.
You should be able to run the game with success as of now! You can do
this by either creating and running a build from the **File** menu, or
by using the play button within your editor to preview the game.
## Conclusion
You just saw how to get started with the Realm SDK for
Unity. I wrote another example of using Realm with
Unity, but the game was a little more exciting, which added more
complexity. Once you have a firm understanding of how Realm works in
Unity, it is worth checking out Build an Infinite Runner Game with
Unity and the Realm Unity
SDK.
As previously mentioned, the Realm SDK for Unity is currently an alpha
release. Expect that there will be problems at some point, so it
probably isn't best to use it in your production ready game. However,
you should be able to get comfortable including it.
For more examples on using the Realm SDK, check out the C#
documentation.
Questions? Comments? We'd love to connect with you. Join the
conversation on the MongoDB Community
Forums.
| md | {
"tags": [
"Realm",
"C#",
"Unity"
],
"pageDescription": "Learn how to get started with the Realm SDK for Unity for data persistance in your game.",
"contentType": "Tutorial"
} | Getting Started with the Realm SDK for Unity | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/under-used-features | created | # Three Underused MongoDB Features
As a Developer Advocate for MongoDB, I have quite a few conversations with developers. Many of these developers have never used MongoDB, and so the conversation is often around what kind of data MongoDB is particularly good for. (Spoiler: Nearly *all* of them! MongoDB is a general purpose database that just happens to be centered around documents instead of tables.)
But there are lots of developers out there who already use MongoDB every day, and in those situations, my job is to make sure they know how to use MongoDB effectively. I make sure, first and foremost, that these developers know about MongoDB's Aggregation Framework, which is, in my opinion, MongoDB's most powerful feature. It is relatively underused. If you're not using the Aggregation Framework in your projects, then either your project is very simple, or you could probably be doing things more efficiently by adding some aggregation pipelines.
But this article is not about the Aggregation Framework! This article is about three *other* features of MongoDB that deserve to be better known: TTL Indexes, Capped Collections, and Change Streams.
## TTL Indexes
One of the great things about MongoDB is that it's so *easy* to store data in it, without having to go through complex steps to map your data to the model expected by your database's schema expectations.
Because of this, it's quite common to use MongoDB as a cache as well as a database, to store things like session information, authentication data for third-party services, and other things that are relatively short-lived.
A common idiom is to store an expiry date in the document, and then when retrieving the document, to compare the expiry date to the current time and only use it if it's still valid. In some cases, as with OAuth access tokens, if the token has expired, a new one can be obtained from the OAuth provider and the document can be updated.
``` python
coll.insert_one(
{
"name": "Professor Bagura",
# This document will disappear before 2022:
"expires_at": datetime.fromisoformat("2021-12-31 23:59:59"),
}
)
# Retrieve a valid document by filtering on docs where `expires_at` is in the future:
if (doc := coll.find_one({"expires_at": {"$gt": datetime.now()}})) is None:
# If no valid documents exist, create one (and probably store it):
doc = create_document()
# Code to use the retrieved or created document goes here.
print(doc)
```
Another common idiom also involves storing an expiry date in the document, and then running code periodically that either deletes or refreshes expired documents, depending on what's correct for the use-case.
``` python
while True:
# Delete all documents where `expires_at` is in the past:
coll.delete_many({"expires_at": {"$lt": datetime.now()}})
time.sleep(60)
```
An alternative way to manage data that has an expiry, either absolute or relative to the time the document is stored, is to use a TTL index.
To use the definition from the documentation: "TTL indexes are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific clock time." TTL indexes are why I like to think of MongoDB as
a platform for building data applications, not just a database. If you apply a TTL index to your documents' expiry field, MongoDB will automatically remove the document for you! This means that you don't need to write your own code for removing expired documents, and you don't need to remember to always filter documents based on whether their expiry is earlier than the current time. You also don't need to calculate the absolute expiry time if all you have is the number of seconds a document remains valid!
Let me show you how this works. The code below demonstrates how to create an index on the `created_at` field. Because `expireAfterSeconds` is set to 3600 (which is one hour), any documents in the collection with `created_at` set to a date will be deleted one hour after that point in time.
``` python
coll = db.get_collection("ttl_collection")
# Creates a new index on the `created_at`.
# The document will be deleted when current time reaches one hour (3600 seconds)
# after the date stored in `created_at`:
coll.create_index(("expires_at", 1)], expireAfterSeconds=3600)
coll.insert_one(
{
"name": "Professor Bagura",
"created_at": datetime.now(), # Document will disappear after one hour.
}
)
```
Another common idiom is to explicitly set the expiry time, when the document should be deleted. This is done by setting `expireAfterSeconds` to 0:
``` python
coll = db.get_collection("expiry_collection")
# Creates a new index on the `expires_at`.
# The document will be deleted when
# the current time reaches the date stored in `expires_at`:
coll.create_index([("expires_at", 1)], expireAfterSeconds=0)
coll.insert_one(
{
"name": "Professor Bagura",
# This document will disappear before 2022:
"expires_at": datetime.fromisoformat("2021-12-31 23:59:59"),
}
)
```
Bear in mind that the background process that removes expired documents only runs every 60 seconds, and on a cluster under heavy load, maybe less frequently than that. So, if you're working with documents with very short-lived expiry durations, then this feature probably isn't for you. An alternative is to continue to filter by the expiry in your code, to benefit from finer-grained control over document validity, but allow the TTL expiry service to maintain the collection over time, removing documents that have very obviously expired.
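As a minimal sketch of that combined approach (the collection name here is just illustrative), you can keep the precise expiry filter in your queries while a TTL index quietly removes long-expired documents in the background:
``` python
# Hedged sketch: precise expiry filtering in queries, with a TTL index handling
# eventual cleanup. "session_collection" is a placeholder name.
from datetime import datetime

coll = db.get_collection("session_collection")
# The TTL index removes documents once `expires_at` has passed (checked roughly every 60 seconds):
coll.create_index([("expires_at", 1)], expireAfterSeconds=0)

# Queries still filter precisely, so an expired-but-not-yet-removed document is never used:
doc = coll.find_one({"expires_at": {"$gt": datetime.now()}})
```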
If you're working with data that has a lifespan, then TTL indexes are a great feature for maintaining the documents in a collection.
## Capped Collections
Capped collections are an interesting feature of MongoDB, useful if you wish to efficiently store a ring buffer of documents.
A capped collection has a maximum size in bytes and optionally a maximum number of documents. (The lower of the two values is used at any time, so if you want to reach the maximum number of documents, make sure you set the byte size large enough to handle the number of documents you wish to store.) Documents are stored in insertion order, without the need for a specific index to maintain that order, and so can handle higher throughput than an indexed collection. When the collection reaches either the set byte `size` or the `max` number of documents, the oldest documents in the collection are purged.
Capped collections can be useful for buffering recent operations (application-level operations - MongoDB's oplog is a different kind of thing), and these can be queried when an error state occurs, in order to have a log of recent operations leading up to the error state.
Or, if you just wish to efficiently store a fixed number of documents in insertion order, then capped collections are the way to go.
Capped collections are created with the [createCollection method, by setting the `capped`, `size`, and optionally the `max` parameters:
``` python
# Create a collection with a large size value that will store a max of 3 docs:
coll = db.create_collection("capped", capped=True, size=1000000, max=3)
# Insert 3 docs:
coll.insert_many({"name": "Chico"}, {"name": "Harpo"}, {"name": "Groucho"}])
# Insert a fourth doc! This will evict the oldest document to make space (Zeppo):
coll.insert_one({"name": "Zeppo"})
# Print out the docs in the collection:
for doc in coll.find():
print(doc)
# {'_id': ObjectId('600e8fcf36b07f77b6bc8ecf'), 'name': 'Harpo'}
# {'_id': ObjectId('600e8fcf36b07f77b6bc8ed0'), 'name': 'Groucho'}
# {'_id': ObjectId('600e8fcf36b07f77b6bc8ed1'), 'name': 'Zeppo'}
```
If you want a rough idea of how big your BSON documents are in bytes, for calculating the value of `size`, you can either use the bsonSize method in the `mongo` shell (or your driver's equivalent) on a document constructed in code, or you can use MongoDB 4.4's new bsonSize aggregation operator on documents already stored in MongoDB.
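For example, here is a rough sketch (the collection name is illustrative) of using the `$bsonSize` aggregation operator on MongoDB 4.4+ to measure document sizes in an existing collection, which can help you choose a sensible `size` value:
``` python
# Estimate average and maximum BSON document sizes (in bytes) in an existing collection.
# Requires MongoDB 4.4+ for $bsonSize; "my_collection" is just a placeholder name.
sizes = db.my_collection.aggregate([
    {"$group": {
        "_id": None,
        "avgBytes": {"$avg": {"$bsonSize": "$$ROOT"}},
        "maxBytes": {"$max": {"$bsonSize": "$$ROOT"}},
    }}
])
for result in sizes:
    print(result)
```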
Note that with the improved efficiency that comes with capped collections, there are also some limitations. It is not possible to explicitly delete a document from a capped collection, although documents will eventually be replaced by newly inserted documents. Updates in a capped collection also cannot change a document's size. You can't shard a capped collection. There are some other limitations around replacing and updating documents and transactions. Read the documentation for more details.
It's worth noting that this pattern is similar in feel to the Bucket Pattern, which allows you to store a capped number of items in an array, and automatically creates a new document for storing subsequent values when that cap is reached.
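If you're curious what that looks like in practice, here is a loose sketch of the Bucket Pattern idea; the collection and field names are made up for illustration:
``` python
# Bucket Pattern sketch: append readings to the current bucket until it holds
# `bucket_size` items; the upsert then starts a new bucket automatically.
# The collection ("buckets") and fields ("sensor_id", "readings", "count") are illustrative.
from datetime import datetime

bucket_size = 3
reading = {"time": datetime.now(), "value": 21.5}

db.buckets.update_one(
    {"sensor_id": 42, "count": {"$lt": bucket_size}},  # only match a bucket with spare capacity
    {"$push": {"readings": reading}, "$inc": {"count": 1}},
    upsert=True,  # no bucket with room? create a fresh one
)
```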
## Change Streams and the `watch` method
And finally, the biggest lesser-known feature of them all! Change streams are a live stream of changes to your database. The `watch` method, implemented in most MongoDB drivers, streams the changes made to a collection,
a database, or even your entire MongoDB replica set or cluster, to your application in real time. I'm always surprised by how many people have not heard of it, given that it's one of the first MongoDB features that really excited me. Perhaps it's just luck that I stumbled across it earlier.
In Python, if I wanted to print all of the changes to a collection as they're made, the code would look a bit like this:
``` python
with my_database.my_collection.watch() as stream:
for change in stream:
print(change)
```
In this case, `watch` returns an iterator which blocks until a change is made to the collection, at which point it will yield a BSON document describing the change that was made.
You can also filter the types of events that will be sent to the change stream, so if you're only interested in insertions or deletions, then those are the only events you'll receive.
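As a quick illustration (reusing the hypothetical `my_database.my_collection` from above), you can pass an aggregation pipeline to `watch` so that only the event types you care about are delivered:
``` python
# Only receive insert and delete events from the change stream.
pipeline = [{"$match": {"operationType": {"$in": ["insert", "delete"]}}}]

with my_database.my_collection.watch(pipeline) as stream:
    for change in stream:
        # documentKey holds the _id of the affected document for inserts and deletes.
        print(change["operationType"], change["documentKey"])
```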
I've used change streams (which is what the `watch` method returns) to implement a chat app, where changes to a collection which represented a conversation were streamed to the browser using WebSockets.
But fundamentally, change streams allow you to implement the equivalent of a database trigger, but in your favourite programming language, using all the libraries you prefer, running on the servers you specify. It's a super-powerful feature and deserves to be better known.
## Further Resources
If you don't already use the Aggregation Framework, definitely check out the documentation on that. It'll blow your mind (in a good way)!
Further documentation on the topics discussed here:
- TTL Index Documentation,
- Capped Collection Documentation
- Change Streams
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"MongoDB",
"Python"
],
"pageDescription": "Go beyond CRUD with these 3 special features of MongoDB!",
"contentType": "Article"
} | Three Underused MongoDB Features | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-swiftui-combine-first-app | created | # Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine
I'm relatively new to building iOS apps (a little over a year's experience), and so I prefer using the latest technologies that make me a more productive developer. That means my preferred app stack looks like this:
>
>
>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.
>
>
##### Technologies Used by the App
| In 🔥 | Out ❄️ |
|-----------------------------------|-------------------------------------|
| Swift | Objective C |
| SwiftUI | UIKit |
| Combine | RxSwift |
| Realm | Core Data |
| MongoDB Realm Sync (where needed) | Home-baked cross-platform data sync |
This article presents a simple task management app that I built on that stack. To continue my theme on being productive (lazy), I've borrowed heavily (stolen) from MongoDB's official iOS Swift tutorial:
- I've refactored the original front end, adding Combine for event management, and replacing the UIKit ViewControllers with Swift views.
- The back end Realm app is entirely unchanged. Note that once you've stood up this back end, then this app can share its data with the equivalent Android, React/JavaScript, and Node.js apps with no changes.
I'm going to focus here on the iOS app. Check the official tutorial if you want to understand how the back end works.
You can download all of the code for the front end app from the GitHub repo.
## Prerequisites
I'm lucky that I don't have to support an existing customer base that's running on old versions of iOS, and so I can take advantage of the latest language, operating system, and SDK features:
- A Mac (sorry Windows and Linux users)
- iOS14+ / XCode 12.2+
- It would be pretty easy to port the app back to iOS13, but iOS14 makes SwiftUI more of a first-class citizen (though there are still times when a more complex app would need to break out into UIKit code—e.g., if you wanted to access the device's camera).
- Apple introduced SwiftUI and Combine in iOS13, and so you'd be better sticking with the original tutorial if you need to support iOS12 or earlier.
- Realm Cocoa SDK 10.1+
- Realm Cocoa 10 adds support for Combine and the ability to "Freeze" Realm Objects, making it simpler and safer to embed them directly within SwiftUI views.
- CocoaPods 1.10+
## Running the App for Yourself
I always prefer to build and run an app before being presented with code snippets; these are the steps:
1. If you don't already have Xcode 12 installed, install it through the Apple App Store.
2. Set up your back end Realm app. Make a note of the ID:
3. Download the iOS app, install dependencies, and open the workspace in Xcode:
``` bash
git clone https://github.com/ClusterDB/task-tracker-swiftui.git
cd task-tracker-swiftui
pod install --repo-update
open task-tracker-swiftui.xcworkspace
```
4. Within Xcode, edit `task-tracker-swiftui/task_tracker_swiftuiApp.swift` and set the Realm application ID to the value you noted in Step 2:
``` swift
let app = App(id: "tasktracker-xxxxx")
```
5. In Xcode, select an iOS simulator:
Select an iOS simulator in Xcode
6. Build and run the app using `⌘-R`.
7. Go ahead and play with the app:
Demo of the app in an iOS simulator
## Key Pieces of Code
Usually, when people start explaining SwiftUI, they begin with, "You know how you do X with UIKit? With SwiftUI, you do Y instead." But, I'm not going to assume that you're an experienced UIKit developer.
### The Root of a SwiftUI App
If you built and ran the app, you've already seen the "root" of the app in `swiftui_realmApp.swift`:
``` swift
import SwiftUI
import RealmSwift
let app = App(id: "tasktracker-xxxxx") // TODO: Set the Realm application ID
@main
struct swiftui_realmApp: SwiftUI.App {
@StateObject var state = AppState()
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(state)
}
}
}
```
`app` is the Realm application that will be used by our iOS app to store and retrieve data stored in Realm.
SwiftUI works with views, typically embedding many views within other views (a recent iOS app I worked on has over 500 views), and you always start with a top-level view for the app—in this case, `ContentView`.
Individual views contain their own state (e.g., the details of the task that's currently being edited, or whether a pop-up sheet should be displayed), but we store any app-wide state in the `state` variable. `@StateObject` is a SwiftUI annotation to indicate that a view should be refreshed whenever particular attributes within an object change. We pass `state` to `ContentView` as an `environmentObject` so that any of the app's views can access it.
### Application-Wide State Management
Like other declarative, state-driven frameworks (e.g., React or Vue.js), components/views can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the `AppState` class:
``` swift
class AppState: ObservableObject {
var loginPublisher = PassthroughSubject<RealmSwift.User, Error>()
var logoutPublisher = PassthroughSubject<Void, Error>()
let userRealmPublisher = PassthroughSubject<Realm, Error>()
var cancellables = Set<AnyCancellable>()
@Published var shouldIndicateActivity = false
@Published var error: String?
var user: User?
}
```
We use `shouldIndicateActivity` to control whether a "working on it" view should be displayed while the app is busy. `error` is set whenever we want to display an error message. Both of these variables are annotated with `@Published` to indicate that referencing views should be refreshed when their values change.
`user` represents the Realm user that's currently logged into the app.
The app uses the Realm SDK to interact with the back end Realm application to perform actions such as logging into Realm. Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. Instead, we use "Combine" publishers and subscribers to handle these events. `loginPublisher`, `logoutPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening a Realm for a user.
As an example, when an event is sent to `loginPublisher` to indicate that the login process has completed, Combine will run this pipeline:
``` swift
init() {
loginPublisher
.receive(on: DispatchQueue.main)
.flatMap { user -> RealmPublishers.AsyncOpenPublisher in
self.shouldIndicateActivity = true
var realmConfig = user.configuration(partitionValue: "user=\(user.id)")
realmConfig.objectTypes = [User.self, Project.self]
return Realm.asyncOpen(configuration: realmConfig)
}
.receive(on: DispatchQueue.main)
.map {
self.shouldIndicateActivity = false
return $0
}
.subscribe(userRealmPublisher)
.store(in: &self.cancellables)
}
```
The pipeline receives the freshly-logged-in Realm user.
The `receive(on: DispatchQueue.main)` stage specifies that the next stage in the pipeline should run in the main thread (because it will update the UI).
The Realm user is passed to the `flatMap` stage which:
- Updates the UI to show that the app is busy.
- Opens a Realm for this user (requesting Objects where the partition matches the string `"user=\(user.id)"`).
- Passes a publisher for the opening of the Realm to the next stage.
The `.subscribe` stage subscribes the `userRealmPublisher` to outputs from the publisher it receives from the previous stage. In that way, a pipeline associated with the `userRealmPublisher` publisher can react to an event indicating when the Realm has been opened.
The `.store` stage stores the publisher in the `cancellables` array so that it isn't removed when the `init()` function completes.
### The Object Model
You'll find the Realm object model in the `Model` group in the Xcode workspace. These are the objects used in the iOS app and synced to MongoDB Atlas in the back end.
The `User` class represents application users. It inherits from `Object` which is a class in the Realm SDK and allows instances of the class to be stored in Realm:
``` swift
import RealmSwift
class User: Object {
@Persisted(primaryKey: true) var _id: String = UUID().uuidString
@Persisted var _partition: String = ""
@Persisted var name: String = ""
@Persisted var memberOf: RealmSwift.List<Project>
}
```
Note that instances of classes that inherit from `Object` can be used as `@ObservedObjects` without inheriting from `ObservableObject` or annotating attributes with `@Published`.
Summary of the attributes:
- `_id` uniquely identifies a `User` object. We set it to be the Realm primary key.
- `_partition` is used as the partition key, which can be used by the app to filter which `User` `Objects` it wants to access.
- `name` is the username (email address).
- `memberOf` is a Realm List of projects that the user can access. (It always contains its own project, but it may also include other users' projects if those users have added this user to their teams.)
The elements in `memberOf` are instances of the `Project` class. `Project` inherits from `EmbeddedObject` which means that instances of `Project` can be embedded within other Realm `Objects`:
``` swift
import RealmSwift
class Project: EmbeddedObject {
@Persisted var name: String?
@Persisted var partition: String?
convenience init(partition: String, name: String) {
self.init()
self.partition = partition
self.name = name
}
}
```
Summary of the attributes:
- `name` is the project's name.
- `partition` is a string taking the form `"project=project-name"` where `project-name` is the `_id` of the project's owner.
Individual tasks are represented by the `Task` class:
``` swift
import RealmSwift
enum TaskStatus: String {
case Open
case InProgress
case Complete
}
class Task: Object {
@Persisted(primaryKey: true) var _id: ObjectId = ObjectId.generate()
@Persisted var _partition: String = ""
@Persisted var name: String = ""
@Persisted var owner: String?
@Persisted var status: String = ""
var statusEnum: TaskStatus {
get {
return TaskStatus(rawValue: status) ?? .Open
}
set {
status = newValue.rawValue
}
}
convenience init(partition: String, name: String) {
self.init()
self._partition = partition
self.name = name
}
}
```
Summary of the attributes:
- `_id` uniquely identifies a `Task` object. We set it to be the Realm primary key.
- `_partition` is used as the partition key, which can be used by the app to filter which `Task` `Objects` it wants to access. It takes the form `"project=project-id"`.
- `name` is the task's title.
- `status` takes on the value "Open", "InProgress", or "Complete".
### User Authentication
We want app users to only be able to access the tasks from their own project (or the projects of other users who have added them to their team). Our users need to see their tasks when they restart the app or run it on a different device. Realm's username/password authentication is a simple way to enable this.
Recall that our top-level SwiftUI view is `ContentView` (`task-tracker-swiftui/Views/ContentView.swift`). `ContentView` selects whether to show the `LoginView` or `ProjectsView` view based on whether a user is already logged into Realm:
``` swift
struct ContentView: View {
@EnvironmentObject var state: AppState
var body: some View {
NavigationView {
ZStack {
VStack {
if state.loggedIn && state.user != nil {
if state.user != nil {
ProjectsView()
}
} else {
LoginView()
}
Spacer()
if let error = state.error {
Text("Error: \(error)")
.foregroundColor(Color.red)
}
}
if state.shouldIndicateActivity {
ProgressView("Working With Realm")
}
}
.navigationBarItems(leading: state.loggedIn ? LogoutButton() : nil)
}
}
}
```
Note that `ContentView` also renders the `state.error` message and the `ProgressView` views. These will kick in whenever a sub-view updates state.
`LoginView` (`task-tracker-swiftui/Views/User Accounts/LoginView.swift`) presents a simple form for existing app users to log in:
When the user taps "Log In", the `login` function is executed:
``` swift
private func login(username: String, password: String) {
if username.isEmpty || password.isEmpty {
return
}
self.state.error = nil
state.shouldIndicateActivity = true
app.login(credentials: .emailPassword(email: username, password: password))
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
state.loginPublisher.send($0)
})
.store(in: &state.cancellables)
}
```
`login` calls `app.login` (`app` is the Realm app that we create when the app starts) which returns a Combine publisher. The results from the publisher are passed to a Combine pipeline which updates the UI and sends the resulting Realm user to `loginPublisher`, which can then complete the process.
If it's a first-time user, then they tap "Register new user" to be taken to `SignupView` which registers a new user with Realm (`app.emailPasswordAuth.registerUser`) before popping back to `loginView` (`self.presentationMode.wrappedValue.dismiss()`):
``` swift
private func signup(username: String, password: String) {
if username.isEmpty || password.isEmpty {
return
}
self.state.error = nil
state.shouldIndicateActivity = true
app.emailPasswordAuth.registerUser(email: username, password: password)
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: {
state.shouldIndicateActivity = false
switch $0 {
case .finished:
break
case .failure(let error):
self.state.error = error.localizedDescription
}
}, receiveValue: {
self.state.error = nil
self.presentationMode.wrappedValue.dismiss()
})
.store(in: &state.cancellables)
}
```
To complete the user lifecycle, `LogoutButton` logs them out from Realm and then sends an event to `logoutPublisher`:
``` swift
struct LogoutButton: View {
@EnvironmentObject var state: AppState
var body: some View {
Button("Log Out") {
state.shouldIndicateActivity = true
app.currentUser?.logOut()
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: { _ in
}, receiveValue: {
state.shouldIndicateActivity = false
state.logoutPublisher.send($0)
})
.store(in: &state.cancellables)
}
.disabled(state.shouldIndicateActivity)
}
}
```
### Projects View
After logging in, the user is shown `ProjectsView` (`task-tracker-swiftui/Views/Projects & Tasks/ProjectsView.swift`) which displays a list of projects that they're a member of:
``` swift
var body: some View {
VStack(spacing: Dimensions.padding) {
if let projects = state.user?.memberOf {
ForEach(projects, id: \.self) { project in
HStack {
LabeledButton(label: project.partition ?? "No partition",
text: project.name ?? "No project name") {
showTasks(project)
}
}
}
}
Spacer()
if let tasksRealm = tasksRealm {
NavigationLink( destination: TasksView(realm: tasksRealm, projectName: projectName),
isActive: $showingTasks) {
EmptyView() }
}
}
.navigationBarTitle("Projects", displayMode: .inline)
.toolbar {
ToolbarItem(placement: .bottomBar) {
Button(action: { self.showingSheet = true }) {
ManageTeamButton()
}
}
}
.sheet(isPresented: $showingSheet) { TeamsView() }
.padding(.all, Dimensions.padding)
}
```
Recall that `state.user` is assigned the data retrieved from Realm when the pipeline associated with `userRealmPublisher` processes the event forwarded from the login pipeline:
``` swift
userRealmPublisher
.sink(receiveCompletion: { result in
if case let .failure(error) = result {
self.error = "Failed to log in and open realm: \(error.localizedDescription)"
}
}, receiveValue: { realm in
self.user = realm.objects(User.self).first
})
.store(in: &cancellables)
```
Each project in the list is a button that invokes `showTasks(project)`:
``` swift
func showTasks(_ project: Project) {
state.shouldIndicateActivity = true
let realmConfig = app.currentUser?.configuration(partitionValue: project.partition ?? "")
guard var config = realmConfig else {
state.error = "Cannot get Realm config from current user"
return
}
config.objectTypes = [Task.self]
Realm.asyncOpen(configuration: config)
.receive(on: DispatchQueue.main)
.sink(receiveCompletion: { result in
state.shouldIndicateActivity = false
if case let .failure(error) = result {
self.state.error = "Failed to open realm: \(error.localizedDescription)"
}
}, receiveValue: { realm in
self.tasksRealm = realm
self.projectName = project.name ?? ""
self.showingTasks = true
state.shouldIndicateActivity = false
})
.store(in: &self.state.cancellables)
}
```
`showTasks` opens a new Realm and then sets up the variables which are passed to `TasksView` in body (note that the `NavigationLink` is automatically followed when `showingTasks` is set to `true`):
``` swift
NavigationLink(
destination: TasksView(realm: tasksRealm, projectName: projectName),
isActive: $showingTasks) {
EmptyView()
}
```
### Tasks View
`TasksView` (`task-tracker-swiftui/Views/Projects & Tasks/TasksView.swift`) presents a list of the tasks within the selected project:
``` swift
var body: some View {
VStack {
if let tasks = tasks {
List {
ForEach(tasks.freeze()) { task in
if let tasksRealm = tasks.realm {
TaskView(task: (tasksRealm.resolve(ThreadSafeReference(to: task)))!)
}
}
.onDelete(perform: deleteTask)
}
} else {
Text("Loading...")
}
if let lastUpdate = lastUpdate {
LastUpdate(date: lastUpdate)
}
}
.navigationBarTitle("Tasks in \(projectName)", displayMode: .inline)
.navigationBarItems(trailing: Button(action: { self.showingSheet = true }) {
Image(systemName: "plus.circle.fill")
.renderingMode(.original)
})
.sheet(isPresented: $showingSheet) { AddTaskView(realm: realm) }
.onAppear(perform: loadData)
.onDisappear(perform: stopWatching)
}
```
Tasks can be removed from the projects by other instances of the application or directly from Atlas in the back end. SwiftUI tends to crash if an item is removed from a list which is bound to the UI, and so we use Realm's "freeze" feature to isolate the UI from those changes:
``` swift
ForEach(tasks.freeze()) { task in ...
```
However, `TaskView` can make changes to a task, and so we need to "unfreeze" `Task` `Objects` before passing them in:
``` swift
TaskView(task: (tasksRealm.resolve(ThreadSafeReference(to: task)))!)
```
When the view loads, we must fetch the latest list of tasks in the project. We want to refresh the view in the UI whenever the app observes a change in the list of tasks. The `loadData` function fetches the initial list, and then observes the Realm and updates the `lastUpdate` field on any changes (which triggers a view refresh):
``` swift
func loadData() {
tasks = realm.objects(Task.self).sorted(byKeyPath: "_id")
realmNotificationToken = realm.observe { _, _ in
lastUpdate = Date()
}
}
```
To conserve resources, we release the refresh token when leaving this view:
``` swift
func stopWatching() {
if let token = realmNotificationToken {
token.invalidate()
}
}
```
We delete a task when the user swipes it to the left:
``` swift
func deleteTask(at offsets: IndexSet) {
do {
try realm.write {
guard let tasks = tasks else {
return
}
realm.delete(tasks[offsets.first!])
}
} catch {
state.error = "Unable to open Realm write transaction"
}
}
```
### Task View
`TaskView` (`task-tracker-swiftui/Views/Projects & Tasks/TaskView.swift`) is responsible for rendering a `Task` `Object`; optionally adding an image and format based on the task status:
``` swift
var body: some View {
Button(action: { self.showingUpdateSheet = true }) {
HStack(spacing: Dimensions.padding) {
switch task.statusEnum {
case .Complete:
Text(task.name)
.strikethrough()
.foregroundColor(.gray)
Spacer()
Image(systemName: "checkmark.square")
.foregroundColor(.gray)
case .InProgress:
Text(task.name)
.fontWeight(.bold)
Spacer()
Image(systemName: "tornado")
case .Open:
Text(task.name)
Spacer()
}
}
}
.sheet(isPresented: $showingUpdateSheet) {
UpdateTaskView(task: task)
}
.padding(.horizontal, Dimensions.padding)
}
```
The task in the UI is a button that exposes `UpdateTaskView` when tapped. That view doesn't cover any new ground, and so I won't dig into it here.
### Teams View
A user can add others to their team; all team members can view and edit tasks in the user's project. For the logged-in user to add another member to their team, they need to update that user's `User` `Object`. This isn't allowed by the Realm Rules in the back end app. Instead, we make use of Realm Functions that have been configured in the back end to make these changes securely.
`TeamsView` (`task-tracker-swiftui/Views/Teams/TeamsView.swift`) presents a list of all the user's teammates:
``` swift
var body: some View {
NavigationView {
VStack {
List {
ForEach(members) { member in
LabeledText(label: member.id, text: member.name)
}
.onDelete(perform: removeTeamMember)
}
Spacer()
}
.navigationBarTitle(Text("My Team"), displayMode: .inline)
.navigationBarItems(
leading: Button(
action: { self.presentationMode.wrappedValue.dismiss() }) { Image(systemName: "xmark.circle") },
trailing: Button(action: { self.showingAddTeamMember = true }) { Image(systemName: "plus.circle.fill")
.renderingMode(.original)
}
)
}
.sheet(isPresented: $showingAddTeamMember) {
// TODO: Not clear why we need to pass in the environmentObject, appears that it may
// be a bug – should test again in the future.
AddTeamMemberView(refresh: fetchTeamMembers)
.environmentObject(state)
}
.onAppear(perform: fetchTeamMembers)
}
```
We invoke a Realm Function to fetch the list of team members, when this view is opened (`.onAppear`) through the `fetchTeamMembers` function:
``` swift
func fetchTeamMembers() {
state.shouldIndicateActivity = true
let user = app.currentUser!
user.functions.getMyTeamMembers([]) { (result, error) in
DispatchQueue.main.sync {
state.shouldIndicateActivity = false
guard error == nil else {
state.error = "Fetch team members failed: \(error!.localizedDescription)"
return
}
guard let result = result else {
state.error = "Result from fetching members is nil"
return
}
self.members = result.arrayValue!.map({ (bson) in
return Member(document: bson!.documentValue!)
})
}
}
}
```
Swiping left removes a team member using another Realm Function:
``` swift
func removeTeamMember(at offsets: IndexSet) {
state.shouldIndicateActivity = true
let user = app.currentUser!
let email = members[offsets.first!].name
user.functions.removeTeamMember([AnyBSON(email)]) { (result, error) in
DispatchQueue.main.sync {
state.shouldIndicateActivity = false
if let error = error {
self.state.error = "Internal error, failed to remove member: \(error.localizedDescription)"
} else if let resultDocument = result?.documentValue {
if let resultError = resultDocument["error"]??.stringValue {
self.state.error = resultError
} else {
print("Removed team member")
self.fetchTeamMembers()
}
} else {
self.state.error = "Unexpected result returned from server"
}
}
}
}
```
Tapping on the "+" button opens up the `AddTeamMemberView` sheet/modal, but no new concepts are used there, and so I'll skip it here.
## Summary
Our app relies on the latest features in the Realm-Cocoa SDK (notably Combine and freezing objects) to bind the model directly to our SwiftUI views. You may have noticed that we don't have a view model.
We use Realm's username/password functionality and Realm Sync to ensure that each user can work with all of their tasks from any device.
You've seen how the front end app can delegate work to the back end app using Realm Functions. In this case, it was to securely work around the data access rules for the `User` object; other use-cases for Realm Functions are:
- Securely access other network services without exposing credentials in the front end app.
- Complex data wrangling using the MongoDB Aggregation Framework.
We've used Apple's Combine framework to handle asynchronous events, such as performing follow-on actions once the back end confirms that a user has been authenticated and logged in.
This iOS app reuses the back end Realm application from the official MongoDB Realm tutorials. This demonstrates how the same data and back end logic can be shared between apps running on iOS, Android, web, Node.js...
## References
- [GitHub Repo for this app
- UIKit version of this app
- Instructions for setting up the backend Realm app
- Freezing Realm Objects
- GitHub Repo for Realm-Cocoa SDK
- Realm Cocoa SDK documentation
- MongoDB's Realm documentation
- WildAid O-FISH – an example of a **much** bigger app built on Realm and MongoDB Realm Sync
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Realm",
"Swift",
"iOS",
"React Native",
"Mobile"
],
"pageDescription": "Build your first iOS mobile app using Realm, SwiftUI, and Combine.",
"contentType": "Tutorial"
} | Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/compass/mongodb-compass-aggregation-improvements | created | # A Better MongoDB Aggregation Experience via Compass
## Introduction
MongoDB Compass has had an aggregation pipeline builder since 2018. Its primary focus has always been enabling developers to quickly prototype and troubleshoot aggregations. Aggregations would then be exported to the developer’s preferred programming language and copy-pasted inside application code.
As of MongoDB World 2022, we are relaunching the aggregation experience within Compass, and we’re happy to announce richer, more powerful functionality for developers.
Compass 1.32.x series includes the following:
- In-Use encryption, including providing options for KMS details in the connection form and CRUD support for encrypted collections, and more specifically for Queryable Encryption when creating collections
- Saved queries and aggregations in the My Queries tab
- Explain plan for aggregations
- Run aggregations against the whole collection
- Export aggregation results
Below, we will talk a bit more about each of these features and how they are useful to developers.
## In-Use Encryption
Our latest release of MongoDB Compass lets you provide KMS details in the connection form and adds CRUD support for encrypted collections. It also includes support for Queryable Encryption when creating a collection.
## My Queries Section
Users can already save aggregations in Compass for later use.
However, saved aggregations are bound to a namespace, and we've often seen users struggle to find the queries and aggregations they've saved and to reuse them across namespaces. We decided the experience had to be improved: developers often reuse code they've written in the past as a starting point for new code. Similarly, they've told us that the queries and aggregations they saved are their “best queries,” the ones they want to use as the basis to build new ones.
We took their input seriously, and we recently added a new “My Queries” screen to Compass. Now you can find all your queries and aggregations in one place, filter them by namespace, and search across all of them.
## Explain Plan for Aggregations
When building aggregations for collections with more than a few hundred documents, performance best practices start to become important.
“Explain Plan” is the most reliable way to understand the performance of an aggregation and ensure it's using the right indexes, so it was not surprising to see a feature request for explaining aggregations quickly rise into the top five requests in our feedback portal.
Now “Explain Plan” is finally available and built into the aggregation-building experience: with just one click you can dig into the performance metrics for your aggregations, monitor the execution time, and double-check that the right indexes are in place.
## Running Aggregations and Exporting Results
The role of a developer in a modern engineering team is expanding to include tasks related to gathering user and product insights from live data (what we sometimes refer to as real-time analytics) and generating reports for other functions in the team or in the company.
When this happens, users are puzzled that they can't run the aggregations they've created on the full dataset and export the results in the same way they can with a query. This is understandable, and it's reasonable for them to assume this is table-stakes functionality in a database GUI.
Now this is finally possible. Once you are done building your aggregation, you can just click “Run” and wait for the results to appear. You can also export them as JSON or CSV, which is really useful when you need to share the insights you've extracted with other parts of the business.
## Summary of Compass 1.32 release
In summary, the latest version of MongoDB Compass means that users of MongoDB who are exploring the aggregation framework can get started more easily and build their first aggregations in no time without reading a lot of documentation. Experts and established users will be able to get more out of the aggregation framework by creating and reusing aggregations more effectively, including sharing them with teammates, confirming their performance, and running aggregations directly in Compass.
If you're interested in trying Compass, download it for free here. | md | {
"tags": [
"Compass"
],
"pageDescription": "MongoDB Compass is one of the most popular database GUIs",
"contentType": "News & Announcements"
} | A Better MongoDB Aggregation Experience via Compass | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/getting-started-mongodb-cpp | created | # Getting Started with MongoDB and C++
This article will show you how to utilize Microsoft Visual Studio to compile and install the MongoDB C and C++ drivers on Windows, and use these drivers to create a console application that can interact with your MongoDB data by performing basic CRUD operations.
Tools and libraries used in this tutorial:
1. Microsoft Windows 11
2. Microsoft Visual Studio 2022 17.3.6
3. Language standard: C++17
4. MongoDB C Driver version: 1.23
5. MongoDB C++ Driver version: 3.7.0
6. boost: 1.80.0
7. Python: 3.10
8. CMake: 3.25.0
## Prerequisites
1. MongoDB Atlas account with a cluster created.
2. *(Optional)* Sample dataset loaded into the Atlas cluster.
3. Your machine’s IP address is whitelisted. Note: You can add *0.0.0.0/0* as the IP address, which should allow access from any machine. This setting is not recommended for production use.
## Installation: IDE and tools
Step 1: Install Visual Studio: Download Visual Studio Tools - Install Free for Windows, Mac, Linux.
In the Workloads tab during installation, select “Desktop development with C++.”
Step 2: Install CMake: Download \| CMake
* For simplicity, choose the installer.
* In the setup, make sure to select “Add CMake to the system PATH for all users.” This enables the CMake executable to be easily accessible.
Step 3: Install Python 3: Download Python.
Step 4: *(Optional)* Download boost library from Boost Downloads and extract it to *C:\boost*.
## Installation: Drivers
> Detailed instructions and configurations available here:
>
>
> * Installing the mongocxx driver
> * Installing the MongoDB C Driver (libmongoc) and BSON library (libbson)
### Step 1: Install C Driver
The C++ driver depends on the C driver, so we need to install the C driver first.
* Download C Driver
* Check compatibility at Windows - Installing the mongocxx driver to determine which C driver version to download.
* Download release tarball — Releases · mongodb/mongo-c-driver — and extract it to *C:\Repos\mongo-c-driver-1.23.0*.
* Setup build via CMake
* Launch powershell/terminal as an administrator.
* Navigate to *C:\Repos\mongo-c-driver-1.23.0* and create a new folder named *cmake-build* for the build files.
* Navigate to *C:\Repos\mongo-c-driver-1.23.0\cmake-build*.
* Run the below command to configure and generate build files using CMake.
```
cmake -G "Visual Studio 17 2022" -A x64 -S "C:\Repos\mongo-c-driver-1.23.0" -B "C:\Repos\mongo-c-driver-1.23.0\cmake-build"
```
Note: Build setup can be done with the CMake GUI application, as well.
* Execute build
* Visual Studio’s default build type is Debug. A release build with debug info is recommended for production use.
* Run the below command to build and install the driver
```
cmake --build . --config RelWithDebInfo --target install
```
* You should now see libmongoc and libbson installed in *C:/Program Files/mongo-c-driver*.
* Move the *mongo-c-driver* to *C:/* for convenience. Hence, C Driver should now be present at *C:/mongo-c-driver*.
### Step 2: Install C++ Driver
* Download C++ Driver
* Download release tarball — Releases · mongodb/mongo-cxx-driver — and extract it to *C:\Repos\mongo-cxx-driver-r3.7.0*.
* Set up build via CMake
* Launch powershell/terminal as an administrator.
* Navigate to *C:\Repos\mongo-cxx-driver-r3.7.0\build*.
* Run the below command to generate and configure build files via CMake.
```
cmake .. -G "Visual Studio 17 2022" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS="/Zc:__cplusplus /EHsc" -DCMAKE_PREFIX_PATH=C:\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\mongo-cxx-driver
```
Note: Setting *DCMAKE_CXX_FLAGS* should not be required for C++ driver version 3.7.1 and above.
* Execute build
* Run the below command to build and install the driver
```
cmake --build . --config RelWithDebInfo --target install
```
* You should now see C++ driver installed in *C:\mongo-cxx-driver*.
## Visual Studio: Setting up the dev environment
* Create a new project in Visual Studio.
* Select *Console App* in the templates.
* Visual Studio should create a new project and open a .cpp file which prints “Hello World.” Navigate to the Solution Explorer panel, right-click on the solution name (*MongoCXXGettingStarted*, in this case), and click Properties.
* Go to *Configuration Properties > C/C++ > General > Additional Include Directories* and add the include directories from the C and C++ driver installation folders, as shown below.
* Go to *Configuration Properties > C/C++ > Language* and change the *C++ Language Standard* to C++17.
* Go to *Configuration Properties > C/C++ > Command Line* and add */Zc:\_\_cplusplus* in the *Additional Options* field. This flag is needed to opt into the correct definition of \_\_cplusplus.
* Go to *Configuration Properties > Linker > Input* and add the driver libs in *Additional Dependencies* section, as shown below.
* Go to *Configuration Properties > Debugging > Environment* to add a path to the driver executables, as shown below.
## Building the console application
> Source available here
Let’s build an application that maintains student records. We will input student data from the user, save them in the database, and perform different CRUD operations on the database.
### Connecting to the database
Let’s start with a simple program to connect to the MongoDB Atlas cluster and access the databases. Get the connection string (URI) to the cluster and create a new environment variable with key as *“MONGODB\_URI”* and value as the connection string (URI). It’s a good practice to keep the connection string decoupled from the code.
Tip: Restart your machine after creating the environment variable in case the *“getEnvironmentVariable”* function fails to retrieve the environment variable.
```
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>
#include <bsoncxx/builder/stream/document.hpp>
#include <bsoncxx/json.hpp>
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;
std::string getEnvironmentVariable(std::string environmentVarKey)
{
char* pBuffer = nullptr;
size_t size = 0;
auto key = environmentVarKey.c_str();
// Use the secure version of getenv, ie. _dupenv_s to fetch environment variable.
if (_dupenv_s(&pBuffer, &size, key) == 0 && pBuffer != nullptr)
{
std::string environmentVarValue(pBuffer);
free(pBuffer);
return environmentVarValue;
}
else
{
return "";
}
}
auto mongoURIStr = getEnvironmentVariable("MONGODB_URI");
static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };
// Get all the databases from a given client.
vector<string> getDatabases(mongocxx::client& client)
{
return client.list_database_names();
}
int main()
{
// Create an instance.
mongocxx::instance inst{};
mongocxx::options::client client_options;
auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };
client_options.server_api_opts(api);
mongocxx::client conn{ mongoURI, client_options };
auto dbs = getDatabases(conn);
for (auto db : dbs)
{
cout << db << endl;
}
return 0;
}
```
Click on “Launch Debugger” to launch the console application. The output should look something like this:
### CRUD operations
> Full tutorial
Since the database is successfully connected to our application, let’s write some helper functions to interact with the database, performing CRUD operations.
#### Create
```
// Create a new collection in the given database.
void createCollection(mongocxx::database& db, const string& collectionName)
{
db.create_collection(collectionName);
}
// Create a document from the given key-value pairs.
bsoncxx::document::value createDocument(const vector<pair<string, string>>& keyValues)
{
bsoncxx::builder::stream::document document{};
for (auto& keyValue : keyValues)
{
document << keyValue.first << keyValue.second;
}
return document << bsoncxx::builder::stream::finalize;
}
// Insert a document into the given collection.
void insertDocument(mongocxx::collection& collection, const bsoncxx::document::value& document)
{
collection.insert_one(document.view());
}
```
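As a quick sanity check of how these helpers fit together, here is a minimal usage sketch (not part of the tutorial's final code). It assumes `conn` is the connected `mongocxx::client` from the earlier snippet, and it uses the same database and collection names that the `main()` function uses later on:
```
// Sketch: get a collection handle, build a document from key-value pairs, and insert it.
auto studentDB = conn["StudentRecords"];
auto studentCollection = studentDB["StudentCollection"];
auto student = createDocument({ {"name", "Ada"}, {"rollNo", "1"}, {"branch", "CS"}, {"year", "2023"} });
insertDocument(studentCollection, student);
```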
#### Read
```
// Print the contents of the given collection.
void printCollection(mongocxx::collection& collection)
{
// Check if collection is empty.
if (collection.count_documents({}) == 0)
{
cout << "Collection is empty." << endl;
return;
}
auto cursor = collection.find({});
for (auto&& doc : cursor)
{
cout << bsoncxx::to_json(doc) << endl;
}
}
// Find the document with given key-value pair.
void findDocument(mongocxx::collection& collection, const string& key, const string& value)
{
// Create the query filter
auto filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;
//Add query filter argument in find
auto cursor = collection.find({ filter });
for (auto&& doc : cursor)
{
cout << bsoncxx::to_json(doc) << endl;
}
}
```
#### Update
```
// Update the document with given key-value pair.
void updateDocument(mongocxx::collection& collection, const string& key, const string& value, const string& newKey, const string& newValue)
{
collection.update_one(bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize,
bsoncxx::builder::stream::document{} << "$set" << bsoncxx::builder::stream::open_document << newKey << newValue << bsoncxx::builder::stream::close_document << bsoncxx::builder::stream::finalize);
}
```
#### Delete
```
// Delete a document from a given collection.
void deleteDocument(mongocxx::collection& collection, const bsoncxx::document::value& document)
{
collection.delete_one(document.view());
}
```
### The main() function
With all the helper functions in place, let’s create a menu in the main function which we can use to interact with the application.
```
// ********************************************** I/O Methods **********************************************
// Input student record.
void inputStudentRecord(mongocxx::collection& collection)
{
string name, rollNo, branch, year;
cout << "Enter name: ";
cin >> name;
cout << "Enter roll number: ";
cin >> rollNo;
cout << "Enter branch: ";
cin >> branch;
cout << "Enter year: ";
cin >> year;
insertDocument(collection, createDocument({ {"name", name}, {"rollNo", rollNo}, {"branch", branch}, {"year", year} }));
}
// Update student record.
void updateStudentRecord(mongocxx::collection& collection)
{
string rollNo, newBranch, newYear;
cout << "Enter roll number: ";
cin >> rollNo;
cout << "Enter new branch: ";
cin >> newBranch;
cout << "Enter new year: ";
cin >> newYear;
updateDocument(collection, "rollNo", rollNo, "branch", newBranch);
updateDocument(collection, "rollNo", rollNo, "year", newYear);
}
// Find student record.
void findStudentRecord(mongocxx::collection& collection)
{
string rollNo;
cout << "Enter roll number: ";
cin >> rollNo;
findDocument(collection, "rollNo", rollNo);
}
// Delete student record.
void deleteStudentRecord(mongocxx::collection& collection)
{
string rollNo;
cout << "Enter roll number: ";
cin >> rollNo;
deleteDocument(collection, createDocument({ {"rollNo", rollNo} }));
}
// Print student records.
void printStudentRecords(mongocxx::collection& collection)
{
printCollection(collection);
}
// ********************************************** Main **********************************************
int main()
{
if(mongoURI.to_string().empty())
{
cout << "URI is empty";
return 0;
}
// Create an instance.
mongocxx::instance inst{};
mongocxx::options::client client_options;
auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };
client_options.server_api_opts(api);
mongocxx::client conn{ mongoURI, client_options};
const string dbName = "StudentRecords";
const string collName = "StudentCollection";
auto dbs = getDatabases(conn);
// Check if database already exists.
if (!(std::find(dbs.begin(), dbs.end(), dbName) != dbs.end()))
{
// Create a new database & collection for students.
conn[dbName];
}
auto studentDB = conn.database(dbName);
auto allCollections = studentDB.list_collection_names();
// Check if collection already exists.
if (!(std::find(allCollections.begin(), allCollections.end(), collName) != allCollections.end()))
{
createCollection(studentDB, collName);
}
auto studentCollection = studentDB.collection(collName);
// Create a menu for user interaction
int choice = -1;
do
{
//system("cls");
cout << endl << "**************************************************************************************************************" << endl;
cout << "Enter 1 to input student record" << endl;
cout << "Enter 2 to update student record" << endl;
cout << "Enter 3 to find student record" << endl;
cout << "Enter 4 to delete student record" << endl;
cout << "Enter 5 to print all student records" << endl;
cout << "Enter 0 to exit" << endl;
cout << "Enter Choice : ";
cin >> choice;
cout << endl;
switch (choice)
{
case 1:
inputStudentRecord(studentCollection);
break;
case 2:
updateStudentRecord(studentCollection);
break;
case 3:
findStudentRecord(studentCollection);
break;
case 4:
deleteStudentRecord(studentCollection);
break;
case 5:
printStudentRecords(studentCollection);
break;
case 0:
break;
default:
cout << "Invalid choice" << endl;
break;
}
} while (choice != 0);
return 0;
}
```
## Application in action
When this application is executed, you can manage the student records via the console interface. Here’s a demo:
You can also see the collection in Atlas reflecting any change made via the console application.
## Wrapping up
With this article, we covered the installation of the C and C++ drivers and created a console application in Visual Studio that connects to MongoDB Atlas to perform basic CRUD operations.
More information about the C++ driver is available at MongoDB C++ Driver. | md | {
"tags": [
"MongoDB",
"C++"
],
"pageDescription": "This article will show you how to utilize Microsoft Visual Studio to compile and install the MongoDB C and C++ drivers on Windows, and use these drivers to create a console application that can interact with your MongoDB data.",
"contentType": "Tutorial"
} | Getting Started with MongoDB and C++ | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-schema-design-best-practices | created | # MongoDB Schema Design Best Practices
Have you ever wondered, "How do I model a schema for my application?"
It's one of the most common questions devs have pertaining to MongoDB.
And the answer is, *it depends*. This is because document databases have
a rich vocabulary that is capable of expressing data relationships in
more nuanced ways than SQL. There are many things to consider when
picking a schema. Is your app read or write heavy? What data is
frequently accessed together? What are your performance considerations?
How will your data set grow and scale?
In this post, we will discuss the basics of data modeling using real
world examples. You will learn common methodologies and vocabulary you
can use when designing your database schema for your application.
Okay, first off, did you know that proper MongoDB schema design is the
most critical part of deploying a scalable, fast, and affordable
database? It's true, and schema design is often one of the most
overlooked facets of MongoDB administration. Why is MongoDB Schema
Design so important? Well, there are a couple of good reasons. In my
experience, most people coming to MongoDB tend to think of MongoDB
schema design as the same as legacy relational schema design, which
doesn't allow you to take full advantage of all that MongoDB databases
have to offer. First, let's look at how legacy relational database
design compares to MongoDB schema design.
## Schema Design Approaches – Relational vs. MongoDB
When it comes to MongoDB database schema design, this is what most
developers think of when they are looking at designing a relational
schema and a MongoDB schema.
I have to admit that I understand the impulse to design your MongoDB
schema the same way you have always designed your SQL schema. It's
completely normal to want to split up your data into neat little tables
as you've always done before. I was guilty of doing this when I first
started learning how to use MongoDB. However, as we will soon see, you
lose out on many of the awesome features of MongoDB when you design your
schema like an SQL schema.
And this is how that makes me feel.
However, I think it's best to compare MongoDB schema design to
relational schema design since that's where many devs coming to MongoDB
are coming from. So, let's see how these two design patterns differ.
### Relational Schema Design
When designing a relational schema, typically, devs model their schema
independent of queries. They ask themselves, "What data do I have?"
Then, by using prescribed approaches, they will
normalize
(typically in 3rd normal
form).
The tl;dr of normalization is to split up your data into tables, so you
don't duplicate data. Let's take a look at an example of how you would
model some user data in a relational database.
In this example, you can see that the user data is split into separate
tables and it can be JOINED together using foreign keys in the `user_id`
column of the Professions and Cars table. Now, let's take a look at how
we might model this same data in MongoDB.
### MongoDB Schema Design
Now, MongoDB schema design works a lot differently than relational
schema design. With MongoDB schema design, there is:
- No formal process
- No algorithms
- No rules
When you are designing your MongoDB schema, the only thing that matters is that you design a schema that will work well for ___your___ application. Two different apps that use the same exact data might have very different schemas if the applications are used differently. When designing a schema, we want to take into consideration the following:
- Store the data
- Provide good query performance
- Require a reasonable amount of hardware
Let's take a look at how we might model the relational User model in
MongoDB.
``` json
{
"first_name": "Paul",
"surname": "Miller",
"cell": "447557505611",
"city": "London",
"location": 45.123, 47.232],
"profession": ["banking", "finance", "trader"],
"cars": [
{
"model": "Bentley",
"year": 1973
},
{
"model": "Rolls Royce",
"year": 1965
}
]
}
```
You can see that instead of splitting our data up into separate
collections or documents, we take advantage of MongoDB's document based
design to embed data into arrays and objects within the User object. Now
we can make one simple query to pull all that data together for our
application.
## Embedding vs. Referencing
MongoDB schema design actually comes down to only two choices for every
piece of data. You can either embed that data directly or reference
another piece of data using the
[$lookup
operator (similar to a JOIN). Let's look at the pros and cons of using
each option in your schema.
### Embedding
#### Advantages
- You can retrieve all relevant information in a single query.
- Avoid implementing joins in application code or using
$lookup.
- Update related information as a single atomic operation. By
default, all CRUD operations on a single document are ACID
compliant.
- However, if you need a transaction across multiple operations, you
can use the transaction
operator.
- Though transactions have been available since MongoDB 4.0, I should add that it's an anti-pattern to be overly reliant on them in your application.
#### Limitations
- Large documents mean more overhead if most fields are not relevant.
You can increase query performance by limiting the size of the
documents that you are sending over the wire for each query.
- There is a 16-MB document size limit in
MongoDB. If you
are embedding too much data inside a single document, you could
potentially hit this limit.
### Referencing
Okay, so the other option for designing our schema is referencing
another document using a document's unique object
ID and
connecting them together using the
$lookup
operator. Referencing works similarly as the JOIN operator in an SQL
query. It allows us to split up data to make more efficient and scalable
queries, yet maintain relationships between data.
#### Advantages
- By splitting up data, you will have smaller documents.
- Less likely to reach 16-MB-per-document
limit.
- Infrequently accessed information not needed on every query.
- Reduce the amount of duplication of data. However, it's important to
note that data duplication should not be avoided if it results in a
better schema.
#### Limitations
- In order to retrieve all the data in the referenced documents, a minimum of two queries (or a $lookup) is required.
## Type of Relationships
Okay, so now that we have explored the two ways we are able to split up
data when designing our schemas in MongoDB, let's look at common
relationships that you're probably familiar with modeling if you come
from an SQL background. We will start with the more simple relationships
and work our way up to some interesting patterns and relationships and
how we model them with real-world examples. Note, we are only going to
scratch the surface of modeling relationships in MongoDB here.
It's also important to note that even if your application has the same
exact data as the examples listed below, you might have a completely
different schema than the one I outlined here. This is because the most
important consideration you make for your schema is how your data is
going to be used by your system. In each example, I will outline the
requirements for each application and why a given schema was used for
that example. If you want to discuss the specifics of your schema, be
sure to open a conversation on the MongoDB
Community Forum, and
we all can discuss what will work best for your unique application.
### One-to-One
Let's take a look at our User document. This example has some great
one-to-one data in it. For example, in our system, one user can only
have one name. So, this would be an example of a one-to-one
relationship. We can model all one-to-one data as key-value pairs in our
database.
``` json
{
"_id": "ObjectId('AAA')",
"name": "Joe Karlsson",
"company": "MongoDB",
"twitter": "@JoeKarlsson1",
"twitch": "joe_karlsson",
"tiktok": "joekarlsson",
"website": "joekarlsson.com"
}
```
DJ Khalid would approve.
One-to-One tl;dr:
- Prefer key-value pair embedded in the document.
- For example, an employee can work in one and only one department.
### One-to-Few
Okay, now let's say that we are dealing with a small amount of data that's
associated with our users. For example, we might need to store several
addresses associated with a given user. It's unlikely that a user for
our application would have more than a couple of different addresses.
For relationships like this, we would define this as a *one-to-few
relationship.*
``` json
{
"_id": "ObjectId('AAA')",
"name": "Joe Karlsson",
"company": "MongoDB",
"twitter": "@JoeKarlsson1",
"twitch": "joe_karlsson",
"tiktok": "joekarlsson",
"website": "joekarlsson.com",
"addresses":
{ "street": "123 Sesame St", "city": "Anytown", "cc": "USA" },
{ "street": "123 Avenue Q", "city": "New York", "cc": "USA" }
]
}
```
Remember when I told you there are no rules to MongoDB schema design?
Well, I lied. I've made up a couple of handy rules to help you design
your schema for your application.
>
>
>**Rule 1**: Favor embedding unless there is a compelling reason not to.
>
>
Generally speaking, my default action is to embed data within a
document. I pull it out and reference it only if I need to access it on
its own, it's too big, I rarely need it, or any other reason.
One-to-few tl;dr:
- Prefer embedding for one-to-few relationships.
### One-to-Many
Alright, let's say that you are building a product page for an
e-commerce website, and you are going to have to design a schema that
will be able to show product information. In our system, we save
information about all the many parts that make up each product for
repair services. How would you design a schema to save all this data,
but still make your product page performant? You might want to consider
a *one-to-many* schema since your one product is made up of many parts.
Now, with a schema that could potentially be saving thousands of sub
parts, we probably do not need to have all of the data for the parts on
every single request, but it's still important that this relationship is
maintained in our schema. So, we might have a Products collection with
data about each product in our e-commerce store, and in order to keep
that part data linked, we can keep an array of Object IDs that link to a
document that has information about the part. These parts can be saved
in the same collection or in a separate collection, if needed. Let's
take a look at how this would look.
Products:
``` json
{
"name": "left-handed smoke shifter",
"manufacturer": "Acme Corp",
"catalog_number": "1234",
"parts": ["ObjectID('AAAA')", "ObjectID('BBBB')", "ObjectID('CCCC')"]
}
```
Parts:
``` json
{
"_id" : "ObjectID('AAAA')",
"partno" : "123-aff-456",
"name" : "#4 grommet",
"qty": "94",
"cost": "0.94",
"price":" 3.99"
}
```
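If you do need a product and its part details together in a single round trip, an aggregation using $lookup can perform the join for you. Here's a rough sketch of what that stage could look like, assuming the part documents above live in a collection named `parts`:
``` json
{
  "$lookup": {
    "from": "parts",
    "localField": "parts",
    "foreignField": "_id",
    "as": "parts_info"
  }
}
```
Because `parts` is an array of ObjectIDs, $lookup matches each element against the `_id` field in the parts collection and returns the joined documents in `parts_info`. Just keep Rule 3 below in mind: only reach for a lookup when it gives you a better schema than embedding would.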
>
>
>**Rule 2**: Needing to access an object on its own is a compelling
>reason not to embed it.
>
>
>
>
>**Rule 3**: Avoid joins/lookups if possible, but don't be afraid if they
>can provide a better schema design.
>
>
### One-to-Squillions
What if we have a schema where there could be potentially millions of
subdocuments, or more? That's when we get to the one-to-squillions
schema. And, I know what you're thinking: *Is squillions a real word?*
And the answer is yes, it is a real word.
Let's imagine that you have been asked to create a server logging
application. Each server could potentially save a massive amount of
data, depending on how verbose you're logging and how long you store
server logs for.
With MongoDB, tracking data within an unbounded array is dangerous,
since we could potentially hit that 16-MB-per-document limit. Any given
host could generate enough messages to overflow the 16-MB document size,
even if only ObjectIDs are stored in an array. So, we need to rethink
how we can track this relationship without coming up against any hard
limits.
So, instead of tracking the relationship between the host and the log
message in the host document, let's let each log message store the host
that its message is associated with. By storing the data in the log, we
no longer need to worry about an unbounded array messing with our
application! Let's take a look at how this might work.
Hosts:
``` json
{
"_id": ObjectID("AAAB"),
"name": "goofy.example.com",
"ipaddr": "127.66.66.66"
}
```
Log Message:
``` json
{
"time": ISODate("2014-03-28T09:42:41.382Z"),
"message": "cpu is on fire!",
"host": ObjectID("AAAB")
}
```
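With this design, pulling up all of the log messages for a given host is a simple query on the log message collection, sketched below using the example host's `_id`; adding an index on the `host` field keeps that query fast as the collection grows.
``` json
{ "host": ObjectID("AAAB") }
```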
>
>
>**Rule 4**: Arrays should not grow without bound. If there are more than
>a couple of hundred documents on the "many" side, don't embed them; if
>there are more than a few thousand documents on the "many" side, don't
>use an array of ObjectID references. High-cardinality arrays are a
>compelling reason not to embed.
>
>
### Many-to-Many
The last schema design pattern we are going to be covering in this post
is the *many-to-many* relationship. This is another very common schema
pattern that we see all the time in relational and MongoDB schema
designs. For this pattern, let's imagine that we are building a to-do
application. In our app, a user may have *many* tasks and a task may
have *many* users assigned to it.
In order to preserve these relationships between users and tasks, there
will need to be references from the *one* user to the *many* tasks and
references from the *one* task to the *many* users. Let's look at how
this could work for a to-do list application.
Users:
``` json
{
"_id": ObjectID("AAF1"),
"name": "Kate Monster",
"tasks": ObjectID("ADF9"), ObjectID("AE02"), ObjectID("AE73")]
}
```
Tasks:
``` json
{
"_id": ObjectID("ADF9"),
"description": "Write blog post about MongoDB schema design",
"due_date": ISODate("2014-04-01"),
"owners": [ObjectID("AAF1"), ObjectID("BB3G")]
}
```
From this example, you can see that each user has a sub-array of linked
tasks, and each task has a sub-array of owners for each item in our
to-do app.
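For example, to render Kate's to-do list, the app can take the ObjectIDs in her `tasks` array and fetch the matching task documents with a single `$in` query (or a $lookup), roughly like this:
``` json
{ "_id": { "$in": [ObjectID("ADF9"), ObjectID("AE02"), ObjectID("AE73")] } }
```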
### Summary
As you can see, there are a ton of different ways to express your schema
design, by going beyond normalizing your data like you might be used to
doing in SQL. By taking advantage of embedding data within a document or
referencing documents using the $lookup operator, you can make some
truly powerful, scalable, and efficient database queries that are
completely unique to your application. In fact, we are only barely able
to scratch the surface of all the ways that you could model your data in
MongoDB. If you want to learn more about MongoDB schema design, be sure
to check out our continued series on schema design in MongoDB:
- [MongoDB schema design
anti-patterns
- MongoDB University - M320: Data
Modeling
- MongoDB Data Model Design
Documentation
- Building with Patterns: A
Summary
I want to wrap up this post with the most important rule to MongoDB
schema design yet.
>
>
>**Rule 5**: As always, with MongoDB, how you model your data depends –
>entirely – on your particular application's data access patterns. You
>want to structure your data to match the ways that your application
>queries and updates it.
>
>
Remember, every application has unique needs and requirements, so the
schema design should reflect the needs of that particular application.
Take the examples listed in this post as a starting point for your
application. Reflect on what you need to do, and how you can use your
schema to help you get there.
>
>
>Recap:
>
>- **One-to-One** - Prefer key value pairs within the document
>- **One-to-Few** - Prefer embedding
>- **One-to-Many** - Prefer embedding
>- **One-to-Squillions** - Prefer Referencing
>- **Many-to-Many** - Prefer Referencing
>
>
>
>
>General Rules for MongoDB Schema Design:
>
>- **Rule 1**: Favor embedding unless there is a compelling reason not to.
>- **Rule 2**: Needing to access an object on its own is a compelling reason not to embed it.
>- **Rule 3**: Avoid joins and lookups if possible, but don't be afraid if they can provide a better schema design.
>- **Rule 4**: Arrays should not grow without bound. If there are more than a couple of hundred documents on the *many* side, don't embed them; if there are more than a few thousand documents on the *many* side, don't use an array of ObjectID references. High-cardinality arrays are a compelling reason not to embed.
>- **Rule 5**: As always, with MongoDB, how you model your data depends **entirely** on your particular application's data access patterns. You want to structure your data to match the ways that your application queries and updates it.
We have only scratched the surface of design patterns in MongoDB. In
fact, we haven't even begun to start exploring patterns that aren't even
remotely possible to perform in a legacy relational model. If you want
to learn more about these patterns, check out the resources below.
## Additional Resources:
- Now that you know how to design a scalable and performant MongoDB
schema, check out our MongoDB schema design anti-pattern series to
learn what NOT to do when building out your MongoDB database schema:
- Video more your thing? Check out our video series on YouTube to
learn more about MongoDB schema anti-patterns:
- MongoDB University - M320: Data
Modeling
- 6 Rules of Thumb for MongoDB Schema Design: Part
1
- MongoDB Data Model Design
Documentation
- MongoDB Data Model Examples and Patterns
Documentation
- Building with Patterns: A
Summary
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!",
"contentType": "Tutorial"
} | MongoDB Schema Design Best Practices | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/python-quickstart-crud | created | # Basic MongoDB Operations in Python
Like Python? Want to get started with MongoDB? Welcome to this quick start guide! I'll show you how to set up an Atlas database with some sample data to explore. Then you'll create some data and learn how to read, update and delete it.
## Prerequisites
You'll need the following installed on your computer to follow along with this tutorial:
- An up-to-date version of Python 3. I wrote the code in this tutorial in Python 3.8, but it should run fine in version 3.6+.
- A code editor of your choice. I recommend either PyCharm or the free VS Code with the official Python extension.
## Start a MongoDB cluster on Atlas
Now you've got your local environment set up, it's time to create a MongoDB database to work with, and to load in some sample data you can explore and modify.
You could create a database on your development machine, but it's easier to get started on the Atlas hosted service without having to learn how to configure a MongoDB cluster.
>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.
You'll need to create a new cluster and load it with sample data. My awesome colleague Maxime Beugnet has created a video tutorial to help you out.
If you don't want to watch the video, the steps are:
- Click "Get started free".
- Enter your details and accept the Terms of Service.
- Create a *Starter* cluster.
- Select the same cloud provider you're used to, or just leave it as-is. Pick a region that makes sense for you.
- You can change the name of the cluster if you like. I've called mine "PythonQuickstart".
It will take a couple of minutes for your cluster to be provisioned, so while you're waiting you can move on to the next step.
## Set up your environment
You should set up a Python virtualenv which will contain the libraries you install during this quick start. There are several different ways to set up virtualenvs, but to simplify things we'll use the one included with Python. First, create a directory to hold your code and your virtualenv. Open your terminal, `cd` to that directory and then run the following command:
``` bash
# Note:
# On Debian & Ubuntu systems you'll first need to install virtualenv with:
# sudo apt install python3-venv
python3 -m venv venv
```
The command above will create a virtualenv in a directory called `venv`. To activate the new virtualenv, run one of the following commands, according to your system:
``` bash
# Run the following on OSX & Linux:
source venv/bin/activate
# Run the following on Windows:
.\\venv\\Scripts\\activate
```
To write Python programs that connect to your MongoDB database (don't worry - you'll set that up in a moment!) you'll need to install a Python driver - a library which knows how to talk to MongoDB. In Python, you have two choices! The recommended driver is PyMongo - that's what I'll cover in this quick start. If you want to write *asyncio* programs with MongoDB, however, you'll need to use a library called Motor, which is also fully supported by MongoDB.
To install PyMongo, run the following command:
``` bash
python -m pip install pymongo[srv]==3.10.1
```
For this tutorial we'll also make use of a library called `python-dotenv` to load configuration, so run the command below as well to install that:
``` bash
python -m pip install python-dotenv==0.13.0
```
## Set up your MongoDB instance
Hopefully, your MongoDB cluster should have finished starting up now and has probably been running for a few minutes.
The following instructions were correct at the time of writing, but may change, as we're always improving the Atlas user interface:
In the Atlas web interface, you should see a green button at the bottom-left of the screen, saying "Get Started". If you click on it, it'll bring up a checklist of steps for getting your database set up. Click on each of the items in the list (including the optional "Load Sample Data" item), and it'll help you through the steps to get set up.
### Create a user
Following the "Get Started" steps, create a user with "Read and write access to any database". You can give it a username and password of your choice - take a copy of them, you'll need them in a minute. Use the "autogenerate secure password" button to ensure you have a long random password which is also safe to paste into your connection string later.
### Allow an IP address
When deploying an app with sensitive data, you should only allow the IP address of the servers which need to connect to your database. To allow the IP address of your development machine, select "Network Access", click the "Add IP Address" button and then click "Add Current IP Address" and hit "Confirm".
## Connect to your database
The last step of the "Get Started" checklist is "Connect to your Cluster". Select "Connect your application" and select "Python" with a version of "3.6 or later".
Ensure Step 2 has "Connection String only" highlighted, and press the "Copy" button to copy the URL to your pasteboard. Save it to the same place you stored your username and password. Note that the URL has `<password>` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder including the '\<' and '>' characters.
Now it's time to actually write some Python code to connect to your MongoDB database!
In your code editor, create a Python file in your project directory called `basic_operations.py`. Enter in the following code:
``` python
import datetime # This will be needed later
import os
from dotenv import load_dotenv
from pymongo import MongoClient
# Load config from a .env file:
load_dotenv()
MONGODB_URI = os.environ['MONGODB_URI']
# Connect to your MongoDB cluster:
client = MongoClient(MONGODB_URI)
# List all the databases in the cluster:
for db_info in client.list_database_names():
print(db_info)
```
In order to run this, you'll need to set the MONGODB_URI environment variable to the connection string you obtained above. You can do this two ways. You can:
- Run an `export` (or `set` on Windows) command to set the environment variable each time you set up your session.
- Save the URI in a configuration file which should *never* be added to revision control.
I'm going to show you how to take the second approach. Remember it's very important not to accidentally publish your credentials to git or anywhere else, so add `.env` to your `.gitignore` file if you're using git. The `python-dotenv` library loads configuration from a file in the current directory called `.env`. Create a `.env` file in the same directory as your code and paste in the configuration below, replacing the placeholder URI with your own MongoDB URI.
``` none
# Unix:
export MONGODB_URI='mongodb+srv://yourusername:[email protected]/test?retryWrites=true&w=majority'
```
The URI contains your username and password (so keep it safe!) and the hostname of a DNS server which will provide information to PyMongo about your cluster. Once PyMongo has retrieved the details of your cluster, it will connect to the primary MongoDB server and start making queries.
Now if you run the Python script you should see output similar to the following:
``` bash
$ python basic_operations.py
sample_airbnb
sample_analytics
sample_geospatial
sample_mflix
sample_supplies
sample_training
sample_weatherdata
twitter_analytics
admin
local
```
You just connected your Python program to MongoDB and listed the databases in your cluster! If you don't see this list, then you may not have successfully loaded sample data into your cluster; you may want to go back a couple of steps until running this command shows the list above.
In the code above, you used the `list_database_names` method to list the database names in the cluster. The `MongoClient` instance can also be used as a mapping (like a `dict`) to get a reference to a specific database. Here's some code to have a look at the collections inside the `sample_mflix` database. Paste it at the end of your Python file:
``` python
# Get a reference to the 'sample_mflix' database:
db = client['sample_mflix']
# List all the collections in 'sample_mflix':
collections = db.list_collection_names()
for collection in collections:
print(collection)
```
Running this piece of code should output the following:
``` bash
$ python basic_operations.py
movies
sessions
comments
users
theaters
```
A database also behaves as a mapping of collections inside that database. A collection is a bucket of documents, in the same way as a table contains rows in a traditional relational database. The following code looks up a single document in the `movies` collection:
``` python
# Import the `pprint` function to print nested data:
from pprint import pprint
# Get a reference to the 'movies' collection:
movies = db['movies']
# Get the document with the title 'Blacksmith Scene':
pprint(movies.find_one({'title': 'Blacksmith Scene'}))
```
When you run the code above it will look up a document called "Blacksmith Scene" in the 'movies' collection. It looks a bit like this:
``` python
{'_id': ObjectId('573a1390f29313caabcd4135'),
'awards': {'nominations': 0, 'text': '1 win.', 'wins': 1},
'cast': ['Charles Kayser', 'John Ott'],
'countries': ['USA'],
'directors': ['William K.L. Dickson'],
'fullplot': 'A stationary camera looks at a large anvil with a blacksmith '
'behind it and one on either side. The smith in the middle draws '
'a heated metal rod from the fire, places it on the anvil, and '
'all three begin a rhythmic hammering. After several blows, the '
'metal goes back in the fire. One smith pulls out a bottle of '
'beer, and they each take a swig. Then, out comes the glowing '
'metal and the hammering resumes.',
'genres': ['Short'],
'imdb': {'id': 5, 'rating': 6.2, 'votes': 1189},
'lastupdated': '2015-08-26 00:03:50.133000000',
'num_mflix_comments': 1,
'plot': 'Three men hammer on an anvil and pass a bottle of beer around.',
'rated': 'UNRATED',
'released': datetime.datetime(1893, 5, 9, 0, 0),
'runtime': 1,
'title': 'Blacksmith Scene',
'tomatoes': {'lastUpdated': datetime.datetime(2015, 6, 28, 18, 34, 9),
'viewer': {'meter': 32, 'numReviews': 184, 'rating': 3.0}},
'type': 'movie',
'year': 1893}
```
It's a one-minute movie filmed in 1893 - it's like a YouTube video from nearly 130 years ago! The data above is a single document. It stores data in fields that can be accessed by name, and you should be able to see that the `title` field contains the same value as we looked up in our call to `find_one` in the code above. The structure of every document in a collection can be different from each other, but it's usually recommended to follow the same or similar structure for all the documents in a single collection.
### A quick diversion about BSON
MongoDB is often described as a JSON database, but there's evidence in the document above that it *doesn't* store JSON. A MongoDB document consists of data stored as all the types that JSON can store, including booleans, integers, floats, strings, arrays, and objects (we call them subdocuments). However, if you look at the `_id` and `released` fields, these are types that JSON cannot store. In fact, MongoDB stores data in a binary format called BSON, which also includes the `ObjectId` type as well as native types for decimal numbers, binary data, and timestamps (which are converted by PyMongo to Python's native `datetime` type.)
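A quick way to see this for yourself (reusing the `movies` collection reference from above) is to check the Python types that PyMongo hands back:
``` python
# BSON types are converted to native Python types by PyMongo:
doc = movies.find_one({'title': 'Blacksmith Scene'})
print(type(doc['_id']))       # <class 'bson.objectid.ObjectId'>
print(type(doc['released']))  # <class 'datetime.datetime'>
```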
## Create a document in a collection
The `movies` collection contains a lot of data - 23539 documents, but it only contains movies up until 2015. One of my favourite movies, the Oscar-winning "Parasite", was released in 2019, so it's not in the database! You can fix this glaring omission with the code below:
``` python
# Insert a document for the movie 'Parasite':
insert_result = movies.insert_one({
"title": "Parasite",
"year": 2020,
"plot": "A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. "
"But their easy life gets complicated when their deception is threatened with exposure.",
"released": datetime(2020, 2, 7, 0, 0, 0),
})
# Save the inserted_id of the document you just created:
parasite_id = insert_result.inserted_id
print("_id of inserted document: {parasite_id}".format(parasite_id=parasite_id))
```
If you're inserting more than one document in one go, it can be much more efficient to use the `insert_many` method, which takes an array of documents to be inserted. (If you're just loading documents into your database from stored JSON files, then you should take a look at [mongoimport
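As a sketch, inserting a few (made-up) documents in one call looks like this; `insert_many` returns a result object containing the generated `_id` values. (If you run it, remember that these extra documents will stay in the collection until you delete them.)
``` python
# Insert several documents in a single round trip:
many_result = movies.insert_many([
    {"title": "Example Movie A", "year": 2021},
    {"title": "Example Movie B", "year": 2022},
])
print(many_result.inserted_ids)
```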
## Read documents from a collection
Running the code above will insert the document into the collection and print out its ID, which is useful, but not much to look at. You can retrieve the document to prove that it was inserted, with the following code:
``` python
import bson # <- Put this line near the start of the file if you prefer.
# Look up the document you just created in the collection:
print(movies.find_one({'_id': bson.ObjectId(parasite_id)}))
```
The code above will look up a single document that matches the query (in this case it's looking up a specific `_id`). If you want to look up *all* the documents that match a query, you should use the `find` method, which returns a `Cursor`. A Cursor will load data in batches, so if you attempt to query all the data in your collection, it will start to yield documents immediately - it doesn't load the whole Collection into memory on your computer! You can loop through the documents returned in a Cursor with a `for` loop. The following query should print one or more documents - if you've run your script a few times you will have inserted one document for this movie each time you ran your script! (Don't worry about cleaning them up - I'll show you how to do that in a moment.)
``` python
# Look up the documents you've created in the collection:
for doc in movies.find({"title": "Parasite"}):
pprint(doc)
```
Many methods in PyMongo, including the find methods, expect a MongoDB query as input. MongoDB queries, unlike SQL, are provided as data structures, not as a string. The simplest kind of matches look like the ones above: `{ 'key': 'value' }` where documents containing the field specified by the `key` are returned if the provided `value` is the same as that document's value for the `key`. MongoDB's query language is rich and powerful, providing the ability to match on different criteria across multiple fields. The query below matches all movies produced before 1920 with 'Romance' as one of the genre values:
``` python
{
'year': {
'$lt': 1920
},
'genres': 'Romance'
}
```
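To run a query like that, pass it to `find` just as before. Here's a small sketch that also sorts the matching movies by year, using fields from the `sample_mflix` documents shown earlier:
``` python
# Print early romance movies, oldest first:
for doc in movies.find({'year': {'$lt': 1920}, 'genres': 'Romance'}).sort('year', 1):
    print(doc['year'], doc['title'])
```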
Even more complex queries and aggregations are possible with MongoDB Aggregations, accessed with PyMongo's `aggregate` method - but that's a topic for a later quick start post.
## Update documents in a collection
I made a terrible mistake! The document you've been inserting for Parasite has an error. Although Parasite was released in 2020 it's actually a *2019* movie. Fortunately for us, MongoDB allows you to update documents in the collection. In fact, the ability to atomically update parts of a document without having to update a whole new document is a key feature of MongoDB!
Here's some code which will look up the document you've inserted and update the `year` field to 2019:
``` python
# Update the document with the correct year:
update_result = movies.update_one({ '_id': parasite_id }, {
'$set': {"year": 2019}
})
# Print out the updated record to make sure it's correct:
pprint(movies.find_one({'_id': bson.ObjectId(parasite_id)}))
```
As mentioned above, you've probably inserted *many* documents for this movie now, so it may be more appropriate to look them all up and change their `year` value in one go. The code for that looks like this:
``` python
# Update *all* the Parasite movie docs to the correct year:
update_result = movies.update_many({"title": "Parasite"}, {"$set": {"year": 2019}})
```
## Delete documents from the collection
Now it's time to clean up after yourself! The following code will delete all the matching documents from the collection - using the same broad query as before - all documents with a `title` of "Parasite":
``` python
movies.delete_many(
{"title": "Parasite",}
)
```
Once again, PyMongo has an equivalent `delete_one` method which will only delete the first matching document the database finds, instead of deleting *all* matching documents.
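For completeness, a `delete_one` call with the same filter would look like the sketch below; it removes at most one document and reports how many were actually deleted.
``` python
delete_result = movies.delete_one({"title": "Parasite"})
print(delete_result.deleted_count)  # 0 if the earlier delete_many already removed them all
```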
## Further reading
>Did you enjoy this quick start guide? Want to learn more? We have a great MongoDB University course I think you'll love!
>
>If that's not for you, we have lots of other courses covering all aspects of hosting and developing with MongoDB.
This quick start has only covered a small part of PyMongo and MongoDB's functionality, although I'll be covering more in later Python quick starts! Fortunately, in the meantime the documentation for MongoDB and using Python with MongoDB is really good. I recommend bookmarking the following for your reading pleasure:
- PyMongo Documentation provides thorough documentation describing how to use PyMongo with your MongoDB cluster, including comprehensive reference documentation on the `Collection` class that has been used extensively in this quick start.
- MongoDB Query Document documentation details the full power available for querying MongoDB collections. | md | {
"tags": [
"Python",
"MongoDB"
],
"pageDescription": "Learn how to perform CRUD operations using Python for MongoDB databases.",
"contentType": "Quickstart"
} | Basic MongoDB Operations in Python | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/code-examples/swift/realm-swiftui-ios-chat-app | created | # Building a Mobile Chat App Using Realm – Data Architecture
This article targets developers looking to build Realm into their mobile apps and (optionally) use MongoDB Atlas Device Sync. It focuses on the data architecture, both the schema and the
partitioning strategy. I use a chat app as an example, but you can apply
the same principals to any mobile app. This post will equip you with the
knowledge needed to design an efficient, performant, and robust data
architecture for your mobile app.
RChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. The initial version is an iOS (Swift and SwiftUI) app, but we will use the same data model and backend Atlas App Services application to build an Android version in the future.
RChat makes an interesting use case for several reasons:
- A chat message needs to be viewable by all members of a chat room
and no one else.
- New messages must be pushed to the chat room for all online members
in real-time.
- The app should notify a user that there are new messages even when
they don't have that chat room open.
- Users should be able to observe the "presence" of other users (e.g.,
whether they're currently logged into the app).
- There's no limit on how many messages users send in a chat room, and
so the data structures must allow them to grow indefinitely.
If you're looking to add a chat feature to your mobile app, you can
repurpose the code from this article and the associated repo. If not,
treat it as a case study that explains the reasoning behind the data
model and partitioning/syncing decisions taken. You'll likely need to
make similar design choices in your apps.
This is the first in a series of three articles on building this app:
- Building a Mobile Chat App Using Realm – Integrating Realm into Your App explains how to build the rest of the app. It was written before new SwiftUI features were added in Realm-Cocoa 10.6. You can skip this part unless you're unable to make use of those features (e.g., if you're using UIKit rather than SwiftUI).
- Building a Mobile Chat App Using Realm – The New and Easier Way details building the app using the latest SwiftUI features released with Realm-Cocoa 10.6
>
>
>This article was updated in July 2021 to replace `objc` and `dynamic`
>with the `@Persisted` annotation that was introduced in Realm-Cocoa
>10.10.0.
>
>
## Prerequisites
If you want to build and run the app for yourself, this is what you'll
need:
- iOS14.2+
- XCode 12.3+
## Front End App Features
A user can register and then log into the app. They provide an avatar
image and select options such as whether to share location information
in chat messages.
Users can create new chat rooms and include other registered users.
The list of chat rooms is automatically updated to show how many unread
messages are in that room. The members of the room are shown, together
with an indication of their current status.
A user can open a chat room to view the existing messages or send new
ones.
Chat messages can contain text, images, and location details.
>
>
>Watch this demo of the app in action.
>
>:youtube]{vid=BlV9El_MJqk}
>
>
## Running the App for Yourself
I like to see an app in action before I start delving into the code. If
you're the same, you can find the instructions in the [README.
## The Data
Figuring out how to store, access, sync, and share your data is key to
designing a functional, performant, secure, and scalable application.
Here are some things to consider:
- What data should a user be able to see? What should they be able to
change?
- What data needs to be available in the mobile app for the current
user?
- What data changes need to be communicated to which users?
- What pieces of data will be accessed at the same time?
- Are there instances where data should be duplicated for performance,
scalability, or security purposes?
This article describes how I chose to organize and access the data, as well as why I made those choices.
### Data Architecture
I store virtually all of the application's data both on the mobile device (in Realm) and in the backend (in MongoDB Atlas). MongoDB Atlas Device Sync is used to keep the multiple copies in sync.
The Realm schema is defined in code – I write the classes, and Realm handles the rest. I specify the backend (Atlas) schema through JSON schemas (though I cheated and used the developer mode to infer the schema from the Realm model).
I use Atlas Triggers to automatically create or modify data as a side effect of other actions, such as a new user registering with the app or adding a message to a chat room. Triggers simplify the front end application code and increase security by limiting what data needs to be accessible from the mobile app.
When the mobile app opens a Realm, it provides a list of the classes it should contain and a partition value. In combination, Realm uses that information to decide what data it should synchronize between the local Realm and the back end (and onto other instances of the app).
Atlas Device Sync currently requires that an application must use the same partition key (name and type) in all of its Realm Objects and Atlas documents.
A common use case would be to use a string named "username" as the partition key. The mobile app would then open a Realm by setting the partition to the current user's name, ensuring that all of that user's data is available (but no data for other users).
For RChat, I needed something a bit more flexible. For example, multiple users need to be able to view a chat message, while a user should only be able to update their own profile details. I chose a string partition key, where the string is always composed of a key-value pair — for example, `"user=874798352934983"` or `"conversation=768723786839"`.
I needed to add back end rules to prevent a rogue user from hacking the mobile app and syncing data that they don't own. Atlas Device Sync permissions are defined through two JSON rules – one for read connections, one for writes. For this app, the rules delegate the decision to Functions:
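As a rough illustration (the exact expression syntax can vary slightly between App Services versions), each rule can be a JSON expression that delegates the decision to a Function, passing in the partition the client asked to sync:

``` json
{
  "%%true": {
    "%function": {
      "name": "canReadPartition",
      "arguments": ["%%partition"]
    }
  }
}
```

The write rule has the same shape, swapping in `canWritePartition`.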
The functions split the partition key into its key and value components. They perform different checks depending on the key component:
``` javascript
const splitPartition = partition.split("=");
if (splitPartition.length == 2) {
    partitionKey = splitPartition[0];
partitionValue = splitPartition[1];
console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
} else {
console.log(`Couldn't extract the partition key/value from ${partition}`);
return;
}
switch (partitionKey) {
case "user":
// ...
case "conversation":
// ...
case "all-users":
// ...
default:
console.log(`Unexpected partition key: ${partitionKey}`);
return false;
}
```
The full logic for the partition checks can be found in the [canReadPartition and canWritePartition Functions. I'll cover how each of the cases are handled later.
### Data Model
There are three top-level Realm Objects, and I'll work through them in turn.
#### User Object
The User class represents an application user:
``` swift
class User: Object {
@Persisted var _id = UUID().uuidString
@Persisted var partition = "" // "user=_id"
@Persisted var userName = ""
@Persisted var userPreferences: UserPreferences?
@Persisted var lastSeenAt: Date?
    @Persisted var conversations = List<Conversation>()
@Persisted var presence = "Off-Line"
}
```
I declare that the `User` class represents a top-level Realm object by making it inherit from Realm's `Object` class.
The partition key is a string. I always set the partition to `"user=_id"` where `_id` is a unique identifier for the user's `User` object.
`User` includes some simple attributes such as strings for the user name and presence state.
User preferences are embedded within the User class:
``` swift
class UserPreferences: EmbeddedObject {
@Persisted var displayName: String?
@Persisted var avatarImage: Photo?
}
```
It's the inheritance from Realm's `EmbeddedObject` that tags this as a class that must always be embedded within a higher-level Realm object.
Note that only the top-level Realm Object class needs to include the partition field. Embedded objects are automatically included in their parent object's partition.
`UserPreferences` only contains two attributes, so I could have chosen to include them directly in the `User` class. I decided to add the extra level of hierarchy as I felt it made the code easier to understand, but it has no functional impact.
Breaking the avatar image into its own embedded class was a more critical design decision as I reuse the `Photo` class elsewhere. This is the Photo class:
``` swift
class Photo: EmbeddedObject, ObservableObject {
@Persisted var _id = UUID().uuidString
@Persisted var thumbNail: Data?
@Persisted var picture: Data?
@Persisted var date = Date()
}
```
The `User` class includes a Realm `List` of embedded Conversation objects:
``` swift
class Conversation: EmbeddedObject, ObservableObject, Identifiable {
@Persisted var id = UUID().uuidString
@Persisted var displayName = ""
@Persisted var unreadCount = 0
    @Persisted var members = List<Member>()
}
```
I've intentionally duplicated some data by embedding the conversation data into the `User` object. Every member of a conversation (chat room) will have a copy of the conversation's data. Only the `unreadCount` attribute is unique to each user.
##### What was the alternative?
I could have made `Conversation` a top-level Realm object and set the partition to a string of the format `"conversation=conversation-id"`. The User object would then have contained an array of conversation-ids. If a user were a member of 20 conversations, then the app would need to open 20 Realms (one for each of the partitions) to fetch all of the data it needed to display a list of the user's conversations. That would be a very inefficient approach.
##### What are the downsides to duplicating the conversation data?
Firstly, it uses more storage in the back end. The cost isn't too high as the `Conversation` only contains meta-data about the chat room and not the actual chat messages (and embedded photos). There are relatively few conversations compared to the number of chat messages.
The second drawback is that I need to keep the different versions of the conversation consistent. That does add some extra complexity, but I contain the logic within an Atlas
Trigger in the back end. This reasonably simple function ensures that all instances of the conversation data are updated when someone adds a new chat message:
``` javascript
exports = function(changeEvent) {
if (changeEvent.operationType != "insert") {
console.log(`ChatMessage ${changeEvent.operationType} event – currently ignored.`);
return;
}
console.log(`ChatMessage Insert event being processed`);
let userCollection = context.services.get("mongodb-atlas").db("RChat").collection("User");
let chatMessage = changeEvent.fullDocument;
let conversation = "";
if (chatMessage.partition) {
const splitPartition = chatMessage.partition.split("=");
if (splitPartition.length == 2) {
      conversation = splitPartition[1];
console.log(`Partition/conversation = ${conversation}`);
} else {
console.log("Couldn't extract the conversation from partition ${chatMessage.partition}");
return;
}
} else {
console.log("partition not set");
return;
}
const matchingUserQuery = {
conversations: {
$elemMatch: {
id: conversation
}
}
};
const updateOperator = {
$inc: {
"conversations.$[element].unreadCount": 1
}
};
const arrayFilter = {
arrayFilters:[
{
"element.id": conversation
}
]
};
userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)
.then ( result => {
console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);
}, error => {
console.log(`Failed to match and update User docs: ${error}`);
});
};
```
Note that the function increments the `unreadCount` for all conversation members. When those changes are synced to the mobile app for each of those users, the app will update its rendered list of conversations to alert the user about the unread messages.
`Conversations`, in turn, contain a List of Members:
``` swift
class Member: EmbeddedObject, Identifiable {
@Persisted var userName = ""
@Persisted var membershipStatus: String = "User added, but invite pending"
}
```
Again, there's some complexity to ensure that the `User` object for all conversation members contains the full list of members. Once more, a back end Atlas Trigger handles this.
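The full Trigger lives in the repo, but here's a simplified sketch of the idea: when a user's conversation list changes, make sure every member named in that conversation also has it in their own `User` document. Treat this as an illustration rather than the production logic (it ignores edge cases such as removed members):

``` javascript
// Simplified sketch of a database Trigger on the User collection.
exports = function(changeEvent) {
  const userCollection = context.services.get("mongodb-atlas").db("RChat").collection("User");
  const userDoc = changeEvent.fullDocument;
  if (!userDoc || !userDoc.conversations) {
    return;
  }

  // For each conversation this user belongs to, copy the conversation
  // element to any member who doesn't have it yet.
  return Promise.all(userDoc.conversations.map(conversation => {
    const memberNames = conversation.members.map(member => member.userName);
    return userCollection.updateMany(
      { userName: { $in: memberNames }, "conversations.id": { $ne: conversation.id } },
      { $push: { conversations: conversation } }
    );
  }));
};
```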
This is how the iOS app opens a User Realm:
``` swift
let realmConfig = user.configuration(partitionValue: "user=\(user.id)")
return Realm.asyncOpen(configuration: realmConfig)
```
For efficiency, I open the User Realm when the user logs in and don't close it until the user logs out.
The Realm sync rules to determine whether a user can open a synced read or read/write Realm of User objects are very simple. Sync is allowed only if the value component of the partition string matches the logged-in user's `id`:
``` javascript
case "user":
    console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionValue === user.id}`);
return partitionValue === user.id;
```
#### Chatster Object
Atlas Device Sync doesn't currently have a way to give one user permission to sync all elements of an object/document while restricting a different user to syncing just a subset of the attributes. The `User` object contains some attributes that should only be accessible by the user it represents (e.g., the list of conversations that they are members of). The impact is that we can't sync `User` objects to other users. But, there is also data in there that we would like to share (e.g., the
user's avatar image).
The way I worked within the current constraints is to duplicate some of the `User` data in the Chatster Object:
``` swift
class Chatster: Object {
@Persisted var _id = UUID().uuidString // This will match the _id of the associated User
@Persisted var partition = "all-users=all-the-users"
@Persisted var userName: String?
@Persisted var displayName: String?
@Persisted var avatarImage: Photo?
@Persisted var lastSeenAt: Date?
@Persisted var presence = "Off-Line"
}
```
I want all `Chatster` objects to be available to all users. For example, when creating a new conversation, the user can search for potential members based on their username. To make that happen, I set the partition to `"all-users=all-the-users"` for every instance.
A Trigger handles the complexity of maintaining consistency between the `User` and `Chatster` collections/objects. The iOS app doesn't need any additional logic.
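To give a feel for what that Trigger does, here's a simplified sketch that mirrors the shareable fields from a changed `User` document into the matching `Chatster` document. The production Function in the repo handles more cases; this is just an illustration:

``` javascript
// Sketch of a database Trigger on the User collection that keeps Chatster in step.
exports = function(changeEvent) {
  const chatsterCollection = context.services.get("mongodb-atlas").db("RChat").collection("Chatster");
  const user = changeEvent.fullDocument;
  if (!user) {
    return;
  }

  // Only the attributes that are safe to share with every user are copied.
  const chatster = {
    _id: user._id,  // Matches the _id of the associated User
    partition: "all-users=all-the-users",
    userName: user.userName,
    displayName: user.userPreferences ? user.userPreferences.displayName : null,
    avatarImage: user.userPreferences ? user.userPreferences.avatarImage : null,
    lastSeenAt: user.lastSeenAt,
    presence: user.presence
  };

  return chatsterCollection.replaceOne({ _id: user._id }, chatster, { upsert: true });
};
```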
An alternate solution would have been to implement and call Functions to fetch the required subset of `User` data and to search usernames. The functions approach would remove the data duplication, but it would add extra latency and wouldn't work when the device is offline.
This is how the iOS app opens a Chatster Realm:
``` swift
let realmConfig = user.configuration(partitionValue: "all-users=all-the-users")
return Realm.asyncOpen(configuration: realmConfig)
```
For efficiency, I open the `Chatster` Realm when the user logs in and don't close it until the user logs out.
The Sync rules to determine whether a user can open a synced read or read/write Realm of `Chatster` objects are even more straightforward.
It's always possible to open a synced `Chatster` Realm for reads:
``` javascript
case "all-users":
console.log(`Any user can read all-users partitions`);
return true;
```
It's never possible to open a synced `Chatster` Realm for writes (the Trigger is the only place that needs to make changes):
``` javascript
case "all-users":
console.log(`No user can write to an all-users partitions`);
return false;
```
#### ChatMessage Object
The third and final top-level Realm Object is ChatMessage:
``` swift
class ChatMessage: Object {
@Persisted var _id = UUID().uuidString
    @Persisted var partition = "" // "conversation=<conversation-id>"
@Persisted var author: String?
@Persisted var text = ""
@Persisted var image: Photo?
    @Persisted var location = List<Double>()
@Persisted var timestamp = Date()
}
```
The partition is set to `"conversation=<conversation-id>"`. This means that all messages in a single conversation are in the same partition.
An alternate approach would be to embed chat messages within the `Conversation` object. That approach has a severe drawback that Conversation objects/documents would indefinitely grow as users send new chat messages to the chat room. Recall that the `ChatMessage` includes photos, and so the size of the objects/documents could snowball, possibly exhausting MongoDB's 16MB limit. Unbounded document growth is a major MongoDB anti-pattern and should be avoided.
This is how the iOS app opens a `ChatMessage` Realm:
``` swift
let realmConfig = user.configuration(partitionValue: "conversation=\(conversation.id)")
Realm.asyncOpen(configuration: realmConfig)
```
There is a different partition for each group of `ChatMessages` that form a conversation, and so every opened conversation requires its own synced Realm. If the app kept many `ChatMessage` Realms open simultaneously, it could quickly hit device resource limits. To keep things efficient, I only open `ChatMessage` Realms when a chat room's view is opened, and then I close them (set to `nil`) when the conversation view is closed.
The Sync rules to determine whether a user can open a synced Realm of ChatMessage objects are a little more complicated than for `User` and `Chatster` objects. A user can only open a synced `ChatMessage` Realm if their conversation list contains the value component of the partition key:
``` javascript
case "conversation":
console.log(`Looking up User document for _id = ${user.id}`);
return userCollection.findOne({ _id: user.id })
.then (userDoc => {
if (userDoc.conversations) {
let foundMatch = false;
userDoc.conversations.forEach( conversation => {
        console.log(`Checking if conversation.id (${conversation.id}) === ${partitionValue}`)
if (conversation.id === partitionValue) {
console.log(`Found matching conversation element for id = ${partitionValue}`);
foundMatch = true;
}
});
if (foundMatch) {
console.log(`Found Match`);
return true;
} else {
console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
return false;
}
} else {
console.log(`No conversations attribute in User doc`);
return false;
}
}, error => {
console.log(`Unable to read User document: ${error}`);
return false;
});
```
## Summary
RChat demonstrates how to develop a mobile app with complex data requirements using Realm.
So far, we've only implemented RChat for iOS, but we'll add an Android version soon – which will use the same back end Atlas App Services application. The data architecture for the Android app will also be the same. By the magic of MongoDB Atlas Device Sync, Android users will be able to chat with iOS users.
If you're adding a chat capability to your iOS app, you'll be able to use much of the code from RChat. If you're adding chat to an Android app, you should use the data architecture described here. If your app has no chat component, you should still consider the design choices described in this article, as you'll likely face similar decisions.
## References
- GitHub Repo for this app
- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine
- GitHub Repo for Realm-Cocoa SDK
- Realm Cocoa SDK documentation
- MongoDB's Realm documentation
- WildAid O-FISH – an example of a **much** bigger app built on Realm and MongoDB Atlas Device Sync (FKA MongoDB Realm Sync)
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Swift",
"Realm",
"JavaScript",
"iOS",
"Mobile"
],
"pageDescription": "Building a Mobile Chat App Using Realm – Data Architecture.",
"contentType": "Code Example"
} | Building a Mobile Chat App Using Realm – Data Architecture | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/is-it-safe-covid | created | # Is it Safe to Go Outside? Data Investigation With MongoDB
This investigation started a few months ago. COVID-19 lockdown in Scotland was starting to ease, and it was possible (although discouraged) to travel to other cities in Scotland. I live in a small-ish town outside of Edinburgh, and it was tempting to travel into the city to experience something a bit more bustling than the semi-rural paths that have been the only thing I've really seen since March.
The question I needed to answer was: *Is it safe to go outside?* What was the difference in risk between walking around my neighbourhood, and travelling into the city to walk around there?
I knew that the Scottish NHS published data related to COVID-19 infections, but it proved slightly tricky to find.
Initially, I found an Excel spreadsheet containing infection rates in different parts of the country, but it was heavily formatted, and not really designed to be ingested into a database like MongoDB. Then I discovered the Scottish Health and Social Care Open Data platform, which hosted some APIs for accessing COVID-19 infection data, sliced and diced by different areas and other metrics. I've chosen the data that's provided by local authority, which is the kind of geographical area I'm interested in.
There's a *slight* complication with the way the data is provided: It's provided across two endpoints. The first endpoint, which I've called `daily`, provides historical infection data, *excluding the latest day's results.* To also obtain the most recent day's data, I need to get data from another endpoint, which I've called `latest`, which only provides a single day's data.
I'm going to walk you through the approach I took using Jupyter Notebook to explore the API's data format, load it into MongoDB, and then do some analysis in MongoDB Charts.
## Prerequisites
This blog post assumes that you have a working knowledge of Python. There's only one slightly tricky bit of Python code here, which I've tried to describe in detail, but it won't affect your understanding of the rest of the post if it's a bit beyond your Python level.
If you want to follow along, you should have a working install of Python 3.8 or later (the paging code later in this post uses an assignment expression, which was introduced in Python 3.8), with Jupyter Notebook installed. You'll also need a MongoDB Atlas account, along with a free MongoDB 4.4 Cluster. Everything in this tutorial works with a free MongoDB Atlas shared cluster.
>If you want to give MongoDB a try, there's no better way than to sign up for a \*free\* MongoDB Atlas account and to set up a free-tier cluster.
>
>The free tier won't let you store huge amounts of data or deal with large numbers of queries, but it's enough to build something reasonably small and to try out all the features that MongoDB Atlas has to offer, and it's not a trial, so there's no time limit.
## Setting Up the Environment
Before starting up Jupyter Notebook, I set an environment variable using the following command:
``` shell
export MDB_URI="mongodb+srv://username:password@clustername.abc123.mongodb.net/covid?retryWrites=true&w=majority"
```
That environment variable, `MDB_URI`, will allow me to load in the MongoDB connection details without keeping them insecurely in my Jupyter Notebook. If you're doing this yourself, you'll need to get the connection URL for your own cluster, from the Atlas web interface.
After this, I started up the Jupyter Notebook server (by running `jupyter notebook` from the command-line), and then I created a new notebook.
In the first cell, I have the following code, which uses a neat trick for installing third-party Python libraries into the current Python environment. In this case, it's installing the Python MongoDB driver, `pymongo`, and `urllib3`, which I use to make HTTP requests.
``` python
import sys
!{sys.executable} -m pip install pymongo[srv]==3.11.0 urllib3==1.25.10
```
The second cell consists of the following code, which imports the modules I'll be using in this notebook. Then, it sets up a couple of URLs for the API endpoints I'll be using to get COVID data. Finally, it sets up an HTTP connection pool manager `http`, connects to my MongoDB Atlas cluster, and creates a reference to the `covid` database I'll be loading data into.
``` python
from datetime import datetime
import json
import os
from pprint import pprint
from urllib.parse import urljoin
import pymongo
import urllib3
# Historical COVID stats endpoint:
daily_url = 'https://www.opendata.nhs.scot/api/3/action/datastore_search?resource_id=427f9a25-db22-4014-a3bc-893b68243055'
# Latest, one-day COVID stats endpoint:
latest_url = 'https://www.opendata.nhs.scot/api/3/action/datastore_search?resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef'
http = urllib3.PoolManager()
client = pymongo.MongoClient(os.environ["MDB_URI"])
db = client.get_database('covid')
```
## Exploring the API
The first thing I did was to request a sample page of data from each API endpoint, with code that looks a bit like the code below. I'm skipping a couple of steps where I had a look at the structure of the data being returned.
Look at the data that's coming back:
``` python
data = json.loads(http.request('GET', daily_url).data)
pprint(data['result']['records'])
```
The data being returned looked a bit like this:
### `daily_url`
``` python
{'CA': 'S12000005',
'CAName': 'Clackmannanshire',
'CrudeRateDeaths': 0,
'CrudeRateNegative': 25.2231276678308,
'CrudeRatePositive': 0,
'CumulativeDeaths': 0,
'CumulativeNegative': 13,
'CumulativePositive': 0,
'DailyDeaths': 0,
'DailyPositive': 0,
'Date': 20200228,
'PositivePercentage': 0,
'PositiveTests': 0,
'TotalPillar1': 6,
'TotalPillar2': 0,
'TotalTests': 6,
'_id': 1}
```
### `latest_url`
``` python
{'CA': 'S12000005',
'CAName': 'Clackmannanshire',
'CrudeRateDeaths': 73.7291424136593,
'CrudeRateNegative': 27155.6072953046,
'CrudeRatePositive': 1882.03337213815,
'Date': 20201216,
'NewDeaths': 1,
'NewPositive': 6,
'TotalCases': 970,
'TotalDeaths': 38,
'TotalNegative': 13996,
'_id': 1}
```
Note that there's a slight difference in the format of the data. The `daily_url` endpoint's `DailyPositive` field corresponds to the `latest_url`'s `NewPositive` field. This is also true of `DailyDeaths` vs `NewDeaths`.
Another thing to notice is that each region has a unique identifier, stored in the `CA` field. A combination of `CA` and `Date` should be unique in the collection, so I have one record for each region for each day.
## Uploading the Data
I set up the following indexes to ensure that the combination of `Date` and `CA` is unique, and I've added an index for `CAName` so that data for a region can be looked up efficiently:
``` python
db.daily.create_index([('Date', pymongo.ASCENDING), ('CA', pymongo.ASCENDING)], unique=True)
db.daily.create_index([('CAName', pymongo.ASCENDING)])
```
I'm going to write a short amount of code to loop through each record in each API endpoint and upload each record into my `daily` collection in the database. First, there's a method that takes a record (as a Python dict) and uploads it into MongoDB.
``` python
def upload_record(record):
del record['_id']
record['Date'] = datetime.strptime(str(record['Date']), "%Y%m%d")
if 'NewPositive' in record:
record['DailyPositive'] = record['NewPositive']
del record['NewPositive']
if 'NewDeaths' in record:
record['DailyDeaths'] = record['NewDeaths']
del record['NewDeaths']
db.daily.replace_one({'Date': record['Date'], 'CA': record['CA']}, record, upsert=True)
```
Because the provided `_id` value isn't unique across both API endpoints I'll be importing data from, the function removes it from the provided record dict. It then parses the `Date` field into a Python `datetime` object, so that it will be recognised as a MongoDB `Date` type. Then, it renames the `NewPositive` and `NewDeaths` fields to match the field names from the `daily` endpoint.
Finally, it inserts the data into MongoDB, using `replace_one`, so if you run the script multiple times, then the data in MongoDB will be updated to the latest results provided by the API. This is useful, because sometimes, data from the `daily` endpoint is retroactively updated to be more accurate.
It would be *great* if I could write a simple loop to upload all the records, like this:
``` python
for record in data['result']['records']:
upload_record(record)
```
Unfortunately, the endpoint is paged and only provides 100 records at a time. The paging data is stored in a field called `_links`, which looks like this:
``` python
pprint(data['result']['_links'])
{'next':
'/api/3/action/datastore_search?offset=100&resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef',
'start':
'/api/3/action/datastore_search?resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef'}
```
I wrote a "clever" [generator function, which takes a starting URL as a starting point, and then yields each record (so you can iterate over the individual records). Behind the scenes, it follows each `next` link until there are no records left to consume. Here's what that looks like, along with the code that loops through the results:
``` python
def paged_wrapper(starting_url):
url = starting_url
while url is not None:
print(url)
try:
response = http.request('GET', url)
data = response.data
page = json.loads(data)
except json.JSONDecodeError as jde:
print(f"""
Failed to decode invalid json at {url} (Status: {response.status})
{response.data}
""")
raise
        records = page['result']['records']
if records:
for record in records:
yield record
else:
return
if n := page['result']['_links'].get('next'):
url = urljoin(url, n)
else:
url = None
```
Next, I need to load all the records at the `latest_url` that holds the records for the most recent day. After that, I can load all the `daily_url` records that hold all the data since the NHS started to collect it, to ensure that any records that have been updated in the API are also reflected in the MongoDB collection.
Note that I could store the most recent update date for the `daily_url` data in MongoDB and check to see if it's changed before updating the records, but I'm trying to keep the code simple here, and it's not a very large dataset to update.
Using the paged wrapper and `upload_record` function together now looks like this:
``` python
# This gets the latest figures, released separately:
records = paged_wrapper(latest_url)
for record in records:
upload_record(record)
# This backfills, and updates with revised figures:
records = paged_wrapper(daily_url)
for record in records:
upload_record(record)
```
Woohoo! Now I have a Jupyter Notebook that will upload all this COVID data into MongoDB when it's executed.
Although these Notebooks are great for writing code with data you're not familiar with, it's a little bit unwieldy to load up Jupyter and execute the notebook each time I want to update the data in my database. If I wanted to run this with a scheduler like `cron` on Unix, I could select `File > Download as > Python`, which would provide me with a python script I could easily run from a scheduler, or just from the command-line.
After executing the notebook and waiting a while for all the data to come back, I then had a collection called `daily` containing all of the COVID data dating back to February 2020.
## Visualizing the Data with Charts
The rest of this blog post *could* have been a breakdown of using the MongoDB Aggregation Framework to query and analyse the data that I've loaded in. But I thought it might be more fun to *look* at the data, using MongoDB Charts.
To start building some charts, I opened a new browser tab, and went to MongoDB Charts. Before creating a new dashboard, I first added a new data source, by clicking on "Data Sources" on the left-hand side of the window. I selected my cluster, and then I ensured that my database and collection were selected.
Adding a new data source.
With the data source set up, it was time to create some charts from the data! I selected "Dashboards" on the left, and then clicked "Add dashboard" on the top-right. I clicked through to the new dashboard, and pressed the "Add chart" button.
The first thing I wanted to do was to plot the number of positive test results over time. I selected my `covid.daily` data source at the top-left, and that resulted in the fields in the `daily` collection being listed down the left-hand side. These fields can be dragged and dropped into various other parts of the MongoDB Charts interface to change the data visualization.
A line chart is a good visualization of time-series data, so I selected a `Line` Chart Type. Then I drag-and-dropped the `Date` field from the left-hand side to the X Axis box, and `DailyPositive` field to the Y Axis box.
This gave a really low-resolution chart. That's because the Date field is automatically selected with binning on, and set to `MONTH` binning. That means that all the `DailyPositive` values are aggregated together for each month, which isn't what I wanted to do. So, I deselected binning, and that gives me the chart below.
It's worth noting that the above chart was regenerated at the start of January, and so it shows a big spike towards the end of the chart. That's possibly due to relaxation of distancing rules over Christmas, combined with a faster-spreading mutation of the disease that has appeared in the UK.
Although the data is separated by area (or `CAName`) in the collection, the data in the chart is automatically combined into a single line, showing the total figures across Scotland. I wanted to keep this chart, but also have a similar chart showing the numbers separated by area.
I created a duplicate of this chart, by clicking "Save & Close" at the top-right. Then, in the dashboard, I click on the chart's "..." button and selected "Duplicate chart" from the menu. I picked one of the two identical charts and hit "Edit."
Back in the chart editing screen for the new chart, I drag-and-dropped `CAName` over to the `Series` box. This displays *nearly* the chart that I have in my head but reveals a problem...
Note that although this chart was generated in early January, the data displayed only goes to early August. This is because of the problem described in the warning message at the top of the chart. "This chart may be displaying incomplete data. The maximum query response size of 5,000 documents for Discrete type charts has been reached."
The solution to this problem is simple in theory: Reduce the number of documents being used to display the chart. In practice, it involves deciding on a compromise:
- I could reduce the number of documents by binning the data by date (as happened automatically at the beginning!).
- I could limit the date range used by the chart.
- I could filter out some areas that I'm not interested in.
I decided on the second option: to limit the date range. This *used* to require a custom query added to the "Query" text box at the top of the screen, but a recent update to charts allows you to filter by date, using point-and-click operations. So, I clicked on the "Filter" tab and then dragged the `Date` field from the left-hand column over to the "+ filter" box. I think it's probably useful to see the most recent figures, whenever they might be, so I left the panel with "Relative" selected, and chose to filter data from the past 90 days.
Filtering by recent dates has the benefit of scaling the Y axis to the most recent figures. But there are still a lot of lines there, so I added `CAName` to the "Filter" box by dragging it from the "Fields" column, and then checked the `CAName` values I was interested in. Finally, I hit `Save & Close` to go back to the dashboard.
Ideally, I'd have liked to normalize this data based on population, but I'm going to leave that out of this blog post, to keep this to a reasonable length.
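For the curious, the kind of pipeline I'd drop into the chart's "Query" box for that normalization might look like the sketch below. It assumes a hypothetical `populations` collection, keyed by the same `CA` council area code, which doesn't exist in this project:

``` javascript
[
  { $lookup: { from: "populations", localField: "CA", foreignField: "CA", as: "population" } },
  { $unwind: "$population" },
  { $addFields: {
      PositivePer100k: { $multiply: [{ $divide: ["$DailyPositive", "$population.total"] }, 100000] }
  } }
]
```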
## Maps and MongoDB
Next, I wanted to show how quick it can be to visualize geographical data in MongoDB Charts. I clicked on "Add chart" and selected `covid.daily` as my data source again, but this time, I selected "Geospatial" as my "Chart Type." Then I dragged the `CAName` field to the "Location" box, and `DailyPositive` to the "Color" box.
Whoops! It didn't recognize the shapes! What does that mean? The answer is in the "Customize" tab, under "Shape Scheme," which is currently set to "Countries and Regions." Change this value to "UK Counties And Districts." You should immediately see a chart like this:
Weirdly, there are unshaded areas over part of the country. It turns out that these correspond to "Dumfries & Galloway" and "Argyll & Bute." These values are stored with the ampersand (&) in the `daily` collection, but the chart shapes are only recognized if they contain the full word "and." Fortunately, I could fix this with a short aggregation pipeline in the "Query" box at the top of the window.
>**Note**: The $replaceOne operator is only available in MongoDB 4.4! If you've set up an Atlas cluster with an older release of MongoDB, then this step won't work.
``` javascript
[{ $addFields: { CAName: { $replaceOne: { input: "$CAName", find: " & ", replacement: " and "} } }}]
```
This aggregation pipeline consists of a single $addFields operation which replaces " & " in the `CAName` field with "and." This corrects the map so it looks like this:
I'm going to go away and import some population data into my collection, so that I can see what the *concentration* of infections are, and get a better idea of how safe my area is, but that's the end of this tutorial!
I hope you enjoyed this rambling introduction to data massage and import with Jupyter Notebook, and the run-through of a collection of MongoDB Charts features. I find this workflow works well for me as I explore different datasets. I'm always especially amazed at how powerful MongoDB Charts can be, especially with a little aggregation pipeline magic.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. | md | {
"tags": [
"Python",
"Atlas"
],
"pageDescription": "In this post, I'll show how to load some data from API endpoints into MongoDB and then visualize the data in MongoDB Charts.",
"contentType": "Tutorial"
} | Is it Safe to Go Outside? Data Investigation With MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/key-developer-takeaways-hacktoberfest-2020 | created | # 5 Key Takeaways from Hacktoberfest 2020
Hacktoberfest 2020 is over, and it was a resounding success. We had over
100 contributions from more than 50 different contributors to the O-FISH
project. Learn about why the app is crucial in the NBC story about
O-FISH.
Before we get to the Lessons Learned, let's look at what was
accomplished during Hacktoberfest, and who we have to thank for all the
good work.
## Wrap-Up Video
If you were a part of Hacktoberfest for the O-FISH
project, make sure you watch the
wrap-up
video
below. It's about 10 minutes.
:youtube[]{vid=hzzvEy5tA5I}
The point of Hacktoberfest is to be a CELEBRATION of open source, not
just "to make and get contributions." All pull requests, no matter how
big or small, had a great impact. If you participated in Hacktoberfest,
do not forget to claim your MongoDB Community
forum badges! You can
still get an O-FISH
badge
at any time, by contributing to the O-FISH
project. Here's what the badges
look like:
Just go to the community forums post Open Source Contributors, Pick Up
Your Badges
Here!
There were lots of bug fixes, as well as feature additions both small
and big—like dark mode for our mobile applications!
**Contributions by week per repository**
| Merged/Closed | o-fish-android | o-fish-ios | o-fish-realm | o-fish-web | wildaid.github.io | Total |
|---------------|--------------------------------------------------------------|------------------------------------------------------|----------------------------------------------------------|------------------------------------------------------|--------------------------------------------------------------------|---------|
| 01 - 04 Oct | 6 | 6 | 0 | 7 | 14 | 33 |
| 05 - 11 Oct | 9 | 5 | 0 | 10 | 3 | 27 |
| 12 - 18 Oct | 15 | 6 | 1 | 11 | 2 | 35 |
| 19 - 25 Oct | 4 | 1 | 1 | 4 | 1 | 11 |
| 26 - 31 Oct | 2 | 4 | 2 | 3 | 0 | 11 |
| **Total** | **36** | **22** | **4** | **35** | **20** | **117** |
## Celebrating the Contributors
Here are the contributors who made Hacktoberfest so amazing for us! This
would not have been possible without all these folks.
**Hacktoberfest 2020 Contributors**
| aayush287 | aayushi2883 | abdulbasit75 | antwonthegreat |
|:---:|:---:|:---:|:---:|
| **ardlank** | **ashwinpilgaonkar** | **Augs0** | **ayushjainrksh** |
| **bladebunny** | **cfsnsalazar** | **coltonlemmon** | **CR96** |
| **crowtech7** | **czuria1** | **deveshchatuphale7** | **Dusch4593** |
| **ericblancas23** | **evayde** | **evnik** | **fandok** |
| **GabbyJ** | **gabrielhicks** | **haqiqiw** | **ippschi** |
| **ismaeldcom** | **jessicasalbert** | **jkreller** | **jokopriyono** |
| **joquendo** | **k-charette** | **kandarppatel28** | **lenmorld** |
| **ljhaywar** | **mdegis** | **mfhan** | **newprtst** |
| **nugmanoff** | **pankova** | **rh9891** | **RitikPandey1** |
| **Roshanpaswan** | **RuchaYagnik** | **rupalkachhwaha** | **saribricka** |
| **seemagawaradi** | **SEGH** | **sourabhbagrecha** | |
| **stennie** | **subbramanil** | **sunny52525** | |
| **thearavind** | **wlcreate** | **yoobi** | |
Hacktoberfest is not about closing issues and merging PRs. It's about
celebrating community, coming together and learning from each other. I
learned a lot about specific coding conventions, and I felt like we
really bonded together as a community that cares about the O-FISH
application.
I also learned that some things we thought were code turned out to be
permissions. That means that some folks did research only to find out
that the issue required an instance of their own to debug. And, we fixed
a lot of bugs we didn't even know existed by fixing permissions.
## Lessons Learned
So, what did we learn from Hacktoberfest? These key takeaways are for
project maintainers and developers alike.
### Realize That Project Maintainers are People Managers
Being a project maintainer means being a people manager. Behind every
pull request (PR) is a person. Unlike in a workplace where I communicate
with others all the time, there can be very few communications with
contributors. And those communications are public. So, I was careful to
consider the recipient of my feedback. There's a world of difference
between, "This doesn't work," and "I tested this and here's a screenshot
of what I see—I don't see X, which the PR was supposed to fix. Can you
help me out?"
>
>
>Tip 1: With fewer interactions and established relationships, each word
>holds more weight. Project maintainers - make sure your feedback is
>constructive, and your tone is appreciative, helpful and welcoming.
>Developers - it's absolutely OK to communicate more - ask questions in
>the Issues, go to any office hours, even comment on your own PR to
>explain the choices you made or as a question like "I did this with
>inline CSS, should I move it to a separate file?"
>
>
People likely will not code or organize the way I would expect.
Sometimes that's a drawback - if the PR has code that introduces a
memory leak, for example. But often a different way of working is a good
thing, and leads to discussion.
For example, we had two issues that were similar, and assigned to two
different people. One person misunderstood their issue, and submitted
code that fixed the first issue. The other person submitted code that
fixed their issue, but used a different method. I had them talk it out
with each other in the comments, and we came to a mutual agreement on
how to do it. Which is also awesome, because I learned too - this
particular issue was about using onClick and Link in
node.js,
and I didn't know why one was used over the other before this came up.
>
>
>Tip 2: Project maintainers - Frame things as a problem, not a specific
>solution. You'd be surprised what contributors come up with.
>Developers - read the issue thoroughly to make sure you understand
>what's being asked. If you have a different idea feel free to bring it
>up in the issue.
>
>
Framing issues as a problem, not a specific solution, is something I do
all the time as a product person. I would say it is one of the most
important changes that a developer who has been 'promoted' to project
maintainer (or team manager!) should internalize.
### Lower the Barrier to Entry
O-FISH has a great backend infrastructure that anyone can build for
free. However, it takes time to build
and it is unrealistic to expect someone doing 30 minutes of work to fix
a bug will spend 2 hours setting up an infrastructure.
So, we set up a sandbox instance where people can fill out a
form
and automatically get a login to the sandbox server.
There are limitations on our sandbox, and some issues need your own
instance to properly diagnose and fix. The sandbox is not a perfect
solution, but it was a great way to lower the barrier for the folks who
wanted to tackle smaller issues.
>
>
>Tip 3: Project maintainers - Make it easy for developers to contribute
>in meaningful ways. Developers - for hacktoberfest, if you've done work
>but it did not result in a PR, ask if you can make a PR that will be
>closed and marked as 'accepted' so you get the credit you deserve.
>
>
### Cut Back On Development To Make Time For Administration
There's a lot of work to do, that is not coding work. Issues should be
well-described and defined as small amounts of work, with good titles.
Even though I did this in September, I missed a few important items. For
example, we had an issue titled "Localization Management System" which
sounded really daunting and nobody picked it up. During office hours, I
explained to someone wanting to do work that it was really 2 small shell
scripts. They took on the work and did a great job! But if I had not
explained it during office hours, nobody would have taken it because the
title sounds like a huge project.
Office hours were a great idea, and it was awesome that developers
showed up to ask questions. That really helped with something I touched
on earlier - being able to build relationships.
>
>
>Tip 4: Project Maintainers - Make regular time to meet with contributors
>in real-time - over video or real-time chat. Developers - Take any and
>every opportunity you can to talk to other developers and the project
>maintainer(s).
>
>
We hosted office hours for one hour, twice a week, on Tuesdays and
Thursdays, at different times to accommodate different time zones. Our
lead developer attended a few office hours as well.
### Open The Gates
When I get a pull request, I want to accept it. It's heartbreaking to
not approve something. While I am technically the gatekeeper for the
code that gets accepted to the project, knowing what to let go of and
what to be firm on is very important.
In addition to accepting code done differently than I would have done
it, I also accepted code that was not quite perfect. Sometimes I
accepted that it was good enough, and other times I suggested a quick
change that would fix it.
This is not homework and it is OK to give hints. If someone queried
using the wrong function, I'll explain what they did, and what issues
that might cause, and then say "use this other function - here's how it
works, it takes in X and Y and returns A and B." And I'll link to that
function in the code. It's more work on my part, but I'm more familiar
with the codebase and the contributor can see that I'm on their team -
I'm not just rejecting their PR and saying "use this other function",
I'm trying to help them out.
As a product manager, ultimately I hope I'm enabling contributors to
learn more than just the code. I hope folks learn the "why", and that
decisions are not necessarily made easily. There are reasons. Doing that
kind of mentorship is a very different kind of work, and it can be
draining - but it is critical to a project's success.
I was very liberal with the hacktoberfest-accepted label. Sometimes
someone provided a fix that just didn't work due to the app's own
quirkiness. They spent time on it, we discussed the issue, they
understood. So I closed the PR and added the accepted label, because
they did good work and deserved the credit. In other cases, someone
would ask questions about an issue, and explain to me why it was not
possible to fix, and I'd ask them to submit a PR anyway, and I would
give them the credit. Not all valuable contributions are in the form of
a PR, but you can have them make a PR to give them credit.
>
>
>Tip 5: Project maintainers: Give developers as much credit as you can.
>Thank them, and connect with them on social media. Developers: Know that
>all forms of work are valuable, even if there's no tangible outcome. For
>example, being able to rule out an option is extremely valuable.
>
>
### Give People Freedom and They Will Amaze You
The PRs that most surprised me were ones that made me file additional
tickets—like folks who pointed out accessibility issues and fixed a few.
Then, I went back and made tickets for all the rest.
### tl;ra (Too Long; Read Anyway)
All in all, Hacktoberfest 2020 was successful—for getting code written
and bugs fixed, but also for building a community. Thanks to all who
participated!
>
>
>**It's Not Too Late to Get Involved!**
>
>O-FISH is open source and still accepting contributions. If you want to
>work on O-FISH, just follow the contribution
>guidelines. To contact me, message me from my forum page - you need to have the easy-to-achieve Sprout level for messaging.
>
>If you have any questions or feedback, hop on over to the MongoDB
>Community Forums. We
>love connecting!
>
>
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Participating in Hacktoberfest taught us what works and what does not work to build a happy community of contributors for an open source project.",
"contentType": "Article"
} | 5 Key Takeaways from Hacktoberfest 2020 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/add-comments-section-eleventy-website-mongodb-netlify | created | # Add a Comments Section to an Eleventy Website with MongoDB and Netlify
I'm a huge fan of static generated websites! From a personal level, I have The Polyglot Developer, Poké Trainer Nic, and The Tracy Developer Meetup, all three of which are static generated websites built with either Hugo or Eleventy. In addition to being static generated, all three are hosted on Netlify.
I didn't start with a static generator though. I started on WordPress, so when I made the switch to static HTML, I got a lot of benefits, but I ended up with one big loss. The comments of my site, which were once stored in a database and loaded on-demand, didn't have a home.
Fast forward to now, we have options!
In this tutorial, we're going to look at maintaining a static generated website on Netlify with Eleventy, but the big thing here is that we're going to see how to have comments for each of our blog pages.
To get an idea of what we want to accomplish, let's look at the following scenario. You have a blog with X number of articles and Y number of comments for each article. You want the reader to be able to leave comments which will be stored in your database and you want those comments to be loaded from your database. The catch is that your website is static and you want performance.
A few things are going to happen:
- When the website is generated, all comments are pulled from our database and rendered directly in the HTML.
- When someone loads a page on your website, all rendered comments will show, but we also want all comments that were created after the generation to show. We'll do that with timestamps and HTTP requests.
- When someone creates a comment, we want that comment to be stored in our database, something that can be done with an HTTP request.
It may seem like a lot to take in, but the code involved is actually quite slick and reasonable to digest.
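To make the first bullet in the list above more concrete, here's a rough sketch of what a build-time data file like `src/_data/comments.js` (which we'll create in a moment) could look like if we pulled the comments straight from MongoDB with the Node.js driver at build time. The `MONGODB_URI` environment variable, database name, and collection name are placeholders for this illustration, not the final implementation:

```javascript
// src/_data/comments.js: a build-time sketch with assumed names.
const { MongoClient } = require("mongodb");

module.exports = async function () {
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    await client.connect();
    // Eleventy runs this once per build, so every generated page is
    // rendered with the comments that exist at build time.
    return await client
      .db("eleventy")
      .collection("comments")
      .find({})
      .sort({ timestamp: 1 })
      .toArray();
  } finally {
    await client.close();
  }
};
```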
## The Requirements
There are a few moving pieces in this tutorial, so we're going to assume you've taken care of a few things first. You'll need the following:
- A properly configured MongoDB Atlas cluster, **free** tier or better.
- A Netlify account connected to your GitHub, GitLab, or Bitbucket account.
- Node.js 16+.
- The Realm CLI.
We're going to be using MongoDB Atlas to store the comments. You'll need a cluster deployed and configured with proper user and network rules. If you need help with this, check out my previous tutorial on the subject.
We're going to be serving our static site on Netlify and using their build process. This build process will take care of deploying either Realm Functions (part of MongoDB Atlas) or Netlify Functions.
Node.js is a requirement because we'll be using it for Eleventy and the creation of our serverless functions.
## Build a static generated website or blog with Eleventy
Before we get into the comments side of things, we should probably get a foundation in place for our static website. We're not going to explore the ins and outs of Eleventy. We're just going to do enough so we can make sense of what comes next.
Execute the following commands from your command line:
```bash
mkdir netlify-eleventy-comments
cd netlify-eleventy-comments
```
The above commands will create a new and empty directory and then navigate into it.
Next we're going to initialize the project directory for Node.js development and install our project dependencies:
```bash
npm init -y
npm install @11ty/eleventy @11ty/eleventy-cache-assets axios cross-var mongodb-realm-cli --save-dev
```
Alright, we have quite a few dependencies beyond just the base Eleventy in the above commands. Just roll with it for now because we're going to get into it more later.
Open the project's **package.json** file and add the following to the `scripts` section:
```json
"scripts": {
"clean": "rimraf public",
"serve": "npm run clean; eleventy --serve",
"build": "npm run clean; eleventy --input src --output public"
},
```
The above script commands will make it easier for us to serve our Eleventy website locally or build it when it comes to Netlify.
Now we can start the actual development of our Eleventy website. We aren't going to focus on CSS in this tutorial, so our final result will look quite plain. However, the functionality will be solid!
Execute the following commands from the command line:
```bash
mkdir -p src/_data
mkdir -p src/_includes/layouts
mkdir -p src/blog
touch src/_data/comments.js
touch src/_data/config.js
touch src/_includes/layouts/base.njk
touch src/blog/article1.md
touch src/blog/article2.md
touch src/index.html
touch .eleventy.js
```
We made quite a few directories and empty files with the above commands. However, that's going to be pretty much the full scope of our Eleventy website.
Multiple files in our example will have a dependency on the **src/_includes/layouts/base.njk** file, so we're going to work on that file first. Open it and include the following code:
```html
{{ content | safe }}
COMMENTS
Create Comment
```
Alright, so the above file is, like, 90% complete. I left some pieces out and replaced them with comments because we're not ready for them yet.
This file represents the base template for our entire site. All other pages will get rendered in this area:
```
{{ content | safe }}
```
That means that every page will have a comments section at the bottom of it.
We need to break down a few things, particularly the ` | md | {
"tags": [
"JavaScript",
"Atlas",
"Netlify"
],
"pageDescription": "Learn how to add a comments section to your static website powered with MongoDB and either Realm Functions or Netlify Functions.",
"contentType": "Tutorial"
} | Add a Comments Section to an Eleventy Website with MongoDB and Netlify | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/7-things-learned-while-modeling-data-youtube-stats | created | # 7 Things I Learned While Modeling Data for YouTube Stats
Mark Smith, Maxime Beugnet, and I recently embarked on a project to automatically retrieve daily stats about videos on the MongoDB YouTube channel. Our management team had been painfully pulling these stats every month in a complicated spreadsheet. In an effort to win brownie points with our management team and get in a little programming time, we worked together as a team of three over two weeks to rapidly develop an app that pulls daily stats from the YouTube API, stores them in a MongoDB Atlas database, and displays them in a MongoDB Charts dashboard.
Screenshot of the MongoDB Charts dashboard that contains charts about the videos our team has posted on YouTube
Mark, Max, and I each owned a piece of the project. Mark handled the OAuth authentication, Max created the charts in the dashboard, and I was responsible for figuring out how to retrieve and store the YouTube stats.
In this post, I'll share seven things I learned while modeling the data for this app. But, before I jump into what I learned, I'll share a bit of context about how I modeled the data.
## Table of Contents
- Related Videos
- Our Data Model
- What I Learned
- 1. Duplicating data is scary—even for those of us who have been coaching others to do so
- 2. Use the Bucket Pattern only when you will benefit from the buckets
- 3. Use a date field to label date-based buckets
- 4. Cleaning data you receive from APIs will make working with the data easier
- 5. Optimizing for your use case is really hard when you don't fully know what your use case will be
- 6. There is no "right way" to model your data
- 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements
- Summary
## Related Videos
If you prefer to watch a video instead of read text, look no further.
To learn more about what we built and why we built it the way we did, check out the recording of the Twitch stream below where Mark, Max, and I shared about our app.
:youtube[]{vid=iftOOhVyskA}
If you'd like the video version of this article, check out the live stream Mark, Max, and I hosted. We received some fantastic questions from the audience, so you'll discover some interesting nuggets in the recording.
If you'd prefer a more concise video that only covers the contents of this article, check out the recording below.
## Our Data Model
Our project had a tight two-week deadline, so we made quick decisions in our effort to rapidly develop a minimum viable product. When we began, we didn't even know how we wanted to display the data, which made modeling the data even more challenging.
I ended up creating two collections:
- `youtube_videos`: stores metadata about each of the videos on the MongoDB YouTube channel.
- `youtube_stats`: stores daily YouTube stats (bucketed by month) about every video in the `youtube_videos` collection.
Every day, a scheduled trigger calls a Realm serverless function that is responsible for calling the YouTube PlaylistItems
API. This API returns metadata about all of the videos on the MongoDB YouTube channel. The metadata is stored in the `youtube_videos` collection. Below is a document from the `youtube_videos` collection (some of the information is redacted):
``` json
{
"_id":"8CZs-0it9r4",
"kind": "youtube#playlistItem",
"isDA": true,
...
"snippet": {
"publishedAt": 2020-09-30T15:05:30.000+00:00,
"channelId": "UCK_m2976Yvbx-TyDLw7n1WA",
"title": "Schema Design Anti-Patterns - Part 1",
"description": "When modeling your data in MongoDB...",
"thumbnails": {
...
},
"channelTitle": "MongoDB",
...
}
}
```
Every day, another trigger calls a Realm serverless function that is responsible for calling the YouTube Reports API. The stats that this API returns are stored in the `youtube_stats`
collection. Below is a document from the collection (some of the stats are removed to keep the document short):
``` json
{
"_id": "8CZs-0it9r4_2020_12",
"month": 12,
"year": 2020,
"videoId": "8CZs-0it9r4",
"stats":
{
"date": 2020-12-01T00:00:00.000+00:00,
"views": 21,
"likes": 1
...
},
{
"date": 2020-12-02T00:00:00.000+00:00,
"views": 29,
"likes": 1
...
},
...
{
"date": 2020-12-31T00:00:00.000+00:00,
"views": 17,
"likes": 0
...
},
]
}
```
To be clear, I'm not saying this was the best way to model our data; this is the data model we ended up with after two weeks of rapid development. I'll discuss some of the pros and cons of our data model throughout the rest of this post.
If you'd like to take a peek at our code and learn more about our app, check out the project repository.
## What I Learned
Without further ado, let's jump into the seven things I learned while rapidly modeling YouTube data.
### 1. Duplicating data is scary—even for those of us who have been coaching others to do so
One of the rules of thumb when modeling data for MongoDB is *data that is accessed together should be stored together*. We teach developers that duplicating data is OK, especially if you won't be updating it often.
Duplicating data can feel scary at first
When I began figuring out how I was going to use the YouTube API and what data I could retrieve, I realized I would need to make two API calls: one to retrieve a list of videos with all of their metadata and another to retrieve the stats for those videos. For ease of development, I decided to store the information from those two API calls in separate collections.
I wasn't sure what data was going to need to be displayed alongside the stats (put another way, I wasn't sure what data was going to be accessed together), so I duplicated none of the data. I knew that if I were to duplicate the data, I would need to maintain the consistency of that duplicate data. And, to be completely honest, maintaining duplicate data was a little scary based on the time crunch we were under, and the lack of software development process we were following.
In the current data model, I can easily gather stats about likes, dislikes, views, etc., for a given video ID, but I will have to use $lookup to join the data with the `youtube_videos` collection in order to tell you anything more. Even something that seems relatively simple like listing the video's name alongside the stats requires the use of `$lookup`. The `$lookup` operation required to join the data in the two collections isn't that complicated, but best practices suggest limiting `$lookup` as these operations can negatively impact performance.
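For illustration, here is a minimal sketch (run in mongosh, with field names taken from the documents above) of the kind of `$lookup` that joining the two collections requires:

``` javascript
// Sketch: join each stats bucket with its video metadata via $lookup.
db.youtube_stats.aggregate([
  {
    $lookup: {
      from: "youtube_videos",
      localField: "videoId",   // "8CZs-0it9r4" in the stats document
      foreignField: "_id",     // "8CZs-0it9r4" in the videos document
      as: "video"
    }
  },
  { $unwind: "$video" },
  {
    $project: {
      year: 1,
      month: 1,
      stats: 1,
      title: "$video.snippet.title"
    }
  }
])
```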
While we were developing our minimum viable product, I weighed the ease of development by avoiding data duplication against the potential performance impact of splitting our data. Ease of development won.
Now that I know I need information like the video's name and publication date with the stats, I can implement the Extended Reference Pattern. I can duplicate some of the information from the `youtube_videos` collection in the `youtube_stats` collection. Then, I can create an Atlas trigger that will watch for changes in the `youtube_videos` collection and automatically push those changes to the `youtube_stats` collection. (Note that if I was using a self-hosted database instead of an Atlas-hosted database, I could use a change stream instead of an Atlas trigger to ensure the data remained consistent.)
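As a sketch of that idea (not the code we shipped, and the database name here is an assumption), the trigger function could look something like this:

``` javascript
// Hypothetical Atlas trigger function: fires on changes to youtube_videos
// and copies the duplicated fields into the matching youtube_stats documents.
exports = function (changeEvent) {
  const video = changeEvent.fullDocument;

  // "mongodb-atlas" is the default name of the linked cluster; the database
  // name "youtube" is an assumption for this sketch.
  const statsCollection = context.services
    .get("mongodb-atlas")
    .db("youtube")
    .collection("youtube_stats");

  // videoId in youtube_stats matches _id in youtube_videos (see the documents above)
  return statsCollection.updateMany(
    { videoId: video._id },
    {
      $set: {
        title: video.snippet.title,
        publishedAt: video.snippet.publishedAt
      }
    }
  );
};
```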
Duplicating data isn't as scary when (1) you are confident which data needs to be duplicated and (2) you use Atlas triggers or change streams to make sure the data remains consistent.
### 2. Use the Bucket Pattern only when you will benefit from the buckets
I love schema design patterns (check out this blog series or this free MongoDB University course to learn more) and schema design anti-patterns (check out this blog series or this YouTube video series to learn more).
When I was deciding how to store the daily YouTube stats, I realized I had time-series data. I knew the Bucket Pattern was useful for time-series data, so I decided to implement that pattern. I decided to create a bucket of stats for a certain timeframe and store all of the stats for that timeframe for a single video in a document.
I wasn't sure how big my buckets should be. I knew I didn't want to fall into the trap of the Massive Arrays Anti-Pattern, so I didn't want my buckets to be too large. In the spirit of moving quickly, I decided a month was a good bucket size and figured I could adjust as needed.
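To make the bucketing concrete, here is a hedged sketch of the daily upsert that builds a monthly bucket shaped like the `youtube_stats` document shown earlier (the function and parameter names are illustrative):

``` javascript
// Illustrative sketch: append one day's stats to the video's monthly bucket.
async function recordDailyStats(db, videoId, date, dailyStats) {
  const year = date.getUTCFullYear();
  const month = date.getUTCMonth() + 1;

  await db.collection("youtube_stats").updateOne(
    { _id: `${videoId}_${year}_${month}` },     // e.g. "8CZs-0it9r4_2020_12"
    {
      $setOnInsert: { videoId, year, month },   // written once when the bucket is created
      $push: { stats: { date, ...dailyStats } } // one entry per day
    },
    { upsert: true }
  );
}
```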
How big should your bucket be? Big enough to startle your mom.
The buckets turned out to be really handy during development as I could easily see all of the stats for a video for a given month to ensure they were being pulled correctly.
However, the buckets didn't end up helping my teammates and me much in our app. We didn't have so much data that we were worried about reducing our index sizes. We didn't implement the Computed Pattern to pre-compute monthly stats. And we didn't run queries that benefited from having the data grouped by month.
Looking back, creating a document for every video every day would have been fine. We didn't benefit from any of the advantages of the Bucket Pattern. If our requirements were to change, we certainly could benefit from the Bucket Pattern. However, in this case, I added the complexity of grouping the stats into buckets but didn't get the benefits, so it wasn't really worth it.
### 3. Use a date field to label date-based buckets
As I described in the previous section, I decided to bucket my YouTube video stats by month. I needed a way to indicate the date range for each bucket, so each document contains a field named `year` and a field named `month`. Both fields store values of type `long`. For example, a document for the month of January 2021 would have `"year": 2021` and `"month": 1`.
No, I wasn't storing date information as a date. But perhaps I should have.
My thinking was that we might want to compare months from multiple years (for example, we could compare stats in January for 2019, 2020, and 2021), and this data model would allow us to do that.
Another option would have been to use a single field of type `date` to
indicate the date range. For example, for the month of January, I could
have set `"date": new Date("2021-01")`. This would allow me to perform
date-based calculations in my queries.
As with all data modeling considerations in MongoDB, the best option comes down to your use case and how you will query the data. Use a field of type `date` for date-based buckets if you want to query using dates.
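For example, with a single `date` field on each bucket, pulling every monthly bucket in a calendar year becomes a plain range query. A quick sketch:

``` javascript
// Sketch: range query over date-labeled buckets (assumes each bucket has a "date" field).
db.youtube_stats.find({
  date: {
    $gte: new Date("2021-01-01"),
    $lt: new Date("2022-01-01")
  }
})
```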
### 4. Cleaning data you receive from APIs will make working with the data easier
As I mentioned toward the beginning of this post, I was responsible for retrieving and storing the YouTube data. My teammate Max was responsible for creating the charts to visualize the data.
I didn't pay too much attention to how the data I was getting from the API was formatted—I just dumped it into the database. (Have I mentioned that we were working as fast as we could?)
As long as the data is being dumped into the database, who cares what format it's in?
As Max began building the charts, he raised a few concerns about the way the data was formatted. The date the video was published was being stored as a `string` instead of a `date`. Also, the month and year were being stored as `string` instead of `long`.
Max was able to do type conversions in MongoDB Charts, but ultimately, we wanted to store the data in a way that would be easy to use whether we were visualizing the data in Charts or querying the data using the MongoDB Query Language
(MQL).
The fixes were simple. After retrieving the data from the API, I converted the data to the ideal type before sending it to the database. Take a look at line 37 of my function if you'd like to see an example of how I did this.
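As a rough sketch of that kind of clean-up (the field names mirror the documents above, but this is not the exact code from the linked function):

``` javascript
// Sketch: convert API strings into proper types before writing to the database.
function cleanVideoMetadata(apiItem) {
  return {
    ...apiItem,
    snippet: {
      ...apiItem.snippet,
      // The API returns publishedAt as an ISO string; store it as a real Date
      publishedAt: new Date(apiItem.snippet.publishedAt)
    }
  };
}

function cleanStatsKeys(rawYear, rawMonth) {
  // year and month arrive as strings; store them as numbers
  return { year: parseInt(rawYear, 10), month: parseInt(rawMonth, 10) };
}
```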
If you're pulling data from an API, consider if it's worth remodeling or reformatting the data before storing it. It's a small thing that could make your and your teammates' jobs much easier in the future.
### 5. Optimizing for your use case is really hard when you don't fully know what your use case will be
OK, yes, this is kind of obvious.
Allow me to elaborate.
As we began working on our application, we knew that we wanted to visually display YouTube stats on a dashboard. But we didn't know what stats we would be able to pull from the API or how we would want to visualize the data. Our approach was to put the data in the database and then figure it out.
As I modeled our data, I didn't know what our final use case would be—I didn't know how the data would be accessed. So, instead of following the rule of thumb that data that is accessed together should be stored together, I modeled the data in the way that was easiest for me to work with while retrieving and storing the data.
One of the nice things about using MongoDB is that you have a lot of flexibility in your schema, so you can make changes as requirements develop and change. (The Schema Versioning Pattern provides a pattern for how to do this successfully.)
As Max was showing off how he created our charts, I learned that he created an aggregation pipeline inside of Charts that calculates the fiscal year quarter (for example, January of 2021 is in Q4 of Fiscal Year 2021) and adds it to each document in the `youtube_stats` collection. Several of our charts group the data by quarter, so we need this field.
I was pretty impressed with the aggregation pipeline Max built to calculate the fiscal year. However, if I had known that calculating the quarter was one of our requirements when I was modeling the data, I could have calculated the fiscal year quarter and stored it inside of the `youtube_stats` collection so that any chart or query could leverage it. If I had gone this route, I would have been using the Computed Pattern.
Now that I know we have a requirement to display the fiscal year quarter, I can write a script to add the `fiscal_year_quarter` field to the existing documents. I could also update the function that creates new documents in the `youtube_stats` collection to calculate the fiscal year quarter and store it in new documents.
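A hedged sketch of that backfill script, assuming a fiscal year that runs February through January (so January 2021 lands in Q4 of FY2021, as described above):

``` javascript
// Sketch: backfill a computed fiscal_year_quarter field on existing buckets.
db.youtube_stats.find().forEach((doc) => {
  // Months Feb-Dec of calendar year Y belong to fiscal year Y+1; January belongs to Y.
  const fiscalYear = doc.month === 1 ? doc.year : doc.year + 1;
  // Feb-Apr => Q1, May-Jul => Q2, Aug-Oct => Q3, Nov-Jan => Q4
  const quarter = Math.floor(((doc.month + 10) % 12) / 3) + 1;

  db.youtube_stats.updateOne(
    { _id: doc._id },
    { $set: { fiscal_year_quarter: `FY${fiscalYear} Q${quarter}` } }
  );
});
```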
Modeling data in MongoDB is all about your use case. When you don't know what your use case is, modeling data becomes a guessing game. Remember that it's OK if your requirements change; MongoDB's flexible schema allows you to update your data model as needed.
### 6. There is no "right way" to model your data
I confess that I've told developers who are new to using MongoDB this very thing: There is no "right way" to model your data. Two applications that utilize the same data may have different ideal data models based on how the applications use the data.
However, the perfectionist in me went a little crazy as I modeled the data for this app. In more than one of our team meetings, I told Mark and Max that I didn't love the data model I had created. I didn't feel like I was getting it "right."
I just want my data model to be perfect. Is that too much to ask?
As I mentioned above, the problem was that I didn't know the use case that I was optimizing for as I was developing the data model. I was making guesses and feeling uncomfortable. Because I was using a non-relational database, I couldn't just normalize the data systematically and claim I had modeled the data correctly.
The flexibility of MongoDB gives you so much power but can also leave you wondering if you have arrived at the ideal data model. You may find, as I did, that you may need to revisit your data model as your requirements become more clear or change. And that's OK.
(Don't let the flexibility of MongoDB's schema freak you out. You can use MongoDB's schema validation when you are ready to lock down part or all of your schema.)
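For instance, a minimal validator for the `youtube_stats` collection (a sketch, not something we actually deployed, and the rules are illustrative) might look like this:

``` javascript
// Sketch: lock down the core bucket fields with JSON Schema validation.
db.runCommand({
  collMod: "youtube_stats",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["videoId", "year", "month", "stats"],
      properties: {
        videoId: { bsonType: "string" },
        year: { bsonType: "number" },
        month: { bsonType: "number", minimum: 1, maximum: 12 },
        stats: { bsonType: "array" }
      }
    }
  }
})
```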
### 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements
Building on the previous thing I learned that there is no "right way" to model your data, data models can likely always be improved. As you identify what your queries will be or your queries change, you will likely find new ways you can optimize your data model.
The question becomes, "When is your data model good enough?" The perfectionist in me struggled with this question. Should I continue optimizing? Or is the data model we have good enough for our requirements?
To answer this question, I found myself asking two more questions:
- Are my teammates and I able to easily work with the data?
- Is our app's performance good enough?
The answers to the questions can be a bit subjective, especially if you don't have hard performance requirements, like a web page must load in X milliseconds.
In our case, we did not define any performance requirements. Our front end is currently a Charts dashboard. So, I wondered, "Is our dashboard loading quickly enough?" And the answer is yes: Our dashboard loads pretty quickly. Charts utilizes caching with a default one-hour refresh to ensure the charts load quickly. Once a user loads the dashboard in their browser, the charts remain displayed—even while waiting for the charts to get the latest data when the cache is refreshed.
If your developers are able to easily work with the data and your app's performance is good enough, your data model is probably good enough.
## Summary
Every time I work with MongoDB, I learn something new. In the process of working with a team to rapidly build an app, I learned a lot about data modeling in MongoDB:
- 1. Duplicating data is scary—even for those of us who have been coaching others to do so
- 2. Use the Bucket Pattern only when you will benefit from the buckets
- 3. Use a date field to label date-based buckets
- 4. Cleaning data you receive from APIs will make working with the data easier
- 5. Optimizing for your use case is really hard when you don't fully know what your use case will be
- 6. There is no "right way" to model your data
- 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements
If you're interested in learning more about data modeling, I highly recommend the following resources:
- Free MongoDB University Course: M320: Data Modeling
- Blog Series: MongoDB Schema Design Patterns
- YouTube Video Series: MongoDB Schema Design Anti-Patterns
- Blog Series: MongoDB Schema Design Anti-Patterns
Remember, every use case is different, so every data model will be different. Focus on how you will be using the data.
If you have any questions about data modeling, I encourage you to join the MongoDB Community. It's a great place to ask questions. MongoDB employees and community members are there every day to answer questions and share their experiences. I hope to see you there!
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Discover 7 things Lauren learned while modeling data in MongoDB.",
"contentType": "Article"
} | 7 Things I Learned While Modeling Data for YouTube Stats | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-vs-regex | created | # A Decisioning Framework for MongoDB $regex and $text vs Atlas Search
Are you using $text or $regex to provide search-like functionality in your application? If so, MongoDB Atlas’ $search operator offers several advantages to $text and $regex, such as faster and more efficient search results, natural language queries, built-in relevance ranking, and better scalability. Getting started is super easy as $search is embedded as an aggregation stage right into MongoDB Atlas, providing you with full text search capabilities on all of your operational data.
While the $text and $regex operators are your only options for on-premises or local deployment, and provide basic text matching and pattern searching, Atlas users will find that $search provides a more comprehensive and performant solution for implementing advanced search functionality in your applications. Features like fuzzy matching, partial word matching, synonyms search, More Like This, faceting, and the capability to search through large data sets are only available with Atlas Search.
Migrating from $text or $regex to $search doesn't necessarily mean rewriting your entire codebase. It can be a gradual process where you start incorporating the $search operator in new features or refactoring existing search functionality in stages.
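As a sketch of what that gradual migration can look like (the collection and field names here are assumptions), a simple title lookup might move from `$regex` to `$search` like this:

``` javascript
// Before: pattern matching with $regex (case-insensitive, unanchored)
db.movies.find({ title: { $regex: "star wars", $options: "i" } })

// After: relevance-ranked full-text search with Atlas Search
// (assumes an Atlas Search index named "default" on the same collection)
db.movies.aggregate([
  {
    $search: {
      index: "default",
      text: {
        query: "star wars",
        path: "title",
        fuzzy: { maxEdits: 1 }   // tolerate small misspellings
      }
    }
  },
  { $limit: 10 }
])
```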
The table below explores the benefits of using Atlas Search compared to regular expressions for searching data. Follow along and experience the power of Atlas Search firsthand.
**Create a Search Index Now**
>Note: $text and $regex have had no major updates since 2015, and all future enhancements in relevance-based search will be delivered via Atlas Search.
To learn more about Atlas Search, check out the documentation.
| App Requirements | $regex | $text | $search | Reasoning |
| --- | --- | --- | --- | --- |
| The datastore must respect write concerns | ✅ | 🚫 | 🚫 | If you have a datastore that must respect write concerns for use cases like transactions with heavy reads after writes, $regex is a better choice. For search use cases, reads after writes should be rare. |
| Language awareness (Spanish, Chinese, English, etc.) | 🚫 | 🚫 | ✅ | Atlas Search natively supports over 40 languages so that you can better tokenize languages, remove stopwords, and interpret diacritics to support improved search relevance. |
| Case-insensitive text search |🚫 | 🚫 |✅ | Case-insensitive text search using $regex is one of the biggest sources of problems among our customer base, and $search offers far more capabilities than $text. |
| Highlighting result text | 🚫 |🚫 | ✅ | The ability to highlight text fragments in result documents helps end users contextualize why some documents are returned compared to others. It's essential for user experiences powered by natural language queries. While developers could implement a crude version of highlighting with the other options, the $search aggregation stage provides an easy-to-consume API and a core engine that handles topics like tokenization and offsets. |
| Geospatial-aware search queries | ✅ | 🚫 | ✅ | Both $regex and $search have geospatial capabilities. The differences between the two lie in the differences between how $regex and $search treat geospatial parameters. For instance, Lucene draws a straight line from one query coordinate to another, whereas MongoDB lines are spherical. Spherical queries are best for flights, whereas flat map queries might be better for short distances. |
| On-premises or local deployment | ✅ | ✅ | 🚫 | Atlas Search is not available on-premise or for local deployment. The single deployment target enables our team to move fast and innovate at a more rapid pace than if we targeted many deployment models. For that reason, $regex and $text are the only options for people who do not have access to Atlas. |
| Autocomplete of characters (nGrams) | 🚫 | 🚫 | ✅ | End users typing in a search box have grown accustomed to an experience where their search queries are completed for them. Atlas Search offers edgeGrams for left-to-right autocomplete, nGrams for autocomplete with languages that do not have whitespace, and rightEdgeGram for languages that are written and read right-to-left. |
| Autocomplete of words (wordGrams) | 🚫 | 🚫 | ✅ | If you have a field with more than two words and want to offer word-based autocomplete as a feature of your application, then a shingle token filter with custom analyzers could be best for you. Custom analyzers offer developers a flexible way to index and modify how their data is stored. |
| Fuzzy matching on text input | 🚫 | 🚫 |✅ | If you would like to filter on user generated input, Atlas Search’s fuzzy offers flexibility. Issues like misspelled words are handled best by $search. |
| Filtering based on more than 10 strings | 🚫 | 🚫 | ✅ | It’s tricky to filter on more than 10 strings in MongoDB due to the limitations of compound text indexes. The compound filter is again the right way to go here. |
| Relevance score sorted search | 🚫 |🚫 |✅ |Atlas Search uses the state-of-art BM25 algorithm for determining the search relevance score of documents and allows for advanced configuration through boost expressions like multiply and gaussian decay, as well as analyzers, search operators, and synonyms. |
| Cluster needs to be optimized for write performance |🚫 | 🚫 |✅ | When you add a database index in MongoDB, you should consider tradeoffs to write performance in cases where database write performance is important. Search Indexes don’t degrade cluster write performance. |
| Searching through large data sets | 🚫 |🚫 |✅ | If you have lots of documents, your queries will linearly get slower. In Atlas Search, the inverted index enables fast document retrieval at very large scales. |
| Partial indexes for simple text matching | ✅ |🚫 |🚫 | Atlas Search does not yet support partial indexing. Today, $regex takes the cake. |
| Single compound index on arrays |🚫 | 🚫 |✅ | Atlas Search is partially designed for this use case, where term indexes are intersected in a single Search index, to eliminate the need for compound indexes for filtering on arrays. |
| Synonyms search | 🚫 |🚫 |✅ | The only option for robust synonyms search is Atlas Search, where synonyms are defined in a collection, and that collection is referenced in your search index. |
| Fast faceting for counts | 🚫 |🚫 |✅ | If you are looking for faceted navigation, or fast counts of documents based on text criteria, let Atlas Search do the bucketing. In our internal testing, it's 100x faster and also supports number and date buckets. |
| Custom analyzers (stopwords, email/URL token, etc.) | 🚫 | 🚫 | ✅ | Using Atlas Search, you can define a custom analyzer to suit your specific indexing needs. |
| Partial match | 🚫 |🚫 |✅ |MongoDB has a number of partial match options ranging from the wildcard operator to autocomplete, which can be useful for some partial match use cases. |
| Phrase queries | 🚫 | 🚫 | ✅ | Phrase queries are supported natively in Atlas Search via the phrase operator. |
> Note: The green check mark sometimes does not appear in cases where the corresponding aggregation stage may be able to satisfy an app requirement, and in those cases, it’s because one of the other stages (i.e., $search) is far superior for a given use case.
If we’ve whetted your appetite to learn more about Atlas Search, we have some resources to get you started:
The Atlas Search documentation provides reference materials and tutorials, while the MongoDB Developer Hub provides sample apps and code. You can spin up Atlas Search at no cost on the Atlas Free Tier and follow along with the tutorials using our sample data sets, or load your own data for experimentation within your own sandbox. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn about the differences between using $regex, $text, and Atlas Search.",
"contentType": "Article"
} | A Decisioning Framework for MongoDB $regex and $text vs Atlas Search | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/3-things-to-know-switch-from-sql-mongodb | created | # 3 Things to Know When You Switch from SQL to MongoDB
Welcome to the final post in my series on moving from SQL to MongoDB. In the first post, I mapped terms and concepts from SQL to MongoDB. In the second post, I discussed the top four reasons why you should use MongoDB.
Now that we have an understanding of the terminology as well as why MongoDB is worth the effort of changing your mindset, let's talk about three key ways you need to change your mindset.
Your first instinct might be to convert your existing columns and rows to fields and documents and stick with your old ways of modeling data. We've found that people who try to use MongoDB in the same way that they use a relational database struggle and sometimes fail.
We don't want that to happen to you.
Let's discuss three key ways to change your mindset as you move from SQL to MongoDB.
- Embrace Document Diversity
- Data That is Accessed Together Should Be Stored Together
- Tread Carefully with Transactions
>
>
>This article is based on a presentation I gave at MongoDB World and MongoDB.local Houston entitled "From SQL to NoSQL: Changing Your Mindset."
>
>If you prefer videos over articles, check out the recording. Slides are available here.
>
>
## Embrace Document Diversity
As we saw in the first post in this series when we modeled documents for Leslie, Ron, and Lauren, not all documents in a collection need to have the same fields.
Users
``` json
{
"_id": 1,
"first_name": "Leslie",
"last_name": "Yepp",
"cell": "8125552344",
"city": "Pawnee",
"location": -86.536632, 39.170344 ],
"hobbies": ["scrapbooking", "eating waffles", "working"],
"jobHistory": [
{
"title": "Deputy Director",
"yearStarted": 2004
},
{
"title": "City Councillor",
"yearStarted": 2012
},
{
"title": "Director, National Parks Service, Midwest Branch",
"yearStarted": 2014
}
]
},
{
"_id": 2,
"first_name": "Ron",
"last_name": "Swandaughter",
"cell": "8125559347",
"city": "Pawnee",
"hobbies": ["woodworking", "fishing"],
"jobHistory": [
{
"title": "Director",
"yearStarted": 2002
},
{
"title": "CEO, Kinda Good Building Company",
"yearStarted": 2014
},
{
"title": "Superintendent, Pawnee National Park",
"yearStarted": 2018
}
]
},
{
"_id": 3,
"first_name": "Lauren",
"last_name": "Burhug",
"city": "Pawnee",
"hobbies": ["soccer"],
"school": "Pawnee Elementary"
}
```
For those of us with SQL backgrounds, this is going to feel uncomfortable and probably a little odd at first. I promise it will be ok. Embrace document diversity. It gives us so much flexibility and power to model our data.
In fact, MongoDB has a data modeling pattern specifically for when your documents do not have the same fields. It's called the Polymorphic Pattern. We use the Polymorphic Pattern when documents in a collection are of similar but not identical structures.
Let's take a look at an example that builds on the Polymorphic Pattern. Let's say we decided to keep a list of each user's social media followers inside of each `User` document. Lauren and Leslie don't have very many followers, so we could easily list their followers in their documents. For example, Lauren's document might look something like this:
``` json
{
"_id": 3,
"first_name": "Lauren",
"last_name": "Burhug",
"city": "Pawnee",
"hobbies": "soccer"],
"school": "Pawnee Elementary",
"followers": [
"Brandon",
"Wesley",
"Ciara",
...
]
}
```
This approach would likely work for most of our users. However, since Ron built a chair that appeared in the very popular Bloosh Magazine, Ron has millions of followers. If we try to list all of his followers in his `User` document, it may exceed the 16 megabyte document size limit. The question arises: do we want to optimize our document model for the typical use case where a user has a few hundred followers or the outlier use case where a user has millions of followers?
We can utilize the Outlier Pattern to solve this problem. The Outlier Pattern allows us to model our data for the typical use case but still handle outlier use cases.
We can begin modeling Ron's document just like Lauren's and include a list of followers. When we begin to approach the document size limit, we can add a new `has_extras` field to Ron's document. (The field can be named anything we'd like.)
``` json
{
"_id": 2,
"first_name": "Ron",
"last_name": "Swandaughter",
"cell": "8125559347",
"city": "Pawnee",
"hobbies": "woodworking", "fishing"],
"jobHistory": [
{
"title": "Director",
"yearStarted": 2002
},
...
],
"followers": [
"Leslie",
"Donna",
"Tom"
...
],
"has_extras": true
}
```
Then we can create a new document where we will store the rest of Ron's followers.
``` json
{
"_id": 2.1,
"followers": [
"Jerry",
"Ann",
"Ben"
...
],
"is_overflow": true
}
```
If Ron continues to gain more followers, we could create another overflow document for him.
The great thing about the Outlier Pattern is that we are optimizing for the typical use case but we have the flexibility to handle outliers.
So, embrace document diversity. Resist the urge to force all of your documents to have identical structures just because it's what you've always done.
For more on MongoDB data modeling design patterns, see Building with Patterns: A Summary and the free MongoDB University Course M320: Data Modeling.
## Data That is Accessed Together Should be Stored Together
If you have experience with SQL databases, someone probably drilled into your head that you should normalize your data. Normalization is considered good because it prevents data duplication. Let's take a step back and examine the motivation for database normalization.
When relational databases became popular, disk space was extremely expensive. Financially, it made sense to normalize data and save disk space. Take a look at the chart below that shows the cost per megabyte over time.
:charts[]{url="https://charts.mongodb.com/charts-storage-costs-sbekh" id="740dea93-d2da-44c3-8104-14ccef947662"}
The cost has drastically gone down. Our phones, tablets, laptops, and flash drives have more storage capacity today than they did even five to ten years ago for a fraction of the cost. When was the last time you deleted a photo? I can't remember when I did. I keep even the really horribly unflattering photos. And I currently backup all of my photos on two external hard drives and multiple cloud services. Storage is so cheap.
Storage has become so cheap that we've seen a shift in the cost of software development. Thirty to forty years ago storage was a huge cost in software development and developers were relatively cheap. Today, the costs have flipped: storage is a small cost of software development and developers are expensive.
Instead of optimizing for storage, we need to optimize for developers' time and productivity.
As a developer, I like this shift. I want to be able to focus on implementing business logic and iterate quickly. Those are the things that matter to the business and move developers' careers forward. I don't want to be dragged down by data storage specifics.
Think back to the example in the previous post where I coded retrieving and updating a user's profile information. Even in that simple example, I was able to write fewer lines of code and move quicker when I used MongoDB.
So, optimize your data model for developer productivity and query optimization. Resist the urge to normalize your data for the sake of normalizing your data.
*Data that is accessed together should be stored together*. If you end up repeating data in your database, that's ok—especially if you won't be updating the data very often.
## Tread Carefully with Transactions
We discussed in a previous post that MongoDB supports transactions. The MongoDB engineering team did an amazing job of implementing transactions. They work so well!
But here's the thing. Relying on transactions is a bad design smell.
Why? This builds on our first two points in this section.
First, not all documents need to have the same fields. Perhaps you're breaking up data between multiple collections because it's not all of identical structure. If that's the only reason you've broken the data up, you can probably put it back together in a single collection.
Second, data that is accessed together should be stored together. If you're following this principle, you won't need to use transactions. Some use cases call for transactions. Most do not. If you find yourself frequently using transactions, take a look at your data model and consider if you need to restructure it.
For more information on transactions and when they should be used, see the MongoDB Multi-Document ACID Transactions Whitepaper.
## Wrap Up
Today we discussed the three things you need to know as you move from SQL to MongoDB:
- Embrace Document Diversity
- Data That is Accessed Together Should Be Stored Together
- Tread Carefully with Transactions
I hope you enjoy using MongoDB! If you want to jump in and start coding, my teammates and I have written Quick Start Tutorials for a variety of programming languages. I also highly recommend the free courses on MongoDB University.
In summary, don't be like Ron. (I mean, don't be like him in this particular case, because Ron is amazing.)
Change your mindset and get the full value of MongoDB.
| md | {
"tags": [
"MongoDB",
"SQL"
],
"pageDescription": "Discover the 3 things you need to know when you switch from SQL to MongoDB.",
"contentType": "Article"
} | 3 Things to Know When You Switch from SQL to MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-eventbridge-slack | created | # Integrate Your Realm App with Amazon EventBridge
>
>
>This post was developed with the help of AWS.
>
>
Realm makes it easy to develop compelling mobile applications backed by a serverless MongoDB Realm back end and the MongoDB Atlas database service. You can enrich those applications by integrating with AWS's broad ecosystem of services. In this article, we'll show you how to configure Realm and AWS to turn Atlas database changes into Amazon EventBridge events – all without adding a single line of code. Once in EventBridge, you can route events to other services which can act on them.
We'll use an existing mobile chat application (RChat). RChat creates new `ChatMessage` objects which Realm Sync writes to the ChatMessage Atlas collection. Realm also syncs the chat message with all other members of the chat room.
This post details how to add a new feature to the RChat application – forwarding messages to a Slack channel.
We'll add a Realm Trigger that forwards any new `ChatMessage` documents to EventBridge. EventBridge stores those events in an event bus, and a rule will route it to a Lambda function. The Lambda function will use the Slack SDK (using credentials we'll store in AWS Secrets Manager).
>
>
>Amazon EventBridge is a serverless event bus that makes it easier to connect applications together using data from your applications, integrated software as a service (SaaS) applications, and AWS services. It does so by delivering a stream of real-time data from various event sources. You can set up routing rules to send data to targets like AWS Lambda and build loosely coupled application architectures that react in near-real time to data sources.
>
>
## Prerequisites
If you want to build and run the app for yourself, this is what you'll
need:
- Mac OS 11+
- Node.js 12.x+
- Xcode 12.3+
- iOS 14.2+ (a real device, or the simulator built into Xcode)
- git command-line tool
- realm-cli command-line tool
- AWS account
- (Free) MongoDB account
- Slack account
If you're not interested in running the mobile app (or don't have access to a Mac), the article includes instructions on manually adding a document that will trigger an event being sent to EventBridge.
## Walkthrough
This walkthrough shows you how to:
- Set Up the RChat Back End Realm App
- Create a Slack App
- Receive MongoDB Events in Amazon EventBridge
- Store Slack Credentials in AWS Secrets Manager
- Write and Configure the AWS Lambda Function
- Link the Lambda Function to the MongoDB Partner Event Bus
- Run the RChat iOS App
- Test the End-to-End Integration (With or Without the iOS App)
### Set Up the RChat Back End Realm App
If you don't already have a MongoDB cloud account, create one. You'll also create an Atlas organization and project as you work through the wizard. For this walkthrough, you can use the free tier. Stick with the defaults (e.g., "Cluster Name" = "Cluster0") but set the version to MongoDB 4.4.
While your database cluster is starting, select "Project Access" under "Access Manager." Create an API key with "Project Owner" permissions. Add your current IP address to the access list. Make a note of the API keys; they're needed when using realm-cli.
Wait until the Atlas cluster is running.
From a terminal, import the back end Realm application (substituting in your Atlas project's API keys) using realm-cli:
``` bash
git clone https://github.com/realm/RChat.git
cd RChat/RChat-Realm/RChat
realm-cli login --api-key --private-api-key
realm-cli import # Then answer prompts, naming the app "RChat"
```
From the Atlas UI, click on the Realm logo and you will see the RChat app. Open it and make a note of the Realm "App Id":
Optionally, create database indexes by using mongorestore to import the empty database from the `dump` folder.
### Create a Slack App
The Slack app simply allows us to send a message to a Slack channel.
Navigate to the Slack API page. (You'll need to log in or register a new account if you don't have one.)
Click on the button to create a new Slack app, name it "RChat," and select one of your Slack workspaces. (If using your company's account, you may want or need to create a new workspace.)
Give your app a short description and then click "Save Changes."
After creating your Slack app, select the "OAuth & Permissions" link. Scroll down to "Bot Token Scopes" and add the `chat.write` and `channels:read` scopes.
Click on "Install to Workspace" and then "Allow."
Take a note of the new "Bot User OAuth Access Token."
From your Slack client, create a new channel named "rchat-notifications." Invite your Slack app bot to the channel (i.e., send a message from the channel to "@RChat Messenger" or whatever Slack name you gave to your app):
You now need to find its channel ID from a terminal window (substituting in your Slack OAuth access token):
``` bash
curl --location --request GET 'https://slack.com/api/conversations.list' \
--header 'Authorization: Bearer xoxb-XXXXXXXXXXXXXXX-XXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXX'
```
In the results, you'll find an entry for your new "rchat-notifications" channel. Take a note of its `id`; it will be stored in AWS Secrets Manager and then used from the Lambda function when calling the Slack SDK:
``` json
{
"name" : "rchat-notifications",
"is_pending_ext_shared" : false,
"is_ext_shared" : false,
"is_general" : false,
"is_private" : false,
"is_member" : false,
"name_normalized" : "rchat-notifications",
"is_archived" : false,
"is_channel" : true,
"topic" : {
"last_set" : 0,
"creator" : "",
"value" : ""
},
"unlinked" : 0,
"is_org_shared" : false,
"is_group" : false,
"shared_team_ids" :
"T01JUGHQXXX"
],
"is_shared" : false,
"is_mpim" : false,
"is_im" : false,
"pending_connected_team_ids" : [],
"purpose" : {
"last_set" : 1610987122,
"creator" : "U01K7ET1XXX",
"value" : "This is for testing the RChat app"
},
"creator" : "U01K7ET1XXX",
"created" : 1610987121,
"parent_conversation" : null,
"id" : "C01K1NYXXXX",
"pending_shared" : [],
"num_members" : 3,
"previous_names" : []
}
```
### Receive MongoDB Events in Amazon EventBridge
EventBridge supports MongoDB as a partner event source; this makes it very easy to receive change events from Realm Triggers.
From the EventBridge console, select "Partner event sources." Search for the "MongoDB" partner and click "Set up":
Take a note of your AWS account ID.
Return to the Realm UI navigate to "Triggers" and click "Add a trigger." Configure the trigger as shown here:
Rather than sticking with the default "Function" event type (which is
Realm Function, not to be confused with Lambda), select "EventBridge,"
add your AWS Account ID from the previous section, and click "Save"
followed by "REVIEW & DEPLOY":
Return to the AWS "Partner event sources" page, select the new source, and click "Associate with event bus":
On the next screen, leave the "Resource-based policy" empty.
Returning to the "Event buses" page, you'll find the new MongoDB partner bus.
### Store Slack Credentials in AWS Secrets Manager
We need a new Lambda function to be invoked on any MongoDB change events added to the event bus. That function will use the Slack API to send messages to our channel. The Lambda function must provide the OAuth token and channel ID to use the Slack SDK. Rather than storing that private information in the function, it's more secure to hold them in AWS Secrets Manager.
Navigate to the Secrets Manager console and click "Store a new secret." Add the values you took a note of when creating the Slack app:
Click through the wizard, and apart from assigning a unique name to the secret (and take a note of it as it's needed when configuring the Lambda function), leave the other fields as they are. Take a note of the ARN for the new secret as it's required when configuring the Lambda function.
### Write and Configure the AWS Lambda Function
From the Lambda console, click "Create Function." Name the function "sendToSlack" and set the runtime to "Node.js 12.x."
After creating the Lambda function, navigate to the "Permissions" tab and click on the "Execution role" role name. On the new page, click on the "Policy name" and then "Edit policy."
Click "Add additional permissions" and select the "Secrets Manager" service:
Select the "ListSecrets" action. This permission allows the Lambda function to see what secrets are available, but not to read our specific Slack secret. To remedy that, click "Add additional permissions" again. Once more, select the "Secrets Manager" service, but this time select the "Read" access level and specify your secret's ARN in the resources section:
Review and save the new permissions.
Returning to the Lambda function, select the "Configuration" tab and add an environment variable to set the "secretName" to the name you chose when creating the secret (the function will use this to access Secret Manager):
It can take some time for the function to fetch the secret for the first time, so set the timeout to 30 seconds in the "Basic settings" section.
Finally, we can write the actual Lambda function.
From a terminal, bootstrap the function definition:
``` bash
mkdir lambda
cd lambda
npm install '@slack/web-api'
```
In the same lambda directory, create a file called `index.js`:
``` javascript
const {WebClient} = require('@slack/web-api');
const AWS = require('aws-sdk');
const secretName = process.env.secretName;
let slackToken = "";
let channelId = "";
let secretsManager = new AWS.SecretsManager();
const initPromise = new Promise((resolve, reject) => {
secretsManager.getSecretValue(
{ SecretId: secretName },
function(err, data) {
if(err) {
console.error(`Failed to fetch secrets: ${err}`);
reject();
} else {
const secrets = JSON.parse(data.SecretString);
slackToken = secrets.slackToken;
channelId = secrets.channelId;
resolve()
}
}
)
});
exports.handler = async (event) => {
await initPromise;
const client = new WebClient({ token: slackToken });
  const blocks = [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": `*${event.detail.fullDocument.author} said...*\n\n${event.detail.fullDocument.text}`
},
"accessory": {
"type": "image",
"image_url": "https://cdn.dribbble.com/users/27903/screenshots/4327112/69chat.png?compress=1&resize=800x600",
"alt_text": "Chat logo"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": `Sent from `
}
},
{
"type": "divider"
}
]
await publishMessage(
channelId, `Sent from RChat: ${event.detail.fullDocument.author} said "${event.detail.fullDocument.text}"`,
blocks);
const response = {
statusCode: 200,
body: JSON.stringify('Slack message sent')
};
return response;
async function publishMessage(id, text, blocks) {
try {
const result = await client.chat.postMessage({
token: slackToken,
channel: id,
text: text,
blocks: blocks
});
}
catch (error) {
console.error(error);
}
}
};
```
There are a couple of things to call out in that code.
This is how the Slack credentials are fetched from Secret Manager:
``` javascript
const secretName = process.env.secretName;
var MyPromise = new AWS.SecretsManager();
const secret = await MyPromise.getSecretValue({ SecretId: secretName}).promise();
const openSecret = JSON.parse(secret.SecretString);
const slackToken = openSecret.slackToken;
const channelId = openSecret.channelId;
```
`event` is passed in as a parameter, and the function retrieves the original MongoDB document's contents from `event.detail.fullDocument`.
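For reference, the event the Lambda function receives is the standard EventBridge envelope with the MongoDB change event in `detail`. The sketch below is illustrative only; the field values are placeholders and the exact envelope may differ slightly:

``` javascript
// Illustrative shape of the EventBridge event passed to the handler.
const exampleEvent = {
  "version": "0",
  "id": "00000000-0000-0000-0000-000000000000",
  "detail-type": "MongoDB Database Trigger ...",          // placeholder
  "source": "aws.partner/mongodb.com/stitch.trigger/xxxx", // placeholder
  "account": "123456789012",
  "time": "2021-01-18T17:00:00Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "operationType": "insert",
    "fullDocument": {
      "_id": "...",
      "author": "leslie",
      "text": "Hello from RChat!"
    }
  }
};
```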
`blocks` is optional, and if omitted, the SDK uses text as the body of the Slack message.
Package up the Lambda function:
``` bash
zip -r ../lambda.zip .
```
From the Lambda console, upload the zip file and then deploy:
!["Upload Lambda Function"
The Lambda function is now complete, and the next section will start routing events from the EventBridge partner message bus to it.
### Link the Lambda Function to the MongoDB Partner Event Bus
The final step to integrate our Realm app with the new Lambda function is to have that function consume the events from the event bus. We do that by adding a new EventBridge rule.
Return to the EventBridge console and click the "Rules" link. Select the "aws.partner/mongodb.com/stitch.trigger/xxx" event bus and click "Create rule."
The "Name" can be anything. You should use an "Event pattern," set "Pre-defined pattern by service," search for "Service partner" "MongoDB," and leave the "Event pattern" as is. This rule matches all bus events linked to our AWS account (i.e., it will cover everything sent from our Realm function):
Select the new Lambda function as the target and click "Create":
### Run the RChat iOS App
After creating the back end Realm app, open the RChat iOS app in Xcode:
``` bash
cd ../../RChat-iOS
open RChat.xcodeproj
```
Navigate to `RChatApp.swift`. Replace `rchat-xxxxx` with your Realm App Id:
Select your target device (a connected iPhone/iPad or one of the built-in simulators) and build and run the app with `⌘r`.
### Test the End-to-End Integration (With or Without the iOS App)
To test a chat app, you need at least two users and two instances of the chat app running.
From Xcode, run (`⌘r`) the RChat app in one simulator, and then again in a second simulator after changing the target device. On each device, register a new user. As one user, create a new chat room (inviting the second user). Send messages to the chat room from either user, and observe that message also appearing in Slack:
#### If You Don't Want to Use the iOS App
Suppose you're not interested in using the iOS app or don't have access to a Mac. In that case, you can take a shortcut by manually adding documents to the `ChatMessage` collection within the `RChat` database. Do this from the "Collections" tab in the Atlas UI. Click on "INSERT DOCUMENT" and then ensure that you include fields for "author" and "text":
## Summary
This post stepped through how to get your data changes from MongoDB into your AWS ecosystem with no new code needed. Once your EventBridge bus has received the change events, you can route them to one or more services. Here we took a common approach by sending them to a Lambda function which then has the freedom to import external libraries and work with other AWS or external services.
To understand more about the Realm chat app that was the source of the messages, read Building a Mobile Chat App Using Realm – Data Architecture.
## References
- RChat GitHub repo
- Building a Mobile Chat App Using Realm – Data Architecture
- Slack SDK
- Sending Trigger Events to AWS EventBridge
>
>
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
>
>
| md | {
"tags": [
"Realm"
],
"pageDescription": "Step through extending a Realm chat app to send messages to a Slack channel using Amazon EventBridge",
"contentType": "Tutorial"
} | Integrate Your Realm App with Amazon EventBridge | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-aws-kinesis-firehose-destination | created | # Using MongoDB Realm WebHooks with Amazon Kinesis Data Firehose
With MongoDB Realm's AWS integration, it has always been as simple as possible to use MongoDB as a Kinesis data stream. Now with the launch of third-party data destinations in Kinesis, you can also use MongoDB Realm and MongoDB Atlas as an AWS Kinesis Data Firehose destination.
>Keep in mind that this is just an example. You do not need to use Atlas as both the source **and** destination for your Kinesis streams. I am only doing so in this example to demonstrate how you can use MongoDB Atlas as both an AWS Kinesis Data and Delivery Stream. But, in actuality, you can use any source for your data that AWS Kinesis supports, and still use MongoDB Atlas as the destination.
## Prerequisites
Before we get started, you will need the following:
- A MongoDB Atlas account with a deployed cluster; a free M0 cluster is perfectly adequate for this example. ✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
- A MongoDB Realm App. You can learn more about creating a Realm App and linking it to your Atlas cluster in our "Create a Realm App" guide
- An AWS account and the AWS CLI. Check out "What Is the AWS Command Line Interface?" for a guide to installing and configuring the AWS CLI
## Setting up our Kinesis Data Stream
In this example, the source of my data is a Raspberry Pi with a Sense HAT. The output from the Sense HAT is read by a Python script running on the Pi. This script then stores the sensor data such as temperature, humidity, and pressure in MongoDB Atlas.
``` python
import os
import platform
import time
from datetime import datetime
from pymongo import MongoClient
from sense_hat import SenseHat
# Setup the Sense HAT module and connection to MongoDB Atlas
sense = SenseHat()
client = MongoClient(os.environ["MONGODB_CONNECTION_STRING"])
db = client.monitors
sense.load_image("img/realm-sensehat.png")
# If the acceleration breaches 1G we assume the device is being moved
def is_moving(x, y, z):
    for acceleration in [x, y, z]:
if acceleration < -1 or acceleration > 1:
return True
return False
while True:
# prepare the object to save as a document in Atlas
log = {
"nodeName": platform.node(),
"humidity": sense.get_humidity(),
"temperature": sense.get_temperature(),
"pressure": sense.get_pressure(),
"isMoving": is_moving(**sense.get_accelerometer_raw()),
"acceleration": sense.get_accelerometer_raw(),
"recordedAt": datetime.now(),
}
# Write the report object to MongoDB Atlas
report = db.reports.insert_one(log)
# Pause for 0.5 seconds before capturing next round of sensor data
time.sleep(0.5)
```
I then use a Realm Database Trigger to transform this data into a Kinesis Data Stream.
>Realm functions are useful if you need to transform or do some other computation with the data before putting the record into Kinesis. However, if you do not need to do any additional computation, it is even easier with the AWS Eventbridge. MongoDB offers an AWS Eventbridge partner event source that lets you send Realm Trigger events to an event bus instead of calling a Realm Function. You can configure any Realm Trigger to send events to EventBridge. You can find out more in the documentation: "Send Trigger Events to AWS EventBridge"
``` javascript
// Function is triggered anytime a document is inserted/updated in our collection
exports = function (event) {
// Access the AWS service in Realm
const awsService = context.services.get("AWSKinesis")
try {
awsService
.kinesis()
.PutRecord({
/* this trigger function will receive the full document that triggered the event
put this document into Kinesis
*/
Data: JSON.stringify(event.fullDocument),
StreamName: "realm",
PartitionKey: "1",
})
.then(function (response) {
return response
})
} catch (error) {
console.log(JSON.parse(error))
}
}
```
You can find out more details on how to do this in our blog post "Integrating MongoDB and Amazon Kinesis for Intelligent, Durable Streams."
## Amazon Kinesis Data Firehose Payloads
AWS Kinesis HTTP(s) Endpoint Delivery Requests are sent via POST with a single JSON document as the request body. Delivery destination URLs must be HTTPS.
### Delivery Stream Request Headers
Each Delivery Stream Request contains essential information in the HTTP headers, some of which we'll use in our Realm WebHook in a moment.
- `X-Amz-Firehose-Protocol-Version`: This header indicates the version of the request/response formats. Currently, the only version is 1.0, but new ones may be added in the future
- `X-Amz-Firehose-Request-Id`: The value of this header is an opaque GUID used for debugging purposes. Endpoint implementations should log the value of this header if possible, for both successful and unsuccessful requests. The request ID is kept the same between multiple attempts of the same request
- `X-Amz-Firehose-Source-Arn`: The ARN of the Firehose Delivery Stream represented in ASCII string format. The ARN encodes region, AWS account id, and the stream name
- `X-Amz-Firehose-Access-Key`: This header carries an API key or other credentials. This value is set when we create or update the delivery stream. We'll discuss it in more detail later
### Delivery Stream Request Body
The body carries a single JSON document, you can configure the max body size, but it has an upper limit of 64 MiB, before compression. The JSON document has the following properties:
- `requestId`: Same as the value in the X-Amz-Firehose-Request-Id header, duplicated here for convenience
- `timestamp`: The timestamp (milliseconds since epoch) at which the Firehose server generated this request
- `records`: The actual records of the Delivery Stream, carrying your data. This is an array of objects, each with a single property of data. This property is a base64 encoded string of your data. Each request can contain a minimum of 1 record and a maximum of 10,000. It's worth noting that a record can be empty
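Putting those properties together, a delivery request body looks roughly like the sketch below. The `requestId`, `timestamp`, and truncated base64 `data` values are placeholders:

``` javascript
// Illustrative Firehose HTTP endpoint delivery request body.
const exampleRequestBody = {
  requestId: "ed4acda5-034f-9f42-bba1-f29aea6d7d8f", // matches X-Amz-Firehose-Request-Id
  timestamp: 1611241715964,                          // milliseconds since epoch
  records: [
    // each "data" value is a base64-encoded record, e.g. one of our sensor documents
    { data: "eyJub2RlTmFtZSI6..." },
    { data: "eyJub2RlTmFtZSI6..." }
  ]
};
```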
### Response Format
When responding to a Delivery Stream Request, there are a few things you should be aware of.
#### Status Codes
The HTTP status code must be in the 2xx, 4xx, 5xx range; they will not follow redirects, so nothing in the 3xx range. Only a status of 200 is considered a successful delivery of the records; all other statuses are regarded as a retriable error, except 413.
413 (size exceeded) is considered a permanent failure, and will not be retried. In all other error cases, they will reattempt delivery of the same batch of records using an exponential back-off algorithm.
The retries are backed off using an initial back-off time of 1 second with a jitter factor of 15%. Each subsequent retry is backed off using the formula initial-backoff-time \* (multiplier(2) ^ retry_count) with added jitter. The back-off time is capped by a maximum interval of 2 minutes. For example, on the 'n'-th retry the back-off time is MIN(120sec, (1 \* (2^n)) \* random(0.85, 1.15)).
These parameters are subject to change. Please refer to the AWS Firehose documentation for exact initial back-off time, max back-off time, multiplier, and jitter percentages.
#### Other Response Headers
As well as the HTTP status code your response should include the following headers:
- `Content-Type`: The only acceptable content type is application/json
- `Content-Length`: The Content-Length header must be present if the response has a body
Do not send a `Content-Encoding` header, the body must be uncompressed.
#### Response Body
Just like the Request, the Response body is JSON, but it has a max filesize of 1MiB. This JSON body has two required properties:
- `requestId`: This must match the requestId in the Delivery Stream Request
- `timestamp`: The timestamp (milliseconds since epoch) at which the server processed this request
If there was a problem processing the request, you could optionally include an errorMessage property. If a request fails after exhausting all retries, the last Instance of this error message is copied to the error output S3 bucket, if one has been configured for the Delivery Stream.
## Storing Shared Secrets
When we configure our Kinesis Delivery Stream, we will have the opportunity to set an AccessKey value. This is the same value which is sent with each request as the `X-Amz-Firehose-Access-Key` header. We will use this shared secret to validate the source of the request.
We shouldn't hard-code this access key in our Realm function; instead, we will create a new secret named `FIREHOSE_ACCESS_KEY`. It can be any value, but keep a note of it as you'll need to reference it later when we configure the Kinesis Delivery Stream.
## Creating our Realm WebHook
Before we can write the code for our WebHook, we first need to configure it. The "Configure Service WebHooks" guide in the Realm documentation goes into more detail, but you will need to configure the following options:
- Authentication type must be set to system
- The HTTP method is POST
- "Respond with result" is disabled
- Request validation must be set to "No Additional Authorisation"; we need to handle authenticating Requests ourselves using the X-Amz-Firehose-Access-Key header
### The Realm Function
For our WebHook we need to write a function which:
- Receives a POST request from Kinesis
- Ensures that the `X-Amz-Firehose-Access-Key` header value matches the `FIREHOSE_ACCESS_KEY` secret
- Parses the JSON body from the request
- Iterates over the reports array and base64 decodes the data in each
- Parses the base64 decoded JSON string into a JavaScript object
- Writes the object to MongoDB Atlas as a new document
- Returns the correct status code and JSON body to Kinesis in the response
``` javascript
exports = function(payload, response) {
/* Using Buffer in Realm causes a severe performance hit
this function is ~6 times faster
*/
const decodeBase64 = (s) => {
var e={},i,b=0,c,x,l=0,a,r='',w=String.fromCharCode,L=s.length
var A="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
for(i=0;i<64;i++){e[A.charAt(i)]=i}
for(x=0;x<L;x++){c=e[s.charAt(x)];b=(b<<6)+c;l+=6;
while(l>=8){((a=(b>>>(l-=8))&0xff)||(x<(L-2)))&&(r+=w(a))}
}
return r
}
// Get AccessKey from Request Headers
const firehoseAccessKey = payload.headers["X-Amz-Firehose-Access-Key"]
// Check shared secret is the same to validate Request source
if(firehoseAccessKey == context.values.get("FIREHOSE_ACCESS_KEY")) {
// Payload body is a JSON string, convert into a JavaScript Object
const data = JSON.parse(payload.body.text())
// Each record is a Base64 encoded JSON string
const documents = data.records.map((record) => {
const document = JSON.parse(decodeBase64(record.data))
return {
...document,
_id: new BSON.ObjectId(document._id)
}
})
// Perform operations as a bulk
const bulkOp = context.services.get("mongodb-atlas").db("monitors").collection("firehose").initializeOrderedBulkOp()
documents.forEach((document) => {
bulkOp.find({ _id:document._id }).upsert().updateOne(document)
})
response.addHeader(
"Content-Type",
"application/json"
)
bulkOp.execute().then(() => {
// All operations completed successfully
response.setStatusCode(200)
response.setBody(JSON.stringify({
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],
timestamp: (new Date()).getTime()
}))
return
}).catch((error) => {
// Catch any error with execution and return a 500
response.setStatusCode(500)
response.setBody(JSON.stringify({
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],
timestamp: (new Date()).getTime(),
errorMessage: error
}))
return
})
} else {
// Validation error with Access Key
response.setStatusCode(401)
response.setBody(JSON.stringify({
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],
timestamp: (new Date()).getTime(),
errorMessage: "Invalid X-Amz-Firehose-Access-Key"
}))
return
}
}
```
As you can see, Realm functions are mostly just vanilla JavaScript. We export a function which takes the request and response as arguments and returns the modified response.
One extra feature we do have within Realm functions is the global context object. This provides access to other Realm functions, values, and services; you may have noticed in the trigger function at the start of this article that we use the context object to access our AWS service. In the code above, we're using the context object to access the `mongodb-atlas` service and to retrieve our secret value. You can read more about what's available in the Realm context in our documentation.
#### Decoding and Parsing the Payload Body
``` javascript
// Payload body is a JSON string, convert into a JavaScript Object
const data = JSON.parse(payload.body.text())
// Each record is a Base64 encoded JSON string
const documents = data.records.map((record) => {
const document = JSON.parse(decodeBase64(record.data))
return {
...document,
_id: new BSON.ObjectId(document._id)
}
})
```
When we receive the POST request, we first have to convert the body—which is a JSON string—into a JavaScript object. Then we can iterate over each of the records.
The data in each of these records is Base64 encoded, so we have to decode it first.
>Using `Buffer()` within Realm functions may currently cause a degradation in performance. For now, we do not recommend using Buffer to decode Base64 strings; instead, use a function such as the `decodeBase64()` in the example above.
This data could be anything, whatever you've supplied in your Delivery Stream, but in this example, it is the MongoDB document sent from our Realm trigger. This document is also a JSON string, so we'll need to parse it back into a JavaScript object.
#### Writing the Reports to MongoDB Atlas
Once the parsing and decoding are complete, we're left with an array of between 1 and 10,000 objects, depending on the size of the batch. It's tempting to pass this array to `insertMany()`, but there is the possibility that some records might already exist as documents in our collection.
Remember if Kinesis does not receive an HTTP status of 200 in response to a request it will, in the majority of cases, retry the batch. We have to take into account that there could be an issue after the documents have been written that prevents Kinesis from receiving the 200 OK status. If this occurs and we try to insert the document again, MongoDB will raise a `Duplicate key error` exception.
To prevent this, we perform a `find()` and `updateOne()` with `upsert()`.
When updating/inserting a single document, you can use `updateOne()` with the `upsert` option.
``` javascript
context.services.get("mongodb-atlas").db("monitors").collection("firehose").updateOne(
{_id: document._id},
document,
{upsert: true}
)
```
But we could potentially have to update/insert 10,000 records, so instead, we perform a bulk write.
``` javascript
// Perform operations as a bulk
const bulkOp = context.services.get("mongodb-atlas").db("monitors").collection("firehose").initializeOrderedBulkOp()
documents.forEach((document) => {
bulkOp.find({ _id:document._id }).upsert().updateOne(document)
})
```
#### Sending the Response
``` javascript
bulkOp.execute().then(() => {
// All operations completed successfully
response.setStatusCode(200)
response.setBody(JSON.stringify({
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],
timestamp: (new Date()).getTime()
}))
return
})
```
If our write operations have completed successfully, we return an HTTP 200 status code with our response. Otherwise, we return a 500 and include the error message from the exception in the response body.
``` javascript
).catch((error) => {
// Catch any error with execution and return a 500
response.setStatusCode(500)
response.setBody(JSON.stringify({
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],
timestamp: (new Date()).getTime(),
errorMessage: error
}))
return
})
```
### Our WebHook URL
Now that we've finished writing our Realm function, save and deploy it. Then, on the settings tab, copy the WebHook URL; we'll need it in just a moment.
## Creating an AWS Kinesis Delivery Stream
To create our Kinesis Delivery Stream we're going to use the AWS CLI, and you'll need the following information:
- Your Kinesis Data Stream ARN
- The ARNs of your respective IAM roles; also ensure that the service principal firehose.amazonaws.com is allowed to assume these roles
- Bucket and Role ARNs for the S3 bucket to be used for errors/backups
- MongoDB Realm WebHook URL
- The value of the `FIREHOSE_ACCESS_KEY`
Your final AWS CLI command will look something like this:
``` bash
aws firehose --endpoint-url "https://firehose.us-east-1.amazonaws.com" \
create-delivery-stream --delivery-stream-name RealmDeliveryStream \
--delivery-stream-type KinesisStreamAsSource \
--kinesis-stream-source-configuration \
"KinesisStreamARN=arn:aws:kinesis:us-east-1:78023564309:stream/realm,RoleARN=arn:aws:iam::78023564309:role/KinesisRealmRole" \
--http-endpoint-destination-configuration \
"RoleARN=arn:aws:iam::78023564309:role/KinesisFirehoseFullAccess,\
S3Configuration={RoleARN=arn:aws:iam::78023564309:role/KinesisRealmRole, BucketARN=arn:aws:s3:::realm-kinesis},\
EndpointConfiguration={\
Url=https://webhooks.mongodb-stitch.com/api/client/v2.0/app/realmkinesis-aac/service/kinesis/incoming_webhook/kinesisDestination,\
Name=RealmCloud,AccessKey=sdhfjkdbf347fb3icb34i243orn34fn234r23c}"
```
If everything executes correctly, you should see your new Delivery Stream appear in your Kinesis Dashboard. Also, after a few moments, the WebHook event will appear in your Realm logs and documents will begin to populate your collection!
## Next Steps
With the Kinesis data now in MongoDB Atlas, we have a wealth of possibilities. We can transform it with aggregation pipelines, visualise it with Charts, turn it into a GraphQL API, or even trigger more Realm functions or services.
## Further reading
Now you've seen how you can use MongoDB Realm as an AWS Kinesis HTTP Endpoint you might find our other articles on using MongoDB with Kinesis useful:
- Integrating MongoDB and Amazon Kinesis for Intelligent, Durable Streams
- Processing Data Streams with Amazon Kinesis and MongoDB Atlas
- MongoDB Stitch Triggers & Amazon Kinesis — The AWS re\:Invent Stitch Rover Demo
- Near-real time MongoDB integration with AWS kinesis stream and Apache Spark Streaming
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post. | md | {
"tags": [
"Realm",
"JavaScript",
"AWS"
],
"pageDescription": "With the launch of third-party data destinations in Kinesis, you can use MongoDB Realm and MongoDB Atlas as an AWS Kinesis Data Firehose destination.",
"contentType": "Tutorial"
} | Using MongoDB Realm WebHooks with Amazon Kinesis Data Firehose | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/aggregation-pipeline-covid19-benford-law | created | # Aggregation Pipeline: Applying Benford's Law to COVID-19 Data
## Introduction
In this blog post, I will show you how I built an aggregation
pipeline to
apply Benford's law on
the COVID-19 data set that we have made available in the following
cluster:
``` none
mongodb+srv://readonly:[email protected]/covid19
```
If you want to know more about this cluster and how we transformed the
CSV files from Johns Hopkins University's repository into clean MongoDB documents, check out this blog post.
Finally, based on this pipeline, I was able to produce a dashboard in MongoDB Charts. For example, here is one Chart that applies Benford's law on the worldwide daily cases of COVID-19:
:charts[]{url="https://charts.mongodb.com/charts-open-data-covid-19-zddgb" id="bff5cb5e-ce3d-4fe7-a208-be9da0502621"}
>
>
>**Disclaimer**: This article will focus on the aggregation pipeline and
>the stages I used to produce the result I wanted to get to be able to
>produce these charts—not so much on the results themselves, which can be
>interpreted in many different ways. One of the many issues here is the
>lack of data. The pandemic didn't start at the same time in all the
>countries, so many countries don't have enough data to make the
>percentages accurate. But feel free to interpret these results the way
>you want...
>
>
## Prerequisites
This blog post assumes that you already know the main principles of the
aggregation pipeline
and you are already familiar with the most common stages.
If you want to follow along, feel free to use the cluster mentioned
above or take a copy using mongodump or mongoexport, but the main takeaway from this blog post is the techniques I used to
produce the output I wanted.
Also, I can't recommend you enough to use the aggregation pipeline
builder in MongoDB Atlas
or Compass to build your pipelines and play with the ones you will see in this blog post.
All the code is available in this repository.
## What is Benford's Law?
Before we go any further, let me tell you a bit more about Benford's
law. What does Wikipedia
say?
>
>
>Benford's law [...] is an observation about the frequency distribution
>of leading digits in many real-life sets of numerical data. The law
>states that in many naturally occurring collections of numbers, the
>leading digit is likely to be small. In sets that obey the law, the
>number 1 appears as the leading significant digit about 30% of the time,
>while 9 appears as the leading significant digit less than 5% of the
>time. If the digits were distributed uniformly, they would each occur
>about 11.1% of the time. Benford's law also makes predictions about the
>distribution of second digits, third digits, digit combinations, and so
>on.
>
>
Here is the frequency distribution of the first digits that we can
expect for a data set that respects Benford's law:
A little further down in Wikipedia's article, in the "Applications"
section, you can also read the following:
>
>
>**Accounting fraud detection**
>
>In 1972, Hal Varian suggested that the law could be used to detect
>possible fraud in lists of socio-economic data submitted in support of
>public planning decisions. Based on the plausible assumption that people
>who fabricate figures tend to distribute their digits fairly uniformly,
>a simple comparison of first-digit frequency distribution from the data
>with the expected distribution according to Benford's law ought to show
>up any anomalous results.
>
>
Simply, if your data set distribution is following Benford's law, then
it's theoretically possible to detect fraudulent data if a particular
subset of the data doesn't follow the law.
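The expected frequencies come from the formula P(d) = log10(1 + 1/d). If you'd like to double-check the theoretical percentages that appear later in this post, here is a small snippet you can run in mongosh or Node.js:
``` javascript
// Benford's law: the expected frequency of leading digit d is log10(1 + 1/d).
for (let d = 1; d <= 9; d++) {
  const pct = Math.round(Math.log10(1 + 1 / d) * 1000) / 10;
  console.log(`digit ${d}: ${pct}%`); // digit 1: 30.1% ... digit 9: 4.6%
}
```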
In our situation, based on the observation of the first chart above, it
looks like the worldwide daily confirmed cases of COVID-19 are following
Benford's law. But is it true for each country?
If I want to answer this question (I don't), I will have to build a
relatively complex aggregation pipeline (I do 😄).
## The Data Set
I will only focus on a single collection in this blog post:
`covid19.countries_summary`.
As its name suggests, it's a collection that I built (also using an
aggregation
pipeline)
that contains a daily document for each country in the data set.
Here is an example:
``` json
{
_id: ObjectId("608b24d4e7a11f5710a66b05"),
uids: [ 504 ],
confirmed: 19645,
deaths: 305,
country: 'Morocco',
date: 2020-07-25T00:00:00.000Z,
country_iso2s: [ 'MA' ],
country_iso3s: [ 'MAR' ],
country_codes: [ 504 ],
combined_names: [ 'Morocco' ],
population: 36910558,
recovered: 16282,
confirmed_daily: 811,
deaths_daily: 6,
recovered_daily: 182
}
```
As you can see, for each day and country, I have daily counts of the
COVID-19 confirmed cases and deaths.
## The Aggregation Pipeline
Let's apply Benford's law on these two series of numbers.
### The Final Documents
Before we start applying stages (transformations) to our documents,
let's define the shape of the final documents which will make it easy to
plot in MongoDB Charts.
It's easy to do and defines clearly where to start (the document in the
previous section) and where we are going:
``` json
{
country: 'US',
confirmed_size: 435,
deaths_size: 424,
benford: [
{ digit: 1, confirmed: 22.3, deaths: 36.1 },
{ digit: 2, confirmed: 21.1, deaths: 14.4 },
{ digit: 3, confirmed: 11.5, deaths: 10.6 },
{ digit: 4, confirmed: 11.7, deaths: 8 },
{ digit: 5, confirmed: 11, deaths: 5 },
{ digit: 6, confirmed: 11.7, deaths: 4.7 },
{ digit: 7, confirmed: 6.7, deaths: 6.8 },
{ digit: 8, confirmed: 2.3, deaths: 6.4 },
{ digit: 9, confirmed: 1.6, deaths: 8 }
]
}
```
Setting the final objective makes us focused on the target while doing
our successive transformations.
### The Pipeline in English
Now that we have a starting and an ending point, let's try to write our
pipeline in English first:
1. Regroup all the first digits of each count into an array for the
confirmed cases and into another one for the deaths for each
country.
2. Clean the arrays (remove zeros and negative numbers—see note below).
3. Calculate the size of these arrays.
4. Remove countries with empty arrays (countries without cases or
deaths).
5. Calculate the percentages of 1s, 2s, ..., 9s in each array.
6. Add a fake country "BenfordTheory" with the theoretical values of
1s, 2s, etc. we are supposed to find.
7. Final projection to get the document in the final shape I want.
>
>
>Note: The daily fields that I provide in this collection
>`covid19.countries_summary` are computed from the cumulative counts that
>Johns Hopkins University (JHU) provides. Simply: Today's count, for each
>country, is today's cumulative count minus yesterday's cumulative count.
>In theory, I should have zeros (no deaths or no cases that day), but
>never negative numbers. But sometimes, JHU applies corrections on the
>counts without applying them retroactively in the past (as these counts
>were official counts at some point in time, I guess). So, negative
>values exist and I chose to ignore them in this pipeline.
>
>
Now that we have a plan, let's execute it. Each of the points in the
above list is an aggregation pipeline stage, and now we "just" have to
translate them.
### Stage 1: Arrays of Leading Digits
First, I need to be able to extract the first character of
`$confirmed_daily`, which is an integer.
MongoDB provides a
$substr
operator which we can use if we transform this integer into a string.
This is easy to do with the
$toString
operator.
``` json
{ "$substr": { "$toString": "$confirmed_daily" }, 0, 1 ] }
```
Then, apply this transformation to each country and regroup
($group)
the result into an array using
$push.
Here is the first stage:
``` json
{
"$group": {
"_id": "$country",
"confirmed": {
"$push": {
"$substr":
{
"$toString": "$confirmed_daily"
},
0,
1
]
}
},
"deaths": {
"$push": {
"$substr": [
{
"$toString": "$deaths_daily"
},
0,
1
]
}
}
}
}
```
Here is the shape of my documents at this point if I apply this
transformation:
``` json
{
_id: 'Japan',
confirmed: [ '1', '3', '7', [...], '7', '5' ],
deaths: [ '7', '6', '0', [...], '-' , '2' ]
}
```
### Stage 2: Clean the Arrays
As mentioned above, my arrays might contains zeros and `-` which is the
leading character of a negative number. I decided to ignore this for my
little mathematical experimentation.
If I now translate *"clean the arrays"* into something more
"computer-friendly," what I actually want to do is *"filter the
arrays."* We can leverage the
[$filter
operator and overwrite our existing arrays with their filtered versions
without zeros and dashes by using the
$addFields
stage.
``` js
{
"$addFields": {
"confirmed": {
"$filter": {
"input": "$confirmed",
"as": "elem",
"cond": {
"$and":
{
"$ne": [
"$$elem",
"0"
]
},
{
"$ne": [
"$$elem",
"-"
]
}
]
}
}
},
"deaths": { ... } // same as above with $deaths
}
}
```
At this point, our documents in the pipeline have the same shape as
previously.
### Stage 3: Array Sizes
The final goal here is to calculate the percentages of 1s, 2s, ..., 9s
in these two arrays, respectively. To compute this, I will need the size
of the arrays to apply the rule of
three.
This stage is easy as
$size
does exactly that.
``` json
{
"$addFields": {
"confirmed_size": {
"$size": "$confirmed"
},
"deaths_size": {
"$size": "$deaths"
}
}
}
```
To be completely honest, I could compute this on the fly later, when I
actually need it. But I'll need it multiple times later on, and this
stage is inexpensive and eases my mind so... Let's
KISS.
Here is the shape of our documents at this point:
``` json
{
_id: 'Japan',
confirmed: [ '1', '3', '7', [...], '7', '5' ],
deaths: [ '7', '6', '9', [...], '2' , '1' ],
confirmed_size: 452,
deaths_size: 398
}
```
As you can see for Japan, our arrays are relatively long, so we could
expect our percentages to be somewhat accurate.
It's far from being true for all the countries...
``` json
{
_id: 'Solomon Islands',
confirmed: [
'4', '1', '1', '3',
'1', '1', '1', '2',
'1', '5'
],
deaths: [],
confirmed_size: 10,
deaths_size: 0
}
```
``` json
{
_id: 'Fiji',
confirmed: [
'1', '1', '1', '2', '2', '1', '6', '2',
'2', '1', '2', '1', '5', '5', '3', '1',
'4', '1', '1', '1', '2', '1', '1', '1',
'1', '2', '4', '1', '1', '3', '1', '4',
'3', '2', '1', '4', '1', '1', '1', '5',
'1', '4', '8', '1', '1', '2'
],
deaths: [ '1', '1' ],
confirmed_size: 46,
deaths_size: 2
}
```
### Stage 4: Eliminate Countries with Empty Arrays
I'm not good enough at math to decide which size is significant enough
to be statistically accurate, but good enough to know that my rule of
three will need to divide by the size of the array.
As dividing by zero is bad for health, I need to remove empty arrays. A
sound statistician would probably also remove the small arrays... but
not me 😅.
This stage is a trivial
$match:
```
{
"$match": {
"confirmed_size": {
"$gt": 0
},
"deaths_size": {
"$gt": 0
}
}
}
```
### Stage 5: Percentages of Digits
We are finally at the central stage of our pipeline. I need to apply a
rule of three to calculate the percentage of 1s in an array:
- Find how many 1s are in the array.
- Multiply by 100.
- Divide by the size of the array.
- Round the final percentage to one decimal place. (I don't need more
precision for my charts.)
Then, I need to repeat this operation for each digit and each array.
To find how many times a digit appears in the array, I can reuse
techniques we learned earlier:
```
{
"$size": {
"$filter": {
"input": "$confirmed",
"as": "elem",
"cond": {
"$eq":
"$$elem",
"1"
]
}
}
}
}
```
I'm creating a new array which contains only the 1s with `$filter` and I
calculate its size with `$size`.
Now I can
$multiply
this value (let's name it X) by 100,
$divide
by the size of the `confirmed` array, and
$round
the final result to one decimal.
```
{
"$round":
{
"$divide": [
{ "$multiply": [ 100, X ] },
"$confirmed_size"
]
},
1
]
}
```
As a reminder, here is the final document we want:
``` json
{
country: 'US',
confirmed_size: 435,
deaths_size: 424,
benford: [
{ digit: 1, confirmed: 22.3, deaths: 36.1 },
{ digit: 2, confirmed: 21.1, deaths: 14.4 },
{ digit: 3, confirmed: 11.5, deaths: 10.6 },
{ digit: 4, confirmed: 11.7, deaths: 8 },
{ digit: 5, confirmed: 11, deaths: 5 },
{ digit: 6, confirmed: 11.7, deaths: 4.7 },
{ digit: 7, confirmed: 6.7, deaths: 6.8 },
{ digit: 8, confirmed: 2.3, deaths: 6.4 },
{ digit: 9, confirmed: 1.6, deaths: 8 }
]
}
```
The value we just calculated above corresponds to the `22.3` that we
have in this document.
At this point, we just need to repeat this operation nine times for each
digit of the `confirmed` array and nine other times for the `deaths`
array and assign the results accordingly in the new `benford` array of
documents.
Here is what it looks like in the end:
``` json
{
"$addFields": {
"benford": [
{
"digit": 1,
"confirmed": {
"$round": [
{
"$divide": [
{
"$multiply": [
100,
{
"$size": {
"$filter": {
"input": "$confirmed",
"as": "elem",
"cond": {
"$eq": [
"$$elem",
"1"
]
}
}
}
}
]
},
"$confirmed_size"
]
},
1
]
},
"deaths": {
"$round": [
{
"$divide": [
{
"$multiply": [
100,
{
"$size": {
"$filter": {
"input": "$deaths",
"as": "elem",
"cond": {
"$eq": [
"$$elem",
"1"
]
}
}
}
}
]
},
"$deaths_size"
]
},
1
]
}
},
{"digit": 2...},
{"digit": 3...},
{"digit": 4...},
{"digit": 5...},
{"digit": 6...},
{"digit": 7...},
{"digit": 8...},
{"digit": 9...}
]
}
}
```
At this point in our pipeline, our documents look like this:
```
{
_id: 'Luxembourg',
confirmed: [
'1', '5', '2', '1', '1', '4', '3', '1', '2', '5', '8', '4',
'1', '4', '1', '1', '1', '2', '3', '1', '9', '5', '3', '2',
'2', '2', '1', '7', '4', '1', '2', '5', '1', '2', '1', '8',
'9', '6', '8', '1', '1', '3', '7', '8', '6', '6', '4', '2',
'2', '1', '1', '1', '9', '5', '8', '2', '2', '6', '1', '6',
'4', '8', '5', '4', '1', '2', '1', '3', '1', '4', '1', '1',
'3', '3', '2', '1', '2', '2', '3', '2', '1', '1', '1', '3',
'1', '7', '4', '5', '4', '1', '1', '1', '1', '1', '7', '9',
'1', '4', '4', '8',
... 242 more items
],
deaths: [
'1', '1', '8', '9', '2', '3', '4', '1', '3', '5', '5', '1',
'3', '4', '2', '5', '2', '7', '1', '1', '5', '1', '2', '2',
'2', '9', '6', '1', '1', '2', '5', '3', '5', '1', '3', '3',
'1', '3', '3', '4', '1', '1', '2', '4', '1', '2', '2', '1',
'4', '4', '1', '3', '6', '5', '8', '1', '3', '2', '7', '1',
'6', '8', '6', '3', '1', '2', '6', '4', '6', '8', '1', '1',
'2', '3', '7', '1', '8', '2', '1', '6', '3', '3', '6', '2',
'2', '2', '3', '3', '3', '2', '6', '3', '1', '3', '2', '1',
'1', '4', '1', '1',
... 86 more items
],
confirmed_size: 342,
deaths_size: 186,
benford: [
{ digit: 1, confirmed: 36.3, deaths: 32.8 },
{ digit: 2, confirmed: 16.4, deaths: 19.9 },
{ digit: 3, confirmed: 9.1, deaths: 14.5 },
{ digit: 4, confirmed: 8.8, deaths: 7.5 },
{ digit: 5, confirmed: 6.4, deaths: 6.5 },
{ digit: 6, confirmed: 9.6, deaths: 8.6 },
{ digit: 7, confirmed: 5.8, deaths: 3.8 },
{ digit: 8, confirmed: 5, deaths: 4.8 },
{ digit: 9, confirmed: 2.6, deaths: 1.6 }
]
}
```
>
>
>Note: At this point, we don't need the arrays anymore. The target
>document is almost there.
>
>
### Stage 6: Introduce Fake Country BenfordTheory
In my final charts, I wanted to be able to also display Benford's
theoretical values, alongside the actual values from the different
countries to be able to spot easily which one is **potentially**
producing fake data (modulo the statistic noise and many other reasons).
Just to give you an idea, it looks like, globally, all the countries are
producing legit data but some arrays are small and produce "statistical
accidents."
:charts[]{url="https://charts.mongodb.com/charts-open-data-covid-19-zddgb" id="5030cc1a-8318-40e0-91b0-b1c118dc719b"}
To be able to insert this "perfect" document, I need to introduce in my
pipeline a fake and perfect country that has the perfect percentages. I
decided to name it "BenfordTheory."
But (because there is always one), as far as I know, there is no stage
that can just let me insert a new document like this in my pipeline.
So close...
Lucky for me, I found a workaround to this problem with the new (since
4.4)
$unionWith
stage. All I have to do is insert my made-up document into a collection
and I can "insert" all the documents from this collection into my
pipeline at this stage.
I inserted my fake document into the new collection randomly named
`benford`. Note that I made this document look like the documents at
this current stage in my pipeline. I didn't care to insert the two
arrays because I'm about to discard them anyway.
``` json
{
_id: 'BenfordTheory',
benford: [
{ digit: 1, confirmed: 30.1, deaths: 30.1 },
{ digit: 2, confirmed: 17.6, deaths: 17.6 },
{ digit: 3, confirmed: 12.5, deaths: 12.5 },
{ digit: 4, confirmed: 9.7, deaths: 9.7 },
{ digit: 5, confirmed: 7.9, deaths: 7.9 },
{ digit: 6, confirmed: 6.7, deaths: 6.7 },
{ digit: 7, confirmed: 5.8, deaths: 5.8 },
{ digit: 8, confirmed: 5.1, deaths: 5.1 },
{ digit: 9, confirmed: 4.6, deaths: 4.6 }
],
confirmed_size: 999999,
deaths_size: 999999
}
```
With this new collection in place, all I need to do is `$unionWith` it.
``` json
{
"$unionWith": {
"coll": "benford"
}
}
```
### Stage 7: Final Projection
At this point, our documents look almost like the initial target
document that we have set at the beginning of this blog post. Two
differences though:
- The name of the countries is in the `_id` key, not the `country`
one.
- The two arrays are still here.
We can fix this with a simple
$project
stage.
``` json
{
"$project": {
"country": "$_id",
"_id": 0,
"benford": 1,
"confirmed_size": 1,
"deaths_size": 1
}
}
```
>
>
>Note that I chose which field should be here or not in the final
>document by inclusion here. `_id` is an exception and needs to be
>explicitly excluded. As the two arrays aren't explicitly included, they
>are excluded by default, like any other field that would be there. See
>the considerations in the $project documentation.
>
>
Here is our final result:
``` json
{
confirmed_size: 409,
deaths_size: 378,
benford: [
{ digit: 1, confirmed: 32.8, deaths: 33.6 },
{ digit: 2, confirmed: 20.5, deaths: 13.8 },
{ digit: 3, confirmed: 15.9, deaths: 11.9 },
{ digit: 4, confirmed: 10.8, deaths: 11.6 },
{ digit: 5, confirmed: 5.9, deaths: 6.9 },
{ digit: 6, confirmed: 2.9, deaths: 7.7 },
{ digit: 7, confirmed: 4.4, deaths: 4.8 },
{ digit: 8, confirmed: 3.2, deaths: 5.6 },
{ digit: 9, confirmed: 3.7, deaths: 4.2 }
],
country: 'Bulgaria'
}
```
And please remember that some documents still look like this in the
pipeline because I didn't bother to filter them:
``` json
{
confirmed_size: 2,
deaths_size: 1,
benford: [
{ digit: 1, confirmed: 0, deaths: 0 },
{ digit: 2, confirmed: 50, deaths: 100 },
{ digit: 3, confirmed: 0, deaths: 0 },
{ digit: 4, confirmed: 0, deaths: 0 },
{ digit: 5, confirmed: 0, deaths: 0 },
{ digit: 6, confirmed: 0, deaths: 0 },
{ digit: 7, confirmed: 50, deaths: 0 },
{ digit: 8, confirmed: 0, deaths: 0 },
{ digit: 9, confirmed: 0, deaths: 0 }
],
country: 'MS Zaandam'
}
```
## The Final Pipeline
My final pipeline is pretty long due to the fact that I'm repeating the
same block for each digit and each array for a total of 9\*2=18 times.
I wrote a factorised version in JavaScript that can be executed in
mongosh:
``` js
use covid19;
let groupBy = {
"$group": {
"_id": "$country",
"confirmed": {
"$push": {
"$substr": {
"$toString": "$confirmed_daily"
}, 0, 1]
}
},
"deaths": {
"$push": {
"$substr": [{
"$toString": "$deaths_daily"
}, 0, 1]
}
}
}
};
let createConfirmedAndDeathsArrays = {
"$addFields": {
"confirmed": {
"$filter": {
"input": "$confirmed",
"as": "elem",
"cond": {
"$and": [{
"$ne": ["$$elem", "0"]
}, {
"$ne": ["$$elem", "-"]
}]
}
}
},
"deaths": {
"$filter": {
"input": "$deaths",
"as": "elem",
"cond": {
"$and": [{
"$ne": ["$$elem", "0"]
}, {
"$ne": ["$$elem", "-"]
}]
}
}
}
}
};
let addArraySizes = {
"$addFields": {
"confirmed_size": {
"$size": "$confirmed"
},
"deaths_size": {
"$size": "$deaths"
}
}
};
let removeCountriesWithoutConfirmedCasesAndDeaths = {
"$match": {
"confirmed_size": {
"$gt": 0
},
"deaths_size": {
"$gt": 0
}
}
};
function calculatePercentage(inputArray, digit, sizeArray) {
return {
"$round": [{
"$divide": [{
"$multiply": [100, {
"$size": {
"$filter": {
"input": inputArray,
"as": "elem",
"cond": {
"$eq": ["$$elem", digit]
}
}
}
}]
}, sizeArray]
}, 1]
}
}
function calculatePercentageConfirmed(digit) {
return calculatePercentage("$confirmed", digit, "$confirmed_size");
}
function calculatePercentageDeaths(digit) {
return calculatePercentage("$deaths", digit, "$deaths_size");
}
let calculateBenfordPercentagesConfirmedAndDeaths = {
"$addFields": {
"benford": [{
"digit": 1,
"confirmed": calculatePercentageConfirmed("1"),
"deaths": calculatePercentageDeaths("1")
}, {
"digit": 2,
"confirmed": calculatePercentageConfirmed("2"),
"deaths": calculatePercentageDeaths("2")
}, {
"digit": 3,
"confirmed": calculatePercentageConfirmed("3"),
"deaths": calculatePercentageDeaths("3")
}, {
"digit": 4,
"confirmed": calculatePercentageConfirmed("4"),
"deaths": calculatePercentageDeaths("4")
}, {
"digit": 5,
"confirmed": calculatePercentageConfirmed("5"),
"deaths": calculatePercentageDeaths("5")
}, {
"digit": 6,
"confirmed": calculatePercentageConfirmed("6"),
"deaths": calculatePercentageDeaths("6")
}, {
"digit": 7,
"confirmed": calculatePercentageConfirmed("7"),
"deaths": calculatePercentageDeaths("7")
}, {
"digit": 8,
"confirmed": calculatePercentageConfirmed("8"),
"deaths": calculatePercentageDeaths("8")
}, {
"digit": 9,
"confirmed": calculatePercentageConfirmed("9"),
"deaths": calculatePercentageDeaths("9")
}]
}
};
let unionBenfordTheoreticalValues = {
"$unionWith": {
"coll": "benford"
}
};
let finalProjection = {
"$project": {
"country": "$_id",
"_id": 0,
"benford": 1,
"confirmed_size": 1,
"deaths_size": 1
}
};
let pipeline = [groupBy,
createConfirmedAndDeathsArrays,
addArraySizes,
removeCountriesWithoutConfirmedCasesAndDeaths,
calculateBenfordPercentagesConfirmedAndDeaths,
unionBenfordTheoreticalValues,
finalProjection];
let cursor = db.countries_summary.aggregate(pipeline);
printjson(cursor.next());
```
If you want to read the entire pipeline, it's available in this github
repository.
If you want to see more visually how this pipeline works step by step,
import it in MongoDB Compass
once you are connected to the cluster (see the URI in the
Introduction). Use the `New Pipeline From Text` option in the
`covid19.countries_summary` collection to import it.
## An Even Better Pipeline?
Did you think that this pipeline I just presented was *perfect*?
Well well... It's definitely getting the job done, but we can make it
*better* in many ways. I already mentioned in this blog post that we
could remove Stage 3, for example, if we wanted to. It might not be as
optimal, but it would be shorter.
Also, there is still Stage 5, in which I literally copy and paste the
same piece of code 18 times... and Stage 6, where I have to use a
workaround to insert a document in my pipeline.
Another solution could be to rewrite this pipeline with a
$facet
stage and execute two sub-pipelines in parallel to compute the results
we want for the confirmed array and the deaths array. But this solution
is actually about two times slower.
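If you're curious, here is a hypothetical skeleton of that $facet approach—the sub-pipeline contents below are placeholders for the percentage logic from Stage 5, not the actual stages:
``` javascript
// Hypothetical skeleton: $facet runs both sub-pipelines in parallel on the same input.
let facetStage = {
  "$facet": {
    "confirmedBenford": [
      { "$project": { "digits": "$confirmed", "size": "$confirmed_size" } }
      // ...followed by the nine percentage calculations for confirmed cases
    ],
    "deathsBenford": [
      { "$project": { "digits": "$deaths", "size": "$deaths_size" } }
      // ...followed by the nine percentage calculations for deaths
    ]
  }
};
```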
However, my colleague John Page came up
with this pipeline that is just better than mine
because it's applying more or less the same algorithm, but it's not
repeating itself. The code is a *lot* cleaner and I just love it, so I
thought I would also share it with you.
John very smartly uses a $map stage to iterate over the nine digits, which makes the code a lot simpler to maintain.
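I won't copy John's pipeline here, but to give you an idea of the technique, here is a simplified sketch of my own (not his exact code) showing how $map can replace the 18 repeated blocks from Stage 5:
``` javascript
// Sketch only: iterate over the nine digits with $map and compute both
// percentages in a single expression. "$$this" is the $filter element,
// "$$digit" is the current digit from the $map input array.
let benfordWithMap = {
  "$addFields": {
    "benford": {
      "$map": {
        "input": ["1", "2", "3", "4", "5", "6", "7", "8", "9"],
        "as": "digit",
        "in": {
          "digit": { "$toInt": "$$digit" },
          "confirmed": {
            "$round": [{ "$divide": [{ "$multiply": [100, { "$size": {
              "$filter": { "input": "$confirmed", "cond": { "$eq": ["$$this", "$$digit"] } }
            } }] }, "$confirmed_size"] }, 1]
          },
          "deaths": {
            "$round": [{ "$divide": [{ "$multiply": [100, { "$size": {
              "$filter": { "input": "$deaths", "cond": { "$eq": ["$$this", "$$digit"] } }
            } }] }, "$deaths_size"] }, 1]
          }
        }
      }
    }
  }
};
```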
## Wrap-Up
In this blog post, I tried my best to share with you the process of
creating a relatively complex aggregation pipeline and a few tricks to
transform as efficiently as possible your documents.
We talked about and used in a real pipeline the following aggregation
pipeline stages and operators:
- $addFields.
- $toString.
- $substr.
- $group.
- $push.
- $filter.
- $size.
- $multiply.
- $divide.
- $round.
- $match.
- $unionWith.
- $project.
- $facet.
- $map.
If you are a statistician and you can make sense of these results,
please post a message on the Community
Forum and ping me!
Also, let me know if you can find out if some countries are clearly
generating fake data.
>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Using the MongoDB Aggregation Pipeline to apply Benford's law on the COVID-19 date set from Johns Hopkins University.",
"contentType": "Article"
} | Aggregation Pipeline: Applying Benford's Law to COVID-19 Data | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-jetpackcompose-emoji-android | created | # Building an Android Emoji Garden on Jetpack Compose with Realm
As an Android developer, have you wanted to get acquainted with Jetpack
Compose and mobile architecture? Or maybe you have wanted to build an
app end to end, with a hosted database? If yes, then this post is for
you!
We'll be building an app that shows data from a central shared database:
MongoDB Realm. The app will reflect changes in the database in real-time on all devices that use it.
Imagine you're at a conference and you'd like to engage with the other
attendees in a creative way. How about with emojis? 😋 What if the
conference had an app with a field of emojis where each emoji represents
an attendee? Together, they create a beautiful garden. I call this app
*Emoji Garden*. I'll be showing you how to build such an app in this
post.
This article is Part 1 of a two-parter where we'll just be building the
core app structure and establishing the connection to Realm and sharing
our emojis between the database and the app. Adding and changing emojis
from the app will be in Part 2.
Here we see the app at first run. We'll be creating two screens:
1. A **Login Screen**.
2. An **Emoji Garden Screen** updated with emojis directly from the
server. It displays all the attendees of the conference as emojis.
Looks like a lot of asynchronous code, doesn't it? As we know,
asynchronous code is the bane of Android development. However, you
generally can't avoid it for database and network operations. In our
app, we store emojis in the local Realm database. The local database
seamlessly syncs with a MongoDB Realm Sync server instance in the
background. Are we going to need other libraries like RxJava or
Coroutines? Nope, we won't. **In this article, we'll see how to get**
Realm to do this all for you!
If you prefer Kotlin Flows with Coroutines, then don't worry. The Realm
SDK can generate them for you. I'll show you how to do that too. Let's
begin!
Let me tempt you with the tech for Emoji Garden!
* Using Jetpack Compose to put together the UI.
* Using ViewModels and MVVM effectively with Compose.
* Using Coroutines
and Realm functions to keep your UI updated.
* Using anonymous logins in Realm.
* Setting up a globally accessible MongoDB Atlas instance to sync to
your app's Realm database.
## Prerequisites
Remember that all of the code for the final app is available in the
GitHub repo. If
you'd like to build Emoji Garden🌲 with me, you'll need the following:
1. Android Studio, version
"*Arctic Fox (2020.3.1)*" or later.
2. A basic understanding of
building Android apps, like knowing what an Activity is and having tried a bit of Java or Kotlin coding.
Emoji Garden shouldn't be the first Android app you've ever tried to
build. However, it is a great intro into Realm and Jetpack Compose.
> 💡 There's one prerequisite you'd need for anything you're doing and
> that's a growth mindset 🌱. It means you believe you can learn anything. I believe in you!
Estimated time to complete: 2.5-3 hours
## Create a New Compose Project
Once you've got the Android Studio
Canary, you can fire up
the **New Project** menu and select Empty Compose Activity. Name your
app "Emoji Garden" if you want the same name as mine.
## Project Imports
We will be adding imports into two files:
1. Into the app level build.gradle.
2. Into the project level build.gradle.
At times, I may refer to functions, classes, or variables by putting
their names in italics, like *EmojiClass*, so you can tell what's a
variable/constant/class and what isn't.
### App Level build.gradle Imports
First, the app level build.gradle. To open the app's build.gradle file,
double-tap Shift in Android Studio and type "build.gradle." **Select the**
one with "app" at the end and hit enter. Check out how build.gradle
looks in the finished sample
app.
Yours doesn't need to look exactly like this yet. I'll tell you what to
add.
In the app level build.gradle, we are going to add a few dependencies,
shown below. They go into the *dependencies* block:
``` kotlin
// For the viewModel function that imports them into activities
implementation 'androidx.activity:activity-ktx:1.3.0'
// For the ViewModelScope if using Coroutines in the ViewModel
implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0'
implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.1'
```
**After** adding them, your dependencies block should look like this.
You could copy and replace the entire block in your app.
``` kotlin
dependencies {
implementation 'androidx.core:core-ktx:1.3.2'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'com.google.android.material:material:1.2.1'
// For Jetpack Compose
implementation "androidx.compose.ui:ui:$compose_version"
implementation "androidx.compose.material:material:$compose_version"
implementation "androidx.compose.ui:ui-tooling:$compose_version"
// For the viewModel function that imports them into activities
implementation 'androidx.activity:activity-ktx:1.3.0'
// For the ViewModelScope if using Coroutines in the ViewModel
implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0'
implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.1'
testImplementation 'junit:junit:4.+'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
}
```
In the same file under *android* in the app level build.gradle, you
should have the *composeOptions* already. **Make sure the**
kotlinCompilerVersion is at least 1.5.10. Compose needs this to
function correctly.
``` kotlin
composeOptions {
kotlinCompilerExtensionVersion compose_version
kotlinCompilerVersion kotlin_ext
}
```
### Project Level build.gradle Imports
Open the **project level** build.gradle file. Double-tap Shift in
Android Studio -> type "build.gradle" and **look for the one with a dot**
at the end. This is how it looks in the sample app.
Follow along for steps.
Make sure the compose version under buildscript is 1.x.x or greater.
``` kotlin
buildscript {
ext {
compose_version = '1.0.0'
kotlin_ext = '1.5.10'
}
```
Great! We're all done with imports. Remember to hit "Sync Now" at the
top right.
## Overview of the Emoji Garden App
### Folder Structure
*com.example.emojigarden* is the directory where all the code for the
Emoji Garden app resides. This directory is auto-generated from the app
name when you create a project. The image shown below is an overview of
all the classes in the finished app. It's what we'll have when we're
done with this article.
## Building the Android App
The Emoji Garden app is divided into two parts: the UI and the logic.
1. The UI displays the emoji garden.
2. The logic (classes and functions) will update the emoji garden from
the server. This will keep the app in sync for all attendees.
### Creating a New Source File
Let's create a file named *EmojiTile* inside a source folder. If you're
not sure where the source folder is, here's how to find it. Hit the
project tab (**⌘+1** on mac or **Ctrl+1** on Windows/Linux).
Open the app folder -> java -> *com.example.emojigarden* or your package name. Right click on *com.example.emojigarden* to create new files for source code. For this project, we will create all source files here. To see other strategies to organize code, see package-by-feature.
Type in the name of the class you want to make— *EmojiTile*, for
instance. Then hit Enter.
### Write the Emoji Tile Class
Since the garden is full of emojis, we need a class to represent the
emojis. Let's make the *EmojiTile* class for this. Paste this in.
``` kotlin
class EmojiTile {
var emoji : String = ""
}
```
### Let's Start with the Garden Screen
Here's what the screen will look like. When the UI is ready, the Garden
Screen will display a grid of beautiful emojis. We still have some work
to do in setting everything up.
#### The Garden UI Code
Let's get started making that screen. We're going to throw away nearly
everything in *MainActivity.kt* and write this code in its place.
Reach *MainActivity.kt* by **double-tapping Shift** and typing
"mainactivity." Any of those three results in the image below will take
you there.
Here's what the file looks like before we've made any changes.
Now, leave only the code below in *MainActivity.kt* apart from the
imports. Notice how we've removed everything inside the *setContent*
function except the MainActivityUi function. We haven't created it yet,
so I've left it commented out. It's the last of the three sectioned UI
below. The extra annotation (*@ExperimentalFoundationApi*) will be
explained shortly.
``` kotlin
@ExperimentalFoundationApi
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
// MainActivityUi(emptyList())
}
}
}
```
The UI code for the garden will be built up in three functions. Each
represents one "view."
> 💡 We'll be using a handful of functions for UI instead of defining it
> in the Android XML file. Compose uses only regular functions marked
> @Composeable to define how the UI should look. Compose also features interactive Previews without even deploying to an emulator or device. "Functions as UI" make UIs designed in Jetpack Compose incredibly modular.
The functions are:
1. *EmojiHolder*
2. *EmojiGrid*
3. *MainActivityUi*
I'll show how to do previews right after the first function EmojiHolder.
Each of the three functions will be written at the end of the
*MainActivity.kt* file. That will put the functions outside the
*MainActivity* class. **Compose functions are independent of classes.**
They'll be composed together like this:
> 💡 Composing just means using inside something else—like calling one
> Jetpack Compose function inside another Jetpack Compose function.
### Single Emoji Holder
Let's start from the smallest bit of UI, the holder for a single emoji.
``` kotlin
@Composable
fun EmojiHolder(emoji: EmojiTile) {
Text(emoji.emoji)
}
```
The *EmojiHolder* function draws the emoji in a text box. The text
function is part of Jetpack Compose. It's the equivalent of a TextView
in the XML way making UI. It just needs to have some text handed to it.
In this case, the text comes from the *EmojiTile* class.
### Previewing Your Code
A great thing about Compose functions is that they can be previewed
right inside Android Studio. Drop this function into *MainActivity.kt*
at the end.
``` kotlin
@Preview
@Composable
fun EmojiPreview() {
EmojiHolder(EmojiTile().apply { emoji = "😼" })
}
```
You'll see the image below! If the preview is too small, click it and
hit **Ctrl+** or **⌘+** to increase the size. If it's not, choose the
"Split View" (the larger arrow below). It splits the screen between code
and previews. Previews are only generated once you've changed the code and hit the build icon. To rebuild the code, hit the refresh icon (the
smaller green arrow below).
### The EmojiGrid
To make the garden, we'll be using the *LazyVerticalGrid*, which is like
RecyclerView in Compose. It only renders items that are visible, as opposed to those
that scroll offscreen. *LazyVerticalGrid* is a new class in Jetpack
Compose version alpha9. Since it's experimental, it requires the
*@ExperimentalFoundationApi* annotation. It's fun to play with though!
Copy this into your project.
``` kotlin
@ExperimentalFoundationApi
@Composable
fun EmojiGrid(emojiList: List<EmojiTile>) {
LazyVerticalGrid(cells = GridCells.Adaptive(20.dp)) {
items(emojiList) { emojiTile ->
EmojiHolder(emojiTile)
}
}
}
```
### Garden Screen Container: MainActivityUI
Finally, the EmojiGrid is centered in a full-width *Box*. *Box* itself
is a compose function.
> 💡 Since my app was named "Emoji Garden," the auto-generated theme for it is EmojiGardenTheme. The theme name may be different for you. Type it in, if so.
Since the *MainActivityUi* is composed of *EmojiGrid*, which uses the
*@ExperimentalFoundationApi* annotation, *MainActivityUi* now has to use the same annotation.
``` kotlin
@ExperimentalFoundationApi
@Composable
fun MainActivityUi(emojiList: List<EmojiTile>) {
EmojiGardenTheme {
Box(
Modifier.fillMaxWidth().padding(16.dp),
contentAlignment = Alignment.Center
) {
EmojiGrid(emojiList)
}
}
}
```
### Previews
Try previews for any of these! Here's a preview function for
*MainActivityUI*. Preview functions should be in the same file as the
functions they're trying to preview.
``` kotlin
@ExperimentalFoundationApi
@Preview(showBackground = true)
@Composable
fun DefaultPreview() {
MainActivityUi(List(102){ i -> EmojiTile().apply { emoji = emojis[i] }})
}
val emojis = listOf("🐤","🐦","🐔","🦤","🕊","️","🦆","🦅","🪶","🦩","🐥","-","🐣","🦉","🦜","🦚","🐧","🐓","🦢","🦃","🦡","🦇","🐻","🦫","🦬","🐈","","⬛","🐗","🐪","🐈","🐱","🐿","️","🐄","🐮","🦌","🐕","🐶","🐘","🐑","🦊","🦒","🐐","🦍","🦮","🐹","🦔","🦛","🐎","🐴","🦘","🐨","🐆","🦁","🦙","🦣","🐒","🐵","🐁","🐭","🦧","🦦","🐂","🐼","🐾","🐖","🐷","🐽","🐻","","❄","️","🐩","🐇","🐰","🦝","🐏","🐀","🦏","🐕","","🦺","🦨","🦥","🐅","🐯","🐫","-","🦄","🐃","🐺","🦓","🐳","🐡","🐬","🐟","🐙","🦭","🦈","🐚","🐳","🐠","🐋","🌱","🌵","🌳","🌲","🍂","🍀","🌿","🍃","🍁","🌴","🪴","🌱","☘","️","🌾","🐊","🐊","🐉","🐲","🦎","🦕","🐍","🦖","-","🐢")
```
Here's a preview generated by the code above. Remember to hit the build arrows if it doesn't show up.
You might notice that some of the emojis aren't showing up. That's
because we haven't begun to use [EmojiCompat yet. We'll get to that in the next article.
### Login Screen
You can use a Realm database locally without logging in. Syncing data
requires a user account. Let's take a look at the UI for login since
we'll need it soon. If you're following along, drop this into the
*MainActivity.kt*, at the end of the file. The login screen is going to
be all of one button. Notice that the actual login function is passed
into the View. Later, we'll make a *ViewModel* named *LoginVm*. It will
provide the login function.
``` kotlin
@Composable
fun LoginView(login : () -> Unit) {
Column(modifier = Modifier.fillMaxWidth().padding(16.dp),
verticalArrangement = Arrangement.Center,
horizontalAlignment = Alignment.CenterHorizontally){
Button(login){
Text("Login")
}
}
}
```
## Set Up Realm Sync
We've built as much of the app as we can without Realm. Now it's time to enable storing our emojis locally. Then we can begin syncing them to
your own managed Realm instance in the cloud.
Now we need to:
1. Create a free MongoDB Atlas
account
* Follow the link above to host your data in the cloud. The emojis
in the garden will be synced to this database so they can be
sent to all connecting mobile devices. Configure your Atlas
account with the following steps:
* Add your connection
IP,
so only someone with your IP can access the database.
* Create a database
user,
so you have an admin user to run commands with. Note down the
username and password you create here.
2. Create a Realm App on the cloud account
* Hit the Realm tab
*
* You're building a Mobile app for Android from scratch. How cool!
Hit Start a New realm App.
*
* You can name your application anything you want. Even the
default "Application 0" is fine.
3. Turn on Anonymous
authentication \- We don't want to make people wait around to authenticate with a
username and password. So, we'll just hand them a login button
that will perform an anonymous authentication. Follow the link
in the title to turn it on.
4. Enable Realm Sync
* This will allow real-time data synchronization between mobile
clients.
* Go to https://cloud.mongodb.com and hit the Realm tab.
* Click your application. It might have a different name.
*
* As in the image below, hit Sync (on the left) in the Realm tab.
Then "Define Data Models" on the page that opens.
*
* Choose the default cluster. For the partition key, type in
"event" and select a type of "string" for it. Under "Define a
database name," type in "gardens." Hit "Turn Dev Mode On" at the
bottom.
*
> 💡 For this use case, the "partition key" should be named "event" and be
> of type "String." We'll see why when we add the partition key to our
> EmojiTile later. The partition key is a way to separate data within the
> collection by when it's going to be used.
Fill in those details and hit "Turn Dev Mode On." Now click "Review and
Deploy."
## Integrating Realm into the App
### Install the SDK
Install the Realm Android
SDK
Follow the link above to install the SDK. This provides Realm
authentication and database methods within the app. When they talk about
adding "apply plugin:" just replace that with "id," like in the image
below:
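For reference, the snippet below is an illustration of what that might look like once you're done—the plugin version is an assumption, so take the exact coordinates and version from the install guide rather than from here:
``` kotlin
// Project-level build.gradle — the version number here is an assumption;
// use whatever the Realm install guide currently specifies.
buildscript {
    dependencies {
        classpath "io.realm:realm-gradle-plugin:10.8.0"
    }
}

// App-level build.gradle — the "id" syntax instead of "apply plugin:".
plugins {
    id 'com.android.application'
    id 'kotlin-android'
    id 'kotlin-kapt'   // Realm's plugin needs kapt applied first
    id 'realm-android'
}
```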
### Add Internet Permissions
Open the AndroidManifest.xml file by **double-tapping Shift** in Android
Studio and typing in "manifest."
Add the Internet permission to your Android Manifest above the
application tag.
``` xml
<uses-permission android:name="android.permission.INTERNET" />
```
The file should start off like this after adding it:
``` xml
💡 "event" might seem a strange name for a field for an emoji. Here,
> it's the partition key. Emojis for a single garden will be assigned the
> same partition key. Each instance of Realm on mobile can only be
> configured to retrieve objects with one partition key.
### Separating Your Concerns
We're going to need objects from the Realm Mobile SDK that give access
to login and data functions. These will be abstracted into their own
class, called RealmModule.
Later, I'll create a custom application class *EmojiGardenApplication*
to instantiate *RealmModule*. This will make it easy to pass into the
*ViewModels*.
#### RealmModule
Grab a copy of the RealmModule from the sample
repo.
This will handle Realm App initialization and connecting to a synced
instance for you. It also contains a method to log in. Copy/paste it
into the source folder. You might end up with duplicate *package*
declarations. Delete the extra one, if so. Let's take a look at what's
in *RealmModule*. Skip to the next section if you want to get right to using it.
##### The Constructor and Class Variables
The init{ } block is like a Kotlin constructor. It'll run as soon as an
instance of the class is created. Realm.init is required for local or
remote Realms. Then, a configuration is created from your appId as part
of initialization, as seen
here. To get
a synced realm, we need to log in.
We'll need to hold onto the Realm App object for logins later, so it's a
class variable.
``` kotlin
private var syncedRealm: Realm? = null
private val app : App
private val TAG = RealmModule::class.java.simpleName
init {
Realm.init(application)
app = App(AppConfiguration.Builder(appId).build())
// Login anonymously because a logged in user is required to open a synced realm.
loginAnonSyncedRealm(
onSuccess = {Log.d(TAG, "Login successful") },
onFailure = {Log.d(TAG, "Login Unsuccessful, are you connected to the net?")}
)
}
```
##### The Login Function
Before you can add data to a synced Realm, you need to be logged in. You
only need to be online the first time you log in. Your credentials are
preserved and data can be inserted offline after that.
Note the partition key. Only objects with the same value for the
partition key as specified here will be synced by this Realm instance.
To sync objects with different keys, you would need to create another
instance of Realm. Once login succeeds, the logged-in user object is
used to instantiate the synced Realm.
``` kotlin
fun loginAnonSyncedRealm(partitionKey : String = "default", onSuccess : () -> Unit, onFailure : () -> Unit ) {
val credentials = Credentials.anonymous()
app.loginAsync(credentials) { loginResult ->
Log.d("RealmModule", "logged in: $loginResult, error? : ${loginResult.error}")
if (loginResult.isSuccess) {
instantiateSyncedRealm(loginResult.get(), partitionKey)
onSuccess()
} else {
onFailure()
}
}
}
private fun instantiateSyncedRealm(user: User?, partition : String) {
val config: SyncConfiguration = SyncConfiguration.defaultConfig(user, partition)
syncedRealm = Realm.getInstance(config)
}
```
##### Initialize the Realm Schema
Part of the setup of Realm is telling the server a little about the data
types it can expect. This is only important for statically typed
programming languages like Kotlin, which would refuse to sync objects
that it can't cast into expected types.
> 💡 There are a few ways to do this:
>
>
> 1. Manually code the schema as a JSON schema document.
> 2. Let Realm generate the schema from what's stored in the database already.
> 3. Let Realm figure out the schema from the documents at the time they're pushed into the db from the mobile app.
>
>
> We'll be doing #3.
If you're wondering where the single soil emoji comes from when you log
in, it's from this function. It will be called behind the scenes (in
*LoginVm*) to set up the schema for the *EmojiTile* collection. Later,
when we add emojis from the server, it'll have stronger guarantees about
what types it contains.
``` kotlin
fun initializeCollectionIfEmpty() {
syncedRealm?.executeTransactionAsync { realm ->
if (realm.where(EmojiTile::class.java).count() == 0L) {
realm.insert(EmojiTile().apply {
emoji = "🟫"
})
}
}
}
```
##### Minor Functions
*getSyncedRealm* Required to work around the fact that *syncedRealm*
must be nullable internally. The internal nullability is used to figure
out whether it's initialized. When it's retrieved externally, we'd
always expect it to be available and so we throw an exception if it
isn't.
``` kotlin
fun isInitialized() = syncedRealm != null
fun getSyncedRealm() : Realm = syncedRealm ?: throw IllegalStateException("loginAnonSyncedRealm has to return onSuccess first")
```
### EmojiGarden Custom Application
Create a custom application class for the Emoji Garden app which will
instantiate the *RealmModule*.
Remember to add your appId to the appId variable. You could name the new
class *EmojiGardenApplication*.
``` kotlin
class EmojiGardenApplication : Application() {
lateinit var realmModule : RealmModule
override fun onCreate() {
super.onCreate()
// Get your appId from https://realm.mongodb.com/ for the database you created under there.
val appId = "your appId here"
realmModule = RealmModule(this, appId)
}
}
```
## ViewModels
ViewModels hold the logic and data for the UI. There will be one
ViewModel each for the Login and Garden UIs.
### Login ViewModel
What the *LoginVm* does:
1. An anonymous login.
2. Initializing the MongoDB Realm Schema.
Copy *LoginVm*'s complete code from
here.
Here's how the *LoginVm* works:
1. Retrieve an instance of the RealmModule from the custom application.
2. Once login succeeds, it adds initial data (like a 🟫 emoji) to the
database to initialize the Realm schema.
> 💡 Initializing the Realm schema is only required right now because the
> app doesn't provide a way to choose and insert your emojis. At least one
> inserted emoji is required for Realm Sync to figure out what kind of
> data will be synced. When the app is written to handle inserts by
> itself, this can be removed.
*showGarden* will be used to "switch" between whether the Login screen
or the Garden screen should be shown. This will be covered in more
detail later. It is marked
"*private set*" so that it can't be modified from outside *LoginVm*.
``` kotlin
var showGarden : Boolean by mutableStateOf(getApplication<EmojiGardenApplication>().realmModule.isInitialized())
private set
```
*initializeData* will insert a sample emoji into Realm Sync. When it's
done, it will signal for the garden to be shown. We're going to call
this after *login*.
``` kotlin
private fun initializeData() {
getApplication<EmojiGardenApplication>().realmModule.initializeCollectionIfEmpty()
showGarden = true
}
```
*login* calls the equivalent function in *RealmModule* as seen earlier.
If it succeeds, it initializes the data. Failures are only logged, but
you could do anything with them.
``` kotlin
fun login() = getApplication<EmojiGardenApplication>().realmModule.loginAnonSyncedRealm(
onSuccess = ::initializeData,
onFailure = { Log.d(TAG, "Failed to login") }
)
```
You can now modify *MainActivity.kt* to display and use Login. You might
need to *import* the *viewModel* function. Android Studio will give you
that option.
``` kotlin
@ExperimentalFoundationApi
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
val loginVm : LoginVm = viewModel()
if(!loginVm.showGarden) {
LoginView(loginVm::login)
}
}
}
}
```
Once you've hit login, the button will disappear, leaving you a blank
screen. Let's understand what happened and get to work on the garden
screen, which should appear instead.
> 💡 If you get an error like "Caused by: java.lang.ClassCastException:
> android.app.Application cannot be cast to EmojiGardenApplication at
> com.example.emojigarden.LoginVm.\<init\>(LoginVm.kt:20)," then you might
> have forgotten to add EmojiGardenApplication to the android:name attribute of the application tag in the manifest.
### What Initialization Did
Here's how you can verify what the initialization did. Before logging in
and sending the first *EmojiTile*, you could look up your data's schema
by going to https://cloud.mongodb.com, opening the Realm tab, and
clicking Schema in the options on the left. Once the first EmojiTile has
been pushed up, MongoDB Realm Sync infers the data types in EmojiTile,
and that Schema section shows the generated schema instead.
If we had inserted data on the server side prior to this, it would've
defaulted the *index* field type to *Double* instead. The Realm SDK
would not have been able to coerce it on mobile, and sync would've
failed.
### The Garden ViewModel
The UI code only renders data that is given to it by the ViewModels,
which is why everything has been blank so far if you run the app without
previews.
As a refresher, we're using the MVVM architecture, and we'll be using Android ViewModels.
The ViewModels that we'll be using are custom classes that extend the
ViewModel class. They implement their own methods to retrieve and hold
onto data that UI should render. In this case, that's the EmojiTile
objects that we'll be loading from the MongoDB Realm Sync server.
I'm going to demonstrate two ways to do this:
1. With Realm alone handling the asynchronous data retrieval via Realm
SDK functions. In the class EmojiVmRealm.
2. With Kotlin Coroutines Flow handling the data being updated
asynchronously, but with Realm still providing the data. In the
class EmojiVmFlow.
Either way is fine. You can pick whichever way suits you. You could even
swap between them by changing a single line of code. If you would like
to avoid any asynchronous handling of data by yourself, use
EmojiVmRealm and let Realm do all the heavy lifting!
If you are already using Kotlin Flows, and would like to use that model
of handling asynchronous operations, use EmojiVmFlow.
###### Here's what's common to both ViewModels.
Take a look at the code of EmojiVmRealm and EmojiVmFlow side by side.
Here's how they work:
1. The *emojiState* variable is observed by Compose since it's created via the mutableStateOf. It allows Jetpack Compose to observe and react to values when they change to redraw the UI. Both ViewModels will get data from the Realm database and update the emojiState variable with it. This separates the code for how the UI is rendered from how the data for it is retrieved.
2. The ViewModel is set up as an AndroidViewModel to allow it to receive an Application object.
3. Since Application is accessible from it, the RealmModule can be pulled in.
4. RealmModule was instantiated in the custom application so that it could be passed to any ViewModel in the app.
* We get the Realm database instance from the RealmModule via getSyncedRealm.
* Searching for EmojiTile objects is as simple as calling where(EmojiTile::class.java).
* Calling .sort on the results of where sorts them by their index in ascending order.
* They're requested asynchronously with findAllAsync, so the entire operation runs in a background thread.
### EmojiVmRealm
EmojiVmRealm is a class that extends
ViewModel.
Take a look at the complete code
and copy it into your source folder. It provides the logic and data updates for the Jetpack Compose UI. It uses standard Realm SDK functionality to asynchronously load the emojis and order them for display.
Apart from what the two ViewModels have in common, here's how this class works:
#### Realm Change Listeners
A change listener watches for changes in the database. These changes might come from other people setting their emojis in their own apps.
``` kotlin
private val emojiTilesResults : RealmResults<EmojiTile> = getApplication<EmojiGardenApplication>().realmModule
.getSyncedRealm()
.where(EmojiTile::class.java)
.sort(EmojiTile::index.name)
.findAllAsync()
.apply {
addChangeListener(emojiTilesChangeListener)
}
```
> 💡 The Realm change listener is at the heart of reactive programming with Realm.
``` kotlin
private val emojiTilesChangeListener =
OrderedRealmCollectionChangeListener<RealmResults<EmojiTile>> { updatedResults, _ ->
emojiState = updatedResults.freeze()
}
```
The change listener function defines what happens when a change is
detected in the database. Here, the listener operates on any collection
of *EmojiTile* objects, as can be seen from its type parameter of
*RealmResults\<EmojiTile\>*. In this case, when changes are detected, the
*emojiState* variable is reassigned with "frozen" results.
The freeze function is part of the Realm SDK and makes the object
immutable. It's being used here to avoid issues when items are deleted
from the server. A delete would invalidate the Realm object, and if that
object was providing data to the UI at the time, it could lead to
crashes if it wasn't frozen.
#### MutableState: emojiState
``` kotlin
import androidx.compose.runtime.getValue
import androidx.compose.runtime.neverEqualPolicy
import androidx.compose.runtime.setValue
var emojiState : List<EmojiTile> by mutableStateOf(listOf(), neverEqualPolicy())
private set
```
*emojiState* is a *mutableStateOf* which Compose can observe for
changes. It's been given a *private set*, which means that its value
can only be set from inside *EmojiVmRealm*, keeping the code separated.
When a change is detected, the *emojiState* variable is updated with the
new results. The changeset isn't required, so it's marked "_".
*neverEqualPolicy* needs to be specified because MutableState's default
structural equality check doesn't see a difference between updated
*RealmResults*, so the UI wouldn't recompose without it. I list the
imports explicitly because you can sometimes get an error if they aren't
imported.
``` kotlin
private val emojiTilesChangeListener =
OrderedRealmCollectionChangeListener<RealmResults<EmojiTile>> { updatedResults, _ ->
emojiState = updatedResults.freeze()
}
```
Change listeners have to be released when the ViewModel is disposed. Any
resources in a ViewModel that should be released on disposal belong in
onCleared.
``` kotlin
override fun onCleared() {
super.onCleared()
emojiTilesResults.removeAllChangeListeners()
}
```
### EmojiVmFlow
*EmojiVmFlow* offloads some asynchronous operations to Kotlin Flows while still retrieving data from Realm. Take a look at it in the sample repo here, and copy it to your app.
Apart from what the two ViewModels have in common, here's what this VM does:
The toFlow operator from the Realm SDK automatically retrieves the list of emojis when they're updated on the server.
``` kotlin
private val _emojiTiles : Flow<List<EmojiTile>> = getApplication<EmojiGardenApplication>().realmModule
.getSyncedRealm()
.where(EmojiTile::class.java)
.sort(EmojiTile::index.name)
.findAllAsync()
.toFlow()
```
The flow is launched in
viewModelScope
to tie it to the ViewModel lifecycle. Once collected, each emitted list
is stored in the emojiState variable.
``` kotlin
init {
viewModelScope.launch {
_emojiTiles.collect {
emojiState = it
}
}
}
```
Since *viewModelScope* is a built-in library scope that's cleared when
the ViewModel is shut down, we don't need to bother with disposing of
it.
## Switching UI Between Login and Gardens
As we put both the screens together in the view for the actual Activity,
here's what we're trying to do:
First, connect the *LoginVm* to the view and check if the user is
authenticated. Then:
* If authenticated, show the garden.
* If not authenticated, show the login view.
* This is done via *if(loginVm.showGarden)*.
Take a look at the entire
activity
in the repo. The only change we'll be making is in the *onCreate*
function. In fact, only the *setContent* function is modified to
selectively show either the Login or the Garden Screen
(*MainActivityUi*). It also connects the ViewModels to the Garden Screen
now.
The *LoginVm* internally maintains whether to *showGarden* or not based
on whether the login succeeded. If this succeeds, the garden screen
*MainActivityUI* is instantiated with its own ViewModel, supplying the
emojis it gathers from Realm. If the login hasn't happened, it shows the
login view.
> 💡 The code below uses EmojiVmRealm. If you were using EmojiVmFlow, just type in EmojiVmFlow instead. Everything will just work.
``` kotlin
@ExperimentalFoundationApi
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
val loginVm : LoginVm = viewModel()
if(loginVm.showGarden){
val model : EmojiVmRealm = viewModel()
MainActivityUi(model.emojiState)
} else
{
LoginView(loginVm::login)
}
}
}
}
```
## Tending the Garden Remotely
Here's what you'll have on your app once you're all logged in and the
garden screen is hooked up too: a lone 🟫 emoji on a vast, empty screen.
Let's move to the server to add some more emojis and let the server
handle sending them back to the app! Every user of the app will see the
same list of emojis. I'll show how to insert the emojis from the web
console.
Open up https://cloud.mongodb.com again and hit *Collections*. An *Insert
Document* button will appear at the middle right. Hit *Insert Document*,
then hit the curly braces view so you can copy/paste
this
huge pile of emojis into it.
All the emojis we just added on the server will then pop up on the
device. Enjoy your critters!
Feel free to play around with the console. Change the emojis in
collections by double-clicking them.
## Summary
This has been a walkthrough for how to build an Android app that
effectively uses Compose and Realm together with the latest techniques
to build reactive apps with very little code.
In this article, we've covered:
* Using the MVVM architectural pattern with Jetpack Compose.
* Setting up MongoDB Realm.
* Using Realm in ViewModels.
* Using Realm to Kotlin Flows in ViewModels.
* Using anonymous authentication in Realm.
* Building Conditional UIs with Jetpack Compose.
There's a lot here to add to any of your projects. Feel free to use any
parts of this walkthrough or use the whole thing! I hope you've gotten
to see what MongoDB Realm can do for your mobile apps!
## What's Next?
In Part 2, I'll get to best practices for dealing with emojis using
EmojiCompat.
I'll also get into how to change the emojis from the device itself and
add some personalization that will enhance the app's functionality. In
addition, we'll add some "rules" to handle trickier use cases: for
example, users should only be able to alter unclaimed "soil" tiles, and
we'll need conflict resolution when two users try to claim the same tile
simultaneously.
What happens when two people pick the same tiles at nearly the same
time? Who gets to keep it? How can we avoid pranksters changing our own
emojis? These questions and more will be answered in Part 2.
## References
Here's some additional reading if you'd like to learn more about what we
did in this article.
1. The official docs on Compose layout are a great way to see Compose's flexibility.
2. The codelabs teach this method of handling state.
3. All the code for this project.
4. Also, thanks to Monica Dinculescu for coming up with the idea for the garden on the web. This is an adaptation of her ideas.
> If you have questions, please head to our developer community
> website where the MongoDB engineers and
> the MongoDB community will help you build your next big idea with
> MongoDB. | md | {
"tags": [
"Realm",
"Android",
"Jetpack Compose",
"Mobile"
],
"pageDescription": "Dive into: Compose architecture A globally synced Emoji garden Reactivity like you've never seen before!",
"contentType": "Tutorial"
} | Building an Android Emoji Garden on Jetpack Compose with Realm | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-database-cascading-deletes | created | # Realm SDKs 10.0: Cascading Deletes, ObjectIds, Decimal128, and more
The Realm SDK 10.0 is now Generally Available with new capabilities such as Cascading Deletes and new types like Decimal128.
## Release Highlights
We're happy to announce that as of this month, the new Realm Mobile Database 10.0 is now Generally Available for our Java/Kotlin, Swift/Obj-C, and JavaScript SDKs.
This is the culmination of all our hard work for the last year and lays a new foundation for Realm. With Realm 10.0, we've increased the stability of the database and improved performance. We've responded to the Realm Community's feedback and built key new features, like cascading deletes, to make it simpler to implement changes and maintain data integrity. We've also added new data types.
Realm .NET is now released as a feature-complete beta. And, we're promoting the Realm Web library to 1.0, replacing the MongoDB Stitch Browser SDK. Realm Studio is also getting released as 10.0 as a local Realm Database viewer for the 10.0 version of the SDKs.
With this release, the Realm SDKs also support all functionality unlocked by MongoDB Realm. You can call a serverless function from your mobile app, making it simple to build a feature like sending a notification via Twilio. Or, you could use triggers to call a Square API once an Order object has been synced to MongoDB Realm. Realm's Functions and Triggers speed up your development and reduce the code you need to write, as well as the need to stand up and maintain web servers to wait for these requests. And you now have full access to all of MongoDB Realm's built-in authentication providers, including the ability to call your own custom logic.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by deploying a sample for free!
## Cascading Deletes
We're excited to announce that one of our most requested features - cascading deletes - is now available. Previous versions of Realm put the burden of cascading deletes on you as the developer. Now, we're glad to be reducing the complexity and amount of code you need to write.
If you're a Realm user, you know that object relationships are a key part of the Realm Database. Realm doesn't impose restrictions on how you can link your objects, no matter how complex relationships become. Realm allows one-to-one, one-to-many, many-to-many, and backlinks. Realm stores relationships by reference, so even when you end up with a complicated object graph, Realm delivers incredibly fast lookups by traversing pointers.
But historically, some use cases prevented us from delivering cascading deletes. For instance, you might have wanted to delete a Customer object but still keep a record of all of the Order objects they placed over the years. The Realm SDKs wouldn't know if a parent-child object relationship had strict ownership to safely allow for cascading deletes.
In this release, we've made cascading deletes possible by introducing a new type of object that we're calling Embedded Objects. With Embedded Objects, you can convey ownership to whichever object creates a link to the embedded object. Using embedded object references gives you the ability to delete all objects that are linked to the parent upon deletion.
Imagine you have a BusRoute object that has a list of BusStop embedded objects, and a BusDriver object who is assigned to the route. You want to delete the BusRoute and automatically delete only the BusStop objects, without deleting the BusDriver object, because he still works for the company and can drive other routes. Here's how it plays out: when you delete the BusRoute, the Realm SDK will automatically delete all of its BusStops. For the BusDriver objects you don't want deleted, you use a regular object reference, so they are not automatically deleted and can drive other routes.
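To make the ownership distinction concrete, here's a minimal sketch of how those models might be declared with the Realm .NET SDK. The class and property names (BusRoute, BusStop, BusDriver, Stops, Driver) are illustrative assumptions for this example rather than code from the Realm documentation:
``` csharp
using System.Collections.Generic;
using MongoDB.Bson;
using Realms;

public class BusRoute : RealmObject
{
    [PrimaryKey]
    [MapTo("_id")]
    public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

    public string Name { get; set; }

    // Embedded objects are owned by the route:
    // deleting a BusRoute automatically deletes its BusStops.
    public IList<BusStop> Stops { get; }

    // A regular object reference: the BusDriver is not deleted with the route.
    public BusDriver Driver { get; set; }
}

public class BusStop : EmbeddedObject
{
    public string StreetName { get; set; }
}

public class BusDriver : RealmObject
{
    [PrimaryKey]
    [MapTo("_id")]
    public ObjectId Id { get; set; } = ObjectId.GenerateNewId();

    public string Name { get; set; }
}
```
Deleting a `BusRoute` in a write transaction removes its embedded `Stops` along with it, while any `BusDriver` it referenced remains in the realm.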
The Realm team is proud to say that we've heard you, and we hope that you give this feature a try to simplify your code and improve your development experience.
::::tabs
:::tab[]{tabid="Swift"}
``` Swift
// Define an object with one embedded object
class Contact: Object {
@objc dynamic var _id = ObjectId.generate()
@objc dynamic var name = ""
// Embed a single object.
// Embedded object properties must be marked optional.
@objc dynamic var address: Address? = nil
override static func primaryKey() -> String? {
return "_id"
}
convenience init(name: String, address: Address) {
self.init()
self.name = name
self.address = address
}
}
// Define an embedded object
class Address: EmbeddedObject {
@objc dynamic var street: String? = nil
@objc dynamic var city: String? = nil
@objc dynamic var country: String? = nil
@objc dynamic var postalCode: String? = nil
}
guard let sanFranciscoContact = realm.objects(Contact.self)
.filter("address.city = %@", "San Francisco")
.sorted(byKeyPath: "address.street")
.first,
let sanFranciscoAddress = sanFranciscoContact.address else {
print("Could not find San Francisco Contact!")
return
}
// prints City: San Francisco
print("City: \(sanFranciscoAddress.city ?? "nil")")
try! realm.write {
// Delete the instance from the realm.
realm.delete(sanFranciscoContact)
}
// now the embedded Address will be invalid.
// prints Is Invalidated: true
print("Is invalidated: \(sanFranciscoAddress.isInvalidated)")
```
:::
:::tab[]{tabid="Kotlin"}
``` Kotlin
// Define an object containing one embedded object
open class Contact(
@RealmField("_id")
@PrimaryKey
var id: ObjectId = ObjectId(),
var name: String = "",
// Embed a single object.
// Embedded object properties must be marked optional
var address: Address? = null) : RealmObject() {}
// Define an embedded object
@RealmClass(embedded = true)
open class Address(
var street: String? = null,
var city: String? = null,
var country: String? = null,
var postalCode: String? = null
): RealmObject() {}
// insert some data
realm.executeTransaction {
val contact = it.createObject(Contact::class.java, ObjectId())
val address = it.createEmbeddedObject(Address::class.java, contact, "address")
address.city = "San Francisco"
address.street = "495 3rd St"
contact.name = "Ian"
}
val sanFranciscoContact = realm.where(Contact::class.java)
.equalTo("address.city", "San Francisco")
.sort("address.street").findFirst()
Log.v("EXAMPLE", "City: ${sanFranciscoContact?.address?.city}")
// prints San Francisco
// Get a contact to delete which satisfied the previous query
val contact = realm.where(Contact::class.java)
.equalTo("name", "Ian").findFirst()
Log.v("EXAMPLE", "IAN = : ${contact?.name}")
realm.executeTransaction {
// Delete the contact instance from its realm.
contact?.deleteFromRealm()
}
// now lets print an address query
Log.v("EXAMPLE", "Number of addresses: ${realm.where().count()}") // == 0
if (BuildConfig.DEBUG && sanFranciscoContact?.isValid != false) {
error("Assertion failed")
}
Log.v("EXAMPLE", "sanFranciscoContact is valid: ${sanFranciscoContact?.address?.isValid}") // false
```
:::
:::tab[]{tabid="Javascript"}
``` js
const ContactSchema = {
name: "Contact",
primaryKey: "_id",
properties: {
_id: "objectId",
name: "string",
address: "Address", // Embed a single object
},
};
const AddressSchema = {
name: "Address",
embedded: true, // default: false
properties: {
street: "string?",
city: "string?",
country: "string?",
postalCode: "string?",
},
};
const sanFranciscoContacts = realm.objects("Contact")
.filtered("address.city = 'San Francisco'")
.sorted("address.street");
let ianContact = sanFranciscoContacts[0];
console.log(ianContact.address.city); // prints San Francisco
realm.write(() => {
// Delete ian from the realm.
realm.delete(ianContact);
});
// now let's see what the same query returns -
console.log(ianContact.address.city);
// address returns null
```
:::
:::tab[]{tabid=".NET"}
``` csharp
public class Contact : RealmObject
{
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
[MapTo("name")]
public string Name { get; set; }
// Embed a single object.
[MapTo("address")]
public Address Address { get; set; }
}
public class Address : EmbeddedObject
{
[MapTo("street")]
public string Street { get; set; }
[MapTo("city")]
public string City { get; set; }
[MapTo("country")]
public string Country { get; set; }
[MapTo("postalCode")]
public string PostalCode { get; set; }
}
var sanFranciscoContact = realm.All<Contact>()
    .Filter("Address.City == 'San Francisco'")
.OrderBy(c => c.Address.Street)
.First();
// Prints Ian
Console.WriteLine(sanFranciscoContact.Name);
var iansAddress = sanFranciscoContact.Address;
// Prints San Francisco
Console.WriteLine(iansAddress.City);
// Delete an object with a transaction
realm.Write(() =>
{
realm.Remove(sanFranciscoContact);
});
// Prints false - since the parent object was deleted, the embedded address
// was removed too.
Console.WriteLine(iansAddress.IsValid);
// This will throw an exception because the object no longer belongs
// to the Realm.
// Console.WriteLine(iansAddress.City);
```
:::
::::
Want to try it out? Head over to our docs page for your respective SDK and take it for a spin!
- iOS SDK
- Android SDK
- React Native SDK
- Node.js SDK
- .NET SDK
## ObjectIds
ObjectIds are a new type introduced to the Realm SDKs, used to provide uniqueness between objects. Previously, you would need to create your own unique identifier using a function you wrote or imported, then cast it to a string or some other Realm primitive type. ObjectIds are smaller, saving space and making it easier to work with your data.
An ObjectId is a 12-byte hexadecimal value that follows this order:
- A 4-byte timestamp value, representing the ObjectId's creation, measured in seconds since the Unix epoch
- A 5-byte random value
- A 3-byte incrementing counter, initialized to a random value
Because of the way ObjectIds are generated - with a timestamp value in the first 4 bytes - you can sort them by time using the ObjectId field. You no longer need to create another timestamp field for ordering. ObjectIDs are also smaller than the string representation of UUID. A UUID string column will take 36 bytes, whereas an ObjectId is only 12.
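As a quick illustration, here's a small snippet using the MongoDB.Bson ObjectId type that appears in the C# examples in this post; it generates an ObjectId and reads back the creation time encoded in its first four bytes (the printed values are examples only):
``` csharp
using System;
using MongoDB.Bson;

// Generate a new ObjectId. The first 4 bytes encode the creation time,
// so sorting on an ObjectId field also orders documents chronologically.
ObjectId id = ObjectId.GenerateNewId();

Console.WriteLine(id);              // 12 bytes, printed as 24 hex characters
Console.WriteLine(id.CreationTime); // UTC timestamp recovered from the first 4 bytes
```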
The Realm SDKs contain a built-in method to automatically generate an ObjectId.
::::tabs
:::tab[]{tabid="Swift"}
``` Swift
class Task: Object {
@objc dynamic var _id: ObjectId = ObjectId.generate()
@objc dynamic var _partition: String? = nil
@objc dynamic var name = ""
override static func primaryKey() -> String? {
return "_id"
}
convenience init(partition: String, name: String) {
self.init()
self._partition = partition
self.name = name
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` Kotlin
open class Task(
@PrimaryKey var _id: ObjectId = ObjectId(),
var name: String = "Task",
var _partition: String = "My Project") : RealmObject() {}
```
:::
:::tab[]{tabid="Javascript"}
``` js
const TaskSchema = {
name: "Task",
properties: {
_id: "objectId",
_partition: "string?",
name: "string",
},
primaryKey: "_id",
};
```
:::
:::tab[]{tabid=".NET"}
``` csharp
public class Task : RealmObject
{
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
[MapTo("_partition")]
public string Partition { get; set; }
[MapTo("name")]
public string Name { get; set; }
}
```
:::
::::
Take a look at our documentation on Realm models by going here:
- iOS SDK
- Android SDK
- React Native SDK
- Node.js SDK
- .NET SDK
## Decimal128
We're also introducing Decimal128 as a new type in the Realm SDKs. With Decimal128, you're able to store the exact value of a decimal type and avoid the potential for rounding errors in calculations.
In previous versions of Realm, you were limited to int64 and double, which are restricted to 64 bits. Decimal128 is a 16-byte decimal floating-point number format. It's intended for calculations on decimal numbers where high levels of precision are required, like financial (i.e. tax calculations, currency conversions) and scientific computations.
Decimal128 gives you 128 bits of range and precision. It supports 34 decimal digits of significance and an exponent range of −6143 to +6144. It's become the industry standard, and we're excited to see how the community leverages this new type in your mathematical calculations. Let us know if it unlocks new use cases for you.
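As a quick sketch of why exactness matters, the snippet below (using the MongoDB.Bson Decimal128 type shown in the C# tab that follows) contrasts a plain double sum with the same values stored as Decimal128. The exact printed output may vary by runtime, but the idea holds:
``` csharp
using System;
using MongoDB.Bson;

// Binary doubles can't represent 0.1 or 0.2 exactly, so error creeps in.
double doubleSum = 0.1 + 0.2;
Console.WriteLine(doubleSum); // typically prints 0.30000000000000004, not 0.3

// Decimal128 stores the decimal values exactly.
Decimal128 price = Decimal128.Parse("0.1");
Decimal128 tax = Decimal128.Parse("0.2");

// Convert to System.Decimal for arithmetic, then store the result back if needed.
decimal exactSum = (decimal)price + (decimal)tax;
Console.WriteLine(exactSum); // 0.3
```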
::::tabs
:::tab[]{tabid="Swift"}
``` Swift
class Task: Object {
@objc dynamic var _id: ObjectId = ObjectId.generate()
@objc dynamic var _partition: String = ""
@objc dynamic var name: String = ""
@objc dynamic var owner: String? = nil
@objc dynamic var myDecimal: Decimal128? = nil
override static func primaryKey() -> String? {
return "_id"
}
}
```
:::
:::tab[]{tabid="Kotlin"}
``` Kotlin
open class Task(_name: String = "Task") : RealmObject() {
@PrimaryKey var _id: ObjectId = ObjectId()
var name: String = _name
var owner: String? = null
var myDecimal: Decimal128? = null
}
```
:::
:::tab[]{tabid="Javascript"}
``` js
const TaskSchema = {
name: "Task",
properties: {
_id: "objectId",
_partition: "string?",
myDecimal: "decimal128?",
name: "string",
},
primaryKey: "_id",
};
```
:::
:::tab[]{tabid=".NET"}
``` csharp
public class Foo : RealmObject
{
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
[MapTo("_partition")]
public string Partition { get; set; }
public string Name { get; set; }
public Decimal128 MyDecimal { get; set; }
}
```
:::
::::
Take a look at our documentation on Realm models by going here -
- iOS SDK
- Android SDK
- React Native SDK
- Node.js SDK
- .NET SDK
## Open Sourcing Realm Sync
Since launching MongoDB Realm and Realm Sync in June, we've also made the decision to open source the code for Realm Sync.
Since Realm's founding, we've committed to open source principles in our work. As we continue to invest in building the Realm SDKs and MongoDB Realm, we want to remain transparent in how we're developing our products.
We want you to see the algorithm we're using for Realm Sync's automatic conflict resolution, built upon Operational Transformation. Know that any app you build with Realm now has the source algorithm available. We hope that you'll give us feedback and show us the projects you're building with it.
See the repo to check out the code
## About the New Versioning
You may have noticed that with this release, we've updated our versioning across all SDKs to Realm 10.0. Our hope is that by aligning all SDKs, we're making it easier to know how database versions align across languages. We can't promise that all versions will stay aligned in the future. But for now, we hope this helps you to notice major changes and avoid accidental upgrades.
## Looking Ahead
The Realm SDKs continue to evolve as a part of MongoDB, and we truly believe that this new functionality gives you the best experience yet when using Realm. Looking ahead, we're continuing to invest in providing a best-in-class solution and are working to support new platforms and use cases.
Stay tuned by following @realm on Twitter.
Want to Ask a Question? Visit our Forums
Want to make a feature request? Visit our Feedback Portal
Want to be notified of upcoming Realm events such as our iOS Hackathon in November 2020? Visit our Global Community Page
Running into issues? Visit our Github to file an Issue.
- RealmJS
- RealmSwift
- RealmJava
- RealmDotNet
>Safe Harbor
The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. | md | {
"tags": [
"Realm"
],
"pageDescription": "The Realm SDK 10.0 is now Generally Available with new capabilities such as Cascading Deletes and new types like Decimal128.",
"contentType": "News & Announcements"
} | Realm SDKs 10.0: Cascading Deletes, ObjectIds, Decimal128, and more | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/using-linq-query-mongodb-dotnet-core-application | created | # Using LINQ to Query MongoDB in a .NET Core Application
If you've been keeping up with my series of tutorials around .NET Core and MongoDB, you'll likely remember that we explored using the Find operator to query for documents as well as an aggregation pipeline. Neither of these previously explored subjects is too difficult, but depending on what you're trying to accomplish, they could be a little messy. Not to mention, they aren't necessarily "the .NET way" of doing business.
This is where LINQ comes into the mix of things!
With Language Integrated Queries (LINQ), we can use an established and well known C# syntax to work with our MongoDB documents and data.
In this tutorial, we're going to look at a few LINQ queries, some as a replacement to simple queries using the MongoDB Query API and others as a replacement to more complicated aggregation pipelines.
## The requirements
To be successful with this tutorial, you should already have the following ready to go:
- .NET Core 6+
- MongoDB Atlas, the free tier or better
When it comes to MongoDB Atlas, you'll need to have a cluster deployed and properly configured with user roles and network rules. If you need help with this, take a look at my previous tutorial on the subject. You will also need the sample datasets installed.
While this tutorial is part of a series, you don't need to have read the others to be successful. However, you'd be doing yourself a favor by checking out the other ways you can do business with .NET Core and MongoDB.
## Creating a new .NET Core console application with the CLI
To keep this tutorial simple and easy to understand, we're going to create a new console application and work from that.
Execute the following from the CLI to create a new project that is ready to go with the MongoDB driver:
```bash
dotnet new console -o MongoExample
cd MongoExample
dotnet add package MongoDB.Driver
```
For this tutorial, our MongoDB Atlas URI string will be stored as an environment variable on our computer. Depending on your operating system, you can do something like this:
```bash
export ATLAS_URI="YOUR_ATLAS_URI_HERE"
```
The Atlas URI string can be found in your MongoDB Atlas Dashboard after clicking the "Connect" button and choosing your programming language.
Open the project's **Program.cs** file and add the following C# code:
```csharp
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Linq;
MongoClientSettings settings = MongoClientSettings.FromConnectionString(
Environment.GetEnvironmentVariable("ATLAS_URI")
);
settings.LinqProvider = LinqProvider.V3;
MongoClient client = new MongoClient(settings);
```
In the above code, we are explicitly saying that we want to use LINQ Version 3 rather than Version 2, which is the default in MongoDB. While you can accomplish many LINQ-related tasks in MongoDB with Version 2, you'll get a much better experience with Version 3.
## Writing MongoDB LINQ queries in your .NET Core project
We're going to take it slow and work our way up to bigger and more complicated queries with LINQ.
In case you've never seen the "sample_mflix" database that is part of the sample datasets that MongoDB offers, it's a movie database with several collections. We're going to focus strictly on the "movies" collection which has documents that look something like this:
```json
{
"_id": ObjectId("573a1398f29313caabceb515"),
"title": "Batman",
"year": 1989,
"rated": "PG-13",
"runtime": 126,
"plot": "The Dark Knight of Gotham City begins his war on crime with his first major enemy being the clownishly homicidal Joker.",
"cast": "Michael Keaton", "Jack Nicholson", "Kim Basinger" ]
}
```
There are quite a bit more fields to each of the documents in that collection, but the above fields are enough to get us going.
To use LINQ, we're going to need to create mapped classes for our collection. In other words, we won't want to be using `BsonDocument` when writing our queries. At the root of your project, create a **Movie.cs** file with the following C# code:
```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
[BsonIgnoreExtraElements]
public class Movie {
[BsonId]
[BsonRepresentation(BsonType.ObjectId)]
public string Id { get; set; }
[BsonElement("title")]
public string Title { get; set; } = null!;
[BsonElement("year")]
public int Year { get; set; }
[BsonElement("runtime")]
public int Runtime { get; set; }
[BsonElement("plot")]
[BsonIgnoreIfNull]
public string Plot { get; set; } = null!;
[BsonElement("cast")]
[BsonIgnoreIfNull]
public List<string> Cast { get; set; } = null!;
}
```
We used a class like the above in our previous tutorials. We've just defined a few of our fields, mapped them to BSON fields in our database, and told our class to ignore any extra fields that may exist in our database that we chose not to define in our class.
Let's say that we want to return movies that were released between 1980 and 1990. If we weren't using LINQ, we'd be doing something like the following in our **Program.cs** file:
```csharp
using MongoDB.Bson;
using MongoDB.Driver;
MongoClient client = new MongoClient(
Environment.GetEnvironmentVariable("ATLAS_URI")
);
IMongoCollection<Movie> moviesCollection = client.GetDatabase("sample_mflix").GetCollection<Movie>("movies");
BsonDocument filter = new BsonDocument{
{
"year", new BsonDocument{
{ "$gt", 1980 },
{ "$lt", 1990 }
}
}
};
List<Movie> movies = moviesCollection.Find(filter).ToList();
foreach(Movie movie in movies) {
Console.WriteLine($"{movie.Title}: {movie.Plot}");
}
```
However, since we want to use LINQ, we can update our **Program.cs** file to look like the following:
```csharp
using MongoDB.Driver;
using MongoDB.Driver.Linq;
MongoClientSettings settings = MongoClientSettings.FromConnectionString(
Environment.GetEnvironmentVariable("ATLAS_URI")
);
settings.LinqProvider = LinqProvider.V3;
MongoClient client = new MongoClient(settings);
IMongoCollection<Movie> moviesCollection = client.GetDatabase("sample_mflix").GetCollection<Movie>("movies");
IMongoQueryable<Movie> results =
from movie in moviesCollection.AsQueryable()
where movie.Year > 1980 && movie.Year < 1990
select movie;
foreach(Movie result in results) {
Console.WriteLine("{0}: {1}", result.Title, result.Plot);
}
```
In the above code, we are getting a reference to our collection and creating a LINQ query. To break down the LINQ query to see how it relates to MongoDB, we have the following:
1. The "WHERE" operator is the equivalent to doing a "$MATCH" or a filter within MongoDB. The documents have to match the criteria in this step.
2. The "SELECT" operator is the equivalent to doing a projection or using the "$PROJECT" operator. We're defining which fields should be returned from the query—in this case, all fields that we've defined in our class.
To diversify our example a bit, we're going to change the match condition to match within an array, something non-flat.
Change the LINQ query to look like the following:
```csharp
var results =
from movie in moviesCollection.AsQueryable()
where movie.Cast.Contains("Michael Keaton")
select new { movie.Title, movie.Plot };
```
A few things changed in the above code along with the filter. First, you'll notice that we are matching on the `Cast` array as long as "Michael Keaton" exists in that array. Next, you'll notice that we're doing a projection to only return the movie title and the movie plot instead of all other fields that might exist in the data.
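One small note: because the projection above returns an anonymous type rather than `Movie`, the earlier `foreach (Movie result in results)` loop no longer applies. You'd iterate with `var` instead, for example:
```csharp
foreach (var result in results) {
    Console.WriteLine("{0}: {1}", result.Title, result.Plot);
}
```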
We're going to make things slightly more complex now in terms of our query. This time, we're going to do what would otherwise have been a MongoDB aggregation pipeline, but using LINQ.
Change the C# code in the **Program.cs** file to look like the following:
```csharp
using MongoDB.Driver;
using MongoDB.Driver.Linq;
MongoClientSettings settings = MongoClientSettings.FromConnectionString(
Environment.GetEnvironmentVariable("ATLAS_URI")
);
settings.LinqProvider = LinqProvider.V3;
MongoClient client = new MongoClient(settings);
IMongoCollection moviesCollection = client.GetDatabase("sample_mflix").GetCollection("movies");
var results =
from movie in moviesCollection.AsQueryable()
where movie.Cast.Contains("Ryan Reynolds")
from cast in movie.Cast
where cast == "Ryan Reynolds"
group movie by cast into g
select new { Cast = g.Key, Sum = g.Sum(x => x.Runtime) };
foreach(var result in results) {
Console.WriteLine("{0} appeared on screen for {1} minutes!", result.Cast, result.Sum);
}
```
In the above LINQ query, we're doing a series of steps, just like stages in an aggregation pipeline. These stages can be broken down like the following:
1. Match all documents where "Ryan Reynolds" is in the cast.
2. Unwind the array of cast members so the documents sit adjacent to each other. This will flatten the array for us.
3. Do another match on the now smaller subset of documents, filtering out only results that have "Ryan Reynolds" in them.
4. Group the remaining results by the cast, which will only be "Ryan Reynolds" in this example.
5. Project only the group key, which is the cast member, and the sum of all the movie runtimes.
If you haven't figured it out yet, what we attempted to do was determine the total amount of screen time Ryan Reynolds has had. We isolated our result set to only documents with Ryan Reynolds, and then we summed the runtime of the documents that were matched.
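For comparison, here's roughly what that pipeline looks like when written by hand with `BsonDocument` stages instead of LINQ. This is an approximation for illustration (it also needs `using MongoDB.Bson;`), not the exact pipeline the LINQ provider generates:
```csharp
// Approximate hand-written equivalent of the LINQ query above.
BsonDocument[] pipeline = new BsonDocument[]
{
    new BsonDocument("$match", new BsonDocument("cast", "Ryan Reynolds")),
    new BsonDocument("$unwind", "$cast"),
    new BsonDocument("$match", new BsonDocument("cast", "Ryan Reynolds")),
    new BsonDocument("$group", new BsonDocument
    {
        { "_id", "$cast" },
        { "total", new BsonDocument("$sum", "$runtime") }
    })
};

var pipelineResults = moviesCollection.Aggregate<BsonDocument>(pipeline).ToList();

foreach (BsonDocument result in pipelineResults) {
    Console.WriteLine(result.ToJson());
}
```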
While the full scope of the MongoDB aggregation pipeline isn't supported with LINQ, you'll be able to accomplish quite a bit, resulting in a lot cleaner looking code. To get an idea of the supported operators, take a look at the [MongoDB LINQ documentation.
## Conclusion
You just got a taste of LINQ with MongoDB in your .NET Core applications. While you don't have to use LINQ, as demonstrated in a few previous tutorials, it's common practice amongst C# developers.
Got a question about this tutorial? Check out the MongoDB Community Forums for help! | md | {
"tags": [
"C#",
"MongoDB",
".NET"
],
"pageDescription": "Learn how to use LINQ to interact with MongoDB in a .NET Core application.",
"contentType": "Tutorial"
} | Using LINQ to Query MongoDB in a .NET Core Application | 2024-05-20T17:32:23.501Z |