### Writing the Generated Page to a File
Another key part of the script is the code that writes the generated page content to a file in the correct location. Here's what that looks like:
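The exact code depends on your project layout, so the snippet below is a simplified sketch of that step: the paths, the frontmatter `date:` field, and the `writeGeneratedPage` name are assumptions, while `generatePostContent` is the helper the script relies on.

```typescript
import fs from "node:fs";
import path from "node:path";
// Hypothetical import path for the content-generation helper.
import { generatePostContent } from "./generatePostContent";

// A simplified sketch, not the exact script from the post.
function writeGeneratedPage(category: { slug: string; name: string }) {
  // 1. Determine the correct directory and filename for the generated page.
  const outputDir = path.join(process.cwd(), "src", "app", "blog", category.slug);
  const filePath = path.join(outputDir, "page.mdx");

  // 2. If the page already exists, extract its original publication date
  //    from the frontmatter so regeneration doesn't overwrite it.
  let existingDate: string | undefined;
  if (fs.existsSync(filePath)) {
    const existing = fs.readFileSync(filePath, "utf8");
    existingDate = existing.match(/^date:\s*(.+)$/m)?.[1];
  }

  // 3. Generate the full page content.
  const content = generatePostContent(category, existingDate);

  // 4. Create the directory if it doesn't exist, then 5. write the file.
  fs.mkdirSync(outputDir, { recursive: true });
  fs.writeFileSync(filePath, content);
}
```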
This code does a few important things:
1. It determines the correct directory and filename for the generated page based on the project structure.
2. It checks if the file already exists and, if so, extracts the existing date from the page's metadata. This allows us to preserve the original publication date if we're regenerating the page.
3. It generates the full page content using the `generatePostContent` function.
4. It creates the directory if it doesn't already exist.
5. It writes the generated content to the file.
## Automating the Build Process with npm and pnpm
One of the key benefits of using a script to generate data-driven pages is that we can automate the build process to ensure that the latest content is always available.
Let's take a closer look at how we can use npm and pnpm to run our script automatically before each build.
### Using npm run prebuild
In the package.json file for our Next.js project, we can define a "prebuild" script that will run automatically before the main "build" script:
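A minimal version of that package.json might look like the sketch below (the script path `scripts/generate-pages.mjs` is an assumption):

```json
{
  "scripts": {
    "prebuild": "node scripts/generate-pages.mjs",
    "build": "next build"
  }
}
```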
With this setup, whenever we run `npm run build` to build our Next.js project, the prebuild script will run first, executing our page generation script and ensuring that the latest content is available.

### Using pnpm build
If you're using pnpm instead of npm, the "prebuild" script won't run automatically by default; you have to enable the `enable-pre-post-scripts` option in your `.npmrc` file, as [noted here](https://pnpm.io/cli/run#enable-pre-post-scripts).
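Enabling it is a one-line addition to the project's `.npmrc`:

```ini
enable-pre-post-scripts=true
```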
If you decline setting this option, but still need your prebuild step to work across `npm` and `pnpm`, then you can do something gross like this:
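One such workaround (a sketch, with an assumed script path) is to invoke the generation script directly from the `build` script, so it behaves the same under either package manager:

```json
{
  "scripts": {
    "build": "node scripts/generate-pages.mjs && next build"
  }
}
```

It's uglier than relying on the pre-script convention, but it doesn't depend on which package manager runs the build.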
### Why automation matters
By automating the process of generating our data-driven pages as part of the build process, we can ensure that the latest content is always available to our users. This is especially important if our data is changing frequently, or if we're adding new tools or categories on a regular basis.
With this approach, we don't have to remember to run the script manually before each build - it happens automatically as part of the standard build process. This saves time and reduces the risk of forgetting to update the content before deploying a new version of the site.
Additionally, by generating the page content at build time rather than at runtime, we can improve the performance of our site by serving static HTML instead of dynamically generating the page on each request. This can be especially important for larger or more complex sites where performance is a key concern.
## Key Takeaways
While the full script is quite long and complex, breaking it down into logical sections helps us focus on the key takeaways:
1. Generating data-driven pages with Next.js allows us to create rich, informative content that is easy to update and maintain over time.
2. By separating the data (in this case, the categories and tools) from the presentation logic, we can create a flexible and reusable system for generating pages based on that data.
3. Using a script to generate the page content allows us to focus on the high-level structure and layout of the page, while still providing the ability to customize and tweak individual sections as needed.
4. By automating the process of generating and saving the page content, we can save time and reduce the risk of errors or inconsistencies.
While the initial setup and scripting can be complex, the benefits in terms of time savings, consistency, and maintainability are well worth the effort.
---
This is a brief story about the best programmer I've ever worked with.
A few years ago I worked with a celebrity developer who wrote the book on an entire category of technical pursuit.
This was pre-ChatGPT, and this rare human could produce entire applications overnight that worked perfectly and appeared idiomatically written, even in languages they didn't know well.
In a one-on-one, I asked if this person ever got stuck or frustrated despite their abnormal skill level and many decades of experience.
Their response?
"All the time".
---
You can ask your RAG pipeline, "What line is the bug on?", and it will tell you the answer almost instantly. How?
Embeddings models are the secret sauce that makes RAG work so well. How are they trained in this "asking questions of documents" use case?
In this blog post we'll unpack how embeddings models like OpenAI's `text-embedding-3-large` are trained to support this document retrieval and chat use case.
## Training Data and Labels

In the context of training an embedding model for RAG, the training data usually consists of pairs of queries and documents.
The labels aren't traditional categorical labels; instead, they mark query-document pairs as similar (positive) or dissimilar (negative), and training pulls the embeddings of positive pairs together while pushing negative pairs apart.
**Positive pairs**: Queries and their relevant documents.
**Negative pairs**: Queries and irrelevant documents (often sampled randomly or using hard negatives).
Here's what a simple pre-training example might look like in Python:
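This is a minimal sketch; the field names and example strings are illustrative:

```python
# A sketch of pre-training data for a RAG embedding model: each example
# pairs a query with a document, plus a label marking the pair as
# relevant (1, positive) or irrelevant (0, negative).
training_examples = [
    # Positive pair: the document actually answers the query.
    {
        "query": "What line is the bug on?",
        "document": "The off-by-one error is on line 42 of utils.py.",
        "label": 1,
    },
    # Negative pair: an unrelated document (randomly sampled or a hard negative).
    {
        "query": "What line is the bug on?",
        "document": "Our quarterly revenue grew 12% year over year.",
        "label": 0,
    },
]
```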
## Model Architecture

Many modern embedding models are based on transformer architectures, such as BERT, RoBERTa, or specialized models like Sentence-BERT (SBERT). These models typically output token-level embeddings.
**Token-level embeddings**: Each token (word or subword) in the input sequence gets its own embedding vector. I built [a demo showing what word and subword tokens look like here](/demos/tokenize).
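To see what "one embedding per token" means concretely, here's a short sketch using Hugging Face `transformers` (the model choice and input sentence are arbitrary illustrations):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any encoder-style model works for this illustration; BERT is just a familiar choice.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("What line is the bug on?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token: shape is (batch_size, num_tokens, hidden_size).
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)  # e.g. torch.Size([1, 9, 768]) for bert-base-uncased
```

A single vector for a whole query or document is then typically derived by pooling these token-level embeddings.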