```python
)

vectorstore = PineconeVectorStore(embedding=embeddings, index_name=index_name)

vectorstore.similarity_search(query)
```

```yaml
name: Upsert embeddings for changed MDX files

on:
  push:
    branches:
      - main

jobs:
  changed_files:
    runs-on: ubuntu-latest
    name: Process Changed Blog Embeddings
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Ensures a full clone of the repository

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v44

      - name: List all changed files
        run: |
          echo "Changed MDX Files:"
          CHANGED_MDX_FILES=$(echo "${{ steps.changed-files.outputs.all_changed_files }}" | grep '\.mdx$')
          echo "$CHANGED_MDX_FILES"
          echo "CHANGED_MDX_FILES<<EOF" >> $GITHUB_ENV
          echo "$CHANGED_MDX_FILES" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

      - name: Set API keys from secrets
        run: |
          echo "OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}" >> $GITHUB_ENV
          echo "PINECONE_API_KEY=${{ secrets.PINECONE_API_KEY }}" >> $GITHUB_ENV

      - name: Install dependencies
        if: env.CHANGED_MDX_FILES
        run: |
          pip install langchain_community langchain_pinecone langchain_openai langchain unstructured langchainhub

      - name: Process and upsert blog embeddings if changed
        if: env.CHANGED_MDX_FILES
        run: |
          python -c "
          import os
          from langchain_pinecone import PineconeVectorStore
          from langchain_openai import OpenAIEmbeddings
          from langchain.docstore.document import Document

          # Manually load changed documents
          changed_files = os.getenv('CHANGED_MDX_FILES').split()
          docs = [Document(page_content=open(file, 'r').read(), metadata={'source': 'local', 'name': file}) for file in changed_files if file.endswith('.mdx')]

          # Initialize embeddings and vector store
          embeddings = OpenAIEmbeddings(model='text-embedding-3-large')
          index_name = 'zack-portfolio-3072'
          vectorstore = PineconeVectorStore(embedding=embeddings, index_name=index_name)
          vectorstore.add_documents(docs)
          "

      - name: Verify and log vector store status
        if: env.CHANGED_MDX_FILES
        run: |
          python -c "
          import os
          from pinecone import Pinecone
          pc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])
          index = pc.Index('zack-portfolio-3072')
          print(index.describe_index_stats())
          "
```
That's it for now. Thanks for reading! If you were helped in any way by this post or found it interesting, please leave a comment or like below or share it with a friend. 🙇
# Data-driven pages
I've begun experimenting with building some of my blog posts - especially those that are heavy on data, tables, comparisons and multi-dimensional considerations - using scripts, JSON and home-brewed schemas.
## Table of contents

## What are data-driven pages?
I'm using this phrase to describe pages or experiences served up from your Next.js project that you **compile** rather than edit.
Whereas you might edit a static blog post to add new information, with a data-driven page you would update the data source and then run the associated build process, resulting in a web page you serve to your users.
## Why build data-driven pages?

In short, data-driven pages make it easier to maintain richer and more information-dense experiences on the web.
Here are a few reasons I like this pattern:

1. There is **more upfront work** to do than just writing a new MDX file for your next post, but once the build script is stable, it's much **quicker to iterate** (Boyd's Law).
2. By iterating on the core data model expressed in JSON, you can quickly add rich new features and visualizations to the page, such as additional tables and charts.
3. If you have multiple subpages that all follow a similar pattern, such as side-by-side product reviews, running a script once is a lot faster than making updates across multiple files.
4. You can hook your build scripts either into npm's `prebuild` hook, which runs before `npm run build` executes, or into a `pnpm` build target, so that your data-driven pages are freshly rebuilt with no additional effort on your part (see the sketch after this list).
5. This pattern is a much saner way to handle data that changes frequently or that regularly gains new members. In other words, if you constantly have to add Product or Review X to your site, would you rather re-create HTML sections by hand or add a new object to your JSON?
6. You can drive more than one experience from a single data source: think of a landing page backed by several detail pages for products, reviews, job postings, etc.
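To make point 4 concrete, here's a minimal sketch of what the `package.json` wiring might look like. The script path is the comparison-post generator linked later in this post, but whether it's hooked into `prebuild` exactly this way in my repo is an assumption for illustration:

```json
{
  "scripts": {
    "prebuild": "node scripts/create-ai-assisted-dev-tools-comparison-post.js",
    "build": "next build"
  }
}
```

With this in place, `npm run build` first regenerates the data-driven post from its JSON source, then builds the site as usual.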
## How it works
### The data
I define my data as JSON and store it in the root of my project in a new folder.
For example, here's an object that defines GitHub's Copilot AI-assisted developer tool for my [giant AI-assisted dev tool comparison post](/blog/ai-assisted-dev-tools-compared):
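The actual object lives in the data folder of my repo; what follows is a trimmed, illustrative sketch, and the exact field names are assumptions rather than my real schema:

```json
{
  "name": "GitHub Copilot",
  "category": "code-autocompletion",
  "homepage": "https://github.com/features/copilot",
  "pricing": "Subscription",
  "ide_support": ["VS Code", "Visual Studio", "Neovim", "JetBrains IDEs"],
  "open_source": false,
  "supports_chat": true
}
```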
As you can see, the JSON defines every property and value I need to render GitHub's Copilot in a comparison table or other visualization.
### The script
The script's job is to iterate over the JSON data and produce the final post, complete with any visualizations, text, images or other content.
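At a high level, the flow looks roughly like this. The file paths here are simplified assumptions; see the real script linked below for the details:

```javascript
const fs = require('fs');
const path = require('path');

// Load the JSON data that drives the post (path is illustrative)
const dataPath = path.join(__dirname, '..', 'data', 'ai-assisted-dev-tools.json');
const { categories, tools } = JSON.parse(fs.readFileSync(dataPath, 'utf8'));

// Turn the data into a complete markdown/MDX document
const postContent = generatePostContent(categories, tools);

// Write the compiled post into the blog source tree (path is illustrative)
const outPath = path.join(__dirname, '..', 'src', 'app', 'blog', 'ai-assisted-dev-tools-compared', 'page.mdx');
fs.writeFileSync(outPath, postContent);
console.log(`Wrote ${outPath}`);
```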
The full script is relatively long. You can [read the full script in version control](https://github.com/zackproser/portfolio/blob/main/scripts/create-ai-assisted-dev-tools-comparison-post.js), but in the next sections I'll highlight some of the more interesting parts.
### Generating the Post Content
One of the most important parts of the script is the `generatePostContent` function, which takes the categories and tools data and generates the full content of the blog post in markdown format. Here's a simplified version of that function:
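The real function is considerably longer; the following is a heavily simplified sketch of its shape. The section helpers, field names, and exact markdown are illustrative assumptions, not the code from my repo:

```javascript
function generatePostContent(categories, tools) {
  // Frontmatter / metadata for the post (fields are illustrative)
  const metadata = [
    '---',
    'title: "AI-assisted developer tools compared"',
    `date: "${new Date().toISOString().split('T')[0]}"`,
    '---',
  ].join('\n');

  const intro = 'This post compares the AI-assisted developer tools I have evaluated so far.';

  // Table of contents built from the category names
  const tableOfContents = categories
    .map((category) => `- [${category.name}](#${category.slug})`)
    .join('\n');

  // One big comparison table listing every tool
  const toolTable = [
    '| Tool | Category | Pricing |',
    '|------|----------|---------|',
    ...tools.map((tool) => `| ${tool.name} | ${tool.category} | ${tool.pricing} |`),
  ].join('\n');

  // A section per category, listing the tools that belong to it
  const categorySections = categories
    .map((category) => {
      const rows = tools
        .filter((tool) => tool.category === category.slug)
        .map((tool) => `- **${tool.name}**: ${tool.description}`)
        .join('\n');
      return `## ${category.name}\n\n${category.description}\n\n${rows}`;
    })
    .join('\n\n');

  return [metadata, intro, '## Table of contents', tableOfContents, toolTable, categorySections].join('\n\n');
}
```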
This function generates the full markdown content of the blog post, including the metadata, introduction, table of contents, tool table, and category sections.
By breaking this out into a separate function, we can focus on the high-level structure of the post without getting bogged down in the details of how each section is generated.