instruction | input | output
---|---|---|
Write an article about "Your first full stack app" | Get your first full stack app on your portfolio
Always wanted to build a complete web app but aren't sure how to get started? This course is for you.
<Button
  variant={"green"}
  href={"/learn/your-first-full-stack-app/0"}
>
  Your first full stack app - start course
</Button> |
|
Write an article about "Generative AI Bootcamp" | export const status = 'available'
return ();
} |
|
Write an article about "an article" | {
if (name) this.name = name;
if (email) this.email = email;
try {
await sql`UPDATE users SET name = ${this.name}, email = ${this.email} WHERE id = ${this.id}`;
console.log('User profile updated successfully.');
} catch (error) {
console.error('Error updating user profile:', error);
// Handle error (e.g., rollback transaction, log error, etc.)
}
}
// Delete user profile from the database
async deleteProfile() {
try {
await sql`DELETE FROM users WHERE id = ${this.id}`;
console.log(`User with ID ${this.id} deleted successfully.`);
} catch (error) {
console.error('Error deleting user profile:', error);
// Handle error
}
}
// Method to display user info - useful for debugging
displayUserInfo() {
console.log(`User ID: ${this.id}, Name: ${this.name}, Email: ${this.email}`);
}
}
// Example usage (inside an async function or top-level await in a module)
const user = new User(1, 'John Doe', '[email protected]');
user.displayUserInfo(); // Display initial user info
// Update user info
await user.updateProfile({ name: 'Jane Doe', email: '[email protected]' });
user.displayUserInfo(); // Display updated user info
// Delete user
await user.deleteProfile();
Prior to the widespread availability of Generative AI tools, you more or less needed to understand JavaScript, its most recent syntax changes, object-oriented programming conventions and database abstractions, at a minimum, to produce this code.
You also needed to have recently gotten some sleep, be more or less hydrated and have already had your caffeine to create this simple example.
And even the most highly-skilled keyboard-driven developers would have taken
a bit longer than a few seconds to write this out.
GenAI is not just for text or code...
Here's an example of me asking ChatGPT4 to generate me an image with the following prompt:
I'm creating a video game about horses enhanced with Jetpacks.
Please generate me a beautiful, cheerful and friendly sprite of a horse with a jetpack strapped onto its back that would be suitable for use in my HTML5 game.
Use a bright, cheery and professional retro pixel-art style.
I can use Large Language Models (LLMs) like ChatGPT to generate pixel art and other assets for my web and gaming projects.
Within a few moments, I got back a workable image that was more or less on the money given my prompt.
I didn't have to open my image editor, spend hours tweaking pixels using specialized tools, or hit up my designer or digital artist friends for assistance.
How does GenAI work?
Generative AI works by "learning" from massive datasets, to draw out similarities and "features"
Generative AI systems learn from vast datasets to build a model that allows them to produce new outputs.
For example, by learning from millions of images and captions, an AI can generate brand new photographic images based on text descriptions provided to it.
The key technique that makes this possible involves training machine learning models using deep neural networks that can recognize complex patterns.
Imagine you have a very smart robot that you want to teach to understand and use human language.
To do this, you give the robot a huge pile of books, articles, and conversations to read over and over again.
Each time the robot goes through all this information, it's like it's completing a grade in school, learning a little more about how words fit together and how they can be used to express ideas.
In each "grade" or cycle, the robot pays attention to what it got right and what it got wrong, trying to improve.
Think of it like learning to play a video game or a sport; the more you practice, the better you get.
The robot is doing something similar with language, trying to get better at understanding and generating it each time it goes through all the information.
This process of going through the information, learning from mistakes, and trying again is repeated many times, just like going through many grades in school.
For a model as capable as ChatGPT 4, the cost to perform this training can exceed $100 million, as OpenAI's Sam Altman has shared.
With each "generation" of learning, the robot gets smarter and better at using language, much like how you get smarter and learn more as you move up in school grades.
Why is GenAI having its moment right now?
GenAI is the confluence of many complementary components and approaches reaching maturity at the same time
Advanced architectures: New architectures like transformers that are very effective for language and generation
Progressive advancement of the state of the art: Progressive improvements across computer vision, natural language processing, and AI in general
Why is GenAI such a big deal?
Prior to the proliferation of LLMs and Generative AI models, you needed to have some pixel art skills, and be proficient in use of photo editing / creation software such as Photoshop, Illustrator, or GIMP in order to produce high quality pixel art.
Prior to Gen AI, you needed to be a software developer to produce working code.
Prior to Gen AI, you needed to be a visual artist to produce images, or a digital artist to produce pixel art, video game assets, logos, etc.
With Generative AI on the scene, this is no longer strictly true.
You do still need to be a specialist to understand the outputs and have the capability to explain them.
In the case of software development, you still require expertise in how computers work, architecture and good engineering practices to
employ the generated outputs to good effect.
There are some major caveats to understand here, such as why Generative AI is currently a huge boon to senior-level and above developers yet commonly misleading and even actively harmful to junior developers, but in general this holds true:
Generative AI lowers the barrier for people to produce specialized digital outputs.
MIT News: Explained - Generative AI
McKinsey - The State of AI in 2023: Generative AI's breakout year
McKinsey - What is ChatGPT, DALL-E and generative AI?
Accenture - What is Generative AI?
GenAI in the wild - successful use cases
Since the initial explosion of interest around GenAI, most companies have sprinted toward integrating generative AI models into their products and services, with varying success.
Here's a look at some of the tools leveraging Generative AI successfully to accelerate their end users:
v0.dev
Vercel's v0.dev tool generates user interfaces in React in response to natural language queries.
In the above example, I prompted the app with:
A beautiful pricing page with three large columns showing my free, pro and enterprise tiers of service for my new saas news offering
and the app immediately produced three separate versions that I can continue to iterate on in natural language, asking for visual refinements to the layout, style, colors, font-size and more.
Prior to Gen AI, all of this work would have been done by hand by a technical designer sitting closely with at least one frontend developer.
Pulumi AI
Pulumi AI generates working Pulumi programs that describe infrastructure as code in response to natural language prompts.
There are some current pain points, such as the tool
strongly favoring older versions of Pulumi code which are now "deprecated" or slated for removal, but in general this tool is capable of saving a developer a lot of time
by outlining the patterns needed to get a tricky configuration working with AWS.
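For readers who haven't seen Pulumi code before, this is the general kind of TypeScript program such a tool produces. This particular example is hand-written for illustration, not actual Pulumi AI output: it declares an S3 bucket and blocks public access to it.

```typescript
// Minimal Pulumi TypeScript program (hand-written illustration, not Pulumi AI output):
// create an S3 bucket and lock down public access to it.
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("app-assets");

new aws.s3.BucketPublicAccessBlock("app-assets-block", {
  bucket: bucket.id,
  blockPublicAcls: true,
  blockPublicPolicy: true,
  ignorePublicAcls: true,
  restrictPublicBuckets: true,
});

// Export the bucket name so it shows up in `pulumi up` outputs.
export const bucketName = bucket.id;
```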
If Generative AI opens the door for non-specialists to create specialized outputs, it simultaneously accelerates specialists.
Generative AI is powerful because it enables development use cases that were previously out of reach due to being too technically complex for one developer to build out on their own.
I've experienced this phenomenon myself.
I have been pair-coding alongside Generative AI models for {timeElapsedSinceJanuary2023()}, and in that time I have started work on more ambitious applications than I normally would have tackled as side projects.
I have also
completed more side projects as a result.
I have gotten unstuck faster when faced with a complex or confusing failure scenario, because I was able to talk through the problem with ChatGPT and discuss alternative approaches.
ChatGPT4 responds to me
with the quality of response and breadth of experience that I previously only would have expected from senior and staff level engineers.
I have enjoyed my work more, because now I have a supremely helpful colleague who is always available, in my time zone.
Gen AI is never busy, never frustrated or overwhelmed, and is more likely to have read widely and deeply on a given
technology subject than a human engineer.
I employ careful scrutiny to weed out hallucinations.
Because I've been developing software and working at both small and large Silicon Valley companies since 2012, I am able to instantly detect when ChatGPT or a similar tool is hallucinating, out of its depth or
poorly suited to a particular task due to insufficient training data.
Here are a few of the tasks I've recently tackled in collaboration with ChatGPT:
Sanity checking a change from an SEO perspective before making it
Help me configure a new GitHub Action for my repository that automatically validates my API specification
Cooperatively build a complex React component for my digital school
Collaboratively update a microservice's database access pattern
Collaboratively upgrading a section of my React application to use a new pattern
Large Language Models (LLMs)
LLMs are a critical component of Generative AI
Large Language Models (LLMs) are the brains behind Generative AI, capable of understanding, generating, and manipulating language based on the patterns they've learned from extensive datasets.
Their role is pivotal in enabling machines to perform tasks that require human-like language understanding, from writing code to composing poetry.
Think of LLMs as the ultimate librarian, but with a superpower: instant recall of every book, article, and document ever written.
They don't just store information; they understand context, draw connections, and create new content that's coherent and contextually relevant.
This makes LLMs invaluable in driving forward the capabilities of Generative AI, enabling it to generate content that feels surprisingly human.
One of the main challenges with LLMs is "hallucination," where the model generates information that's plausible but factually incorrect or nonsensical.
This is akin to a brilliant storyteller getting carried away with their imagination.
While often creative, these hallucinations can be misleading, making it crucial to use LLMs with a critical eye, especially in applications requiring high accuracy.
Hallucinations refer to when an AI like ChatGPT generates responses that seem plausible but don't actually reflect truth or reality.
The system is essentially "making things up" based on patterns learned from its language data - hence "hallucinating".
The critical challenge here is that hallucination is more or less inextricable from the LLM behaviors we find valuable - and LLMs do not know when they do not know something.
This is precisely why it can be so dangerous for
junior or less experienced developers, for example, to blindly follow what an LLM says when they are attempting to pair code with one.
Without a sufficient understanding of the target space, its challenges and potential issues, it's possible to make a tremendous mess by following the hallucinations of an AI model.
Why does hallucination happen?
LLMs like ChatGPT have been trained on massive text datasets, but have no actual connection to the real world. They don't have human experiences or knowledge of facts.
Their goal is to produce outputs that look reasonable based on all the text they've seen.
So sometimes the AI will confidently fill in gaps by fabricating information rather than saying "I don't know."
This is one of the reasons you'll often see LLMs referred to as "stochastic parrots". They are attempting to generate the next best word based on all of the words and writing they have ever seen.
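Here is a toy sketch of that "next best word" behavior. The candidate words and probabilities below are made up for illustration; the point is that the model samples from learned probabilities and has no separate notion of truth, so it will confidently emit a wrong continuation rather than say "I don't know."

```typescript
// Toy next-word sampler: the model only has probabilities over possible continuations,
// with no notion of which continuation is actually true. The numbers are made up.
const nextWordProbabilities: Record<string, number> = {
  "1989": 0.4,  // plausible and happens to be right
  "1991": 0.35, // plausible but wrong
  "1889": 0.25, // plausible-sounding but wildly wrong
};

function sampleNextWord(probs: Record<string, number>): string {
  let r = Math.random();
  for (const [word, p] of Object.entries(probs)) {
    r -= p;
    if (r <= 0) return word;
  }
  return Object.keys(probs)[0];
}

// Prompt: "The Berlin Wall fell in ..." -- roughly 60% of the time this toy model
// confidently completes the sentence with a wrong year. It never answers "I don't know".
console.log(sampleNextWord(nextWordProbabilities));
```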
Should this impact trust in LLMs?
Yes, hallucinations mean we can't fully rely on LLMs for complete accuracy and truthfulness. They may get core ideas directionally right, but details could be invented.
Think of them more as an aid for content generation rather than necessarily fact sources.
LLMs don't have true reasoning capacity comparable to humans.
Approaching them with appropriate trust and skepticism is wise as capabilities continue advancing.
GenAI meets software development: AI Dev Tools
What is a developer's IDE?
IDE stands for Integrated Development Environment.
It is a text editor designed specifically for programmers' needs.
IDEs provide syntax highlighting, autocompletion of code, and boilerplate text insertion to accelerate the coding process.
Most modern IDEs are highly customizable.
Through plugins and configuration changes, developers customize keyboard shortcuts, interface color themes, extensions that analyze code or connect to databases, and more based on their workflow.
Two very popular IDEs are Visual Studio Code (VSCode) from Microsoft and Neovim, which is open-source and maintained by a community of developers.
In VSCode, developers can install all sorts of plugins from a central marketplace - plugins to lint and format their code, run tests, interface with version control systems, and countless others.
There is also rich support for changing the visual theme and layout.
Neovim is another IDE centered around modal editing optimized for speed and keyboard usage over mice.
Its users can create key mappings to quickly manipulate files and code under-the-hood entirely from the keyboard.
It embraces Vim language and edit commands for coding efficiency.
For example, the following gif demonstrates a custom IDE using tmux and Neovim (my personal preference):
My personal preference is to combine tmux with Neovim for a highly flexible setup that expands and contracts to the size of my current task.
Developers tend to "live in" their preferred IDE - meaning they spend a lot of time coding.
Developers are also highly incentivized to tweak their IDE and add automations for common tasks in order to make themselves more efficient.
For this reason, Developers may try many different IDEs over the course of their career, but most tend to find something they're fond of and stick with it, which has implications for services that are or are not
available in a given IDE.
Usually, a service or Developer-facing tool gets full support as a VSCode plugin long before an official Neovim plugin is created and maintained.
In summary, IDEs are incredibly valuable tools that can match the preferences and project needs of individual developers through customizations.
VSCode and Neovim have strong followings in their ability to adapt to diverse workflows. Developers can write code and configuration to customize the IDE until it perfectly suits their style.
Generative AI in Software Development: Codeium vs. GitHub Copilot
Codeium and GitHub Copilot represent the cutting edge of Generative AI in software development, both leveraging LLMs to suggest code completions and solutions.
While GitHub Copilot is built on OpenAI's Codex, Codeium offers its unique AI-driven approach.
The key differences lie in their integration capabilities, coding style adaptations, and the breadth of languages and frameworks they support, making each tool uniquely valuable depending on the developer's needs.
These tools, while serving the common purpose of enhancing coding efficiency through AI-assisted suggestions, exhibit distinct features and use cases that cater to different aspects of the development workflow.
Codeium review
Codeium vs ChatGPT
GitHub Copilot review
ChatGPT 4 and Codeium are still all I need
The top bugs all AI developer tools are suffering from
Codeium, praised for its seamless integration within popular code editors like VSCode and Neovim, operates as a context-aware assistant, offering real-time code suggestions and completions directly in the IDE.
Its ability to understand the surrounding code and comments enables it to provide highly relevant suggestions, making it an indispensable tool for speeding up the coding process.
Notably, Codeium stands out for its free access to individual developers, making it an attractive option for those looking to leverage AI without incurring additional costs, whereas GitHub has been perpetually cagey about its Copilot offerings and their costs.
As a product of GitHub, Copilot is deeply integrated with the platform's ecosystem, potentially offering smoother workflows for developers who are heavily invested in using GitHub for version control and collaboration.
Imagine AI developer tools as ethereal companions residing within your IDE, whispering suggestions and solutions as you type.
They blend into the background but are always there to offer a helping hand, whether it's completing a line of code or suggesting an entire function.
These "code spirits" are revolutionizing how developers write code, making the process faster, more efficient, and often more enjoyable.
Here's what I think about the future of Generative AI, after evaluating different tools and pair-coding with AI for {timeElapsedSinceJanuary2023()}.
Thoughts and analysis
Where I see this going
In the rapidly evolving field of software development, the integration of Generative AI is not just a passing trend but a transformative force.
In the time I've spent experimenting with AI to augment my workflow and enhance my own human capabilities, I've realized incredible productivity gains: shipping more ambitious and complete applications than ever before.
I've even enjoyed myself more.
I envision a future where AI-powered tools become indispensable companions, seamlessly augmenting human intelligence with their vast knowledge bases and predictive capabilities.
These tools will not only automate mundane tasks but also inspire innovative solutions by offering insights drawn from a global compendium of code and creativity.
As we move forward, the symbiosis between developer and AI will deepen, leading to the birth of a new era of software development where the boundaries between human creativity and artificial intelligence become increasingly blurred.
What I would pay for in the future
In the future, what I'd consider invaluable is an AI development assistant that transcends the traditional boundaries of code completion and debugging.
I envision an assistant that's deeply integrated into my workflow and data sources (email, calendar, GitHub, bank, etc), capable of understanding the context of my projects across various platforms, project management tools, and even my personal notes.
This AI wouldn't just suggest code; it would understand the nuances of my projects, predict my needs, and offer tailored advice, ranging from architectural decisions to optimizing user experiences.
This level of personalized and context-aware assistance could redefine productivity, making the leap from helpful tool to indispensable partner in the creative process.
My favorite AI-enhanced tools
| Job to be done | Name | Free or paid? |
|---|---|---|
| Architectural and planning conversations | ChatGPT 4 | Paid |
| Autodidact support (tutoring and confirming understanding) | ChatGPT 4 | Paid |
| Accessing ChatGPT on the command line | mods | Free |
| Code-completion | Codeium | Free for individuals. Paid options |
| AI-enhanced video editing suite | Kapwing AI | Paid |
| AI-enhanced video repurposing (shorts) | OpusClip | Paid |
| Emotional support | Pi.ai | Free |
Emotional support and mind defragging with Pi.ai
Pi.ai is the most advanced model I've encountered when it comes to relating to human beings.
I have had success using Pi to have a quick chat and talk through something that is
frustrating or upsetting me at work, and within 15 to 25 minutes of conversation, I've processed and worked through the issue and my feelings and am clear-headed enough to make forward progress again.
This is a powerful remover of obstacles, because the longer I do what I do, the more clear it becomes that EQ is more critical than IQ.
Noticing when I'm irritated or overwhelmed and having a quick talk with someone highly intelligent and sensitive in order to process things and return with a clear mind is invaluable.
Pi's empathy is off the charts, and it feels like you're speaking with a highly skilled relational therapist.
How my developer friends and I started using GenAI
Asking the LLM to write scripts to perform one-off tasks (migrating data, cleaning up projects, taking backups of databases, etc)
Asking the LLM to explain a giant and complex stack trace (error) that came from a piece of code we're working with
Asking the LLM to take some unstructured input (raw files, log streams, security audits, etc), extract insights and return a simple list of key-value pairs
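As a concrete illustration of that last use case, here is a minimal sketch of asking an LLM to turn unstructured text into key-value pairs using the official OpenAI Node SDK. The model name, the prompt wording, and the assumption that the response parses cleanly as JSON are illustrative choices, not a prescription.

```typescript
// Minimal sketch: ask an LLM to turn unstructured text into a flat JSON object.
// Assumes OPENAI_API_KEY is set in the environment; model and prompt are illustrative,
// and the JSON.parse call may need hardening for production use.
import OpenAI from "openai";

const client = new OpenAI();

async function extractKeyValues(raw: string): Promise<Record<string, string>> {
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "Extract the key facts from the user's text as a flat JSON object of string key-value pairs. Respond with JSON only.",
      },
      { role: "user", content: raw },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}

// Example: pull structured fields out of a messy log line.
extractKeyValues("2024-03-01 14:02:11 UTC service=checkout status=500 user_id=42 payment gateway timed out")
  .then((fields) => console.log(fields));
```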
Opportunities
The advent of Generative AI heralds a plethora of opportunities that extend far beyond the realms of efficiency and productivity.
With an expected annual growth rate of 37% from 2023 to 2030, this technology is poised to revolutionize industries by democratizing creativity, enhancing decision-making, and unlocking new avenues for innovation.
In sectors like healthcare, education, and entertainment, Generative AI can provide personalized experiences, adaptive learning environments, and unprecedented creative content.
Moreover, its ability to analyze and synthesize vast amounts of data can lead to breakthroughs in research and development, opening doors to solutions for some of the world's most pressing challenges.
Challenges
Potential biases perpetuated
Since models are trained on available datasets, any biases or problematic associations in that data can be propagated through the system's outputs.
Misinformation risks
The ability to generate convincing, contextually-relevant content raises risks of propagating misinformation or fake media that appears authentic. Safeguards are needed.
Lack of reasoning capability
Despite advances, these models currently have a limited understanding of factual knowledge and common sense compared to humans. Outputs should thus not be assumed fully accurate or truthful.
Architectures and approaches such as Retrieval Augmented Generation (RAG) are commonly deployed to anchor an LLM in facts and proprietary data.
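Here is a minimal sketch of the RAG flow using the OpenAI and Pinecone Node SDKs: embed the question, retrieve the most similar chunks from a vector index, and prepend them to the prompt. The index name, the "text" metadata field, and the model names are assumptions made for illustration, not a reference implementation.

```typescript
// Minimal RAG sketch: ground the model's answer in retrieved documents.
// Assumes OPENAI_API_KEY and PINECONE_API_KEY are set, that a Pinecone index named
// "docs" already contains embedded documents, and that each vector stores its source
// text under a "text" metadata field -- all assumptions for illustration.
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI();
const pinecone = new Pinecone();

async function answerWithContext(question: string): Promise<string> {
  // 1. Embed the question.
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });

  // 2. Retrieve the most similar chunks from the vector index.
  const results = await pinecone.index("docs").query({
    vector: embedding.data[0].embedding,
    topK: 3,
    includeMetadata: true,
  });
  const context = results.matches
    .map((match) => String(match.metadata?.text ?? ""))
    .join("\n---\n");

  // 3. Ask the model to answer using only the retrieved context.
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: `Answer using only this context. If the answer is not in the context, say you don't know.\n\n${context}`,
      },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```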
Hallucinations can lead junior developers astray
One of the significant challenges posed by Generative AI in software development is the phenomenon of 'hallucinations' or the generation of incorrect or nonsensical code.
This can be particularly misleading for junior developers, who might not have the experience to discern these inaccuracies.
Ensuring that AI tools are equipped with mechanisms to highlight potential uncertainties and promote best practices is crucial to mitigate this risk and foster a learning environment that enhances, rather than hinders, the development of coding skills.
Tool fragmentation and explosion
As the landscape of Generative AI tools expands, developers are increasingly faced with the paradox of choice.
The proliferation of tools, each with its unique capabilities and interfaces, can lead to fragmentation, making it challenging to maintain a streamlined and efficient workflow.
Navigating a rapidly evolving landscape
The pace at which Generative AI is advancing presents a double-edged sword.
While it drives innovation and the continuous improvement of tools, it also demands that developers remain perennial learners to keep abreast of the latest technologies and methodologies.
This rapid evolution can be daunting, necessitating a culture of continuous education and adaptability within the development community to harness the full potential of these advancements.
To be fair, this has always been the case with software development, but forces like Generative AI accelerate the subjective pace of change even further.
Ethics implications
Given the challenges in safely deploying Generative AI, these are some of the most pressing implications for ethical standards:
Audit systems for harmful biases
We need the ability to audit systems for harmful biases, and to make and track corrections when needed.
Human oversight
We need measures to catch and correct or flag AI mistakes.
In closing: As a developer...
Having worked alongside Generative AI for some time now, the experience has been occasionally panic-inducing, but mostly enjoyable.
Coding alongside ChatGPT4 throughout the day feels like having a second brain that's tirelessly available to bounce ideas off, troubleshoot problems, and help me tackle larger and more complex development challenges on my own. |
|
Write an article about "Infrastructure as Code" | Build systems in the cloud - quickly
Infrastructure as code is a critical skill these days. Practitioners are able to define and bring up reproducible copies of architectures on cloud providers
such as AWS and Google Cloud.
This course will get you hands on with CloudFormation, Terraform and Pulumi.
<Button
  variant={"green"}
  href={"/learn/infrastructure-as-code/0"}
>
  Infrastructure as code intro - start course
</Button> |
|
Write an article about "GitHub Automations" | Time to automate with GitHub!
GitHub Automations help you maintain software more effectively with less effort
<Button
  variant={"green"}
  href={"/learn/courses/github-automations/0"}
>
  GitHub Automations - start course
</Button> |
|
Write an article about "Taking Command" | Time to build a command line tool!
Project-based practice: building a command line tool in Go
<Button
  variant={"green"}
  href={"/learn/courses/taking-command/0"}
>
  Taking Command - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Segment 1
One |
|
Write an article about "Pair coding with AI" | More than the sum of its parts...
Learning how to effectively leverage AI to help you code, design systems, generate high quality images in any style and more
can make you more productive, and can even make your work more enjoyable and less stressful.
This course shows you how.
<Button
  variant={"green"}
  href={"/learn/pair-coding-with-ai/0"}
>
  Pair coding with AI - start course
</Button> |
|
Write an article about "Emotional Intelligence for Developers" | Procrastination is about negative emotions
As you get further into your career, you come to realize that the technical chops come over time and are the easy part.
Mastering your own emotions, working with emotional beings (other humans), and recognizing when something has come up and needs attention or
skillful processing will take you further than memorizing 50 new books or development patterns.
<Button
  variant={"green"}
  href={"/learn/emotional-intelligence-for-developers/0"}
>
  Emotional Intelligence for Developers - start course
</Button> |
|
Write an article about "Coming out of your shell" | Don't stick to the default terminal
Have you ever heard of ZSH? Alacritty? Butterfish?
In this course you'll learn to install, configure and leverage powerful custom shells to supercharge your command line skills.
<Button
  variant={"green"}
  href={"/learn/courses/coming-out-of-your-shell/0"}
>
  Coming out of your shell - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Customization makes it yours
By changing your shell, you not only make your computer more comfortable, but
perhaps even more importantly, you learn about Unix, commands, tty, and more. |
|
Write an article about "an article" | export const meta = {
}
Your shell is your home as a hacker
But most people never dare to experiment with even changing their shell.
What a shame |
|
Write an article about "Git Going" | Most developers don't understand git...
Yet everyone needs git. Learning git well is one of the best ways to differentiate yourself to hiring managers.
Git is your save button!
Never lose work once you learn how to use git. While git has a lot of complex advanced features and configuration options, learning the basic workflow for being effective
doesn't take long, and this course will show you everything you need to know with a hands-on project.
<Button
  variant={"green"}
  href={"/checkout?product=git-going"}
>
  Git Going - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Why Version Control?
Version control is super important! |
|
Write an article about "an article" | export const meta = {
}
Git vs GitHub
Git and GitHub are intertwined but different. |
|
Write an article about "an article" | export const meta = {
}
Most developers don't know git well
```mermaid
gitGraph
    commit
    commit
    branch develop
    checkout develop
    commit
    commit
    checkout main
    merge develop
    commit
    commit
```
And this is a great thing for you. You can differentiate yourself to hiring managers, potential teams considering you, and anyone else you collaborate with
professionally by demonstrating a strong grasp of git.
Some day, you'll need to perform a complex git surgery, likely under pressure, in order to fix something or restore a service. You'll be glad then that you
practiced and learned git well now. |
|
Write an article about "an article" | export const meta = {
}
Git is your save button
That's why it's so critical to learn the basics well. Git enables you to save your work, have multiple copies of your code distributed around to other machines,
so that you can recover even if you spill tea all over your laptop, and professionally share code and collaborate with other developers. |
|
Write an article about "an article" | export const meta = {
}
Learning git pays off
Learning git is very important. |
|
Write an article about "an article" | export const meta = {
}
Git configuration
Configuring git is important. |
|
Write an article about "an article" | export const metadata = {
}
Descript
Descript is a powerful video editing tool that allows users to edit videos by editing the transcript, making the process more intuitive and accessible.
Features
Free Tier: No
Chat Interface: No
Supports Local Model: No
Supports Offline Use: No
IDE Support
No IDE support information available
Language Support
No language support information available
Links
Homepage
Review |
|
Write an article about "Part #2 Live code" | Part 2 of our previous video.
Join Roie Schwaber-Cohen and me as we continue to step through and discuss the Pinecone Vercel starter template that deploys an AI chatbot that is less likely to hallucinate thanks to Retrieval Augmented Generation (RAG). |
|
Write an article about "Pinecone & Pulumi" | I co-hosted a webinar with Pulumi's Scott Lowe about:
The delta between getting an AI or ML technique working in a Jupyter Notebook and getting it working in production
How to deploy AI applications to production using the Pinecone AWS Reference Architecture
How Infrastructure as Code can simplify productionizing AI applications |
|
Write an article about "Live code" | Join Roie Schwaber-Cohen and me for a deep dive into The Pinecone Vercel starter template that deploys an AI chatbot that is less likely to hallucinate thanks to Retrieval Augmented Generation (RAG).
This is an excellent video to watch if you are learning about Generative AI, want to build a chatbot, or are having difficulty getting your current AI chatbots to return factual answers about specific topics and your proprietary data.
You don't need to already be an AI pro to watch this video, because we start off by explaining what RAG is and why it's such an important technique.
The majority of this content was originally captured as a live Twitch.tv stream co-hosted by Roie (rschwabco) and myself (zackproser).
Be sure to follow us on Twitch for more Generative AI deep dives, tutorials, live demos, and conversations about the rapidly developing world of Artificial Intelligence. |
|
Write an article about "How to use Jupyter notebooks, langchain and Kaggle.com to create an AI chatbot on any topic" | In this video, I do a deep dive on the two Jupyter notebooks which I built as part of my office oracle project.
Both notebooks are now open source:
Open-sourced Office Oracle Test Notebook
Open-sourced Office Oracle Data Bench
I talk through what I learned, why Jupyter notebooks were such a handy tool for getting my data quality to where I needed it to be, before worrying about application logic.
I also demonstrate langchain DocumentLoaders, how to store secrets in Jupyter notebooks when open sourcing them, and much more.
What's involved in building an AI chatbot that is trained on a custom corpus of knowledge?
In this video I break down the data preparation, training and app development components and explain why Jupyter notebooks were such a handy tool while creating this app and tweaking my model.
App is open source at https://github.com/zackproser/office-... and a demo is currently available at https://office-oracle.vercel.app. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 1" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Deploying the first Cloudflare workers in front of api.cloudflare.com and www.cloudflare.com" | On the Cloudflare API team, we were responsible for api.cloudflare.com as well as www.cloudflare.com.
Here's how we wrote the first Cloudflare Workers to gracefully deprecate TLS 1.0 and set them in front of
both properties, without any downtime.
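To give a flavor of the approach (this is a hand-written sketch, not the actual Workers we shipped), a Worker can inspect the TLS version Cloudflare reports on the incoming request and reject old protocol versions before the request ever reaches the origin. The exact version strings and response wording below are assumptions for illustration.

```typescript
// Minimal sketch of the idea (not Cloudflare's production code): refuse requests
// arriving over deprecated TLS versions and pass everything else through to the origin.
export default {
  async fetch(request: Request): Promise<Response> {
    // request.cf is populated by Cloudflare at the edge; the exact version strings
    // used here are assumptions and should be verified against the Workers docs.
    const cf = (request as Request & { cf?: { tlsVersion?: string } }).cf;
    const tlsVersion = cf?.tlsVersion ?? "";
    if (tlsVersion === "TLSv1" || tlsVersion === "TLSv1.0" || tlsVersion === "TLSv1.1") {
      return new Response("TLS 1.0 and 1.1 are deprecated. Please upgrade to TLS 1.2 or higher.", {
        status: 403,
      });
    }
    // Otherwise pass the request through to the origin unchanged.
    return fetch(request);
  },
};
```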
And no, if you're paying attention, my name is not Zack Prosner, it's Zack Proser :) |
|
Write an article about "Project" | Adding speech-to-text capabilities to Panthalia allows me to commence blog posts faster and more efficiently
than ever before, regardless of where I might be.
In this video I demonstrate using speech to text to create a demo short story end to end, complete with generated images,
courtesy of StableDiffusionXL. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 2" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Destroying the Pinecone AWS Reference Architecture" | I demonstrate how to destroy a deployed Pinecone AWS Reference Architecture using Pulumi. |
|
Write an article about "Exploring a Custom Terminal-Based Developer Workflow - Tmux, Neovim, Awesome Window Manager, and More" | This video showcases my custom terminal-based developer workflow that utilizes a variety of fantastic open-source tools like tmux, neovim, Awesome Window Manager, and more.
So, let's dive in and see what makes this workflow so efficient, powerful, and fun to work with.
One of the first tools highlighted in the video is tmux, a terminal multiplexer that allows users to manage multiple terminal sessions within a single window.
I explain how tmux can increase productivity by letting developers switch between tasks quickly and easily.
I also show off how tmux can help your workflow fit to your task, using techniques like pane splitting to expand and contract the space to the task at hand.
Next up is neovim, a modernized version of the classic Vim text editor.
I demonstrate how neovim integrates seamlessly with tmux, providing powerful text editing features in the terminal.
I also discuss some of the advantages of using neovim over traditional text editors, such as its extensibility, customization options, and speed.
The Awesome Window Manager also gets its moment in the spotlight during the video.
This dynamic window manager is designed for developers who want complete control over their workspace.
I show how Awesome Window Manager can be configured to create custom layouts and keybindings, making it easier to manage multiple applications and terminal sessions simultaneously.
Throughout the video, I share a variety of other open-source tools that I have integrated into my workflow.
Some of these tools include fzf, a command-line fuzzy finder that makes searching for files and directories a breeze; ranger, a file manager designed for the terminal; and zsh, a powerful shell that offers a multitude of productivity-enhancing features.
One of the key takeaways from the video is how, by combining these open-source tools and tailoring them to my specific needs, I have created a workflow that helps me work faster, more efficiently, and with greater satisfaction.
So, if you're looking for inspiration on how to build your terminal-based developer workflow, this YouTube video is a must-watch.
See what I've learned so far in setting up a custom terminal-based setup for ultimate productivity and economy of movement. |
|
Write an article about "How to build an AI chatbot using Vercel's ai-chatbot template" | Curious how you might take Vercel's ai-chatbot template repository from GitHub and turn it into your own GPT-like chatbot of any identity? That's what I walkthrough in this video.
I show the git diffs and commit history while talking through how I integrated langchain, OpenAI, ElevenLabs for voice cloning and text to speech and Pinecone.io for the vector database in order to create a fully featured ChatGPT-like custom AI bot that can answer questions factually about an arbitrary corpus of knowledge. |
|
Write an article about "How to build chat with your data using Pinecone, LangChain and OpenAI" | I demonstrate how to build a RAG chatbot in a Jupyter Notebook end to end.
This tutorial is perfect for beginners who want help getting started, and for experienced developers who want to understand how LangChain, Pinecone and OpenAI
all fit together. |
|
Write an article about "A full play through of my HTML5 game, CanyonRunner" | CanyonRunner is a complete HTML5 game that I built with the Phaser.js framework. I wrote about how the game works in my blog post here. |
|
Write an article about "Pinecone & Pulumi" | I co-hosted a webinar with Pulumi's Engin Diri about:
The Pinecone AWS Reference Architecture,
How it's been updated to use Pinecone Serverless and the Pinecone Pulumi provider
How to deploy an AI application to production using infrastructure as code |
|
Write an article about "How to use ChatGPT in your terminal" | If you're still copying and pasting back and forth between ChatGPT in your browser and your code in your IDE, you're missing out.
Check out how easily you can use ChatGPT in your terminal! |
|
Write an article about "Project" | I've been working on this side project for several months now, and it's ready enough to demonstrate. In this video I talk through:
What it is
How it works
A complete live demo
Using Replicate.com for a REST API interface to StableDiffusion XL for image generation
Stack
Next.js
Vercel
Vercel Postgres
Vercel serverless functions
Pure JavaScript integration with git and GitHub thanks to isomorphic-git
Features
Secured via GitHub oAuth
StableDiffusion XL for image generation
Postgres database for posts and images data
S3 integration for semi-volatile image storage
Start and complete high quality blog posts in MDX one-handed while on the go
Panthalia is open-source and available at github.com/zackproser/panthalia |
|
Write an article about "What is a vector database?" | I walk through what a vector database is, by first explaining the types of problems that vector databases solve, as well as how AI "thinks".
I use clowns as an example of a large corpus of training data from which we can extract high level features, and I discuss architectures such as
semantic search and RAG. |
|
Write an article about "How to use Jupyter Notebooks for Machine Learning and AI tasks" | In this video, I demonstrate how to load Jupyter Notebooks into Google Colab and run them for free. I show how to load Notebooks from GitHub and how to execute individual cells and how to run
Notebooks end to end. I also discuss some important security considerations around leaking API keys via Jupyter Notebooks. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 3" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Master GitHub Pull Request Reviews with gh-dash and Octo - A YouTube Video Tutorial" | In this video tutorial, I demonstrate the power of combining gh-dash and Octo for a seamless terminal-based GitHub pull request review experience.
In this video, I show you how these two powerful tools can be used together to quickly find, organize, and review pull requests on GitHub, all from the comfort of your terminal.
Topics covered
Discovering Pull Requests with gh-dash
We'll kick off the tutorial by showcasing gh-dash's impressive pull request discovery capabilities.
Watch as we navigate the visually appealing TUI interface to find, filter, and sort pull requests under common headers, using custom filters to locate the exact pull requests you need.
Advanced GitHub Searches in Your Terminal
Explore gh-dash's advanced search functionality in action as we demonstrate how to perform fully-featured GitHub searches directly in your terminal.
Learn how to search across repositories, issues, and pull requests using a range of query parameters, streamlining your pull request review process.
In-Depth Code Reviews Using Octo
Once we've located the pull requests that need reviewing, we'll switch gears and dive into Octo, the powerful Neovim plugin for code reviews.
Witness how Octo integrates seamlessly with Neovim, enabling you to view code changes, commits, and navigate the codebase with ease.
Participating in Reviews with Comments and Emoji Reactions
See how Octo takes code reviews to the next level by allowing you to leave detailed in-line comments and even add GitHub emoji reactions to comments.
With Octo, you can actively participate in the review process and provide valuable feedback to your colleagues, all within the Neovim interface.
Combining gh-dash and Octo for a Streamlined Workflow
In the final segment of the video tutorial, we'll demonstrate how to create a seamless workflow that combines the strengths of gh-dash and Octo.
Learn how to harness the power of both tools to optimize your GitHub pull request review process, from locating pull requests using gh-dash to conducting comprehensive code reviews with Octo.
By the end of this video tutorial, you will have witnessed the incredible potential of combining gh-dash and Octo for a robust terminal-based GitHub pull request review experience.
We hope you'll be inspired to integrate these powerful tools into your workflow, maximizing your efficiency and productivity in managing and reviewing pull requests on GitHub.
Happy coding! |
|
Write an article about "Mastering Fast, Secure AWS Access with open source tool aws-vault" | Don't hardcode your AWS credentials into your dotfiles or code! Use aws-vault to store them securely
In this YouTube video, I demonstrate how to use the open-source Golang tool, aws-vault, for securely managing access to multiple AWS accounts.
aws-vault stores your permanent AWS credentials in your operating system's secret store or keyring and fetches temporary AWS credentials from the AWS STS endpoint.
This method is not only secure but also efficient, especially when combined with Multi-Factor Authentication.
In this video, I demonstrate the following aspects of aws-vault:
Executing arbitrary commands against your account: The video starts by showing how aws-vault can be used to execute any command against your AWS account securely.
By invoking aws-vault with the appropriate profile name, you can fetch temporary AWS credentials and pass them into subsequent commands, ensuring a secure way of managing AWS access.
Quick AWS account login: Next, I show how to use aws-vault to log in to one of your AWS accounts quickly.
This feature is particularly helpful for developers and system administrators who manage multiple AWS accounts and need to switch between them frequently.
Integration with Firefox container tabs: One of the most exciting parts of the video is the demonstration of how aws-vault can be used in conjunction with Firefox container tabs to log in to multiple AWS accounts simultaneously.
This innovative approach allows you to maintain separate browsing sessions for each AWS account, making it easier to manage and work with different environments.
The video emphasizes how using aws-vault can significantly improve your command line efficiency and speed up your workflow while working with various test and production environments.
If you're a developer or system administrator looking to enhance your AWS account management skills, this YouTube video is for you. |
|
Write an article about "Building an AI chatbot with langchain, Pinecone.io, Jupyter notebooks and Vercel" | What's involved in building an AI chatbot that is trained on a custom corpus of knowledge?
In this video I break down the data preparation, training and app development components and explain why Jupyter notebooks were such a handy tool while creating this app and tweaking my model.
App is open source at https://github.com/zackproser/office-... and a demo is currently available at https://office-oracle.vercel.app. |
|
Write an article about "Deploying a jump host for the Pinecone AWS Reference Architecture" | I demonstrate how to configure, deploy and connect through a jump host so that you can interact with RDS Postgres
and other resources running in the VPC's private subnets. |
|
Write an article about "Cloud-Nuke - A Handy Open-Source Tool for Managing AWS Resources" | In this video, we'll have a more casual conversation about cloud-nuke, an open-source tool created and maintained by Gruntwork.io.
I discuss the benefits and features of cloud-nuke, giving you an idea of how it can help you manage AWS resources more efficiently.
First and foremost, cloud-nuke is a Golang CLI tool that leverages the various AWS Go SDKs to efficiently find and destroy AWS resources.
This makes it a handy tool for developers and system administrators who need to clean up their cloud environment, save on costs, and minimize security risks.
One of the main benefits of cloud-nuke is its ability to efficiently search and delete AWS resources.
It does this by using a powerful filtering system that can quickly identify and remove unnecessary resources, while still giving you full control over what gets deleted.
This means that you don't have to worry about accidentally removing critical resources.
Another useful feature of cloud-nuke is its support for regex filters and config files.
This allows you to exclude or target resources based on their names, giving you even more control over your cloud environment.
For example, you might have a naming convention for temporary resources, and with cloud-nuke's regex filtering, you can quickly identify and delete these resources as needed.
Configuring cloud-nuke is also a breeze, as you can define custom rules and policies for managing resources.
This means you can tailor the tool to meet the specific needs of your organization, ensuring that your cloud environment stays clean and secure.
One thing to keep in mind when using cloud-nuke is that it's a destructive tool, so you should carefully review the list of resources it identifies before confirming any deletion. This will help you avoid accidentally deleting critical resources, and it will also ensure that you're keeping up with any changes in your cloud environment.
In addition to using cloud-nuke as a standalone tool, you can also integrate it with other cloud management tools and services.
This will help you create a more comprehensive cloud management strategy, making it easier to keep your environment secure and well-organized.
To sum it up, cloud-nuke is a versatile open-source tool that can help you manage your AWS resources more effectively.
Its efficient search and deletion capabilities, support for regex filters and config files, and easy configuration make it a valuable addition to any developer's or system administrator's toolkit.
So, if you're looking for a better way to manage your AWS resources, give cloud-nuke a try and see how it can make your life easier. |
|
Write an article about "Semantic Search with TypeScript and Pinecone" | Roie Schwaber-Cohen and I discuss semantic search and step through the code for performing semantic search with Pinecone's vector database. |
|
Write an article about "Episode" | Table of contents
Welcome to Episode 2
In today's episode, we're looking at interactive machine learning demos, vector databases compared, and developer anxiety.
My work
Introducing - interactive AI demos
I've added a new section to my site, demos. To kick things off, I built two interactive demos:
Tokenization demo
Embeddings demo
Both demos allow you to enter freeform text and then convert it to a different representation that machines can understand.
The tokenization demo shows you how the tiktoken library converts your natural language into token IDs from a given vocabulary, while the embeddings demo shows you how text is converted to an array of floating point numbers representing the features that the
embedding model extracted from your input data.
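For the curious, here is roughly what the embeddings demo does under the hood, sketched with the OpenAI Node SDK: convert each text to a vector of floating point numbers, then compare vectors with cosine similarity. The model name is my illustrative choice here and not necessarily what the live demo uses.

```typescript
// Sketch of what an embeddings demo does under the hood: convert text to a vector,
// then compare two vectors with cosine similarity. Assumes OPENAI_API_KEY is set;
// the model name is illustrative.
import OpenAI from "openai";

const openai = new OpenAI();

async function embed(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return response.data[0].embedding; // an array of floating point numbers ("features")
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, value, i) => sum + value * b[i], 0);
  const magnitude = (v: number[]) => Math.sqrt(v.reduce((sum, value) => sum + value * value, 0));
  return dot / (magnitude(a) * magnitude(b));
}

(async () => {
  const [cats, kittens, invoices] = await Promise.all([
    embed("I love cats"),
    embed("Kittens are wonderful"),
    embed("Quarterly invoice processing schedule"),
  ]);
  console.log("cats vs kittens:", cosineSimilarity(cats, kittens));   // relatively high
  console.log("cats vs invoices:", cosineSimilarity(cats, invoices)); // relatively low
})();
```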
I'm planning to do a lot more with this section in the future. Some initial ideas:
Create a nice intro page linking all the demos together in a sequence that helps you to iteratively build up your understanding
Add more demos - I plan to ship a new vector database demonstration using Pinecone shortly that will introduce the high level concepts involved in working with vector databases and potentially even demonstrate visualizing high-dimensional vector space
Take requests - If you have ideas for new demos, or aspects of machine learning or AI pipelines that you find confusing, let me know by responding to this email.
Vector databases compared
I wrote a new post comparing top vector database offerings. I'm treating this as a living document, meaning that I'll likely
add to and refine it over time.
What's abuzz in the news
Here's what I've come across and have been reading lately.
The common theme is developer anxiety: the velocity of changes and new generative AI models and AI-assisted developer tooling, combined with ongoing
industry layoffs and the announcement of "AI software developer" Devin, has many developers looking to the future with deep concern and worry.
Some have wondered aloud if their careers are already over, some are adopting the changes in order to continue growing their careers, and still others remain deeply skeptical of AI's ability to replace all of the squishy aspects to our jobs that don't fit in a nice spec.
What's my plan?
As usual, I intend to keep on learning, publishing and growing.
I've been hacking alongside "AI" for a year and a half now, and so far my productivity and job satisfaction have only improved.
Are we going to need fewer individual programmers at some unknown point in the future?
Probably.
Does that mean that there won't be opportunities for people who are hungry and willing to learn?
Probably not.
Recommended reading
The AI Gold Rush
The Top 100 GenAI Consumer Apps
Can You Replace Your Software Engineers With AI?
Developers are on edge
My favorite tools
High-level code completion
I am still ping-ponging back and forth between ChatGPT 4 and Anthropic's Claude 3 Opus.
I am generally impressed by Claude 3 Opus, but even with the premium subscription, I'm finding some of the limits to be noticeably dear, if you will.
Several days in a row now I've gotten the
warning about butting up against my message sending limits.
At least for what I'm using them both for right now (architecture sanity checks and boilerplate code generation), it's not yet the case that one is so obviously superior that I'm ready to change up my workflow.
Autocomplete / code completion
Codeium!
AI-assisted video editing
Kapwing AI
That's all for this episode! If you liked this content or found it helpful in any way, please pass it on to someone you know who could benefit. |
|
Write an article about "Episode" | Table of contents
Starting fresh with episode 1
Why Episode 1? I've decided to invest more time and effort into my newsletter. All future episodes will now be
available on my newsletter archive at https://zackproser.com/newsletter.
Going forward, each of my newsletter episodes will include:
My work - posts, videos, open-source projects I've recently shipped
What's abuzz in the news - new AI models, open-source models, LLMs, GPTs, custom GPTs and more
My favorite tools - a good snapshot of the AI-enhanced and other developer tooling I'm enamored with at the moment
I will aim to publish and send a new episode every two weeks.
My work
The Generative AI Bootcamp: Developer Tooling course is now available
I've released my first course!
The Generative AI Bootcamp: DevTools course is designed for semi and non-technical folks who want to understand:
What Generative AI is
Which professions and skillsets it is disrupting, why and how
Which AI-enhanced developer tooling on the scene is working and why
This course is especially designed for investors, analysts and market researchers looking to understand the opportunities and challenges of Generative AI as it relates to Developer Tooling, Integrated Developer Environments (IDEs), etc.
2023 Wins - My year in review
2023 was a big year for me, including a career pivot to a new role, a new company and my first formal entry into the AI space.
I reflect on my wins and learnings from the previous year.
Testing Pinecone Serverless at Scale with the AWS Reference Architecture
I updated the Pinecone AWS Reference Architecture to use Pinecone Serverless, making for an excellent test bed for
putting Pinecone through its paces at scale.
Just keep an eye on your AWS bill!
Codeium vs ChatGPT
I get asked often enough about the differences between Codeium for code completion (intelligent autocomplete) and ChatGPT4, that I figured I should just write a
comprehensive comparison of their capabilities and utility.
My first book credit - My Horrible Career
What started out as an extended conversation with my programming mentor John Arundel became a whole book!
How to build a sitemap for Next.js that captures static and dynamic routes
Some old-school tutorial content for my Next.js and Vercel fans.
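As a taste of what the tutorial covers, here is a minimal sketch of one way to do it with the App Router's app/sitemap.ts convention. The domain and the getAllPostSlugs helper are hypothetical placeholders, and the post itself may take a different approach.

```typescript
// app/sitemap.ts -- minimal sketch of a Next.js App Router sitemap mixing static
// routes with dynamic ones. getAllPostSlugs() and the domain are hypothetical
// placeholders; the actual tutorial may use a different approach.
import type { MetadataRoute } from "next";
import { getAllPostSlugs } from "@/lib/posts"; // hypothetical data helper

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const staticRoutes = ["", "/blog", "/newsletter"].map((path) => ({
    url: `https://example.com${path}`,
    lastModified: new Date(),
  }));

  const postRoutes = (await getAllPostSlugs()).map((slug: string) => ({
    url: `https://example.com/blog/${slug}`,
    lastModified: new Date(),
  }));

  return [...staticRoutes, ...postRoutes];
}
```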
What's abuzz in the news
Anthropic releases Claude 3 family of models
I've been experimenting with Claude 3 Opus, their most intelligent model, to see how it slots in for high-level architecture discussions and code generation compared to ChatGPT 4.
So far, so good, but I'll have more thoughts and observations here soon. Watch this space!
My favorite tools
High-level code completion
Currently neck and neck between ChatGPT 4 and Anthropic's Claude 3 Opus. Stay tuned for more thoughts.
Autocomplete / code completion
Codeium!
AI-assisted video editing
Kapwing AI
That's all for this episode! If you liked this content or found it helpful in any way, please pass it on to someone you know who could benefit. |
|
Write an article about "ChatGPT4 and Codeium are still my favorite dev assistant stack" | As of October 10th, 2023, ChatGPT4 and Codeium are all I need to make excellent progress and have fun doing it.
As of October 10th, 2023, the Generative AI hype cycle is still in full-swing and there are more startups with their own developer-focused AI-assisted coding tools than ever before. Here's why
I'm still perfectly content with ChatGPT4 (with a Plus subscription for $20 per month) and Codeium, which I've reviewed here for code completion.
They are available everywhere
ChatGPT4 can be opened from a browser anywhere, even on internet-connected machines I don't own: chat.openai.com is part of my muscle memory now, and once I log in, my entire
conversational history is available to me. Now that ChatGPT4 is available on Android, it's truly with me wherever I go.
They are low-friction
Now that ChatGPT4 is on my phone, I can start up a new conversation when I'm walking and away from my desk.
Some days, after working all day and winding down for sleep, I'll still have a couple of exciting
creative threads I don't want to miss out on, so I'll quickly jot or speak a paragraph of context into a new GPT4 chat thread to get it whirring away on the idea.
I can either tune it by giving it more
feedback or just pass out and pick up the conversation the next day.
I don't always stick to stock ChatGPT4 form-factors, however. Charmbracelet's mods wrapper is the highest quality and most delightful tool I've found for working with
GPT4 in your unix pipes or just asking quick questions in your terminal such as, "Remind me of the psql command to connect to my Postgres host".
Being able to pipe your entire Python file to mods and ask it
to help you find the bug is truly accelerating.
Codeium works like a dream once you get it installed. I tend to use Neovim by preference but also work with VSCode - once you're over the initial installation and auth hurdles, it "just works".
Most everything else I've tried doesn't work
No disrespect to these tools or the teams behind them. I do believe the potential is there and that many of them will become very successful in due time once the initial kinks are worked out.
But I've spent a great deal of time experimenting with these across various form factors: an Ubuntu desktop, my daily driver Linux laptop, a standard MacBook pro, and the reality is that the code or tests
or suggestions they output are often off the mark.
ChatGPT4 extends my capabilities with minimal fuss
Since ChatGPT4 is available in the browser, I can access it from any machine with an internet connection, even if I'm traveling.
Since it's now also available as an Android app, I can also reference past
conversations and start new ones on my phone.
The Android app has a long way to go until it's perfect, yet it does already support speech to text, making for the lowest possible friction entrypoint to a new
app idea, architectural discussion or line of inquiry to help cement my understanding of a topic I'm learning more about.
They are complementary
ChatGPT4 excels at having long-winded discussions with me about the many ways I might implement desired functionality in one of my applications or side projects.
It often suggests things that I would have missed
on my first pass, such as the fact that Vercel has deployment hooks I can take advantage of, and it's especially useful once the project is underway.
I can say things like:
I'm changing the data model yet again now that I understand how I want this UX to work - drop these fields from the posts table, add these to the images table and re-generate the base SQL migrations I run to scaffold the app.
I think of and treat ChatGPT4 as a senior level technical peer.
I do sometimes ask it to generate code for me as a starting point for a new component, or to explain a TypeScript error that is baking my noodle, but its main value is in being that intelligent always-available
coding partner I can talk through issues with.
Meanwhile, Codeium runs in my actual IDE and is one of the best tools I've found at code completion - it does a better job than just about anything else I've evaluated at grokking the surrounding context, and its suggestions are often scarily spot-on, which means that it saves me a couple of seconds here and there: closing HTML tags for me, finishing up that convenience JavaScript function I'm writing, even completing
my thought as I'm filling in a README.
That's the other key feature of Codeium that makes it such a winner for me - it's with me everywhere and it can suggest completions in any context ranging from prose, to TOML, to Python,
to TypeScript, to Go, to Dockerfiles, to YAML, and on and on.
With GPT4 as my coding buddy who has the memory of an elephant and who can deconstruct even the nastiest stack-traces or log dumps on command, and Codeium helping me to save little bits of time here and there, but constantly,
I have settled on a workflow that accelerates me and, perhaps more importantly, keeps the work fun.
Looking forward and what I'm still missing
I have no doubt that the current generation of developer-focused AI tools are going to continue improving at a rapid pace.
ChatGPT itself has seen nothing but enhancement at breakneck speed since I started using it and I haven't
even gotten my hands on its multi-modal (vision, hearing, etc) capabilities yet.
However, even with excellent wrappers such as mods, which I've mentioned above, what I find myself missing is the ability for ChatGPT4 to read and
see my entire codebase when I'm working on a given project.
The quality of its answers is occasionally hobbled by its inability to load my entire codebase into its context, which leads to it generating more generic sample code than it really needs to.
I'm confident that with additional work and a little time, it won't be long until ChatGPT4 or one of its competitors is able to look across my whole project and tell me exactly why my current Jest test is choking. I'm able to get that information out of it now, but it just takes a good bit of careful prompting and more copy/paste than I would ideally like.
What I really want is the helpful daemon looking over my shoulder, who is smart enough to know
when to raise its hand to point out something that's going to cause the build to fail, and when to keep quiet because even if it knows better that's just my personal coding style preference so better to leave it alone.
We're not quite there yet, but all of the base ingredients to create this experience are already here.
Indeed, many different companies both large and small are sprinting full-tilt toward this experience, as I've written about recently, but there's still quite a way to go until these tools present uniformly smooth experiences to their end users:
GitHub Copilot review
The top bugs all AI developer tools have right now
Codeium review
CodiumAI PR agent for eased GitHub maintenance
Can ChatGPT4 help me complete side projects more quickly? |
|
Write an article about "Opengraph dynamic social images" | ${process.env.NEXT_PUBLIC_SITE_URL}/api/og}
alt="Zachary Proser's default opengraph image"
/>
What is opengraph?
Opengraph is a standard for social media image formats. It's the "card" that is rendered whenever you or someone else shares a URL to your site on social media:
It's considered a good idea to have an opengraph image associated with each of your posts because it's a bit of nice eye candy that theoretically helps improve your click-through rate.
A high quality opengraph image can help
make your site look more professional.
Implementing my desired functionality
This took me a bit to wrap my head around.
The examples Vercel provides were helpful and high quality as usual (they even have a helpful opengraph playground), but I wish there had been more of them.
It took me a while to figure out how to implement the exact
workflow I wanted:
I add a "hero" image to each of my posts which renders on my blog's index page. I wanted my opengraph image for a post to contain the post's title as well as its hero image
I wanted a fallback image to render for my home or index pages - and in case the individual post's image couldn't be rendered for whatever reason
In this way, I could have an attractive opengraph image for each post shared online, while having a sane default image that does a good job of promoting my site in case of any issues.
In general, I'm pretty happy with how the final result turned out, but knowing myself I'll likely have additional tweaks to make in the future to improve it further.
If you look closely (right click the image and open it in a new tab), you can see that my image has two linear gradients, one for the green background which transitions between greens from top to bottom, and one for blue which transitions left to right.
In addition, each band has a semi-transparent background image - giving a brushed aluminum effect to the top and bottom green bands and a striped paper effect to the center blue card where the title and hero image are rendered.
I was able to
pull this off due to the fact that Vercel's '@vercel/og' package allows you to use Tailwind CSS in combination with inline styles.
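To make that concrete, here's a rough sketch of the pattern inside an ImageResponse element - not my exact markup; the tw classes, colors and texture URL are placeholders, and the title variable is assumed to be in scope:
javascript
// Sketch only: Tailwind (via the tw prop) handles layout, while an inline
// style layers a CSS linear-gradient over a texture image.
// The colors and texture URL below are hypothetical placeholders.
<div
  tw="flex h-full w-full flex-col items-center justify-center"
  style={{
    backgroundImage:
      'linear-gradient(to bottom, #166534, #22c55e), url("https://example.com/brushed-aluminum.webp")',
  }}
>
  <h1 tw="text-white text-5xl">{title}</h1>
</div>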
Per-post images plus a fallback image for home and index pages
This is my fallback image, and it is being rendered by hitting the local /api/og endpoint.
Its src parameter is ${process.env.NEXT_PUBLIC_SITE_URL}/api/og, which computes to "https://zackproser.com/api/og".
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og`}
  alt="Zachary Proser's default opengraph image"
/>
Example dynamically rendered opengraph images for posts:
Blog post with dynamic title and hero image
javascript
src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og
  ?title=Retrieval Augmented Generation (RAG)
  &image=/_next/static/media/retrieval-augmented-generation.2337c1a1.webp`}
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=Retrieval Augmented Generation (RAG)&image=/_next/static/media/retrieval-augmented-generation.2337c1a1.webp`}
  alt="Retrieval Augmented Generation post"
/>
Another blog post with dynamic title and hero image
javascript
src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og
  ?title=AI-powered and built with...JavaScript?
  &image=/_next/static/media/javascript-ai.71499014.webp`}
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=AI-powered and built with...JavaScript?&image=/_next/static/media/javascript-ai.71499014.webp`}
  alt="AI-powered and built with JavaScript post"
/>
Blog post with dynamic title but fallback image
Having gone through this exercise, I would highly recommend implementing a fallback image that renders in two cases:
1. If the page or post shared did not have a hero image associated with it (because it's your home page, for example)
2. Some error was encountered in rendering the hero image
Here's an example opengraph image where the title was rendered dynamically, but the fallback image was used:
javascript
src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og
  ?title=This is still a dynamically generated title`}
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=This is still a dynamically generated title`}
  alt="This is still a dynamically generated title"
/>
Understanding the flow of Vercel's '@vercel/og' package and Next.js
This is a flowchart of how the sequence works:
In essence, you're creating an API route in your Next.js site that can read two query parameters from requests it receives:
1. The title of the post to generate an image for
2. The hero image to include in the dynamic image
and use these values to render the final @vercel/og ImageResponse.
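Here's a minimal sketch of that flow - not my actual route (that follows below), just the skeleton, assuming the Next.js pages router and the Edge runtime that @vercel/og requires:
javascript
import { ImageResponse } from '@vercel/og';

// Sketch only: newer Next.js versions accept runtime: 'edge';
// older ones used 'experimental-edge'
export const config = { runtime: 'edge' };

export default async function handler(request) {
  const { searchParams } = new URL(request.url);
  const title = searchParams.get('title') ?? 'Portfolio, blog, videos and open-source projects';
  const image = searchParams.get('image'); // null when the post has no hero image

  return new ImageResponse(
    (
      <div tw="flex h-full w-full flex-col items-center justify-center bg-zinc-900">
        <h1 tw="text-white text-5xl">{title}</h1>
        {image && (
          <img src={`${process.env.NEXT_PUBLIC_SITE_URL}${image}`} alt="" width="500" />
        )}
      </div>
    ),
    { width: 1200, height: 630 },
  );
}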
Honestly, it was a huge pain in the ass to get it all working the way I wanted, but it would be far worse without this library and Next.js integration.
In exchange for the semi-tedious experience of building out your custom OG image you get tremendous flexibility within certain hard limitations, which you can read about here.
Here's my current /api/og route code, which still needs to be improved and cleaned up, but I'm sharing it in case it helps anyone else trying to figure out this exact same flow.
This entire site is open-source and available at github.com/zackproser/portfolio
javascript
export const config = {
};
const { searchParams } = new URL(request.url);
console.log(`og API route searchParams %o:`, searchParams)
const hasTitle = searchParams.has('title');
const title = hasTitle ? searchParams.get('title') : 'Portfolio, blog, videos and open-source projects';
// This is horrific - need to figure out and fix this
const hasImage = searchParams.has('image') || searchParams.get('amp;image')
// This is equally horrific - need to figure out and fix this for good
const image = hasImage ? (searchParams.get('image') || searchParams.get('amp;image')) : undefined;
console.log(`og API route hasImage: ${hasImage}, image: ${image}`)
// My profile image is stored in /public so that we don't need to rely on an external host like GitHub
// that might go down
const profileImageFetchURL = new URL('/public/zack.webp', import.meta.url);

const profileImageData = await fetch(profileImageFetchURL).then(
  (res) => res.arrayBuffer(),
);

// This is the fallback image I use if the current post doesn't have an image for whatever reason (like it's the homepage)
const fallBackImageURL = new URL('/public/zack-proser-dev-advocate.webp', import.meta.url);

// This is the URL to the image on my site
const ultimateURL = hasImage ? new URL(`${process.env.NEXT_PUBLIC_SITE_URL}${image}`) : fallBackImageURL

const postImageData = await fetch(ultimateURL).then(
  (res) => res.arrayBuffer(),
).catch((err) => {
  console.log(`og API route err: ${err}`);
});
return new ImageResponse(
Zachary Proser
Staff Developer Advocate @Pinecone.io
linear-gradient(to right, rgba(31, 97, 141, 0.8), rgba(15, 23, 42, 0.8)), url(https://zackproser.com/subtle-stripes.webp)
}}
>
{title}
<div tw="flex w-64 h-85 rounded overflow-hidden mt-4">
<img
src={postImageData}
alt="Post Image"
className="w-full h-full object-cover"
/>
</div>
</div>
</div>
<div tw="flex flex-col items-center">
<h1
tw="text-white text-3xl pb-2"
>
zackproser.com
</h1>
</div>
</div>
)
}
Here's my ArticleLayout.jsx component, which renders the <meta name="og:image" content={ogURL} /> tag into the head of each post to provide the URL that social media sites will call when rendering their cards:
javascript
function ArrowLeftIcon(props) {
return (
)
}
export function ArticleLayout({
children,
metadata,
isRssFeed = false,
previousPathname,
}) {
let router = useRouter()
if (isRssFeed) {
return children
}
const sanitizedTitle = encodeURIComponent(metadata.title.replace(/'/g, ''));
// opengraph URL that gets rendered into the HTML, but is really a URL to call our backend opengraph dynamic image generating API endpoint
let ogURL = `${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=${sanitizedTitle}`
// If the post includes an image, append it as a query param to the final opengraph endpoint
if (metadata.image && metadata.image.src) {
ogURL = ogURL + `&image=${metadata.image.src}`
}
console.log(`ArticleLayout ogURL: ${ogURL}`);
let root = '/blog/'
if (metadata?.type == 'video') {
root = '/videos/'
}
const builtURL = `${process.env.NEXT_PUBLIC_SITE_URL}${root}${metadata.slug ?? null}`
const postURL = new URL(builtURL)
return (
<>
  <Head>
    <title>{`${metadata.title} - Zachary Proser`}</title>
    <meta name="og:title" content={metadata.title} />
    <meta name="og:image" content={ogURL} />
<meta name="twitter:card" content="summary_large_image" />
<meta property="twitter:domain" content="zackproser.com" />
<meta property="twitter:url" content={postURL} />
<meta name="twitter:title" content={metadata.title} />
<meta name="twitter:description" content={metadata.description} />
<meta name="twitter:image" content={ogURL} />
</Head>
<Container className="mt-16 lg:mt-32">
<div className="xl:relative">
<div className="mx-auto max-w-2xl">
{previousPathname && (
<button
type="button"
onClick={() => router.back()}
aria-label="Go back to articles"
className="group mb-8 flex h-10 w-10 items-center justify-center rounded-full bg-white shadow-md shadow-zinc-800/5 ring-1 ring-zinc-900/5 transition dark:border dark:border-zinc-700/50 dark:bg-zinc-800 dark:ring-0 dark:ring-white/10 dark:hover:border-zinc-700 dark:hover:ring-white/20 lg:absolute lg:-left-5 lg:-mt-2 lg:mb-0 xl:-top-1.5 xl:left-0 xl:mt-0"
>
<ArrowLeftIcon className="h-4 w-4 stroke-zinc-500 transition group-hover:stroke-zinc-700 dark:stroke-zinc-500 dark:group-hover:stroke-zinc-400" />
</button>
)}
<article>
<header className="flex flex-col">
<h1 className="mt-6 text-4xl font-bold tracking-tight text-zinc-800 dark:text-zinc-100 sm:text-5xl">
{metadata.title}
</h1>
<time
dateTime={metadata.date}
className="order-first flex items-center text-base text-zinc-400 dark:text-zinc-500"
>
<span className="h-4 w-0.5 rounded-full bg-zinc-200 dark:bg-zinc-500" />
<span className="ml-3">{formatDate(metadata.date)}</span>
</time>
</header>
<Prose className="mt-8">{children}</Prose>
</article>
<Newsletter />
<FollowButtons />
</div>
</div>
</Container>
    </>
  )
}
Thanks for reading
If you enjoyed this post or found it helpful in any way, do me a favor and share the URL somewhere on social media so that you can see my opengraph image in action 🙌😁. |
|
Write an article about "Wash three walls with one bucket" | Without much more work, you can ensure your side projects are not only expanding your knowledge, but also expanding your portfolio of hire-able skills.
Building side projects is my favorite way to keep my skills sharp and invest in my knowledge portfolio. But this post is about more than picking good side projects that
will stretch your current knowledge and help you stay up on different modes of development. It's also about creating a virtual cycle that goes from:
Idea
Building in public
Sharing progress and thoughts
Incorporating works into your portfolio
Releasing and repeating
It's about creating leverage even while learning a new technology or ramping up on a new paradigm and always keeping yourself in a position to seek out your next opportunity.
Having skills is not sufficient
You also need to display those skills, display some level of social proof around those skills, and, most importantly, make it easy for people to find them. It's one thing to actually complete good and interesting work, but does it really exist if people can't find it? You already did the work and learning - now, be find-able for it.
In order to best capture the value generated by your learning, it helps to run your own tech blog as I recently wrote about.
Let's try a real world example to make this advice concrete. Last year, I found myself with the following desires while concentrating on Golang and wanting to start a new side project:
I wanted to deepen my understanding of how size measurements for various data types work
I wanted a tool that could help me learn the comparative sizes of different pieces of data while I worked
I wanted to practice Golang
I wanted to practice setting up CI/CD on GitHub
How could I design a side project that would incorporate all of these threads?
Give yourself good homework
Simple enough: I would build a Golang CLI that helps you understand visually the relative size of things in bits and bytes.
I would build it as an open-source project on GitHub, and write excellent unit tests that I could wire up to run via GitHub actions on every pull request and branch push. This would not
only give me valuable practice in setting up CI/CD for another project, but because my tests would be run automatically on branch pushes and opened pull requests, maintenance for the project
would become much easier:
Even folks who had never used the tool before would be able to understand in a few minutes if their pull request failed any tests or not.
I would keep mental or written notes about what I learned while working on this project, what I would improve and what I might have done differently. These are seeds for the
blog post I would ultimately write about the project.
Finally I would add the project to my /projects page as a means of displaying these skills.
You never know what's going to be a hit
In art as in building knowledge portfolios, how you feel about the work, the subject matter and your ultimate solutions may be very different from how folks who are looking to hire or work with people like you
may look at them.
This means a couple of things: it's possible for you to think a project is stupid or simple or gross, and yet have the market validate a strong desire for what you're doing.
It's possible for something you considered
trivial, such as setting up CI/CD for this particular language in this way for the 195th time, to be exactly what your next client is looking to hire you for.
It's possible for something you consider unfinished, unpolished or not very good to be the hook that sufficiently impresses someone looking for folks who know the tech stack or specific technology you're working with.
It's possible for folks to hire you for something that deep down you no longer feel particularly fired up about - something stable or boring or "old hat" that's valuable regardless, which you end up doing for longer to
get some cash, make a new connection or establish a new client relationship.
This means it's also unlikely to be a great use of your time to obsess endlessly about one particular piece of your project - in the end, it could be that nobody in the world cares or shares your vision about how the
CLI or the graphics rendering engine works and is unique, but that your custom build system you hacked up in Bash and Docker is potentially transformative for someone else's business if applied by a trusted partner or consultant.
Release your work and then let go of it
Releasing means pushing the big red scary button to make something public: whether that means merging the pull request to put your post up, sharing it to LinkedIn or elsewhere, switching your GitHub repository from private or public,
or making the video or giving the talk.
Letting go of it means something different.
I've noticed that I tend to do well with the releasing part, which makes my work public and available to the world, but then I tend to spend too much time checking stats, analytics, click-through
rates, etc once the work has been out for a while. I want to change this habit up, because I'd rather spend that time and energy learning or working on the next project.
Depending on who you are and where you are in your creative journey, you may find different elements of this phase difficult or easy.
My recommendation is to actually publish your work, even if it's mostly there and not 100% polished.
You never
know what feedback you'll get or connection you'll make by simply sharing your work and allowing it to be out in the world.
Then, I recommend, while noting that I'm still working on this piece myself, that you let it go so that you are clear to begin work on the next thing.
Wash three walls with one bucket
The excitement to learn and expand your skillset draws you forward into the next project.
The next project gives you ample opportunity to encounter issues, problems, bugs, your own knowledge gaps and broken workflows.
These are valuable and part of the process; they are not indications of failure.
Getting close enough to the original goal for your project allows you to move into the polishing phase and to consider your activities retrospectively.
What worked and what didn't?
Why?
What did I learn?
Writing about or making videos about the project allows you to get things clear enough in your head to tell a story - which further draws in your potential audience and solidifies your expertise as a developer.
Your finished writing and other artifacts, when shared with the world, may continue to drive traffic to your site and projects for years to come, giving you leads and the opportunity to apply the skills you've been
honing in a professional or community context.
Create virtuous cycles that draw you forward toward your goals, and wash three walls with one bucket.
Where did this phrase come from?
"Kill two birds with one stone" is a popular catchphrase meaning to solve two problems with one effort. But it's a bit on the nose and it endorses avicide, which I'm generally against.
One of my professors once related the story of one of her professors who was a Catholic monk and an expert
in the Latin language.
He would say "Bullshit!", it's "wash two walls with one bucket" when asked for the equivalent to "kill two birds with one stone" in Latin.
I liked that better so I started using it where previously I would have
suggested wasting a pair of birds.
For this piece, I decided the key idea was to pack in dense learning opportunities across channels as part of your usual habit of exploring the space and practicing skills via side projects.
So, I decided to add another wall. |
|
Write an article about "The Pain and Poetry of Python" | export const href = "https://pinecone.io/blog/pain-poetry-python"
This was the fourth article I published while working at Pinecone:
Read article |
|
Write an article about "A Blueprint for Modern API" | Introduction
Pageripper is a commercial API that extracts data from webpages, even if they're rendered with Javascript.
In this post, I'll detail the Continuous Integration and Continuous Delivery (CI/CD) automations I've configured via GitHub Actions for my Pageripper project, explain how they work and why they make working on Pageripper delightful (and fast).
Why care about developer experience?
Working on well-automated repositories is delightful.
Focusing on the logic and UX of my changes allows me to do my best work, while the repository handles the tedium of running tests and publishing releases.
At Gruntwork.io, we published git-xargs, a tool for multiplexing changes across many repositories simultaneously.
Working on this project was a delight, because we spent the time to implement an excellent CI/CD pipeline that handled running tests and publishing releases.
As a result, reviewing and merging pull requests, adding new features and fixing bugs was significantly snappier, and felt easier to do.
So, why should you care about developer experience? Let's consider what happens when it's a mess...
What happens when your developer experience sucks
I've seen work slow to a crawl because the repositories were a mess: long-running tests that took over 45 minutes to complete a run and that were flaky.
Even well-intentioned and experienced developers experience a
slow-down effect when dealing with repositories that lack CI/CD or have problematic, flaky builds and ultimately untrustable pipelines.
Taking the time to correctly set up your repositories up front is a case of slowing down to go faster. Ultimately, it's a matter of project velocity.
Developer time is limited and expensive, so making sure the path
is clear for the production line is critical to success.
What are CI/CD automations?
Continuous Integration is about constantly merging into your project verified and tested units of incremental value.
You add a new feature, test it locally and then push it up on a branch and open a pull request.
Without needing to do anything else, the automation workflows kick in and run the project's tests for you, verifying you haven't broken anything.
If the tests pass, you merge them in, which prompts more automation to deploy your latest code to production.
In this way, developers get to focus on logic, features, UX and doing the right thing from a code perspective.
The pipeline instruments the guardrails that everyone needs in order to move very quickly.
And that's what this is all about at the end of the day. Mature pipelines allow you to move faster. Safety begets speed.
Pageripper's automations
Let's take a look at the workflows I've configured for Pageripper.
On pull request
Jest unit tests are run
Tests run on every pull request, and tests run quickly. Unit tests are defined in jest.
Developers get feedback on their changes in a minute or less, tightening the overall iteration cycle.
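To give a sense of what runs on every pull request, here's a hypothetical example of the kind of fast Jest unit test involved - this is illustrative only, and the extractLinks module is an assumption, not actual Pageripper code:
javascript
// Hypothetical example of a fast-running Jest unit test - not taken from
// the Pageripper codebase, just the style of test that runs on each PR.
const { extractLinks } = require('../src/parser'); // hypothetical module

describe('extractLinks', () => {
  it('returns all anchor hrefs found in an HTML string', () => {
    const html = '<a href="https://example.com">Example</a>';
    expect(extractLinks(html)).toEqual(['https://example.com']);
  });
});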
npm build
It's possible for your unit tests to pass but your application build to still fail due to any number of things: from dependency issues to incorrect configurations and more.
For that reason, whenever tests are run, the workflow also runs an npm build to ensure the application builds successfully.
docker build
The Pageripper API is Dockerized because it's running on AWS Elastic Container Service (ECS).
Because Pageripper uses Puppeteer, which uses Chromium or an installation of the Chrome browser, building the Docker image is a bit involved and
also takes a while.
I want to know immediately if the build is broken, so if and only if the tests all pass, then a test docker build is done via GitHub actions as well.
OpenAPI spec validation
For consistency and the many downstream benefits (documentation and SDK generation, for example), I maintain an OpenAPI spec for Pageripper.
On every pull request, this spec is validated to ensure no changes or typos broke anything.
This spec is used for a couple of things:
Generating the Swagger UI for the API documentation that is hosted on GitHub pages and integrated with the repository
Generating the test requests and the documentation and examples on RapidAPI, where Pageripper is listed
Running dredd to validate that the API correctly implements the spec
Pulumi preview
Pageripper uses Pulumi and Infrastructure as Code (IaC) to manage not just the packaging of the application into a Docker container, but the orchestration of all other supporting infrastructure and AWS resources that comprise a functioning production API service.
This means that on every pull request we can run pulumi preview to get a delta of the changes that Pulumi will make to our AWS account on the next deployment.
To further reduce friction, I've configured the Pulumi GitHub application to run on my repository, so that the output of pulumi preview can be added directly to my pull request as a comment:
On merge to main
OpenAPI spec is automatically published
A workflow converts the latest OpenAPI spec into a Swagger UI site that details the various API endpoints, and expected request and response format:
Pulumi deployment to AWS
The latest changes are deployed to AWS via the pulumi update command. This means that what's at the HEAD of the repository's main branch is what's in production at any given time.
This also means that developers never need to:
Worry about maintaining the credentials for deployments themselves
Worry about maintaining the deployment pipeline themselves via scripts
Worry about their team members being able to follow the same deployment process
Worry about scheduling deployments for certain days of the week - they can deploy multiple times a day, with confidence
Thanks for reading
If you're interested in automating more of your API development lifecycle, have a look at the workflows in the Pageripper repository.
And if you need help configuring CI/CD for the ultimate velocity and developer productivity, feel free to reach out! |
|
Write an article about "How to generate images with AI" | You can generate images using AI for free online through a variety of methods
I've been generating images using models such as StableDiffusion and DALLE for blog posts for months now. I can quickly produce
high-quality images that help tell my story.
This blog post will give you a lay of the land in what's currently possible, and point you to some resources for generating AI images whether you are as
technical as a developer or not - and whether you'd prefer to produce images via a simple UI or programmatically via an API.
In addition, I'll give you the minimum you need to understand about prompts and negative prompts and how to use them effectively.
DALLE-3 via Bing Create
Overall, this is probably the best option right now if you want high quality images without needing to pay.
You will need a Microsoft live account (which is free),
but otherwise you just log into bing.com/create and you write your prompt directly into the text input at the top:
This is using OpenAI's DALLE-3 model under the hood, which is noticeably better at converting the specific details and instructions in natural human language into an image that
resembles what the user intended.
I have been generally pretty impressed with its outputs, using them for recent blog post hero images as well as my own 404 page.
For example, I used Bing and DALLE-3 to generate the hero image for this post in particular via the following prompt:
Neon punk style. A close up of a hand holding a box and waving a magic wand over it. From the box, many different polaroid photos of different pixel art scenes are flying upward and outward.
Bing currently gives you 25 "boosts" per day, which appears to mean 25 priority image generation requests.
After you use them up, your requests might slow down as they fall toward the back of the queue.
Using DALLE-3 in this way also supports specifying the style of art you want generated upfront, such as "Pixel art style. Clowns juggling in a park".
Discord bots
Discord is the easiest and lowest friction way to get started generating images via StableDiffusion right now, especially if you're unwilling to pay for anything.
Stable Foundation is a popular Discord channel that hosts several different instances of the bot that you can ask for image generations via chat prompts.
Here's the direct link to the Stable Foundation discord invite page
This is a very handy tool if you don't need a ton of images or just want the occasional AI generated image with a minimum of setup or fuss required.
You can run discord in your browser, which makes things even
simpler as it requires no downloads.
There are some important caveats, though. Once you've generated a couple of images, you'll eventually ask for another and be told to chill out for a bit.
This is their Discord channel's way of rate-limiting you so that you don't cost them too much money and overwhelm
the service so that other users can't generate images.
And this is fair enough - they're providing you with free image generation services, after all.
Like other free online services, they also will not allow you to generate content that is considered not safe for work, or adult.
Also fair enough - it's their house their rules, but occasionally you'll run into slight bugs with the NSFW content detector that will incorrectly flag your innocent image prompt as resulting in NSFW content even when you didn't want it to,
which can lead to failed generations and more wasted time. If you want total control over your generations, you need to go local and use a tool like AUTOMATIC111, mentioned below.
Finally, because it's a Discord channel that anyone can join, when you ask the bot for your images and the bot eventually returns them, everyone else in the channel can see your requests and your generated images and could download them if they
wanted to.
If you are working on a top-secret project or you just don't want other people knowing what you're up to, you'll want to look into AUTOMATIC111 or other options for running image generation models locally.
Replicate.com
Replicate is an outstanding resource for technical and non-technical folks alike.
It's one of my favorite options and I use both their UI for quick image generations when I'm writing content, and I use their
REST API in my Panthalia project which allows me to start blog posts by talking into my phone and request images via StableDiffusion XL.
Replicate.com hosts popular machine learning and AI models and makes them available through
a simple UI that you can click around in and type image requests into, as well as a REST API for developers to integrate into their applications.
Replicate.com is one of those "totally obvious in retrospect" ideas: with the explosion of useful machine learning models, providing a uniform interface to running those models
easily was pretty brilliant.
To use Replicate go to replicate.com and click the Explore button to see all the models you can use. You'll find more than just image generation models,
but for the sake of this tutorial, look for StableDiffusionXL.
Once you're on the StableDiffusionXL model page, you can enter the prompt for the image you want to generate. Here's an example of a simple prompt that works well:
Pixel art style. Large aquarium full of colorful fish, algae and aquarium decorations. Toy rocks.
If you're a developer and you don't feel like wrangling Python models into microservices or figuring out how to properly Dockerize StableDiffusion, you can take advantage of Replicate's REST API, which is truly a delight, from experience:
I have generated a ton of images via Replicate every month for the past several months and the most they've charged me is $2 and some change. Highly recommended.
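For example, here's a rough sketch of calling StableDiffusionXL through Replicate's Node.js client - the model version hash is a placeholder you'd copy from the model's page on replicate.com, and you'll need a REPLICATE_API_TOKEN set in your environment:
javascript
// Rough sketch of calling StableDiffusionXL via Replicate's Node.js client.
// The model version hash is a placeholder - grab the current one from the
// model's page on replicate.com.
import Replicate from 'replicate';

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

const output = await replicate.run(
  'stability-ai/sdxl:<version-hash>', // placeholder version
  {
    input: {
      prompt: 'Pixel art style. Large aquarium full of colorful fish, algae and aquarium decorations.',
    },
  },
);

console.log(output); // URL(s) to the generated image(s)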
AUTOMATIC111
This open-source option requires that you be comfortable with GitHub and git at a minimum, but it's very powerful because it allows you to run StableDiffusion, as well as checkpoint models based on StableDiffusion,
completely locally.
As in, once you have this up and running locally using the provided script, you visit the UI on localhost and you can then pull your ethernet cord out of your laptop, turn off your WiFi
card's radio and still generate images via natural language prompts locally.
There are plenty of reasons why you might want to generate images completely locally without sending data off your machine, which we won't get into right now.
AUTOMATIC111 is an open-source project which means that it's going to have some bugs, but there's also a community of users who are actively engaging with the project, developers who are fixing those bugs regularly,
and plenty of GitHub issues and discussions where you can find fellow users posting workarounds and fixes for common problems.
The other major benefit of using this tool is that it's completely free.
If your use case is either tricky to capture the perfect image for, or if it necessitates you generating tons of images over and over again,
it may be worth the time investment to get this running locally and learn how to use it.
AUTOMATIC111 is also powerful because it allows you to use LoRa and LyCORIS models to essentially fine-tune whichever base model you're using to further customize your final image outputs.
LoRA, short for Low-Rank Adaptation, models are smaller versions of Stable Diffusion models designed to apply minor alterations to standard checkpoint models.
For example, there might be a LoRa model for Pikachu,
making it easier to generate scenes where Pikachu is performing certain actions.
The acronym LyCORIS stands for "Lora beYond COnventional methods, Other Rank adaptation Implementations for Stable diffusion."
Unlike LoRA models, LyCORIS encompasses a variety of fine-tuning methods.
It's a project dedicated to exploring diverse ways of parameter-efficient fine-tuning on Stable Diffusion via different algorithm implementations.
If you want to go deeper into understanding the current state of AI image generation via natural language, as well as checkpoint models, LoRa and LyCORIS models and similar techniques for getting specific outputs,
AUTOMATIC111 is the way to go.
If you are working with AUTOMATIC111, one of the more popular websites for finding checkpoint, LoRa and LyCORIS models is civit.ai which hosts a vast array of both SFW and NSFW models contributed by the community.
Prompt basics
Prompting is how you ask the AI model for an image in natural human language, like "Pixel art style. Aquarium full of colorful fish, plants and aquarium decorations".
Notice in the above examples that I tend to start by describing the style of art that I want at the beginning of the prompt, such as "Pixel art style" or "Neon punk style".
Some folks use a specific artist or photographer's name if they want the resulting image to mirror that style, which will work if the model has been trained on that artist.
Sometimes, results you'll get back from a given prompt are pretty close to what you want, but for one reason or another the image(s) will be slightly off.
You can actually re-run
generation with the same prompt and you'll get back slightly different images each time due to the random value inputs that are added by default on each run.
Sometimes, it's better to modify your prompt and try to describe the same scene or situation in simpler terms.
Adding emphasis in StableDiffusion image generation prompts
For StableDiffusion and StableDiffusionXL models in particular, there's a trick you can use when writing out your prompt to indicate that a particular phrase or feature is more important and should be given more "weight" during image generation.
Adding parentheses around a word or phrase increases its weight relative to other phrases in your prompt, such as:
Pixel art style. A ninja running across rooftops ((carrying a samurai sword)).
You can use this trick in both StableDiffusion and StableDiffusionXL models, and you can use (one), ((two)) or (((three))) levels of parentheses, according to my testing, to signify that something is more important.
Negative prompts
The negative prompt is your opportunity to "steer" the model away from certain features or characteristics you're getting in your generated images that you don't want.
If your prompt is generating images close to what you want, but you keep getting darkly lit scenes or extra hands or limbs, sometimes adding phrases like "dark", "dimly lit", "extra limbs" or "bad anatomy"
can help.
Why generate images with AI?
Neon punk style.
An android artist wearing a french beret, sitting in the greek thinker position, and staring at a half-finished canvas of an oil painting landscape.
In a french loft apartment with an open window revealing a beautiful cityscape.
My primary motivation for generating images with AI is that I write a ton of blog posts both in my free time and as part of my day job, and I want high-quality eye candy
to help attract readers to click and to keep them engaged with my content for longer.
I also find it to be an absolute blast to generate Neon Punk and Pixel art images to represent even complex scenarios I'm writing about - so it increases my overall enjoyment of the creative process itself.
I have visual arts skills and I used to make assets for my own posts or applications with Photoshop or Adobe Illustrator - but using natural language to describe what I want is about a thousand times faster and
certainly less involved.
I've gotten negative comments on Hacker News before (I know, it sounds unlikely, but hear me out) over my use of AI-generated images in my blog posts, but in fairness to those commenters who didn't feel the need
to use their real names in their handles, they woke up and started their day with a large warm bowl of Haterade.
I believe that the content I produce is more interesting overall because it features pretty images that help to tell my overall story. |
|
Write an article about "Pinecone AWS Reference Architecture Technical Walkthrough" | export const href = "https://pinecone.io/learn/aws-reference-architecture"
I built Pinecone's first AWS Reference Architecture using Pulumi.
This is the seventh article I wrote while working at Pinecone:
Read the article |
|
Write an article about "How I keep my shit together" | I've been working in intense tech startups for the past {RenderNumYearsExperience()} years. This is what keeps me healthy and somewhat sane.
The timeline below describes an ideal workday. Notice there's a good split between the needs of my body and mind and the needs of my employer. After 5 or 6PM, it's family time.
Beneath the timeline, I explain each phase and why it works for me.
Wake up early
I tend to be an early riser, but with age and additional responsibilities it's no longer a given that I'll spring out of bed at 5AM.
I set a smart wake alarm on my fitbit which attempts to rouse me when I'm
already in light sleep as close to my target alarm time as possible.
The more time I give myself in the morning for what is important to me, the better my whole day tends to go. For the past two jobs now I've used this time to read, sit in the sun, meditate, drink my coffee, and hack on stuff that I care about like my many side projects.
Get sunlight
This helps me feel alert and gets my circadian cycle on track.
Vipassana meditation
I sit with my eyes closed, noticing and labeling gently: inhaling, exhaling, thinking, hearing, feeling, hunger, pain, fear, thinking, etc.
Metta meditation
I generate feelings of loving kindness for myself, visualizing myself feeling safe, healthy, happy and living with ease.
This one I may do alongside a YouTube video.
Manoj Dias of Open has a great one.
Coffee and fun
Some days, I'll ship a personal blog post, finish adding a feature to one of my side projects, read a book, or work on something that is otherwise important to me.
First block of work and meetings
Depending on the day, I'll have more or less focus time or meetings. Sometimes I block out my focus time on my work calendar to help others be aware of what I'm up to and to keep myself focused.
I'll do open-source work, write blog posts, create videos, attend meetings, or even do performance analysis on systems and configure a bunch of alerting services to serve as an SRE in a pinch, in my current role as a staff developer advocate at Pinecone.io.
I work until noon or 1pm before stopping to break my fast.
Break my fast
I eat between the hours of noon and 8pm. This is the form of intermittent fasting that best works for me.
A few years ago, a blood panel showed some numbers indicating I was heading toward a metabolic syndrome I had no interest in acquiring, so I follow this protocol and eat mostly vegan but sometimes vegetarian (meaning I'll have cheese in very sparing amounts occasionally).
Sometimes I'll eat fish and sometimes I'll even eat chicken, but for the most part I eat vegan.
In about 3 months of doing this, an updated blood panel showed I had completely reversed my metabolic issues.
In general, I try to follow Michael Pollen's succinct advice: "Eat food.
Not too much.
Mostly plants".
Long walk
I've reviewed the daily habits of a slew of famous creatives from the past, from sober scientists to famously drug-using artists and every combination in between.
One thing that was common amongst most of them is that they took two or more longer walks during the day.
I try to do the same.
I find that walking is especially helpful if I've been stuck on something for a while or if I find myself arm-wrestling some code, repository or technology that won't cooperate the way I initially thought it should.
It's usually within the first 20 minutes of the walk that I realize what the issue is or at least come up with several fresh avenues of inquiry to attempt when I return, plus I get oxygenated and usually find myself in a better mood when I get back.
I carry my son in my arms as I walk, talking to him along the way.
Ice bath
This is from Wim Hof, whose breathing exercises I also found helpful.
I started doing cold showers every morning and tolerated them well and found they gave me a surge in energy and focused attention, so I ended up incrementally stepping it up toward regular ice baths.
First I bought an inflatable ice bath off Amazon and would occasionally go to the store and pick up 8 bags of ice and dump them into a tub full of hose water.
I'd get into the bath for 20 minutes, use the same bluetooth mask I use for sleep and play a 20 minute yoga nidra recording.
The more I did this, the more I found that ice baths were for me.
They not only boosted my energy and focus but also quieted my "monkey mind" as effectively as a deep meditative state that normally takes me more than 20 minutes to reach.
According to Andrew Huberman, the Stanford professor of neurobiology and ophthalmology who runs his own popular podcast, cold exposure of this kind can increase your available dopamine levels by 2x, which is similar to what cocaine would do, but for 6 continuous hours.
I've never tried cocaine so I can't confirm this from
experience, but I can say that when I get out of a 20 minute ice bath I'm less mentally scattered and I feel like I have plenty of energy to knock out the remainder of my workday.
Now, I produce my own ice with a small ice machine and silicon molds I fill with hose water and then transfer into a small ice chest.
Long walk at end of day and family time
Usually, working remotely allows me to be present with my family and to end work for the day between 4 and 6pm depending on what's going on.
We like to take a long walk together before returning to settle in for the night.
Sleep
I try to get to sleep around 11pm but that tends to be aspirational.
I use the manta bluetooth sleep mask because it helps me stay asleep longer in the morning as I've found I'm very sensitive to any light.
I connect it to Spotify and play a deep sleep playlist without ads that is 16 hours long.
I turn on do not disturb on my phone.
Sometimes if my mind is still active I'll do breath counting or other breathing exercises to slow down. |
|
Write an article about "Building data-driven pages with Next.js" | ;
I've begun experimenting with building some of my blog posts - especially those that are heavy on data, tables, comparisons and multi-dimensional considerations - using scripts, JSON and home-brewed schemas.
Table of contents
What are data-driven pages?
I'm using this phrase to describe pages or experiences served up from your Next.js project that you compile rather than edit.
Whereas you might edit a static blog post to add new information, with a data-driven page you would update the data-source and then run the associated build process, resulting
in a web page you serve to your users.
Why build data driven pages?
In short, data driven pages make it easier to maintain richer and more information-dense experiences on the web.
Here's a couple of reasons I like this pattern:
There is more upfront work to do than just writing a new MDX file for your next post, but once the build script is stable, it's much quicker to iterate (Boyd's Law)
By iterating on the core data model expressed in JSON, you can quickly add rich new features and visualizations to the page such as additional tables and charts
If you have multiple subpages that all follow a similar pattern, such as side by side product review, running a script one time is a lot faster than making updates across multiple files
You can hook your build scripts either into npm's prebuild hook, which runs before npm run build is executed, or to the pnpm build target, so that your data driven pages are freshly rebuilt with no additional effort on your part
This pattern is a much more sane way to handle data that changes frequently or a set of data that has new members frequently.
In other words, if you constantly have to add Product or Review X to your site, would you rather manually re-create HTML sections by hand or add a new object to your JSON?
You can drive more than one experience from a single data source: think a landing page backed by several detail pages for products, reviews, job postings, etc.
How it works
The data
I define my data as JSON and store it in the root of my project in a new folder.
For example, here's an object that defines GitHub's Copilot AI-assisted developer tool for my giant AI-assisted dev tool comparison post:
javascript
"tools": [
{
"name": "GitHub Copilot",
"icon": "@/images/tools/github-copilot.svg",
"category": "Code Autocompletion",
"description": "GitHub Copilot is an AI-powered code completion tool that helps developers write code faster by providing intelligent suggestions based on the context of their code.",
"open_source": {
"client": false,
"backend": false,
"model": false
},
"ide_support": {
"vs_code": true,
"jetbrains": true,
"neovim": true,
"visual_studio": true,
"vim": false,
"emacs": false,
"intellij": true
},
"pricing": {
"model": "subscription",
"tiers": [
{
"name": "Individual",
"price": "$10 per month"
},
{
"name": "Team",
"price": "$100 per month"
}
]
},
"free_tier": false,
"chat_interface": false,
"creator": "GitHub",
"language_support": {
"python": true,
"javascript": true,
"java": true,
"cpp": true
},
"supports_local_model": false,
"supports_offline_use": false,
"review_link": "/blog/github-copilot-review",
"homepage_link": "https://github.com/features/copilot"
},
...
  ]
}
As you can see, the JSON defines every property and value I need to render GitHub's Copilot in a comparison table or other visualization.
The script
The script's job is to iterate over the JSON data and produce the final post, complete with any visualizations, text, images or other content.
The full script is relatively long. You can read the full script in version control, but in the next sections I'll highlight some of the more interesting parts.
Generating the Post Content
One of the most important parts of the script is the generatePostContent function, which assembles the content of the final page. Here's a simplified version of that function:
javascript
const generatePostContent = (categories, tools, existingDate) => {
  const dateToUse = existingDate || `${new Date().getFullYear()}-${new Date().getMonth() + 1}-${new Date().getDate()}`;
  const toolTable = generateToolTable(tools);
  const categorySections = categories.map((category) => {
    return generateCategorySection(category);
  }).join('\n');
  const tableOfContents = categories.map((category) => {
    // ... generate table of contents ...
  }).join('\n');
  // The full version assembles MDX frontmatter (using dateToUse) plus the
  // table of contents, tool table and category sections into a single string
  return [tableOfContents, toolTable, categorySections].join('\n');
}
fs.writeFileSync(filename, content, { encoding: 'utf-8', flag: 'w' });
console.log(`Generated content for "The Giant List of AI-Assisted Developer Tools Compared and Reviewed" and wrote to ${filename}`);
This code does a few important things:
It determines the correct directory and filename for the generated page based on the project structure.
It checks if the file already exists and, if so, extracts the existing date from the page's metadata. This allows us to preserve the original publication date if we're regenerating the page (a sketch of this step follows this list).
It generates the full page content using the generatePostContent function.
It creates the directory if it doesn't already exist.
It writes the generated content to the file.
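Here's a rough sketch of that date-preservation step - an assumption about how the existing date might be pulled out of a generated page's frontmatter, not the exact code from my script, and the filename below is a hypothetical path:
javascript
// Sketch only: read a previously generated page and pull the date out of its
// frontmatter so that regenerating the page doesn't reset the publication date.
const fs = require('fs');

const filename = 'src/app/blog/ai-assisted-dev-tools-compared/page.mdx'; // hypothetical path

const extractExistingDate = (file) => {
  if (!fs.existsSync(file)) return null;
  const contents = fs.readFileSync(file, 'utf-8');
  // Assumes the frontmatter contains a line like: date: "2024-3-18"
  const match = contents.match(/date:\s*["']?([\d-]+)["']?/);
  return match ? match[1] : null;
};

const existingDate = extractExistingDate(filename);
// existingDate is then passed to generatePostContent so the original date is preserved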
Automating the Build Process with npm and pnpm
One of the key benefits of using a script to generate data-driven pages is that we can automate the build process to ensure that the latest content is always available.
Let's take a closer look at how we can use npm and pnpm to run our script automatically before each build.
Using npm run prebuild
In the package.json file for our Next.js project, we can define a "prebuild" script that will run automatically before the main "build" script:
json
{
"scripts": {
"prebuild": "node scripts/generate-ai-assisted-dev-tools-page.js",
"build": "next build",
...
}
}
With this setup, whenever we run npm run build to build our Next.js project, the prebuild script will run first, executing our page generation script and ensuring that the latest content is available.
Using pnpm build
If you're using pnpm instead of npm, then the concept of a "prebuild" script no longer applies, unless you enable the enable-pre-post-scripts option in your .npmrc file as noted here.
If you decline setting this option, but still need your prebuild step to work across npm and pnpm, then you can do something gross like this:
json
{
"scripts": {
"prebuild": "node scripts/generate-ai-assisted-dev-tools-page.js",
"build": "npm run prebuild && next build",
...
}
}
Why automation matters
By automating the process of generating our data-driven pages as part of the build process, we can ensure that the latest content is always available to our users.
This is especially important for pages built from data that changes frequently. With this approach, we don't have to remember to run the script manually before each build - it happens automatically as part of the standard build process.
This saves time and reduces the risk of forgetting to update the content before deploying a new version of the site.
Additionally, by generating the page content at build time rather than at runtime, we can improve the performance of our site by serving static HTML instead of dynamically generating the page on each request.
This can be especially important for larger, data-heavy pages like these.
Key Takeaways
While the full script is quite long and complex, breaking it down into logical sections helps us focus on the key takeaways:
Generating data-driven pages with Next.js allows us to create rich, informative content that is easy to update and maintain over time.
By separating the data (in this case, the categories and tools) from the presentation logic, we can create a flexible and reusable system for generating pages based on that data.
Using a script to generate the page content allows us to focus on the high-level structure and layout of the page, while still providing the ability to customize and tweak individual sections as needed.
By automating the process of generating and saving the page content, we can save time and reduce the risk of errors or inconsistencies.
While the initial setup and scripting can be complex, the benefits in terms of time savings, consistency, and maintainability are well worth the effort. |
|
Write an article about "Writing code on Mac or Linux but testing on Windows with hot-reloading" | Read article |
|
Write an article about "Warp AI terminal review" | Warp brings AI assistance into your terminal to make you more efficient
Table of contents
Warp is an AI-assisted terminal that speeds you up and helps you get unblocked with LLM-suggested command completion,
custom workflows and rich theme support.
Unlike command line tools like Mods which can be mixed and matched in most environments,
Warp is a full on replacement for your current terminal emulator.
The good
It works
The core experience works out of the box as advertised: it's pretty intuitive to get help with complex commands, ask about errors and get
back useful responses that help you to move forward more quickly.
It's pretty
It's also great to see first-class theming support and I will say that warp looks great out of the box - even using the default theme.
The painful
No tmux compatibility currently
I'm an avowed tmux user. I love being able to quickly section off a new piece of screen real estate and have it be a full fledged terminal
in which I can interact with Docker images or SSH to a remote server or read a man page or write a script.
I like the way tmux allows me to flow my workspace to the size of my current task - when I'm focused on writing code or text I can zoom in
and allow that task to take up my whole screen.
When I need to do side by side comparisons or plumb data between files and projects, I can open up as many panes as I need to get the job done.
Unfortunately, at the time of writing, Warp does not support Tmux and it's not clear how far away
that support will be.
Sort of awkward to run on Linux
I have another quibble about the default experience of running warp on Linux currently:
It's currently a bit awkward, because I launch the warp-terminal binary from my current terminal emulator, meaning that I get a somewhat janky experience and an extra floating window to manage.
Sure, I could work around this - but the tmux issue prevents me from making the jump to warp as my daily driver.
You need to log in to your terminal
I know this will bother a lot of other folks even more than it bugs me, but one of the things I love about my current workflow is that hitting my control+enter hotkey gives me a fresh
terminal in under a second - I can hit that key and just start typing.
Warp's onboarding worked fine - there were no major issues or dark patterns - but it does give me pause to need to log into my terminal and it makes me wonder how gracefully warp degrades
when it cannot phone home.
Getting locked out of your terminal due to a remote issue would be a bridge too far for many developers.
Looking forward
I'm impressed by warp's core UX and definitely see the value.
While I do pride myself on constantly learning more about the command line,
terminal emulators and how to best leverage them for productivity, it's sort of a no-brainer to marry
the current wave of LLMs and fine-tuned models with a common developer pain point: not knowing how to fix something in their terminal.
Not every developer wants to be a terminal nerd - but they do want to get stuff done more efficiently and with less suffering than before.
I can see warp being a great tool to helping folks accomplish that.
Check out my detailed comparison of the top AI-assisted developer tools. |
|
Write an article about "CatFacts rewrite in Golang" | Visit the repo on GitHub
I rewrote CatFacts from scratch in Golang just for the practice. I wanted an excuse to understand Go modules.
In keeping with the spirit of going way over the top, this service is deployed via Kubernetes on Google Cloud, for the most resilient pranking service possible.
Read my complete write-up on Medium
I wrote up a technical deep dive on this project on Medium. You can check it out here. |
|
Write an article about "Git-xargs allows you to run commands and scripts against many Github repos simultaneously" | Demo
Intro
Have you ever needed to add a particular file across many repos at once?
Or to run a search and replace to change your company or product name across 150 repos with one command?
What about upgrading Terraform modules to all use the latest syntax?
How about adding a CI/CD configuration file, if it doesn’t already exist, or modifying it in place if it does, but only on a subset of repositories you select?
You can handle these use cases and many more with a single git-xargs command.
Just to give you a taste, here’s how you can use git-xargs to add a new file to every repository in your Github organization:
bash
git-xargs \
--branch-name add-contributions \
--github-org my-example-org \
--commit-message "Add CONTRIBUTIONS.txt" \
touch CONTRIBUTIONS.txt
In this example, every repo in the my-example-org GitHub org will have a CONTRIBUTIONS.txt file added, and an easy-to-read report will be printed to STDOUT:
Try it out
git-xargs is free and open-source - so you can grab it here: https://github.com/gruntwork-io/git-xargs
Learn more
Read the introductory blog post to better understand what git-xargs can do and its genesis story. |
|
Write an article about "How do you write so fast?" | ;
How do I write so fast?
Occasionally someone will ask me how I am able to write new content so quickly. This is my answer.
There are two reasons I'm able to write quickly:
1. I write in my head
I mostly write new articles in my head while I'm walking around doing other things. This means that by the time I am back at a computer, I usually just need to type in what I've already hashed out.
2. I automate and improve my process constantly
The fact that I'm constantly writing in my own head means that my real job is rapid, frictionless capture.
When I have an idea that I want to develop into a full post, I will capture it in one of two ways:
I'll use Obsidian, my second brain, and add a new note to my Writing > In progress folder
I'll use my own tool, Panthalia (intro and update), which allows me to go from idea fragment to open pull request in seconds
I've found there's a significant motivational benefit to only needing to finish an open pull request versus writing an entire article from scratch.
Going from post fragment (or, the kernel of the idea) to open pull request reduces the perceived lift of the task. |
|
Write an article about "Keep Calm and Ship Like" | ;
Catching a breath
I want to reflect on what I accomplished last year and what I consider my biggest wins:
I netted hundreds of new email newsletter subscribers, LinkedIn followers, and Youtube subscribers.
I open-sourced several projects, many articles and YouTube demos and tutorials that I'm proud of.
I landed a Staff Developer Advocate role at Pinecone.io, where I shipped a separate set of articles on Generative AI and machine learning, plus webinars, open-source improvements to our clients and applications, and Pinecone's first AWS Reference Architecture in Pulumi.
The beginning of my "Year in AI"
In January 2023, I continued doing two things I had been doing for years, namely: open-sourcing side projects and tools and writing or making videos about them.
However, for some reason I felt a surge of enthusiasm around sharing my projects, perhaps because I was beginning to experiment with LLMs and realizing the productivity and support gains they could unlock.
So, I put a little extra polish into the blog posts and YouTube videos that shared my latest experiments with ChatGPT.
Early in the year, I wrote Can ChatGPT4 and GitHub Copilot help me produce a more complete side project more quickly?.
As I wrote in maintaining this site no longer fucking sucks, I also re-did this site for the Nth time, this time using the latest Next.js, Tailwind and a Tailwind UI template, that I promptly hacked up to my own needs, and deployed my new site to Vercel.
Here's my commit graph for the year on my portfolio project:
Which makes it less hard to believe that it was only 9 months ago I started building this version of the site in this incredibly hard to read screenshot of my first commit on the project:
My blogging finally got me somewhere
Writing about what I learn and ship in tech has been an invaluable habit for years. In the beginning of the year I was working at Gruntwork.io, doing large scale AWS deployments for customers using Terraform, but as I wrote in You get to keep the neural connections, it came time for the next adventure.
And as I wrote about in Run your own tech blog, one of the key benefits of doing a lot of writing about your open-source projects and learnings is that you have high quality work samples ever at the ready.
This year, in the middle of working an intense job as a tech lead, I managed to do some self-directed job hunting in a down market, start with 5 promising opportunities and ultimately winnow the companies I wanted to work
for down to two.
I received two excellent offers in hand at the same time and was able to take my pick: I chose to start at Pinecone as a Staff Developer Advocate.
In the break between Gruntwork.io and Pinecone.io, I took one week to experiment with Retrieval Augmented Generation and built a Michael Scott from the office chatbot.
I open-sourced the data prep and quality testing Jupyter Notebooks I built for this project
plus the chatbot Next.js application itself, as I wrote about in my Office Oracle post.
I shipped like crazy at Pinecone
Articles
Once I started at Pinecone, I shipped a bunch of articles on Generative AI, machine learning and thought pieces on the future of AI and development:
Retrieval Augmented Generation
AI Powered and built with...JavaScript?
How to use Jupyter Notebooks to do Machine Learning and AI tasks
The Pain and Poetry of Python
Making it easier to maintain open-source projects with CodiumAI and Pinecone
Videos
Semantic Search with TypeScript and Pinecone
Live code review: Pinecone Vercel starter template and RAG - Part 1
Live code review: Pinecone Vercel starter template and RAG - Part 2
What is a Vector Database?
Deploying the Pinecone AWS Reference Architecture - Part 1
Deploying the Pinecone AWS Reference Architecture - Part 2
Deploying the Pinecone AWS Reference Architecture - Part 3
How to destroy the Pinecone AWS Reference Architecture
How to deploy a Jump host into the Pinecone AWS Reference Architecture
Projects
Introducing Pinecone's AWS Reference Architecture with Pulumi
Exploring Pinecone's AWS Reference Architecture
My personal writing was picked up, more than once
This was equally unexpected, thrilling and wonderful.
I did not know these people or outlets, but they found something of value in what I had to say.
Each of these surprises netted me a group of new newsletter and YouTube subscribers.
Daniel Messier included my rant Maintaining this site fucking sucks in his Unsupervised Learning newsletter
The Changelog picked up my Run your own tech blog post
Habr picked up and translated my First see if you've got the programming bug post into Russian. This resulted in about 65 new YouTube subscribers and new readers from Russian-speaking countries.
In addition, my programming mentor, John Arundel graciously linked to my blog when he published the blog post series I lightly collaborated on with him (He did the lion's share of the work).
You can read his excellent series, My horrible career, here.
The new subscribers and followers kept coming
My site traffic saw healthy regular growth and some spikes...
As I hoped, regularly publishing a stream of new content to my site and selectively sharing some of them on social media led to more and more organic traffic and a higher
count of indexed pages in top search engines.
By continuously building and sharing valuable content, tools and posts, I intend to continuously build organic traffic to my site, while eventually adding offerings like courses, training, books and more.
EmailOctopus Newsletter cleared 200...
When I rebuilt the latest version of my portfolio site, I wired up a custom integration with EmailOctopus so that I could have total control over how my Newsletter experience looks and behaves within my site.
In a way, this is the channel I'm most excited about because it's the channel I have the most control over.
These folks signed up directly to hear from me, so growing this audience is critical for reaching my goals.
YouTube went from 0 to over 150...
I tend to do demos and technical walkthroughs on my YouTube channel. The various unexpected re-shares of my content to other networks led to a spike in YouTube subscribers.
I went from effectively no YouTube subscribers at the beginning of the year to 156 at the end of 2023.
I got a surprise hit on my video about performing GitHub pull request reviews entirely in your terminal.
More evidence that you should constantly publish what you find interesting, because you never know which topic or
video is going to be a hit.
LinkedIn
LinkedIn remained the most valuable channel for sharing technical and thought leadership content with my audience.
I saw the highest engagement on this platform, consistently since the beginning of the year.
I made the subtle but deliberate choice to keep sharing most of my new content there throughout the year.
Reddit
Reddit was a close second to LinkedIn, or perhaps slightly ahead of it, judging solely from referral traffic.
I found that:
longform technical tutorials tended to perform best on Reddit
the community is suspicious even when you're just giving away a tutorial or sharing something open-source
Reddit posts that do well tend to deliver steady trickles of traffic over time
Consulting wins
I started being tapped for my insight into Generative AI, developer tooling and vector databases.
Initially, this came in the form of various think tanks and research firms asking me to join calls as an expert, and
to give my opinions and share my experiences as an experienced software developer experimenting with the first raft of AI-assisted developer tooling.
Realizing the opportunity at hand, I quickly gave my about page a face lift, making it more clear that I do limited engagements for my key areas of interest.
By the end of the year, I had successfully completed several such engagements, but was also beginning to see an uptick in direct outreach, not mediated by any third party.
Personal wins
There were many reasons I wanted to work at Pinecone as a developer advocate.
One of those many reasons was that the role involved some flying and some public speaking, both of which I have some phobia around.
I intentionally wanted to go to the places that scare me, and I am pleased to report that even after just a couple of sessions of exposure therapy this last year, I'm already feeling better about both.
I did some talks, webinars and conferences this year in Atlanta, San Francisco, New York and they all went really well, resulting in new contacts, Pinecone customers, followers and follow-up content.
Takeaways and learnings
Publish. Publish. Publish. You cannot know in advance what will be successful and what will fall flat. Which articles will take off and which will get a few silent readers.
I am regularly surprised by how well certain posts, videos and projects do, and which aspects of them folks find interesting, and how poorly certain projects do, despite a great deal of preparation.
Build self-sustaining loops
I use what I learn at work to write content and build side projects that people will find interesting.
I use what I learn in my side project development at work - constantly. Side projects have been an invaluable constant laboratory in which to expand my skill set and experience.
I use my skill sets and experience to help other people, including clients and those looking for assistance in development, understanding industry trends, and building better software.
Rinse and repeat constantly for many years, with minimal breaks in between. |
|
Write an article about "Programmer emotions" | | When | I feel |
|---|---|
| My program compiles after an onerous refactoring| elation|
| People add meetings to my calendar to talk through deliverables they haven't thought through or locked down yet| like my precious focus time is being wasted |
| Another developer says something I created was helpful to them| like a link in a long chain stretching from the past into the future|
| Someone downloads my code, tool or package| absolutely victorious |
| I ship something / my pull request is merged | absolutely victorious |
|
Write an article about "Codeium vs ChatGPT" | Codeium began its life as an AI developer tool that offered code-completion for software developers, and
ChatGPT was originally a general purpose AI language model that could assist with a variety of tasks.
But as I write this post on February 2nd, 2024, many of these products' unique capabilities are beginning to
overlap. What are the key differences and what do you need to know in order to get the most out of them both?
When you're finished reading this post you'll understand why these tools are so powerful, which capabilities remain unique to each,
and how you can use them to level up your development or technical workflow.
Codeium vs ChatGPT - capabilities at a glance
| | Code generation | Image generation | Chat capabilities | Code completion | General purpose chat | Runs in IDEs | Free |
|---|---|---|---|---|---|---|---|
| Codeium | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
| ChatGPT | ✅ | ✅ | ✅ | ✅ | ✅ | ✴️ | ❌ |
Legend
| Supported | Not supported | Requires extra tooling |
|---|---|---|
| ✅ | ❌ | ✴️ |
Let's break down each of these attributes in turn to better understand how these two tools differ:
Code generation
Both Codeium and ChatGPT are capable of advanced code generation, meaning that developers can ask the tool to write code in most any programming language and get back something pretty reasonable
most of the time.
For example, in the browser interface of ChatGPT 4, you could ask for a Javascript class that represents a user for a new backend system you're writing and get something
decent back, especially if you provide notes and refinements along the way.
For example, here's an actual conversation with ChatGPT 4 where I do just that.
Unless you're using a third party wrapper like a command line interface (CLI) or IDE plugin that calls the OpenAI API, it's slightly awkward to do this in ChatGPT's browser chat window -
because you're going to end up doing a lot of copying from the browser and judiciously pasting into your code editor.
Even with this limitation, I've still found using ChatGPT 4 to discuss technical scenarios as I work to be a massive accelerator.
Runs in IDEs
Codeium's advantage here is that it tightly integrates with the code editors that developers already use, such as VSCode and Neovim.
Think of Codeium as a code assistant that is hanging out in the background of whatever file you happen to be editing at the moment.
It can read all of the text and code in the file to build up context.
As you type, you will begin to see Codeium suggestions, which are written out in a separate color (light grey by default) ahead of your cursor.
As the developer, if you feel that the suggestion
is a good one, or what you were about to type yourself, you hit the hotkeys you've configured to accept the suggestion and Codeium writes it out for you, saving you time.
In a good coding or documentation writing session, where Codeium is correctly following along with you and getting the right context, these many little autocompletions add up to saving you
quite a bit of time.
Like GitHub CoPilot, you can also write out a large comment block describing the code or functionality you want beneath it, and that is usually more than enough for Codeium to outright write your
function, method or class as you've described it, which can also be very accelerating, e.g.,:
// This API route accepts the product slug and returns product details
// from the database, or an error if the product does not exist
Once you move your cursor below this, Codeium will start writing out the code necessary to fulfill your description.
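To make that concrete, the completion for a comment like the one above might look roughly like this Next.js-style API route (a hypothetical sketch - the helper import and route path are made up, and the exact code Codeium generates depends on your project's context):
javascript
// pages/api/products/[slug].js (hypothetical path)
import { getProductBySlug } from '../../../lib/db'; // assumed helper, not a real library

export default async function handler(req, res) {
  const { slug } = req.query;

  // Look up the product; return an error if it does not exist
  const product = await getProductBySlug(slug);
  if (!product) {
    return res.status(404).json({ error: `No product found for slug: ${slug}` });
  }

  return res.status(200).json(product);
}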
With some extra work, you can bring ChatGPT into your terminal or code editor
This is not to say that you can't get ChatGPT into your terminal or code editor - because I happen to use it there everyday. It just means you
need to leverage one of many third party tools that call OpenAI's API to do so.
My favorite of these is called mods.
This makes the full power of OpenAI's latest models, as well as many powerful local-only and open-source models, available
in your terminal where developers tend to live.
I can have it read a file and suggest code improvements:
cat path/to/file | mods "Suggest improvements to this code"
or assign it the kinds of tasks I previously would have had to stop and do manually:
ls -lh /local/dir | mods "These files are all too large and I want them
all converted to .webp. Write me a script that performs the
downsizing and conversion"
There are many community plugins for VSCode and Neovim that wrap the OpenAI API in a more complete way, allowing you to highlight code in your editor and have ChatGPT4 look at it, rewrite it, etc.
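Under the hood, those wrappers are mostly doing something like the following: sending your highlighted code to OpenAI's chat completions endpoint and surfacing the reply in your editor (a minimal sketch using Node's built-in fetch - the model name and prompt are placeholders):
javascript
// Sketch of what an editor plugin does behind the scenes when you ask for a review
async function reviewCode(highlightedCode) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4', // placeholder - use whichever model you have access to
      messages: [
        { role: 'system', content: 'You are a helpful code reviewer.' },
        { role: 'user', content: `Suggest improvements to this code:\n\n${highlightedCode}` },
      ],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}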
Is it free to use?
When you consider that it's possible to bring ChatGPT4 into your code editors and terminal with a little extra work, one of the key advantages that Codeium retains is its price.
I'm currently happy to pay $20 per month for ChatGPT Plus because I'm getting value out of it daily for various development tasks and for talking through problems.
But Codeium is absolutely free for individual developers, which is not to be overlooked, because the quality of its output is also very high.
What advantage does ChatGPT have over Codeium?
As of this writing, one of the most powerful things that ChatGPT can do that Codeium can't is rapidly create high quality images in just about any artistic style. Users describe the image
they want, such as:
"A bright and active school where several young hackers are sitting around working on computers while the instructor explains code on the whiteboard. Pixel art style."
Having an on-demand image generator that responds to feedback, has a wide array of artistic styles at its disposal and can more or less follow directions (it's definitely not perfect)
is a pretty incredible time-saver and assistant when you publish as much on the web as I do.
What about general purpose chat?
Up until recently, ChatGPT had the upper hand here. It's still one of the most powerful models available at the time of this writing, and it is not constrained to technical conversations.
In fact, one of my favorite ways to use it is as a tutor on some new topic I'm ramping up on - I can ask it complex questions to check my understanding and ask for feedback on the mental
models I'm building. Anything from pop culture to outer space, philosophy and the meaning of life are up for grabs - and you can have a pretty satisfying and generally informative discussion
with ChatGPT on these and many more topics.
Tools like Codeium and GitHub's CoPilot used to be focused on the intelligent auto-completion functionality for coders, but all of these "AI-assisted developer tools" have been scrambling to add
their own chat functionality recently.
Codeium now has free chat functionality - and from some initial testing, it does quite well with the kinds of coding assistant tasks I would normally delegate to ChatGPT:
Should you use Codeium or ChatGPT?
Honestly, why not both? As I wrote in Codeium and ChatGPT are all I need, these two tools are incredibly powerful on their own,
and they're even more powerful when combined.
I expect that over time we'll begin to see more comprehensive suites of AI tools and assistants that share context,
private knowledge bases and are explicitly aware of one another.
Until then, I'm getting great effect by combining my favorite tools in my daily workflow.
How do I use Codeium and ChatGPT together?
As I write this blog post on my Linux laptop in Neovim, I first tab over to Firefox to ask ChatGPT to generate me a hero image I can use in this blog post. I do this in the chat.openai.com
web interface, because that interface is tightly integrated with DALLE, OpenAI's image generating model.
I'll let it do a first few iterations, giving notes as we go, and as I write, until we
get the right image dialed in.
Meanwhile, as I write out this blog post in Neovim, Codeium is constantly suggesting completions, which is generally less useful when I'm writing prose, but very useful whenever I'm coding, writing
documentation, writing scripts, etc. |
|
Write an article about "How to Run a Quake 3 Arena Server in an AWS ECS Fargate Task" | {metadata.description}
Read article |
|
Write an article about "Why your AI dev tool startup is failing with developers" | A frustated senior developer trying our your improperly tested dev tool for the first time
When I evaluate a new AI-assisted developer tool, such as codeium, GitHub CoPilot or OpenAI's ChatGPT4, this is the thought process I use to determine if it's something I can't live without or if
it's not worth paying for.
Does it do what it says on the tin?
This appears simple and yet it's where most AI-assisted developer tools fall down immediately. Does your product successfully do what it says on the marketing site?
In the past year I've tried more than a few well-funded, VC-backed,
highly-hyped coding tools that claim to be able to generate tests, perform advanced code analysis, or catch security issues that simply do not run successfully when loaded in Neovim or vscode.
The two cardinal sins most AI dev tool startups are committing right now
Product developers working on the tools often test the "happy path" according to initial product requirements
Development teams and their product managers do not sit with external developers to do user acceptance testing
Cardinal sin 1 - Testing the "happy path" only
When building new AI developer tooling, a product engineer might use one or more test repositories or sample codebases to ensure their tool can perform its intended functionality, whether it's generating tests or finding bugs.
This is fine for getting started, but a critical error I've noticed many companies make is that they never expand this set of test codebases to proactively attempt to flush out their bugs.
This could also be considered laziness and poor testing practices, as it pushes the onus of verifying your product works onto your busy early adopters,
who have their own problems to solve.
Cardinal sin 2 - Not sitting "over the shoulder" of their target developer audience
The other cardinal sin I keep seeing dev tool startups making is not doing user acceptance testing with external developers.
Sitting with an experienced developer who is not on your product team and watching them struggle to use your product successfully is often painful and very
eye-opening, but failing to do so means you're pushing your initial bug reports off to chance.
Hoping that the engineers with the requisite skills to try your product are going to have the time and inclination to write you a detailed bug report after your supposed wonder-tool just
failed for them on their first try is foolish and wasteful.
Most experienced developers would rather move on and give your competitors a shot, and continue evaluating alternatives until they find a tool that works.
Trust me - when I was in the market for an AI-assisted video editor, I spent 4 evenings in a row trying everything from incumbents
like Vimeo to small-time startups before finding and settling on Kapwing AI, because it was the first tool that actually worked and supported my desired workflow. |
|
Write an article about "Weaviate vs Milvus" | Table of contents
vector database comparison: Weaviate vs Milvus
This page contains a detailed comparison of the Weaviate and Milvus vector databases.
You can also check out my detailed breakdown of the most popular vector databases here.
Deployment Options
| Feature | Weaviate | Milvus |
| ---------| -------------| -------------|
| Local Deployment | ✅ | ✅ |
| Cloud Deployment | ✅ | ❌ |
| On - Premises Deployment | ✅ | ✅ |
Scalability
| Feature | Weaviate | Milvus |
| ---------| -------------| -------------|
| Horizontal Scaling | ✅ | ✅ |
| Vertical Scaling | ✅ | ❌ |
| Distributed Architecture | ✅ | ✅ |
Data Management
| Feature | Weaviate | Milvus |
| ---------| -------------| -------------|
| Data Import | ✅ | ✅ |
| Data Update / Deletion | ✅ | ✅ |
| Data Backup / Restore | ✅ | ✅ |
Security
| Feature | Weaviate | Milvus |
| ---------| -------------| -------------|
| Authentication | ✅ | ✅ |
| Data Encryption | ✅ | ❌ |
| Access Control | ✅ | ✅ |
Vector Similarity Search
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Distance Metrics | Cosine, Euclidean, Jaccard | Euclidean, Cosine, Jaccard |
| ANN Algorithms | HNSW, Beam Search | IVF, HNSW, Flat |
| Filtering | ✅ | ✅ |
| Post-Processing | ✅ | ✅ |
Integration and API
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Language SDKs | Python, Go, JavaScript | Python, Java, Go |
| REST API | ✅ | ✅ |
| GraphQL API | ✅ | ❌ |
| GRPC API | ❌ | ❌ |
Community and Ecosystem
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Open-Source | ✅ | ✅ |
| Community Support | ✅ | ✅ |
| Integration with Frameworks | ✅ | ✅ |
Pricing
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Free Tier | ❌ | ✅ |
| Pay-as-you-go | ❌ | ❌ |
| Enterprise Plans | ✅ | ✅ | |
|
Write an article about "Office Oracle - a complete AI Chatbot leveraging langchain, Pinecone.io and OpenAI" | What is this?
The Office Oracle AI chatbot is a complete AI chatbot built on top of langchain, Pinecone.io and OpenAI's GPT-3 model. It demonstrates how you can build a fully-featured chat-GPT-like experience
for yourself, to produce an AI chatbot with any identity, who can answer factually for any arbitrary corpus of knowledge.
For the purposes of demonstration, I used the popular Office television series, but this same stack and approach will work for AI chatbots who can answer for a company's documentation, or specific processes, products,
policies and more.
Video series
Be sure to check out my three-part video series on YouTube, where I break down the entire app end to end, and discuss the Jupyter notebooks and data science elements, in addition to the Vercel ai-chatbot template
I used and modified for this project:
AI Chatbots playlist on YouTube
Intro video and demo
Jupyter notebooks deep-dive
Next.js vercel template ai-chatbot deep-dive
Open source code
I open sourced the Jupyter notebooks that I used to prepare, sanitize and spot-check my data here:
Office Oracle Data workbench
Office Oracle Data test workbench
The data workbench notebook handles fetching, parsing and writing the data to local files, as well as converting the text to embeddings and upserting the vectors into the Pinecone.io vector database.
The test workbench notebook demonstrates how to create a streamlined test harness that allows you to spot check and tweak your data model without requiring significant development changes to your application layer.
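To give a rough idea of what that embed-and-upsert step looks like from the JavaScript side, here's a sketch (the index name, metadata fields and client API versions are assumptions, not the project's actual configuration):
javascript
// Sketch of the embed-and-upsert step (assumes recent OpenAI and Pinecone JS clients)
import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const index = pinecone.index('office-oracle'); // assumed index name

async function upsertQuotes(quotes) {
  // Convert each quote's text into an embedding vector
  const embeddings = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: quotes.map((q) => q.text),
  });

  // Upsert the vectors along with the original text and speaker as metadata
  await index.upsert(
    quotes.map((q, i) => ({
      id: q.id,
      values: embeddings.data[i].embedding,
      metadata: { text: q.text, speaker: q.speaker },
    }))
  );
}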
I also open sourced the next.js application itself |
|
Write an article about "First, find out if you've got the programming bug" | I'm thinking about learning to code. Which laptop should I get? Should I do a bootcamp? Does my child need special classes or prep in order to tackle a computer science degree?
A lot of different folks ask me if they should learn to code, if software development is a good career trajectory for them or their children, and what they need to study in school in order
to be successful.
Here's my advice in a nutshell
Before you should worry about any of that: your major, which school you're trying to get your kid into, which laptop you should purchase, you need to figure out if you (or your kid) have the "programming bug".
This will require a bit of exploration and effort on your part, but the good news is there's a ton of high quality and free resources online that will give you enough of a taste for coding and
building to help you determine if this is something worth pursuing as a career or hobby. I'll share some of my favorites in this post.
What is the programming bug?
"The programming bug" is the spark of innate curiosity that drives your learning forward. Innate meaning that it's coming from you - other people don't need to push you to do it.
In software development, coding, systems engineering, machine learning, data science; basically, in working with computers while also possibly working with people - there are periods of profound frustration and tedium, punctuated by anxiety and stress.
I have personally reached a level of frustration that brought
tears to my eyes countless times. If you pursue the path of a digital craftsperson, be assured that you will, too. Especially in the beginning. That's okay.
I also happen to think that being able to describe to machines of all shapes and sizes exactly what you want them to do in their own languages; to solve problems in collaboration with machines, and to be able to bring an idea from your imagination all the
way to a publicly accessible piece of software that people from around the world use and find utility or joy in - borders on magic.
The spark of curiosity allows you to continually re-ignite your own passion for the craft
In my personal experience, considering my own career, and also the folks I've worked with professionally who have been the most effective and resilient, the single determining criterion for success is
this innate curiosity and drive to continue learning and to master one's craft; curiosity in the field, in the tools, in the possibilities, in what you can build, share, learn and teach.
That's all well and good, but how do you actually get started?
Use free resources to figure out if you have the programming bug
Don't buy a new macbook. Don't sign up for a bootcamp. Not at first.
Use the many excellent free resources on the internet that are designed to help folks try out programming in many different languages and contexts.
Here are a few that I can recommend to get you started:
Exercism.io
Codewars
Codecademy
Edabit
Give the initial exercises a shot. It doesn't really matter what language you start with first, but if you have no clue, try Python, PHP, or JavaScript. When you come across a phrase or concept
you don't understand, try looking it up and reading about it.
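To give you a sense of the level these sites start you at, a typical first JavaScript exercise looks something like this (an illustrative example, not taken from any particular site):
javascript
// A typical beginner exercise: return a greeting for a given name
function hello(name) {
  return `Hello, ${name}!`;
}

console.log(hello('World')); // prints "Hello, World!"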
It's key that none of these services require you to pay them anything to get started and get a taste for programming. You can do them in your browser on a weak, old computer or at the library or an
internet cafe, before shelling out for a fancy new laptop.
If it turns out you could go happily through the rest of your life without ever touching a keyboard again, you've lost nothing but a little time.
How can you get a feel for what the work is like?
Jobs in software development vary wildly in how they look - a few parameters are company size, team size, technology stack, the industry you're in (coding for aviation is very, very different from coding for advertising
in some meaningful ways), etc.
Nevertheless, it can be helpful to watch some professional developers do developer things, in order to gauge if it even seems interesting to you or not.
How can you peek into the day to day of some working developers?
Luckily, plenty of developers make it easy for you to do that, by sharing content on YouTube and Twitch.
This is very far from an exhaustive list, but here's a few channels I've watched recently that can help you see some
on-screen action for yourself:
Ants Are Everywhere - An ex-Googler reads the source code to popular open-source projects on YouTube, thinking through the process and showing how he answers his own questions
as they arise. Really excellent code spelunking.
Yours truly - I make tutorials on open source tools as well as share some recordings of myself live-coding on some open source projects.
Lately, and for the foreseeable future, I'll be going deep on A.I.
applications, LLMs (large language models such as ChatGPT and others), vector databases and machine learning topics.
TJ DeVries - A great open source developer, author of a very popular Neovim plugin (a coding tool for developers) and someone who makes their content accessible and interesting for all viewers.
The Primeagen - A spicier, no-holds-barred look at all things programming, getting into coding, learning to code, and operating at a high level as a software engineer from a Netflix engineer who isn't afraid to say it like it is.
I'll continue to add more as I find good channels to help folks get a feel for the day in, day out coding tasks.
Keep in mind: these channels will give you a good taste of working with code, using a code editor and working with different languages and tools, but that's only a part of the overall job of being a professional developer.
There's entire bookshelves worth of good content on the soft skills of the job: working effectively up and down your organization, planning, team structure and dynamics, collaborative coding, team meetings, methods for planning and tracking work,
methods for keeping things organized when working with other developers, etc.
These soft skills are perhaps even more important than the technical ones, and they're worth developing deliberately as well.
You might not find the programming bug overnight
I've been on a computer since I was 3 years old, but for the first few years I was really only playing games, making dioramas with paint and similar programs.
Around age 11, I had a neighborhood friend who showed me an early Descent game on his PC.
He also had a C++ textbook that he let me borrow and read through.
At the tender age of 11, I was thinking to myself that I would read this book, become a developer, and then make my own games.
I started by
trying to understand the textbook material. This didn't pan out - and it would be another 15 years until I'd make a conscious decision to learn to code.
At age 26, I joined my first tech company as a marketing associate.
Luckily, there was a component of the job that was also quality assurance, and our product was very technical, so I had to use the command line
to send various test payloads into our engine and verify the outputs made sense. I was hooked.
The staff-level developer who was sitting next to me gave me just the right amount of encouragement and said that if I kept at it - I would be like "this" (he made a motion with his hand of an upward ramp).
From that point forward, I was teaching myself everything I could on nights and weekends.
Practicing, coding, reading about
coding and trying to build things. And I've never stopped.
The timeline for learning to code can be lumpy and will look different for different people. That's okay, too.
What do you do if you think you DO have the programming bug?
So what should you do if you try out some of these types of programming exercises and you find out that you really do like them?
That you find yourself thinking about them when you're doing something else?
What do you do next?
Start building and never stop.
This is the advice from a Stack Overflow developer survey from a few years ago about how to stay current and how to grow as a developer: "Build things all the time and never stop".
I couldn't agree more.
The first complete web app I built for myself was my Article Optimizer.
It was brutal.
I didn't even know the names of the things I didn't know - so I couldn't Google them.
I had to work backwards by examining apps that were similar enough
to what I was trying to build (for example, an app that presented the user with a form they could use to submit data) and reverse engineer it, reading the page source code, and trying to find out more information about the base technologies.
Form processing, APIs, custom fonts, CSS, rendering different images based on certain conditions, text processing and sanitization.
I learned a metric ton from my first major web app, even though it took me months to get it live.
And the first version was thrilling, but not at all
what I wanted.
So I kept on refining it, re-building it.
Learning new frameworks and techniques.
Around the third time I rewrote it, I got it looking and functioning the way I wanted, and I got it running live on the internet so that other people could use it.
Then I maintained it as a freely available app for many years. Hosting something on the internet, on your own custom domain, will teach you a ton as well.
This is the path that worked for me: find something that's outside of your comfort zone.
Figure out how to build it.
Chase down every curiosity - research it as best you can and then try to get it working.
Once you do, it's time for the next project.
This time, do something more ambitious
than last time around - something that will push you out of your comfort zone again so that you can learn even more.
Don't pony up your cash until you've gotten a free taste
I've seen people take out a loan for $12,000 in order to complete a coding bootcamp, just to discover during their first job placement that they don't actually enjoy working on the computer all day or want to continue building digital things.
If you're currently considering learning to code or getting into computers as a possible career, don't over invest until you've given yourself a taste of coding and building.
When you're on the job site day in and day out - doing the actual work, feeling the stress, and the joy and the panic and accomplishment, Mom and Dad are not going to be leaning over your shoulder (hopefully).
Software development, hacking, designing and building systems, creating apps and sites, solving hard engineering challenges with your ever-expanding toolkit can be a wonderful career - if you enjoy doing the work.
You need to figure out if you can find that spark and continually use it to renew your own interest and passion.
Looking for advice or have a question?
You can subscribe to my newsletter below, and the first email in the series will allow you to reply in order to share with me any challenges you're currently facing in your career, or questions you might have.
All the best and happy coding! |
|
Write an article about "CanyonRunner - a complete HTML5 game" | I Open Sourced My Game
Building CanyonRunner was a tremendous amount of fun, thanks largely to Richard Davey's excellent Phaser framework.
Along the way, I was assisted by many helpful Phaser community members and developers, so I wanted to give back by:
Open sourcing my game (check out the repo here)
Offering code samples and explaining some of the features I implemented
I built this game from start to finish in 76 days.
In the course of developing it, one of the running themes I noticed on the Phaser forums was that most developers were tackling their first game and were unsure about how to implement common game features like saved games, multiple levels, different experiences for mobile and desktop, etc.
Phaser is well organized and documented, so while its various API's and systems were easy to get started with, it was less clear to many developers how to fit everything together into a coherent gaming experience.
I open sourced CanyonRunner and decided to do an in-depth post about its various features in order to create a resource for other developers that might be in the middle of developing their own HTML5 game.
Hopefully some of the features I built into CanyonRunner, such as player-specific saved games, multiple levels each with their own atmosphere and game mechanics, different experiences optimized for desktop / mobile, and alternate endings determined by player performance, will resonate with and assist other game developers.
To get a sense of CanyonRunner, or to play it through in its entirety (which will take you less than 10 minutes if you make zero mistakes), click here to play!
Screenshots
Here's a look at some screenshots from the actual game.
I wanted the game to have a retro feel.
At the same time, the story, presented via inter-level navigation sequences, builds up an eerie atmosphere.
Intense Aerial Dogfights. Fight Off Marauding Bandits.
Wind Your Way Through the Eerie Story
Intense Environmental Effects
Auto-detects Mobile agents and renders a touchpad
Catchy Music Keeps The Pace Throughout The Game
Save System Keeps Your Score & Progress
Multiple Endings For Higher Replay Value
Playthrough Video
Want to get a sense of the gameplay without dodging spires yourself? Watch this full playthrough of the game to quickly get up to speed on the feel and main game mechanics of CanyonRunner.
Table of Contents
What you'll find in this post:
Game overview
Project structure
Using Node.js as a simple server
Creating a preloader
Creating a splash screen
Linking separate levels
Creating a save system
Creating different experiences for desktop and mobile
Creating multiple endings
Settings buttons - pause & mute
Game overview
CanyonRunner is a 2D side-scrolling action & adventure game, complete with a story, two possible endings, automatically saved game progress, aerial dogfights and air to air missile combat, and atmospheric special effects.
You assume the role of the mysterious CanyonRunner, a lone pilot navigating their rocket through a perilous 3 stage journey as they struggle to return to their family with desperately needed supplies.
Depending upon their performance, players are shown one of two possible endings after completing the game.
Project Structure
The CanyonRunner project is structured such that:
The development workflow is easy to understand and organized
Building a packaged and optimized distribution of the game can be done in one step
You can view the full project on Github here if you want to explore the structure on your own.
Let's take a look at the project's file tree, then consider the general purpose of each directory in turn:
bash
.
├── Gruntfile.js
├── assets
│ ├── audio
│ │ ├── audio.json
│ │ ├── audio.m4a
│ │ └── audio.ogg
│ ├── backgrounds
│ │ ├── desert-open.png
│ │ ├── level1-background.png
│ │ ├── level2-background.png
│ │ ├── level3-background.png
│ │ └── sad-desert.png
│ ├── favicon.png
│ └── sprites
│ ├── advance-button.png
│ ├── asteroid1.png
│ ├── asteroid10.png
│ ├── asteroid11.png
│ ├── asteroid12.png
│ ├── asteroid13.png
│ ├── asteroid14.png
│ ├── asteroid15.png
│ ├── asteroid16.png
│ ├── asteroid17.png
│ ├── asteroid18.png
│ ├── asteroid19.png
│ ├── asteroid2.png
│ ├── asteroid20.png
│ ├── asteroid3.png
│ ├── asteroid4.png
│ ├── asteroid5.png
│ ├── asteroid6.png
│ ├── asteroid7.png
│ ├── asteroid8.png
│ ├── asteroid9.png
│ ├── bandit-missile.png
│ ├── bandit.png
│ ├── canyon-runner-splash.png
│ ├── cry-about-it-button.png
│ ├── down-arrow.png
│ ├── explosion1.png
│ ├── explosion10.png
│ ├── explosion11.png
│ ├── explosion12.png
│ ├── explosion13.png
│ ├── explosion14.png
│ ├── explosion15.png
│ ├── explosion16.png
│ ├── explosion2.png
│ ├── explosion3.png
│ ├── explosion4.png
│ ├── explosion5.png
│ ├── explosion6.png
│ ├── explosion7.png
│ ├── explosion8.png
│ ├── explosion9.png
│ ├── fire-missile-button-desktop.png
│ ├── fire-missile-button-mobile.png
│ ├── fire1.png
│ ├── fire2.png
│ ├── fire3.png
│ ├── happy-splashscreen.png
│ ├── healthkit.png
│ ├── healthorb1.png
│ ├── healthorb2.png
│ ├── healthorb3.png
│ ├── home-burning.png
│ ├── how-to-play-desktop.png
│ ├── how-to-play-mobile.png
│ ├── inverted-rock.png
│ ├── kaboom.png
│ ├── left-arrow.png
│ ├── missile.png
│ ├── navigation-bandit.png
│ ├── navigation-home.png
│ ├── navigation-supply.png
│ ├── pause-button.png
│ ├── play-again-button.png
│ ├── progress.png
│ ├── right-arrow.png
│ ├── rock.png
│ ├── rocket-sprite.png
│ ├── sad-splashscreen.png
│ ├── scrap1.png
│ ├── scrap2.png
│ ├── scrap3.png
│ ├── scrap4.png
│ ├── share-the-love-button.png
│ ├── smoke-puff.png
│ ├── sound-icon.png
│ ├── sprites.json
│ ├── sprites.png
│ ├── start-button.png
│ ├── success.png
│ ├── try-again-button.png
│ └── up-arrow.png
├── build
│ ├── CanyonRunner.js
│ ├── CanyonRunner.min.js
│ ├── config.php
│ ├── custom
│ │ ├── ninja.js
│ │ ├── ninja.min.js
│ │ ├── p2.js
│ │ ├── p2.min.js
│ │ ├── phaser-arcade-physics.js
│ │ ├── phaser-arcade-physics.min.js
│ │ ├── phaser-ninja-physics.js
│ │ ├── phaser-ninja-physics.min.js
│ │ ├── phaser-no-libs.js
│ │ ├── phaser-no-libs.min.js
│ │ ├── phaser-no-physics.js
│ │ ├── phaser-no-physics.min.js
│ │ ├── pixi.js
│ │ └── pixi.min.js
│ ├── phaser.d.ts
│ ├── phaser.js
│ ├── phaser.map
│ └── phaser.min.js
├── compiler.jar
├── css
│ └── stylesheet.css
├── icons
│ ├── app_icon_1024x1024.png
│ ├── app_icon_114x114.png
│ ├── app_icon_120x120.png
│ ├── app_icon_144x144.png
│ ├── app_icon_152x152.png
│ ├── app_icon_256x256.png
│ ├── app_icon_512x512.png
│ ├── app_icon_57x57.png
│ ├── app_icon_60x60.png
│ ├── app_icon_72x72.png
│ └── app_icon_76x76.png
├── images
│ └── orientation.png
├── index.html
├── package.json
├── server.js
└── src
├── Boot.js
├── EmotionalFulcrum.js
├── EveryThingYouBelievedAboutYourFamilyWasHellishlyWrong.js
├── HomeSweetHome.js
├── HowToPlay.js
├── Level1.js
├── Level2.js
├── Level3.js
├── MainMenu.js
├── NavigationBandit.js
├── NavigationHome.js
├── NavigationSupply.js
└── Preloader.js
10 directories, 143 files
Project root
.gitignore: This special file tells the version control system, git, which files it can "ignore" or not worry about placing under source control.
If your game project is generating logs, debug output, or uses node_modules, you can save space in your repository by specifying these files and directories in your .gitignore file.
Gruntfile.js: I used the command line task-runner Grunt in order to automate some of the tedious and repetitive development tasks.
Grunt will be familiar to many web developers, but for those of you who have not encountered it before, Grunt allows you to define tasks, namely those that you find yourself repeatedly having to perform while developing, and bundle them together into a single or a few commands.
As an example, if you are working with scss, you may constantly find yourself performing the same mundane tasks as you build out your project, such as concatenating 4 different scss files together, then compiling them to raw css, then minifying that resulting css file and moving it into a specific folder where it can be served.
Instead of doing this manually each time, you can configure a grunt task to do exactly these steps in that order - and all you'd have to do is type "grunt" on the command line.
Better yet, Grunt can "watch" certain files or directories for changes and then perform associated actions on its own.
You can even set up completely customized tasks to perform, as we'll see in a moment with the Google Closure Compiler for optimizing JavaScript.
Grunt can be painful to set up and configure, and often times it's overkill for a small project, but it can effectively streamline your workflow if you're dealing with multiple source files, concatenation and minification.
In CanyonRunner, as in many Phaser projects, I save off each game state as a separate javascript file for sanity while developing, but we only want to serve as few minified javascript files as possible with our finished game.
This makes Grunt a logical choice.
compiler.jar: This is the Google Closure Compiler, which is a tool that makes JavaScript download and run faster.
After concatenating all my custom JavaScript into a single file, I run it through the Closure Compiler so that the final output .js file that is served up by the game is as lean and mean as possible.
The compile command within the exec task uses the compiler.jar to generate the optimized CanyonRunner.min.js, save it to the correct build directory, and echo a string confirming the task completed successfully:
javascript
module.exports = function(grunt) {
  // Project configuration. The src/dest paths and shell commands below are
  // representative of the project structure shown above, not verbatim.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    // Concatenate all of the game state files in src/ into a single file
    concat: {
      dist: {
        src: ['src/*.js'],
        dest: 'build/<%= pkg.name %>.js'
      }
    },
    // Minify the concatenated file
    uglify: {
      dist: {
        files: {
          'build/<%= pkg.name %>.min.js' : ['<%= concat.dist.dest %>']
        }
      }
    },
    // Shell commands for the remaining build steps
    exec: {
      // Compile Final JS File with the Google Closure Compiler
      compile: {
        command: 'java -jar compiler.jar --js <%= concat.dist.dest %> --js_output_file CanyonRunner-distribution/CanyonRunner.min.js && echo "Compiled CanyonRunner.min.js"'
      },
      // Output Clean Distribution Build
      dist: {
        command: 'mkdir -p CanyonRunner-distribution && cp index.html CanyonRunner-distribution/'
      },
      // Move assets
      assets: {
        command: 'cp -r assets css icons images CanyonRunner-distribution/'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-exec');
  grunt.registerTask('default', ['concat', 'uglify', 'exec']);
};
Note that the assets are also moved by Grunt into their correct destinations during the build.
Once your Gruntfile is in place and configured correctly, it's much easier to just type "grunt" in your terminal and get a perfectly built game as an output than to build one manually.
This is doubly true if you're testing something that requires you to make changes and then build, or if you're trying to remember how to build the project after 3 months of not touching it.
Creating a Distribution-Ready Build in One Command
The reason we soldier through the initial tedium of configuring Grunt is that once we have everything set up, we can literally build a distribution-ready copy of our game in a single command:
bash
$ grunt
If you've cloned the CanyonRunner repo and are following along, you can cd into the project root and type grunt in your terminal to see what I'm talking about.
Grunt will execute the tasks configured in your Gruntfile to concatenate all the javascript files, run them through the Google Closure Compiler, copy all the assets correctly and put everything where it belongs: into a single new directory called CanyonRunner-distribution which will appear in the project root.
This distribution directory can now be named whatever you want and handed off to a customer or game hosting site for distribution.
Having this build infrastructure in place will save you dozens of hours over the course of a project.
Directories at a Glance
Now, let's consider the purpose of each directory.
assets. This directory holds the audio files, background images, and spritesheets used by our game.
build.
This directory is home to the files that are required by our game to work, such as the actual Phaser framework files.
Our build tool gathers up only what is needed from here when creating a distribution.
css. Holds the simple stylesheet required to make the orientation (rotate your phone!) image work properly.
icons. Holds the various sized app icons that would be required by, say, an iOS app that was loading your Phaser game in a webview.
images.
This directory holds a special image required by Phaser to render the screen telling the user they should rotate their phone to landscape mode in order to play the game.
When phaser detects that the user's phone is being held upright, this image is used to render that hint on screen.
node_modules.
This is the directory where npm, node's package manager, installs dependencies.
When you require a module in your node.js script, one of the places node looks for that module is here.
In the case of this project, our server.js file (see next section) uses the express module, which would end up here after running the npm install command.
src.
Arguably the most important directory: this is where each game state lives as its own JavaScript file while developing. Once we have finished our game and we're ready to make a build, our build tool will look into this directory to gather together all the separate files into one single concatenated and minified javascript file that's fit for distribution with our finished game.
Running a local fileserver to ease development with node.js
While developing a Phaser game it is very helpful to have a local fileserver that we can run with a single command.
This makes it easy to serve up our index.html file locally, which loads all our Javascript and Phaser files so we can see and play our game as we're building it.
You could set up a traditional web stack with apache, or use something that handles this for you such as Mamp.
I feel these options are too involved for what we want to do: simply serve up our index.html file locally so we can view it at localhost:8080.
Our index.html file will in turn load the Phaser library, and then our game itself so we can test changes with low hassle and iterate quickly.
Follow these instructions to install Node.js on your machine. Once that's done, you can run the server.js file in the project root by typing:
bash
$ node server.js
Now you can play and test your Phaser game by typing localhost:8080 into your browser.
Let's take a look at what this simple utility script looks like:
javascript
var
express = require('express'),
app = express(),
port = 8080
;
//Set the 'static' directory to the project root - where index.html resides
app.use(express.static('./'));
//When root is requested, send index.html as a response
app.get('/', function(req, res){
res.send('index.html');
});
//Create the server by listening on the desired port
var server = app.listen(port, function() {
console.log('Visit localhost:' + port + ' to see your Phaser game');
});
Notice we're requiring the Express module to abstract away serving static assets.
This means you'll need to install the express library locally to your project in order for server.js to work.
If you haven't already installed the project's dependencies, from the project root type:
bash
$ npm i
This command will pull down all required dependencies from npm, node's package management system.
With our simple fileserver in place, all we have to do to view changes to our source code or playtest our game is run our server and visit localhost:8080.
Creating a preloader
Phaser games use preloaders as special game states to perform setup and configuration tasks that can or must be run before a more interactive game state is loaded.
Let's examine CanyonRunner's preloader state. It has a few jobs: loading the background images and audio the rest of the game needs, and showing progress while it does so. There's a very handy Phaser convenience feature known as a Preload Sprite that I'm taking advantage of here to render the loading bar that says "loading" and expands from 0 to 100% as the splashscreen itself is being prepared.
First, you set up the sprite that will be used as the preloadBar.
Then you can call the method setPreloadBar and pass in the sprite - Phaser handles all the internal timing and display logic for us.
javascript
CanyonRunner.Preloader = function (game) {
    this.ready = false;
};
CanyonRunner.Preloader.prototype = {
    // Standard Phaser state lifecycle: preload -> create -> update
    preload: function () {
        this.background = this.add.sprite(0, 0, 'desert-open');
        this.splashscreen = this.add.sprite(0, 0, 'sprites', 'canyon-runner-splash');
        this.preloadBar = this.add.sprite(this.game.world.centerX - 127.5, this.game.world.centerY, 'sprites', 'progress');
        this.load.setPreloadSprite(this.preloadBar);
        this.game.load.image('desert-open', 'assets/backgrounds/desert-open.png');
        this.game.load.image('sad-desert', 'assets/backgrounds/sad-desert.png');
        this.game.load.image('dark-forest', 'assets/backgrounds/level3-background.png');
        this.game.load.image('desert', 'assets/backgrounds/level1-background.png');
        this.game.load.image('desert2', 'assets/backgrounds/level2-background.png');
        //AudioSprites - serve .ogg to Firefox/Chrome, .m4a to everything else
        if (this.game.device.firefox || this.game.device.chrome || this.game.device.chromeOS) {
            this.game.load.audiosprite('sound', 'assets/audio/audio.ogg', 'assets/audio/audio.json');
        } else {
            this.game.load.audiosprite('sound', 'assets/audio/audio.m4a', 'assets/audio/audio.json');
        }
    },
    create: function () {
        this.preloadBar.cropEnabled = false;
    },
    update: function () {
        // Wait until the audiosprite has decoded before starting the main menu
        if (this.cache.isSoundDecoded('sound') && this.ready == false) {
            this.state.start('MainMenu');
        }
    }
};
Optimizing Asset Delivery with Audiosprites
Another optimization worth making is the use of audiosprites. An audiosprite is a single file that contains all the sound effects and songs required by the game mashed together, to save space and bandwidth during asset delivery.
Tools that create audiosprites also export a map file, usually in json or xml, that explicitly states the time slices of the audiosprite wherein each individual sound and song starts and ends.
Let's take a look:
javascript
{
"resources":[
"audio.ogg",
"audio.m4a",
"audio.mp3",
"audio.ac3"
],
"spritemap":{
"3rdBallad":{
"start":0,
"end":39.18222222222222,
"loop":true
},
"AngryMod":{
"start":41,
"end":99.79292517006803,
"loop":true
},
"Ariely":{
"start":101,
"end":132.64009070294784,
"loop":true
},
"angel":{
"start":134,
"end":140.53786848072562,
"loop":true
},
"aronara":{
"start":142,
"end":202.12081632653062,
"loop":true
},
"crash":{
"start":204,
"end":205.94902494331066,
"loop":false
},
"explosion":{
"start":207,
"end":221.7374149659864,
"loop":false
},
"heal":{
"start":223,
"end":226.0316553287982,
"loop":false
},
"launch":{
"start":228,
"end":232.39873015873016,
"loop":false
},
"missile-lock":{
"start":234,
"end":238.99954648526077,
"loop":false
},
"missile-reload":{
"start":240,
"end":241.27274376417233,
"loop":false
},
"negative":{
"start":243,
"end":243.68643990929706,
"loop":false
},
"rocket-start":{
"start":245,
"end":247.20734693877552,
"loop":false
},
"sonar-found":{
"start":249,
"end":252.10131519274375,
"loop":false
},
"sonar":{
"start":254,
"end":260.26213151927436,
"loop":true
},
"success":{
"start":262,
"end":308.4660317460317,
"loop":true
},
"swoosh":{
"start":310,
"end":311.2408163265306,
"loop":false
}
}
}
This mapping file allows frameworks like Phaser to efficiently deliver a single audio file, while still allowing the game developer the convenience of referring to individual sounds and songs by their key names while programming.
After developing CanyonRunner by using separate audio files for each sound, I used a free tool to create one single audiosprite for each supported filetype (different browsers support different audiosprite filetypes).
That's why the preloader uses Phaser's device helper methods to determine which browser the player is using.
Firefox and Chrome support .ogg files, while Safari supports .m4a.
You can convert your final audiosprite to both formats and include it in your assets directory.
With your preloader determining the proper format based on your user's browser, each player will get a single optimized audiosprite that will run perfectly for them.
Creating a splash screen
Successfully building a complete game requires attention to lots of small details which, taken together, build up a feeling of a polished and finished product.
One of the first contact points our players will have with our game is the splashscreen.
A good splashscreen can set up the feel and mood of the game, begin introducing the themes that will run throughout, and get the player excited about playing.
Let's take a look at how we can create a splashscreen for our Phaser HTML5 game.
Here is the full MainMenu.js file for CanyonRunner, which sets up the intro splashscreen and waits for the player to click the Start button:
javascript
CanyonRunner.MainMenu = function(game) {
};
CanyonRunner.MainMenu.prototype = {
create: function() {
this.sound = this.game.add.audioSprite('sound');
//Check if Returning Player & If Has Level Progress Saved
this.playerStats;
if (localStorage.getItem('Canyon_Runner_9282733_playerStats') != null) {
this.playerStats = JSON.parse(localStorage.getItem('Canyon_Runner_9282733_playerStats'));
} else {
this.playerStats = {
};
}
//Load Main Menu
this.background = this.game.add.tileSprite(0, 0, 1200, 600, 'desert-open');
this.background.fixedToCamera = true;
this.splashscreen = this.add.sprite(0, 0, 'sprites', 'canyon-runner-splash');
this.sound.play('aronara');
this.soundButton = this.game.add.button(this.game.world.centerX + 335, this.game.world.centerY - 285, 'sprites', this.toggleMute, this, 'sound-icon', 'sound-icon', 'sound-icon');
this.soundButton.fixedToCamera = true;
if (!this.game.sound.mute) {
this.soundButton.tint = 16777215;
} else {
this.soundButton.tint = 16711680;
}
//Read Player Stats & Display
if (this.playerStats.topScore > 0 && this.playerStats.topTime > 0) {
this.playerStatTextStyle = {
};
this.playerStatString = "YOUR TOP SCORE: " + this.playerStats.topScore + " & YOUR TOP TIME: " + Math.round(this.playerStats.topTime);
this.playerStatText = this.game.add.text(this.game.world.centerX - 350, this.game.world.centerY - 275, this.playerStatString, this.playerStatTextStyle);
}
//Create Intro Player
this.player = this.game.add.sprite(64, 64, 'sprites', 'rocket-sprite');
this.player.y = 320;
this.game.physics.enable(this.player, Phaser.Physics.ARCADE);
this.player.body.bounce.y = 0.2;
this.player.body.collideWorldBounds = true;
this.player.body.setSize(64, 34, 0, 15);
//Set up Initial Events
this.game.time.events.add(300, this.introFlyingScene, this);
this.startbutton = this.add.button(350, 500, 'sprites', this.startGame, this, 'start-button', 'start-button', 'start-button');
},
update: function() {
//Scroll Background
if (!this.jetFired) {
//Scroll background for flying appearance
this.background.tilePosition.x -= 2;
} else {
this.background.tilePosition.x -= 10;
}
//Start Afterburners
if (this.burnEngines) {
this.emitter.emitX = this.player.x - 25;
this.emitter.emitY = this.player.y + 30;
}
},
toggleMute: function() {
if (!this.mute) {
this.game.sound.mute = true;
this.mute = true;
this.soundButton.tint = 16711680;
} else {
this.game.sound.mute = false;
this.mute = false;
this.soundButton.tint = 16777215;
}
},
introFlyingScene: function() {
//Fly Rocket to Center Screen
this.introTween = this.game.add.tween(this.player);
this.introTween.to({
}, 2000);
this.introTween.start();
//Fly the ship into view and do a barrel roll
this.introFlyingTimer = this.game.time.create(this.game);
this.introFlyingTimer.add(1100, function() {
//this.doABarrelRoll();
this.hoverShipAnimation();
}, this);
this.introFlyingTimer.start();
//Turn on Afterburners
this.engineBurnTimer = this.game.time.create(this.game);
this.engineBurnTimer.add(2000, function() {
this.startEngines();
this.jetFired = true;
}, this);
this.engineBurnTimer.start();
this.initialPauseTimer = this.game.time.create(this.game);
//Pause the Player
this.initialPauseTimer.add(2500, function() {
this.hoverShip = false;
}, this);
this.initialPauseTimer.start();
},
hoverShipAnimation: function() {
//Temporarily pause ship above text
this.hoverShip = true;
this.hoverShipTimer = this.game.time.create(this.game);
this.hoverShipTimer.add(2000, function() {
this.hoverShip = false;
this.player.angle = 0;
}, this);
this.hoverShipTimer.start();
},
startEngines: function() {
//Create Particle Jet Engine Burn
this.emitter = this.game.add.emitter(this.game.world.centerX, this.game.world.centerY, 400);
this.emitter.makeParticles('sprites', ['fire1', 'fire2', 'fire3', 'smoke-puff']);
this.emitter.gravity = 200;
this.emitter.setAlpha(1, 0, 2000);
this.emitter.setScale(0.4, 0, 0.4, 0, 2000);
this.emitter.start(false, 3000, 3);
this.burnEngines = true;
this.sound.play('rocket-start');
},
startGame: function() {
this.sound.stop('aronara');
//Load Proper State for Player
this.state.start(this.playerStats.returnPlayerToState);
}
};
From reading through the source code you can see that the MainMenu.js file does a few key things:
Checks if the current player has saved game data, and renders it
Creates the background, starts scrolling the screen, adds the rocket
Plays the intro music
Runs an initial "flying scene" with afterburners and a subtle speed-up
Sets up a startGame function, bound to the start button
A good splashscreen could be as simple as a static background image with a start button.
The main goal is to provide an introduction to the game.
Notice I've created a mute button on the intro scene - but not a pause button.
It's a good idea to give your player the option to mute the game early on in case they're playing in a situation where they don't want sound output.
However, on this particular screen a pause button is irrelevant, since the intro scene will loop forever until the user taps or clicks Start.
Creating and Linking Separate Levels
At a high level, the process of creating and linking together separate levels involves writing a separate .js file representing each level and placing it in the src/ directory.
I'd strongly suggest naming each file after the scene or level it renders.
This way you can always quickly and easily find the file you need to edit when you want to make a change to a particular level.
Once we're code complete on our game, our build tool will gather up all these separate .js files and concatenate them into a single file that represents our Phaser game.
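To make the linking concrete, here's a hedged sketch of the bootstrap code that registers each state with Phaser's state manager and kicks things off. The state keys come from the snippets in this post, but the constructor names shown for the later levels and the game dimensions are assumptions rather than CanyonRunner's actual values:
javascript
//Hypothetical bootstrap file - registers every state, then starts the first one
var game = new Phaser.Game(800, 600, Phaser.AUTO, 'game');

game.state.add('Preloader', CanyonRunner.Preloader);
game.state.add('MainMenu', CanyonRunner.MainMenu);
game.state.add('Level1', CanyonRunner.Level1);
game.state.add('NavigationBandit', CanyonRunner.NavigationBandit);

//Switching levels later is just a matter of calling state.start with the right key
game.state.start('Preloader');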
Making Levels Unique with New Game Mechanics
In the case of CanyonRunner, I looked at each level as an opportunity to explore a new game mechanic.
Level 1 is the most straightforward level and the easiest to complete.
This is because I'm conscientiously using it to introduce players to the first main game mechanic that will be carried throughout the game: avoiding craggy and somewhat randomized spires.
While Level 1 is not the most exciting or challenging level, it is a necessary introduction to a few key facts that a successful player of CanyonRunner must understand:
Spires are coming at you constantly, in varying and sometimes unpredictable positions
Smashing into a spire causes you damage
Your rocket can take up to 3 hits before exploding
You can "heal" your rocket by catching medkits (yes, medkits - it's a cartoony game)
When backed into a corner, you can blast spires with your missiles
Your missiles are unlimited, but take a long time to reload, so you have to fire carefully
When you break out the key lessons like this, it becomes clear that there's actually a good deal going on in the first level.
Meanwhile, these are concepts that are true for every subsequent level in CanyonRunner, so it's Level 2 is my personal favorite and it introduces a completely new game mechanic: dogfights.
In level 2, you are hounded by a series of bandits armed with missiles as powerful as your own.
Taking a hit from one of these enemy missiles means instant death and restarting the level.
There's an entire mini-game to figuring out how the bandits behave and track you, and what you have to do in order to actually hit them with a missile.
At the same time you're dodging enemy missiles and trying to shoot down bandits, those spires and healthkits are still coming at you in somewhat randomized fashion.
Thus, the gameplay here is compounding and builds upon previous mechanics.
At times, you may be forced by an enemy missile to take a direct hit from a spire, since you know your rocket can withstand up to three of those, but will be instantly destroyed by an enemy missile.
Level 3 likewise introduces its own unique mechanic: a vicious meteor storm.
While continuing to dodge spires and collect healthkits that are coming at you horizontally, you must also successfully dodge somewhat randomized and potentially very fast meteoroids that are raining down on you vertically.
As is true with the spires, your rocket can survive up to three direct hits from a meteoroid.
The now two planes of deadly obstacles compound on one another to create the most movement-intensive level in CanyonRunner.
Incrementally introducing new game mechanics in this manner is a good way to increasingly challenge your players as your game progresses, while still making the gameplay feel logical and sensible.
Throwing all of these mechanics at the player from the get-go could result in the game feeling overly complex and runs the risk of alienating and overwhelming the player, who is then unlikely to return.
It's far better to allow the player to make some degree of progress while learning the core mechanics before throwing your biggest challenges at them.
Handling transitions between levels
Let's take a quick look at how level1.js handles the next 3 possible states following play:
The user succeeds - and should be passed to the next scene
The user fails - and should be passed to the main menu after we preserve their game data
The user quits - and should be passed directly to the main menu
Here are the functions at the end of level1.js that handle these transitions.
While the crux of switching states is calling Phaser's state.start method, you'll usually want to perform certain tear-down or data persistence tasks before making the switch:
javascript
handleUserDataLoss: function() {
//Handle Player Scores and Times
this.interval = 0;
this.step = this.playerStats.topScore - this.interval;
if (this.score > this.step) {
this.playerStats.topScore = this.interval + this.score;
}
this.playerStats.topTime = this.playerStats.topTime + this.survivalTimer.seconds;
localStorage.setItem('Canyon_Runner_9282733_playerStats', JSON.stringify(this.playerStats));
//Reset Game After Pause
this.resetTimer = this.game.time.create(this.game);
this.resetTimer.add(4000, function () {
this.explosion.kill();
this.game.state.start('MainMenu');
}, this);
this.resetTimer.start();
},
handleUserDataLevelComplete: function() {
//Handle Player Scores and Times
this.playerStats.topScore = 50;
this.playerStats.topTime = this.playerStats.topTime + this.survivalTimer.seconds;
//Set Highest Level Completed by Player
this.playerStats.returnPlayerToState = 'NavigationBandit';
localStorage.setItem('Canyon_Runner_9282733_playerStats', JSON.stringify(this.playerStats));
this.buttonAdvance = this.game.add.button(350, 500, 'sprites', this.nextLevel, this, 'advance-button', 'advance-button', 'advance-button');
this.buttonAdvance.fixedToCamera = true;
},
nextLevel: function() {
this.sound.stop('success');
this.state.start('NavigationBandit');
},
quitGame: function() { //handler name assumed for illustration - bound to the level's quit button
this.state.start('MainMenu');
}
In the case of the player failing or succeeding on the given level, their latest score, their furthest position in the game (as stored in the returnPlayerToState attribute) and their current in-game time are stored via the game's save system before the player is advanced to the next state.
See the next section for a complete treatment of a Local Storage based game-save system.
Creating a game save system
HTML5 features a robust storage system known as Local Storage.
Local storage offers an attractive means of persisting user data for HTML5 game developers.
It is widely supported across many different browser and devices and offers a simple interface for storing and retrieving custom objects.
In the case of CanyonRunner, I store a few key things on the user's system so that I can persist their game progress in case they complete only one or two levels in one session and return later.
I call this object playerStats - it's a json object with 3 attributes:
The user's Top Score (represented by the number of spires they've avoided)
The user's current time-in-game represented by the number of seconds they've survived
The name of the game state that the user should be returned to (updated as they progress through the game)
javascript
//////////////////////
//READ LOCAL STORAGE
//////////////////////
this.playerStats;
if (localStorage.getItem('Canyon_Runner_9282733_playerStats') != null) {
this.playerStats = JSON.parse(localStorage.getItem('Canyon_Runner_9282733_playerStats'));
} else {
this.playerStats = { topScore: 0, topTime: 0, returnPlayerToState: 'HowToPlay'};
}
The create function of a given Phaser state is the perfect time to inspect localStorage to see if the player already has an object stored (and to create one if they don't).
Invoking the Local Storage API, I use the localStorage.getItem method to check for the special object name I use to set save objects for CanyonRunner.
The idea here is similar to namespacing your WordPress plugins - you don't have control over the storage keynames that other developers might write to the user's browser via other games, webapps or websites.
To prevent collisions, you should namespace your storage object's name to your game - adding some random numbers decreases the chances of collision.
In the gist above, you can see the logic for updating the player's progress and scores in the handleUserDataLoss and handleUserDataLevelComplete functions.
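To round out the save system, here's a minimal sketch of a pair of helpers that wrap the namespaced key shown above. The helper names are hypothetical - CanyonRunner inlines this logic in its states - but the key and the default object come straight from the snippets in this section:
javascript
//Hypothetical helpers - the key name and default object match the code above
var PLAYER_STATS_KEY = 'Canyon_Runner_9282733_playerStats';

function loadPlayerStats() {
    var saved = localStorage.getItem(PLAYER_STATS_KEY);
    if (saved != null) {
        return JSON.parse(saved);
    }
    //First-time players start at the HowToPlay state with zeroed stats
    return { topScore: 0, topTime: 0, returnPlayerToState: 'HowToPlay' };
}

function savePlayerStats(playerStats) {
    localStorage.setItem(PLAYER_STATS_KEY, JSON.stringify(playerStats));
}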
Creating different experiences for desktop and mobile devices
This is probably my personal favorite feature of CanyonRunner.
Let's say I have CanyonRunner set up and hosted at a particular URL.
If you visit this URL with your desktop / laptop browser, you'll get the full desktop version - complete with the keyboard control scheme and the extra fancy (and resource intensive!) particle effects like rocket and missile afterburners and glowing healing mist on healthkits.
However, should you happen to hit the same URL with your smartphone, you'll be given the optimized mobile version, with touchpad controls rendered right over the game scene, and no particle effects (to drastically improve mobile performance).
I implemented this feature because I wanted one single instance of the CanyonRunner game to work for all players regardless of what device they were using to play.
As the game developer, this also makes my life easier, because once I have the logic and assets in place to handle and serve the two different versions of the game, I don't have to worry about supporting and keeping on parity two actually separate codebases.
The two main pieces to this feature are the game logic that checks for whether the player is using a desktop or mobile device, and the assets and functions that work together to render the mobile touchpad on screen and bind its buttons to the correct player actions.
Let's take a look:
javascript
CanyonRunner.Level1.prototype = {
create: function() {
///////////////////
//START MUSIC
///////////////////
this.sound = this.game.add.audioSprite('sound');
this.sound.play('aronara');
//////////////////
//SET BACKGROUND
//////////////////
this.background = this.game.add.tileSprite(0, -100, 2731, 800, 'desert');
this.background.fixedToCamera = true;
///////////////////////
//CREATE TOUCH GAMEPAD
///////////////////////
//Only Mobile Gets Touchpad
if (!this.game.device.desktop) {
this.buttonUp = this.game.add.button(this.game.world.centerX - 300, this.game.world.centerY + 50, 'sprites', null, this, 'up-arrow', 'up-arrow', 'up-arrow');
this.buttonUp.fixedToCamera = true;
this.buttonUp.onInputDown.add(function() {
this.up = true;
}, this);
this.buttonUp.onInputUp.add(function() {
this.up = false;
}, this);
this.buttonRight = this.game.add.button(this.game.world.centerX - 200, this.game.world.centerY + 100, 'sprites', null, this, 'right-arrow', 'right-arrow', 'right-arrow');
this.buttonRight.fixedToCamera = true;
this.buttonRight.onInputDown.add(function() {
this.right = true;
}, this);
this.buttonRight.onInputUp.add(function() {
this.right = false;
}, this);
this.buttonDown = this.game.add.button(this.game.world.centerX - 300, this.game.world.centerY + 150, 'sprites', null, this, 'down-arrow', 'down-arrow', 'down-arrow');
this.buttonDown.fixedToCamera = true;
this.buttonDown.onInputDown.add(function() {
this.down = true;
}, this);
this.buttonDown.onInputUp.add(function() {
this.down = false;
}, this);
this.buttonLeft = this.game.add.button(this.game.world.centerX - 400, this.game.world.centerY + 100, 'sprites', null, this, 'left-arrow', 'left-arrow', 'left-arrow');
this.buttonLeft.fixedToCamera = true;
this.buttonLeft.onInputDown.add(function() {
this.left = true;
}, this);
this.buttonLeft.onInputUp.add(function() {
this.left = false;
}, this);
}
//Desktop & Mobile Get Different Firing Buttons
if (this.game.device.desktop) {
this.fireButton = this.game.add.button(this.game.world.centerX - 60, this.game.world.centerY - 300, 'sprites', null, this, 'fire-missile-button-desktop', 'fire-missile-button-desktop', 'fire-missile-button-desktop');
this.fireButton.fixedToCamera = true;
this.fireButton.onInputDown.add(function() {
this.fireMissile();
}, this);
} else {
this.fireButton = this.game.add.button(this.game.world.centerX - 350, this.game.world.centerY - 150, 'sprites', null, this, 'fire-missile-button-mobile', 'fire-missile-button-mobile', 'fire-missile-button-mobile');
this.fireButton.fixedToCamera = true;
this.fireButton.onInputDown.add(function() {
this.fireMissile();
}, this);
}
...
You can see I'm leveraging Phaser's game.device.desktop method to determine which type of device the player is using, allowing me to implement the two control schemes within an if else statement.
Notice that when rendering the mobile gamepad, I'm setting each button's fixedToCamera property to true.
Given that CanyonRunner is a side-scroller, doing this prevents the buttons from sliding off the screen at the start of the level, which would make them considerably less useful to the player.
Phaser's helper device methods that determine which kind of device your players are using make it easy to optimize your game experience for desktop, mobile and tablet form-factors simultaneously.
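The same device check can gate the resource-intensive particle effects mentioned above. The snippet below is a sketch rather than CanyonRunner's exact code, but the emitter configuration mirrors the desktop emitter shown earlier in this post:
javascript
//Only desktop players get the afterburner particle emitter
if (this.game.device.desktop) {
    this.emitter = this.game.add.emitter(this.game.world.centerX, this.game.world.centerY, 400);
    this.emitter.makeParticles('sprites', ['fire1', 'fire2', 'fire3', 'smoke-puff']);
    this.emitter.gravity = 200;
    this.emitter.setAlpha(1, 0, 2000);
    this.emitter.setScale(0.4, 0, 0.4, 0, 2000);
    this.emitter.start(false, 3000, 3);
}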
Creating multiple endings depending upon player performance
Recent triple A titles as well as classic old school games have explored the concept of multiple endings.
Multiple endings increase replay value by allowing players to do multiple playthroughs, following different paths or making different major plot decisions depending on the type of ending they are trying to get.
Multiple endings also allow you to make thematic statements about the kinds of choices, behaviors or chance occurrences that lead to your protagonist achieving either glory or infamy, salvation or condemnation.
I wanted to explore this concept with CanyonRunner, so I implemented a simple multiple ending system.
You will get one of two possible endings when you play through CanyonRunner, depending upon how quickly you complete the game.
This is one of the reasons that I keep track of the player's "Top Time" or total number of seconds since beginning to play through Level 1.
This concept of time being precious and limited is thematically harmonious with CanyonRunner's story: you are racing desperately needed food and supplies home to your family in a barren post-apocalyptic wasteland.
If you take too long doing so, you simply no longer have a family to return to.
Creating multiple endings
If you want to implement multiple endings in your own Phaser game, the underlying logic of how you determine which ending a player unlocks is up to you, but here's a high level overview of how you would organize such a concept in your code:
As the player progresses through your game, you keep tabs on one or more performance metrics.
This could be their total score, how many hostages they rescued, what physical percentage of the world they explored and walked over, how much gold they ended up with, how many innocents they waxed, etc.
If you want this to persist between game sessions, you'll want to store this information either via Local Storage, a cookie, or your user database if you have one.
After the player has completed the final level, or slain the final boss, or found the final hidden object, at whichever point in your particular game the player is considered to have "won", you can have some logic that inspects this player performance information to make a determination about which game state they will proceed to.
Maybe your player collected over 1500 gold throughout the course of playing, and rescued 25 innocents, so they'll receive the "You are rich and beneficent and live happily ever after" ending.
Maybe they killed every NPC they came across to enrich themselves, so they'll get the "You're an infamous monster that nobody likes" ending.
At this point, actually showing the player the correct ending is simply a matter of calling game.state.start with the right state name for the ending they've earned.
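As a generic sketch of that flow - the metrics, thresholds and state keys below are invented for illustration, and the CanyonRunner-specific version appears in the next section - the ending decision can live in a single function that you call once the player has won:
javascript
//Illustrative only - pick whichever metrics and endings make sense for your game
function determineEndingState(playerStats) {
    if (playerStats.goldCollected > 1500 && playerStats.innocentsRescued >= 25) {
        return 'RichAndBeneficentEnding';
    }
    return 'InfamousMonsterEnding';
}

//Once the player has "won", jump straight to the ending they earned
game.state.start(determineEndingState(playerStats));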
Creating the Ending-Determining Game Logic
Let's take a look at how I implemented this in CanyonRunner.
Regardless of which ending the player will ultimately unlock, all players will see this interstitial scene after completing the 3rd level.
It's the scene that shows the CanyonRunner obtaining a lock on their home beacon and descending to land at home.
This makes it a great place to execute the logic that determines which ending to give the player, since this is something that can be done in the background while the player is watching the actual scene on screen.
You can see where I'm determining and starting the correct ending within the rocketLanding function:
javascript
CanyonRunner.EmotionalFulcrum = function(game) {
this.angelicVoices = null;
};
CanyonRunner.EmotionalFulcrum.prototype = {
create: function() {
this.sound = this.game.add.audioSprite('sound');
this.sound.play('sonar');
//Set Background
this.background = this.game.add.tileSprite(0, 0, 1200, 800, 'sad-desert');
this.background.fixedToCamera = true;
/////////////////////////////
//CREATE SOUND TOGGLE BUTTON
/////////////////////////////
this.soundButton = this.game.add.button(this.game.world.centerX + 240, this.game.world.centerY - 290, 'sprites', this.toggleMute, this, 'sound-icon', 'sound-icon', 'sound-icon');
this.soundButton.fixedToCamera = true;
if (!this.game.sound.mute) {
this.soundButton.tint = 16777215;
} else {
this.soundButton.tint = 16711680;
}
//////////////////////
//READ LOCAL STORAGE
//////////////////////
this.playerStats;
if (localStorage.getItem('Canyon_Runner_9282733_playerStats') != null) {
this.playerStats = JSON.parse(localStorage.getItem('Canyon_Runner_9282733_playerStats'));
} else {
this.playerStats = {
};
}
//////////////////
//CREATE PLAYER
//////////////////
this.player = this.game.add.sprite(64, 64, 'sprites', 'rocket-sprite');
this.player.y = 120;
this.game.physics.enable(this.player, Phaser.Physics.ARCADE);
///////////////////////////////////
//Create Particle Jet Engine Burn
///////////////////////////////////
this.emitter = this.game.add.emitter(this.game.world.centerX, this.game.world.centerY, 400);
this.emitter.makeParticles('sprites', ['fire1', 'fire2', 'fire3', 'smoke-puff']);
this.emitter.gravity = 20;
this.emitter.setAlpha(1, 0, 3000);
this.emitter.setScale(0.4, 0, 0.4, 0, 5000);
this.emitter.start(false, 3000, 5);
this.emitter.emitX = this.player.x - 25;
this.emitter.emitY = this.player.y + 30;
this.burnEngines = true;
this.descendToLearnTheTruth();
},
update: function() {
this.emitter.emitX = this.player.x - 25;
this.emitter.emitY = this.player.y + 30;
if (this.landing) {
this.landingEmitter.emitX = this.player.x + 27;
this.landingEmitter.emitY = this.player.y + 30;
}
//At rest, player should not move
this.player.body.velocity.x = 0;
this.player.body.velocity.y = 0;
this.playerSpeed = 250;
this.backgroundTileSpeed = 4;
//Scroll background for flying appearance
if (this.slowRocket) {
this.background.tilePosition.x -= 4;
this.sound.play('sonar-found');
} else if (this.stopRocket) {
this.background.tilePosition.x = 0;
if (Math.floor(this.player.angle) == -90) {
this.stopRocket = false;
this.player.angle = -90;
this.rocketLanding();
}
this.player.angle -= 2;
} else if (!this.landing) {
this.background.tilePosition.x -= 10;
}
},
toggleMute: function() {
if (!this.mute) {
this.game.sound.mute = true;
this.mute = true;
this.soundButton.tint = 16711680;
} else {
this.game.sound.mute = false;
this.mute = false;
this.soundButton.tint = 16777215;
}
},
descendToLearnTheTruth: function() {
this.sound.play('sonar-found');
this.homeSignatureLockedTextStyle = {
};
this.homeSignatureLockedTextString = "Home Signature Detected! Calculating Landing Trajectory!"
this.homeSignatureLockedText = this.game.add.text(this.player.x + 20, this.player.y, this.homeSignatureLockedTextString, this.homeSignatureLockedTextStyle);
this.homeSignatureLockedTextExpiration = this.game.time.create(this.game);
this.homeSignatureLockedTextExpiration.add(4000, function() {
this.homeSignatureLockedText.destroy();
}, this);
this.homeSignatureLockedTextExpiration.start();
this.game.add.tween(this.player).to({
}, 5000, Phaser.Easing.Linear.None, true);
this.descendTimer = this.game.time.create(this.game);
this.descendTimer.add(4900, function() {
this.slowRocket = true;
this.emitter.kill();
}, this);
this.descendTimer.start();
this.beginLandingTimer = this.game.time.create(this.game);
this.beginLandingTimer.add(5300, function() {
this.slowRocket = false;
this.stopRocket = true;
}, this);
this.beginLandingTimer.start();
},
rocketLanding: function() {
this.sound.stop('sonar');
this.sound.play('angel');
this.landing = true;
this.landingEmitter = this.game.add.emitter(this.player.x, this.player.y, 400);
this.landingEmitter.makeParticles('sprites', ['smoke-puff']);
this.landingEmitter.gravity = 20;
this.landingEmitter.setAlpha(1, 0, 3000);
this.landingEmitter.setScale(0.4, 0, 0.4, 0, 5000);
this.landingEmitter.start(false, 3000, 5);
this.landingEmitter.emitX = this.player.x - 25;
this.landingEmitter.emitY = this.player.y + 30;
//Landing Tween
this.game.add.tween(this.player).to({
}, 10500, Phaser.Easing.Linear.None, true);
//Jump to Final Scene Timer
this.showFinalSceneTimer = this.game.time.create(this.game);
this.showFinalSceneTimer.add(10500, function() {
this.sound.stop('sonar');
this.sound.stop('angel');
if (this.playerStats.topTime > 355) {
this.state.start('EveryThingYouBelievedAboutYourFamilyWasHellishlyWrong');
} else if (this.playerStats.topTime <= 375) {
this.state.start('HomeSweetHome');
} else {
this.state.start('EveryThingYouBelievedAboutYourFamilyWasHellishlyWrong');
}
}, this);
this.showFinalSceneTimer.start();
}
};
Settings buttons - pause & mute
While implementing pause and mute buttons may seem like a small and unimportant detail, it's anything but.
It is massively annoying as a player to load up a game while trying to kill some time in a business meeting, or to steal a few moments of succor from a family dinner or spousal argument, only to have your smartphone erupt in obnoxious, poorly dubbed dubstep that you scramble to mute - only to find there is no mute button.
This is exactly the kind of oversight that will drive your players away for good.
Luckily for us, Phaser makes it simple to implement Pause and Mute buttons - so let's go ahead and do that:
javascript
/////////////////////////////
//CREATE SOUND TOGGLE BUTTON
/////////////////////////////
this.soundButton = this.game.add.button(this.game.world.centerX + 240, this.game.world.centerY - 290, 'sprites', this.toggleMute, this, 'sound-icon', 'sound-icon', 'sound-icon');
this.soundButton.fixedToCamera = true;
if (!this.game.sound.mute) {
this.soundButton.tint = 16777215;
} else {
this.soundButton.tint = 16711680;
}
//////////////////////
//CREATE PAUSE BUTTON
//////////////////////
this.pauseButton = this.game.add.sprite(this.game.world.centerX + 320, this.game.world.centerY - 280, 'sprites', 'pause-button');
this.pauseButton.inputEnabled = true;
this.pauseButton.fixedToCamera = true;
this.pauseButton.events.onInputUp.add(function() {
this.game.paused = true;
this.pauseButton.tint = 16711680;
}, this);
this.game.input.onDown.add(function() {
if (this.game.paused) this.game.paused = false;
this.pauseButton.tint = 16777215;
}, this);
...
toggleMute: function() {
if (!this.game.sound.mute) {
this.game.sound.mute = true;
this.soundButton.tint = 16711680;
} else {
this.game.sound.mute = false;
this.soundButton.tint = 16777215;
}
},
As with our mobile touchpad buttons, it's important that these controls stay fixed to the camera so they remain on screen as the level scrolls.
Notice that I conditionally tint the pause and mute buttons depending upon their status - this is an easy way to make the buttons and the entire game interface feel more responsive, as well as to provide a necessary visual signal to the player about whether or not the game is currently paused or muted.
As you can see in the code, Phaser is doing all the heavy lifting for us when it comes to actually pausing game execution or muting sound.
As developers, we need only flip the boolean property of game.sound.mute or game.paused as makes sense within our interface logic, and the framework handles it from there.
That's All for Now
I hope this tutorial and examination of some of CanyonRunner's game mechanics and features was helpful to you.
If it was, please say thanks by sharing this post or starring the CanyonRunner repo on Github.
If something isn't clear or if you'd like to see some other feature or mechanic explained that isn't called out here, or if you just have general feedback, please drop me an e-mail. |
|
Write an article about "CatFacts in Node.js" | Visit the project page on GitHub
What Developers Are Saying About This Project
Having finally reached a close developer friend after giving him an exclusive demo of Super CatFacts Attack, he had this to say about this sms-interfacing, child-process-forking open source project:
"F@KING STOP WITH THE F@KING CATFACTS DUDE.
SERIOUSLY.
THIS WAS NOT FUNNY AFTER THE FIRST 30 MINUTES.
I HAD TO PAY $75 TO AT&T ON MY ONE DAY OFF TO CHANGE MY PHONE NUMBER.
I GET THE CONCEPT OF THE PRANK AND WE'RE FRIENDS BUT YOU ARE SERIOUSLY PUSHING ME RIGHT NOW AND I DON"T APPRECIATE IT.
DO NOT TEXT ME AGAIN."
Hear the Super CatFacts Call Menu
Twilio + Node.js + Forked Child Processes = Pranking Bliss
What is this?
Start and stop elaborate CatFacts pranks on your friends and family by simply texting this app with your victim's number.
This project was a great excuse to explore some concepts I had been wanting to play with for a while:
Using SMS commands from your phone to control a server, which preserves state
Locking down said server so that it only responds to YOU or your trusted friends
Managing child processes by tagging them with custom properties (like a rare white rhino) so we can look them up later by tag AND KILL THEM
Leveraging this forked process model in such a way that we can simultaneously attack multiple targets with a single server
Weaving together Twilio's sms and telephony functionality along with various Twiml verbs to create a rich user experience
Using an Express app to generate Twiml that responds to calls and texts and plays static sound files served by the server
Using a simple json file to configure our server
Passing environment variables to a node script
What You'll Learn
This tutorial will also serve as a great introduction to using Twilio in concept, and specifically using Twilio's Node.js SDK.
This is a good tutorial for anyone that's been wanting to use Twilio's service but just hasn't gotten around to it.
After following along with building Super CatFacts Attack, it will be clear to you how you could build actually useful services using the functionality that Twilio provides.
Examples of great apps you could build implementing the concepts described in this tutorial include:
Services that do a lot of intensive crunching or network / service based lookups and return you condensed information on demand to your phone via sms
Apps that kick off tasks that are too complicated to manage on your phone while you're busy - with a simple sms command.
The app would publish the result of its work as a public page and send you the result as a bitly link
Voting systems: this project demonstrates how to accept sms messages via Twilio and handle them in your server.
You could easily build a voting system or custom sweepstakes event system following the same pattern
Mass communication systems: send a text containing special commands to your server, and have your server notify all your relevant friends, contacts, family members more efficiently than you could on your own
SMS-based games - send texts as commands or game actions in a widely distributed scavenger-hunt-style game
Enough Productivity. I DEMAND CATFACTS!
Super CatFacts Attack Workflow
You're out and about with some friends.
You subtly send a text to a special contact you have saved in your phone - the body of your message is the phone number of the friend sitting next to you.
The person you're texting isn't a person - it's a private server you own.
Your server parses the phone number you sent in your sms - and launches a CatFacts Attack against it.
Your friend receives a text message from a strange phone number: Thank you for subscribing to CatFacts! You will now receive fun facts about cats! >o<
WTF says your friend, after receiving the 10th fact in a row - this time with a message to call about their CatFacts account. Maybe you should give them a call, you suggest helpfully.
Your friend follows your advice and calls the CatFacts Call Center, where they get an IVR system that is similarly CatFacts-themed. They get a working menu and can either:
Request another CatFact to be immediately delivered to their phone
Pull up their account to determine why they were subscribed (it's because they love cats)
Request to have their account disabled (which will fail due to technical difficulties)
Hilarity ensues.
Table of Contents
What you'll find in this post:
Technical Overview
An Important Twilio Precursor
Handling Inbound SMS In Your Express App
Handling Inbound Calls in Your Express App
Generating Twiml with Node.js
Forking Child Processes (Starting Attacks)
Keeping Track of Child Processes
Murdering Child Processes (Stopping Attacks)
Technical Overview
Super CatFacts Attack is a Node.js server that exposes endpoints for receiving and processing commands that are POSTed by Twilio.
In turn, SCFA returns valid Twiml to direct Twilio to send SMS messages, respond to telephone calls, and process phone keypad presses.
Blending all of this functionality together seamlessly, Super CatFacts Attack constitutes an epic, always-available pranking service you can quickly deploy to baffle and amuse your friends and family.
An Important Twilio Precursor
If you're not familiar with Twilio, read this paragraph first.
Twilio abstracts away telephony integration for developers, allowing devs to use familiar HTTP concepts to send, receive and process SMS messages and phone calls.
When a user sends an SMS message to a Twilio number, Twilio will look up the associated endpoint (that you have built and exposed on a publicly available server and specified in your Twilio account dashboard) and make a POST request to that endpoint.
The Twilio POST request contains all the information you'll need as a developer to integrate telephony into your web app or service: the phone number of the sending user, the body of their message, where their number was registered, etc.
All of this information is organized nicely by Twilio so that you can access it in your request object.
Meanwhile, you can use client libraries (like the 'twilio' node module I'm leveraging in this project) to make valid HTTP requests to Twilio to have Twilio send SMS messages or make phone calls in response as appropriate for your app's workflow.
Twilio also has a concept of Twiml (Twilio Markup Language), whose usage is demonstrated in this project and tutorial.
That's really all you need to know for the purposes of this app - but if you want to read more in depth, check out the excellent official Twilio documentation and examples here.
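One helper you'll see used throughout the snippets below, sendResponseSMS, isn't reproduced in this post. A minimal sketch using the twilio Node module might look like the following - note that the config field names (account_sid, auth_token, twilio_number) are assumptions about config.json rather than its documented schema:
javascript
var config = require('./config.json');
//Authenticated Twilio REST client
var twilioClient = require('twilio')(config.account_sid, config.auth_token);

/**
 * Sends a single SMS from the app's Twilio number to the given recipient
 * @param {String} to - the recipient's phone number
 * @param {String} body - the text of the message
 */
sendResponseSMS = function(to, body) {
    twilioClient.messages.create({
        to: to,
        from: config.twilio_number,
        body: body
    }, function(err, message) {
        if (err) console.error('Error sending SMS:', err);
    });
};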
Handling Inbound Twilio SMS in Your Express App
Handling Inbound Twilio-based SMS in your Express app is as simple as defining a route for Twilio to POST to when your associated Twilio number (the one you configure on your Twilio dashboard) receives an SMS message.
javascript
/**
 * Handle incoming sms commands
 * Verifies that the requestor is authorized - then determines the request type (start / stop attacking)
 * Finally starts or stops an attack as appropriate
 * @param {Request} Twilio POST request - generated when a user sends an sms to the associated Twilio number
 * @param {Response} Express response
 */
app.post('/incoming-sms', function(req, res){
//Get the requestor's phone number from the Twilio POST object
var requestor = req.body.From;
//If target is currently under attack and is not an admin - and they text this number - give them a particular text response
if (isTargetBeingAttacked(requestor) && !isAuthorizedUser(requestor)){
sendResponseSMS(requestor, 'Command not recognized. We will upgrade your CatFacts account to send you facts more frequently. Thanks for choosing CatFacts!');
} else if (!isAuthorizedUser(requestor)){
//Do nothing and do not respond if requestor is unauthorized
return;
} else {
//Get body content of sms sent by requestor
var payload = req.body.Body;
//Check if this is a stop attack request - returns target number if it is
var check = isStopRequest(payload);
if(check){
//isStopRequest returns the target phone number for valid stop requests
var target = check;
//Stop the attack on the supplied number
handleAdminStopRequest(requestor, target);
} else {
//Start an attack on the supplied number
handleAdminAttackRequest(requestor, payload);
}
//Give Twilio a successful response for logging purposes
res.status(200).send('Finished processing POST request to /incoming-sms');
}
});
Twilio does a nice job of organizing all the data we need into an easy to use POST body - so we start by grabbing the phone number of the person who sent the SMS command (likely you!).
We can pass this number into some custom authentication functions that inspect config.json to determine whether or not the given user is authorized to issue commands to the server.
We also check whether or not the target is already being attacked or not - while we want to support multiple simultaneous attacks on a variety of targets, we don't want to run multiple simultaneous attacks on one single target.
We're insane, not evil.
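The authorization check itself isn't shown in this post, but a sketch of it is straightforward - it just compares the requestor against a whitelist kept in config.json. The authorized_numbers field name below is an assumption about the config schema, and the country-code normalization mirrors the helper shown later in this post:
javascript
var config = require('./config.json');

/**
 * Determines whether the given phone number belongs to an admin allowed to issue commands
 * @param {String} requestor - The phone number that sent the sms
 * @return {Boolean} - Whether or not the requestor is authorized
 */
isAuthorizedUser = function(requestor) {
    //Normalize away a leading +1 country code, as elsewhere in the project
    if (requestor.charAt(0) == '+' && requestor.charAt(1) == '1') {
        requestor = requestor.replace('+1', '');
    }
    return config.authorized_numbers.indexOf(requestor) !== -1;
};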
Handling Inbound Twilio Calls in Your Express App
Handling inbound calls is pretty similar, only we can use this an opportunity to start working with Twiml.
Twilio Markup Language is what you should return on endpoints that Twilio hits in response to your users making a phone call to a Twilio number.
When Twilio connects a phone call, it hits the endpoint you've specified on your account dashboard and parses the Twiml that you return in order to create the call experience (saying something, playing a sound, generating a menu that the user can dial their way through, etc).
We're going to do all of these now.
Let's start by defining the route that Twilio will make a POST request to when someone calls our associated Twilio phone number:
javascript
/**
 * Handle a user phoning the Super CatFacts Attack Call Center
 * @param {Request} - POST request from Twilio - generated when a user calls the associated Twilio phone number
 * @param {Response}
 * @return {Response} - Response containing valid Twiml as a string - which creates the CatFacts call center experience
 */
app.post('/incoming-call', function(req, res){
res.writeHead(200, { 'Content-Type': 'text/xml' });
res.end(generateCallResponseTwiml().toString());
});
Our simple route accepts a POST request from Twilio, writes a Content-Type header and sends back a Twiml response, which will lead to the user hearing our menu interface spoken on the phone during their call.
Let's now examine how that Twiml response is actually built:
javascript
/**
 * Generates valid Twiml that creates the Super CatFacts Attack Call Center menu experience
 * @return {String} response - valid Twiml xml complete with say, play and gather commands
 */
function generateCallResponseTwiml() {
var response = new twilio.TwimlResponse();
response.say("Thank you for calling Cat Facts!", {
})
.play(config.server_root + '/sounds/shortMeow.wav')
.say("Cat Facts is the number one provider of fun facts about cats!
All of our representatives are currently assisting other cat lovers.
Please remain on the feline!
In the meantime, please listen carefully as our menu options have recently changed.", {
})
.gather({
}, function(){
this.say("If you would like to receive a fun cat fact right now, press 1. If you would like to learn about how you were subscribed to CAT FACTS, please press 2", {
})
.say("If for some fur-brained reason you would like to unsubscribe from fantastic hourly cat facts, please press 3 3 3 3 4 6 7 8 9 3 1 2 6 in order right now", {
})
});
return response;
}
The Node SDK (available in the npm module 'twilio') exposes helpful methods for easily building up Twiml.
Once you create the initial Twiml response object, you build up its actual content by calling the various Twiml verb methods (say, play, gather, etc).
Notice that we can play static files served by Express, because we defined our static root when starting our server.
We can easily get the path by using the server root as defined in our config.json plus the filename itself.
The result is that the user gets perfectly timed cat noises - for a truly authentic experience.
The trickiest verb is gather.
Similar in concept to the action property of an HTML form, we need to specify the action URL - where Twilio can go to get the response information following the user actually dialing a key for a particular menu item - in order for our IVR menu to work properly.
Notice that I've specified a wildcard for the optional finishOnKey parameter.
This will cause Twilio to stop listening for inputs after it gets a single keypress, which will make more sense when we next examine the function that actually handles our user's inputs:
javascript
/**
 * Handle user inputs during the CatFacts Call Center Menu
 * @param {Request} req Express Request
 * @param {Response} res Express Response
 * @return {[type]} Response containing valid Twiml for Twilio to parse
 */
app.post('/catfacts-call-menu', function(req, res){
//Get the number the user pressed from the Twilio request object
var pressed = req.body.Digits;
var calling_user = req.body.From;
//Set up headers
res.writeHead(200, { 'Content-Type': 'text/xml' });
//Handle whichever number the user pressed
switch(pressed){
case '1':
//User requested a CatFact - pull a random one out of catfacts.json
var fact = require('./data/catfacts.json').random();
//Send a random CatFact to the caller
sendResponseSMS(calling_user, fact);
//Create a twiml response to build up
var twiml = new twilio.TwimlResponse();
twiml.say('One brand spanking new Cat Fact coming right up. We\'re working hard to deliver your fact. Thanks for using CatFacts and please call again!', {
})
//Play a sound that Express is serving as a static file
.play(config.server_root + '/sounds/angryMeow.wav');
//Send the response back for Twilio to parse on the fly - and play for the caller
res.end(twiml.toString());
break;
case '2':
//User wants to know why they were subscribed to CatFacts - Why, because they love cats, of course!
var twiml = new twilio.TwimlResponse();
twiml.say('Please wait one moment while I pull up your account', {
})
.play(config.server_root + '/sounds/longMeow.wav')
.say('Thanks for your patience. ' +
'You were subscribed to CatFacts because you love fun facts about cats. ' +
'As a thank you for calling in today, we will increase the frequency of your catfacts account at no extra charge', {
})
.play(config.server_root + '/sounds/angryMeow.wav')
.say('Have a furry and fantastic day', {
});
//Send account explanation response back to Twilio to parse on the fly - and play for the caller
res.end(twiml.toString());
break;
case '3':
//User wants to unsubscribe - but we don't like quitters
var twiml = new twilio.TwimlResponse();
twiml.say('We understand you would like to cancel your CatFacts account. ' +
'Unfortunately, we are currently experiencing technical difficulties and cannot process your request at this time. ' +
'To apologize for the inconvenience, we have upgraded you to a Super CatFacts Account for no extra charge', {
})
.play(config.server_root + '/sounds/angryMeow.wav');
res.end(twiml.toString());
break;
default:
var twiml = new twilio.TwimlResponse();
twiml.say('Sorry, we were unable to process your request at this time. Don\'t worry, we will send you complimentary CatFacts as an apology for the inconvenience.', {
})
.play(config.server_root + '/sounds/angryMeow.wav');
res.end(twiml.toString());
break;
}
});
You can get as fancy as you want to when parsing your user's touchtone input within a Twilio-based call menu like this.
For my purposes, a simple switch break interface sufficed.
By weaving together sounds and speech, you can create rich telephony experiences for your users.
Forking Child Processes (Starting Attacks)
I wanted to support admins sending in multiple attack commands and having the service intelligently manage them all - so that a single admin could simultaneously prank an entire dinner table's worth of suckers.
I achieve this by having a simple script, attack.js, that actually knows how to run an attack. It gets forked by the parent process every time a valid attack request is received by the server:
javascript
/**
 * Processes an authorized admin's attack request and launches the attack
 * Handles tracking the attack child process at the app-level so it can be referenced later / stopped
 * @param {String} requesting_admin - The phone number of the requesting admin
 * @param {String} target - The phone number of the target to be attacked
 * @return void
 */
handleAdminAttackRequest = function(requesting_admin, target) {
//Ensure target is not already being attacked (we have some degree of decency - no?)
if (!isTargetBeingAttacked(target)) {
//Fork a new attack process - passing the requesting admin's phone number and target phone number as string arguments
var CatfactsAttack = child_process.fork('./attack.js', [requesting_admin, target]);
//Handle messages sent back from child processes
CatfactsAttack.on('message', function(m){
switch(m.status){
case 'invalid_status':
CatfactsAttack.kill();
//Send invalid target sms back to admin
sendResponseSMS(m.requesting_admin, 'Oops! ' + target + ' doesn\'t appear to be a valid number. Attack NOT Launched!');
break;
case 'starting_attack':
//Tag the attack child_process with its target number
CatfactsAttack.target_number = m.child_target;
//Add child_process to app-level array of current attacks
beginTrackingAttack(CatfactsAttack);
//Send sms confirming attack back to admin
sendResponseSMS(m.requesting_admin, 'Attack Vector Confirmed: CatFacts Bombardment Underway! - Text: "downboy ' + m.child_target + '" to stop attack.');
break;
case 'exhausted':
//Remove number from app-level array of numbers being attacked
stopTrackingAttack(m.child_target);
//Send exhaustion notification sms back to admin
sendResponseSMS(m.requesting_admin, 'CatFacts Attack on ' + target + ' ran out of facts! Attack Complete.');
CatfactsAttack.kill();
break;
}
});
}
}
Notice that we can pass an array of string arguments to the child process when we fork it.
The child process can then access these variables and use them privately.
The parent process and child process can also message each other back and forth as shown here.
This allows us to write a service where the parent properly supervises and manages child processes based upon their self-reported states.
Forking our individual attacks in this way allows us to ensure that every victim gets their own private "attack context", starting with the initial introductory message and always proceeding one after another in the correct order, regardless of how many other people are being attacked by the same server at any given time.
Another key line here happens in the starting_attack case.
Notice that I'm "tagging" the child process with the target_number it is actually running its attack on.
I'm using this tag as a unique identifier, so that I can look up the child processes later on by their target number when an admin says it's time to stop attacking a given phone.
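To make the parent/child message contract concrete, here's a sketch of what the child side of that conversation - attack.js - might look like. The validation, pacing and Twilio wiring are simplified stand-ins rather than the project's actual implementation, and the config field names are the same assumptions as in the earlier sendResponseSMS sketch:
javascript
//attack.js (sketch) - forked with [requesting_admin, target] as string arguments
var config = require('./config.json');
var client = require('twilio')(config.account_sid, config.auth_token);
var facts = require('./data/catfacts.json');

var requesting_admin = process.argv[2];
var target = process.argv[3];

function sendFact(body) {
    client.messages.create({ to: target, from: config.twilio_number, body: body }, function(err) {
        if (err) console.error('Error sending CatFact:', err);
    });
}

//Very rough sanity check on the target number
if (!/^\+?1?\d{10}$/.test(target)) {
    //Parent kills this process after relaying the error to the admin
    process.send({ status: 'invalid_status', requesting_admin: requesting_admin, child_target: target });
} else {
    //Tell the parent we're starting so it can tag and track this process by target number
    process.send({ status: 'starting_attack', requesting_admin: requesting_admin, child_target: target });
    sendFact('Thank you for subscribing to CatFacts! You will now receive fun facts about cats! >o<');
    var index = 0;
    var interval = setInterval(function() {
        if (index >= facts.length) {
            clearInterval(interval);
            //Out of facts - let the parent clean up and notify the admin
            process.send({ status: 'exhausted', requesting_admin: requesting_admin, child_target: target });
            return;
        }
        sendFact(facts[index]);
        index++;
    }, 60 * 1000); //one fact per minute - the pacing here is a made-up value
}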
Now, let's take a look at a rudimentary manner of keeping tabs on these child processes so we can look them up by number later and kill them.
Tracking Child Processes
Here's a simple way to keep tabs on the various simultaneous attacks that might be running at any given time.
Each time we kick off a valid child process CatFacts attack, we store it in an app-level array:
javascript
/**
 * Adds given child_process to the app-level array of running attacks so it can be terminated later
 * @param {Object} child_process - A node child process representing a currently running attack
 * @return void
 */
beginTrackingAttack = function(child_process) {
var currentAttacks = app.get('activeAttacks');
currentAttacks.push(child_process);
app.set('activeAttacks', currentAttacks);
}
This makes it simple to look up processes later by their target_number property:
javascript
/**
 * Helper method that determines whether or not a supplied number is currently under attack
 * @param {String} target - the phone number to check for current attacks
 * @return {Boolean} targetIsBeingAttacked - Whether or not the given number is under attack
 */
isTargetBeingAttacked = function(target) {
if (target.charAt(0) == '+' && target.charAt(1) == '1'){
target = target.replace('+1', '');
}
var targetIsBeingAttacked = false;
var currentAttacks = app.get('activeAttacks');
if (!currentAttacks.length) return false;
currentAttacks.forEach(function(currentAttack){
if (currentAttack.target_number == target){
targetIsBeingAttacked = true;
}
});
return targetIsBeingAttacked;
}
Murdering Child Processes (Stopping Attacks)
When an admin sends a stop command with the number of the victim whose phone should no longer be bombarded, we need to look up the currently running attack using that number, and kill it:
javascript
/**
 * Finds a currently running attack by phone number and terminates it in response to an admin stop attack request
 * @param {String} requesting_admin - The phone number of the admin requesting a stop
 * @param {String} target_number - The phone number that should not be attacked anymore
 * @return void
 */
handleAdminStopRequest = function(requesting_admin, target_number) {
var currentAttacks = app.get('activeAttacks');
var foundAttack = false;
if (!currentAttacks.length) return;
currentAttacks.forEach(function(currentAttack){
if (currentAttack.target_number == target_number){
foundAttack = currentAttack;
}
});
if (foundAttack){
foundAttack.kill();
sendResponseSMS(requesting_admin, 'Successfully terminated CatFacts Attack on ' + target_number);
}
} |
|
Write an article about "Automations - shell scripts leveraging OpenAI to make your developer workflow buttery smooth and way more fun" | What is this?
automations is a collection of shell scripts that automatically handle git operations, provide local code reviews, pull requests, and more!
These scripts leverage some of my favorite Go CLI libraries from the talented folks of github.com/charmbracelet.
I wrote them for fun and also to save me time, and, as I continue to use them and polish them every time I find an issue, I am happy to report that they are indeed fun to use.
I'm pretty sure they're making me faster as well.
Current automations
With that out of the way, let's get into the automations themselves:
autogit - ensures you're working with the latest code locally
autoreview - get a senior-level code review in your terminal
autocommitmessage - always write excellent, clear commit messages
autopullrequest - open pull requests with clear titles and descriptions
autogit
When you change into a new local repository, autogit runs (thanks to an alias you've configured for your shell) and ensures that your default branch is up to date with the remote.
This helps prevent you from opening pull requests with stale changes.
autoreview
Want a Senior-level code review right in your terminal to help you catch all the low-hanging fruit before you bug a human teammate? autoreview does just that.
Here's an example of the kind of review it can produce, and it writes each review to ~/.autoreview so that you can revisit them or lift code from them later.
autocommitmessage
So far, this may be the automation that has saved me the most time. I love the way it works because it's not just stamping out some low quality commit message that nobody can parse out later.
It actually describes the context of your changes pretty effectively, so it's currently hitting the sweet spot (at least for me) of combining increased speed with increased quality.
Typing my personal alias for it, gcai, is so fast there's no excuse for writing "check in latest" anymore.
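The real automations are shell scripts, but to illustrate the core idea behind autocommitmessage, here's a rough Node.js sketch: grab the staged diff, hand it to a chat model, and print the suggestion. The prompt and model choice below are assumptions for illustration, not what the actual script does:
javascript
//Illustrative only - the real autocommitmessage is a shell script, not this Node program
const { execSync } = require('child_process');
const OpenAI = require('openai');

const client = new OpenAI(); //reads OPENAI_API_KEY from the environment

async function suggestCommitMessage() {
    const diff = execSync('git diff --staged', { encoding: 'utf8' });
    if (!diff.trim()) {
        console.log('No staged changes to describe.');
        return;
    }
    const response = await client.chat.completions.create({
        model: 'gpt-4', //model choice is an assumption
        messages: [
            { role: 'system', content: 'Write a clear, concise git commit message describing the following diff.' },
            { role: 'user', content: diff }
        ]
    });
    console.log(response.choices[0].message.content);
}

suggestCommitMessage();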
autopullrequest
This is the most recent automation, but it's pretty promising so far.
It uses the official GitHub CLI, gh to open pull requests.
It uses ChatGPT4 to generate clear, succinct and professional PR descriptions and titles, and then it opens the pull request for you.
Here's an example pull request it recently opened for me:
What's next?
I'll continue improving the docs and the automations themselves.
I hope to add some more automations in the future, too.
I want to look into autocompletion functionality and adding installation instructions for various shells.
A request
If you found this interesting or useful, please consider sharing it with someone you know who may be interested in the project, or sign-up for my newsletter to receive tips and projects like this as soon as I publish them.
Thanks for reading! |
|
Write an article about "Terminal velocity - how to get faster as a developer" | "You could be the greatest architect in the world, but that won't matter much if it takes you forever to type everything into your computer." Hugo Posca
Why read this article?
When you're finished reading this article, you'll understand the why and how behind my custom development setup, which has subjectively made me much faster and happier in my day to day work.
Here's a screenshot of my setup, captured from a streaming session.
If you're interested in developer productivity and tooling and you want to watch me hack on open source using my complete setup, be sure to check out my YouTube channel.
In this blog post, I'm going to show you exactly what I type in my terminal, and exactly how I navigate my filesystem and work on projects, using high fidelity demos:
Perhaps more importantly, my workflow ultimately reflects what I've learned so far in my career.
Why do I care about this so much?
I believe that when it's fun and efficient to do your work and interact with your tools and devices, you're more productive and happier.
Therefore, one of the reasons this topic energizes me is that it's an investment into making something that I do intensively for many hours per week as pleasurable and efficient as it reasonably should be.
But there's also another important reason: many developers throughout my career have assisted me, and have taken time out of their day to stop what they were doing to show me a better way to do something, or a new tool or shortcut.
My current skill level is a product of my constant practice and the sum total of every new technique and pattern someone more experienced took the time to relay to me.
Therefore, I am also publishing this post as a means of saying thanks and paying forward the same favor to anyone who could benefit from this information.
In this post, I share the most important things I've learned along the way.
Learning 1 - Keep your hands on the keyboard
That's what most of this really boils down to.
In general, don't use the mouse.
Browse the web using your keyboard.
Yes, it will suck initially and be uncomfortable and cause you to be slower overall.
This will not last long if you stick with it.
I'm now much faster using Vimium to handle even semi-complex tasks like reviewing a large pull request on GitHub, because I can jump directly to the HTML nodes I want, rather than having to drag a mouse across the screen multiple times.
There's a demo of me navigating GitHub with my keyboard just a bit further on in this article.
Learning 2 - The fundamentals must be speedy
"You need to move a little faster than that son. Speed is Life." Viper, Titanfall 2
For a more real world and less silly example, see also Boyd's Law.
There are certain actions you'll perform a great number of times in your career as a software developer.
You'll do them a number of times today, even!
All of these things need to be extremely fast for you to execute.
Fast like scratching an itch is - there's the impulse to be somewhere, and your fingers find the place effortlessly.
No exceptions!
Navigating to code, locally or in the browser. This means finding the correct repository and jumping into it very quickly, with minimal keystrokes.
Understanding or mapping code. This means being able to see a symbol outline (variables, functions, classes, consts, etc) of a given file and see all the files in the project arranged hierarchically
Quick pattern and string searching which allows you to answer the many questions that naturally arise as you're working with code
These tasks are each important enough to get their own treatment below.
Navigating to code, locally
I work at a company with many (> 150) repositories.
I manage this by cloning all the repositories to my development machine (using a script) and optionally running another script to step into each repository and perform a git fetch and reset.
Maintaining all the repositories I'll reasonably touch locally on my machine allows me to take full advantage of powerful command line tools like fzf and rg (ripgrep).
I haven't yet felt the need to, but I could further optimize this by creating a cron job to run the update script each morning before I start work, so that I'm always looking at the latest code.
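For illustration, here's a minimal sketch of what that kind of update script can look like in Go; the ~/repos directory layout and the main default branch are assumptions on my part, and the real scripts may well be plain shell.
go
// A minimal sketch: walk a directory of cloned repositories and, in each one,
// fetch and hard-reset to the remote default branch so local copies stay fresh.
// The ~/repos layout and "main" branch name are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	root := filepath.Join(os.Getenv("HOME"), "repos")
	entries, err := os.ReadDir(root)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		repo := filepath.Join(root, e.Name())
		if _, err := os.Stat(filepath.Join(repo, ".git")); err != nil {
			continue // not a git repository, skip it
		}
		for _, args := range [][]string{
			{"fetch", "--all", "--prune"},
			{"reset", "--hard", "origin/main"},
		} {
			cmd := exec.Command("git", args...)
			cmd.Dir = repo
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s: git %v failed: %s\n", e.Name(), args, out)
			}
		}
	}
}
A cron entry pointing at a small program like this would cover the morning-refresh idea mentioned above.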
Once I started managing my code locally, fzf particularly began to shine as a tool for jumping quickly to any directory on my system.
As a fuzzy-finder, fzf can do much more than that, but if you use it only for quick jumps to different directories, you'll still be deriving a great deal of value from it.
fzf in action
The default is navigation
If I run vim in the root of any directory, my neovim setup will automatically open the directory in neotree, which makes navigating and understanding the file system easy and fast.
Navigating to code, in the browser
For general keyboard-based browsing, I use the Vimium plugin for Firefox. Here's a demo of me navigating an actual pull request on GitHub and demonstrating how easy (and quick) it is to:
Comment on any line
Jump to any file changed in the pull request
Expand and use even the special functions behind an ellipses menu
Start editing a file within a pull request, if desired
Understanding or mapping code quickly
When I open a new project or file, I want to orient myself quickly and develop a general understanding of the project layout and the structure of the program.
To accomplish this, I lean on AstroNvim's configuration to pop up a symbol outline in my current file that I can use to navigate and understand the code:
Finding files, code or arbitrary strings on your local machine
Whether you're working on a new feature, trying to orient yourself to a new codebase or performing upgrades across several repositories, you're naturally going to have a lot of questions about your source code come up:
Which versions of this software are currently deployed?
How many places does that string we need to replace occur?
Did that same typo we've seen elsewhere occur in this code?
And many, many more.
You'd be surprised how many of these questions ripgrep can answer for you.
I recommend learning as many of the flags for rg as you can.
I picked up ripgrep a few years ago and it remains one of the things I run constantly throughout the day to get my work done more efficiently.
Learning 3 - Reflow your workspace to fit your work
I may execute many different tasks during the course of a day: write a new feature in one language, fix up some flaky or slow tests in another, write an RFC in markdown, create new configuration files, perform a deployment, etc.
Work like this often involves getting data from one place, like a terraform output, into a command line argument, and then copying and pasting the output of that command into the separate log file you're keeping, which you then separately want to pipe into another operation in a different pane.
I think of my tmux panes as unix pipes.
The main idea is that my workspace is a fluid thing that can shift and scale up or down in order to accommodate the task at hand.
If I'm writing code that needs to know the docker image ID of a particular image I built recently, I can pop open a new tmux pane and run whatever Docker commands I need to get that information.
Because each pane is a shell, I can script against and further process my output with any and every unix tool to get exactly what I want.
Let's make this more concrete with an example.
In the following demo gif, I use neovim to edit code, which means my editor is just another tmux pane.
In this case, I'm writing some code that needs a Docker image ID.
I need only create a new split and do my Docker work there.
When I have the Docker image ID I need, I can close the extra pane, recovering the full screen's real estate for my focused coding task.
In my day to day work, I might have between 3 and 8 different terminal panes open on a single screen, depending on the task.
Panes show up to do some work and get some output that can be easily piped to any other pane.
Panes whose work is finished can get closed down, recovering screen real-estate for other tasks.
I constantly reflow my workspace to my work.
Desktops - with an s
Awesome Window Manager allows me to organize my work across two external monitors into 9 Windows each. This is extremely handy and something I use everyday.
Here's a rough idea of how I divide up my Windows to provide you with some inspiration, but everyone is likely going to settle on an arrangement that makes them happy:
Comms (Slack, email)
Notes / Second brain (Obsidian.md)
Spotify
Zoom call
Main tmux window for editing code, with all sessions stored as tmux sessions
6, 7, and 8 are my utility windows. I currently run my StreamDeck UI and logs across these
Browser windows for whatever I'm actively working on
Having these windows divided up in this way simplifies context-switching throughout the day for me.
I always know exactly which window has which kind of application running in it, so it's snappy and natural to switch back and forth between them as needed, even while pair-coding or on a Zoom call.
Next up, and a request
That's it for this introductory post! In forthcoming posts in this series, I'll go deep on:
setting up these tools - how to install and configure them
managing your configuration with git for recovery and reuse across multiple machines
shell optimizations that compound the speed boosts
advanced patterns, custom shell functions, additional use-cases, demos and useful scripts
And now, a humble request.
If you've found this article at all helpful or interesting, please share it with someone you think could benefit from the information.
And, if you have feedback, questions or other content you'd like to see in the future, please don't hesitate to reach out and let me know.
Thank you for reading! |
|
Write an article about "Announcing the Pinecone AWS Reference Architecture in Pulumi" | export const href = "https://pinecone.io/blog/aws-reference-architecture"
I built Pinecone's first AWS Reference Architecture using Pulumi.
This is the sixth article I wrote while working at Pinecone:
Read article |
|
Write an article about "Glossary of tech phrases" | If you came here from Hacker News...
Then I've learned that I need to explicitly spell out in the beginning of the post that this article is partly facetious.
| Phrase said under stress| Real meaning |
|---|---|
| Thanks for the feedback. | Fuck you. |
| Thanks for the feedback! (with an exclamation point) | Die in a slow-moving, low-temperature and ultimately avoidable fire.|
| That's interesting. | You are unburdened by talent. |
| Could you please file an issue? | Swim into a boat motor. Face-first, but get your arms in there, too. |
| Like I said... | Clean the shit out of your ears. |
| Let's table this for the time being. We've got N minutes left and a lot of agenda items remaining. | I'm sick of hearing your voice. Shut the fuck up. Your time management is even worse than your hygiene. |
| How about we do this...| I'm tired of waiting for you to get it, so now I'm going to distract you with a false compromise. |
| Sounds good | I have zero faith in your ability to deliver. None. So I'm passive-aggressively pointing to the fact that it only sounds good, because it's never going to be. |
| Let me know if I can do anything to help | This is completely on you and I don't want to hear about it again. |
| Let's take some action items. | I can't understand the power dynamics on this Zoom call yet but I'm incapable of contributing to the problems we're discussing here |
| \<Name>, can you take an action item? | I'm a Muppet missing the adult hand that goes up my ass and tells me what to say. I can tell you're all starting to catch on, so in my low-grade panic I'm going to try to distract all of you while I continue to wait around for the adult to show up...|
| NIT: I prefer a more \<vague adjective> coding style...| I'm finally secure enough in this institution to enforce my father-shaped emotional void upon you and our codebase. I never learned to wash my ass properly. |
| Great idea, \<executive above me>! | You could have kept my entire salary plus benefits and skipped hiring me with no impact whatsoever on final outcomes or team strength. |
| Great idea, \<individual contributor below me>!| Your idea is now my idea. I'm going to start pitching it as my original work starting in my next meeting beginning in 10 minutes.|
|Happy to help!|I'm updating my resume.|
| I'll take a look | Let me Google that for you, you incompetent ninny. I'm also positive you got a larger equity grant than I did. |
| You're one of our top performers | We're underpaying you. | |
|
Write an article about "" | ;
export default (props) => |
|
Write an article about "Tech I wish" | ;
Tell me where to send my money!
In this post I will dream big about "super hearing aids" that extend my senses and enhance my intelligence.
I don't believe we're that far off from this product existing, so if you know of something similar,
feel free to leave a comment at the bottom of this page.
LLM onboard with vector storage and internet access
They need to be small enough to be comfortably worn, yet I expect unreasonably long battery life. They should be hard to notice yet unobtrusively enhance their wearer's senses and intelligence.
For example, an onboard LLM could auto-translate any incoming language to my native tongue and help me remember important things. They can be subtly signaled to start or stop listening and recording things, and then eventually rules can be set based on context (never record inside my home).
Pushes me real time alerts that relate to my safety or my family's safety
In essence, the super hearing aids are a blend of our best state of the art technologies that are easy to keep on me all day and which attempts to help keep me alive.
They will remind me how to retrace my steps and guide me back to where I parked my car, but also alert me to dangers I might not be aware of on my own. Even giving me
advance warning of a danger I would otherwise detect myself, but only later, would be advantageous.
This includes alerting me of events nearby such as shootings, extreme weather, traffic congestion, police reports that are relevant to my immediate safety, etc.
They could automatically call and route family and EMTs to my location if I fell ill or lost consciousness...
Seamlessly connects and disconnects from entertainment systems
When my super hearing aids aren't actively preserving my life, I'll expect them to preserve the ideal audio balance so that I can hear what's important. I'd like them to pair automatically with my devices so I can hear the TV across the room without blasting everyone else in the room with higher volume.
I'd like to be able to hear my game and have a conversation at the same time.
For less leisurely times, such as in stressful interpersonal conflicts, or certain professional settings, I'd like to be able to switch on recording and auto-transcription,
ideally without it being obvious.
My expectations are high. What's going to be the first device to check off all these boxes? |
|
Write an article about "Pinecone vs FAISS" | Table of contents
vector database comparison: Pinecone vs FAISS
This page contains a detailed comparison of the Pinecone and FAISS vector databases.
You can also check out my detailed breakdown of the most popular vector databases here.
Deployment Options
| Feature | Pinecone | FAISS |
| ---------| -------------| -------------|
| Local Deployment | ❌ | ✅ |
| Cloud Deployment | ✅ | ❌ |
| On-Premises Deployment | ❌ | ✅ |
Scalability
| Feature | Pinecone | FAISS |
| ---------| -------------| -------------|
| Horizontal Scaling | ✅ | ❌ |
| Vertical Scaling | ❌ | ✅ |
| Distributed Architecture | ✅ | ❌ |
Data Management
| Feature | Pinecone | FAISS |
| ---------| -------------| -------------|
| Data Import | ✅ | ✅ |
| Data Update / Deletion | ✅ | ✅ |
| Data Backup / Restore | ❌ | ❌ |
Security
| Feature | Pinecone | FAISS |
| ---------| -------------| -------------|
| Authentication | ✅ | ❌ |
| Data Encryption | ✅ | ❌ |
| Access Control | ✅ | ❌ |
Vector Similarity Search
| Feature | Pinecone | FAISS |
|---------|-------------|-------------|
| Distance Metrics | Cosine, Euclidean | Euclidean, Inner Product |
| ANN Algorithms | Custom | IVF, HNSW, Flat |
| Filtering | ✅ | ❌ |
| Post-Processing | ❌ | ❌ |
Integration and API
| Feature | Pinecone | FAISS |
|---------|-------------|-------------|
| Language SDKs | Python | |
| REST API | ✅ | ❌ |
| GraphQL API | ❌ | ❌ |
| GRPC API | ❌ | ❌ |
Community and Ecosystem
| Feature | Pinecone | FAISS |
|---------|-------------|-------------|
| Open-Source | ❌ | ✅ |
| Community Support | ✅ | ✅ |
| Integration with Frameworks | ✅ | ✅ |
Pricing
| Feature | Pinecone | FAISS |
|---------|-------------|-------------|
| Free Tier | ✅ | ✅ |
| Pay-as-you-go | ✅ | ❌ |
| Enterprise Plans | ✅ | ❌ | |
|
Write an article about "I'm joining Pinecone.io as a Staff Developer Advocate!" | I'm pivoting from a career of pure software engineering roles to developer advocacy!
I'm excited to share that I'm joining Pinecone.io as a Staff Developer Advocate!
Why Pinecone?
Pinecone.io makes the world's most performant cloud-native vector database, which is essential for storing and efficiently querying embeddings.
Vector databases are critical infrastructure in the midst of the
AI revolution we're currently experiencing.
Over the last half year or so, I've been getting deeper into Generative AI and its many use cases, particularly focusing on how to improve and enhance developer workflows (such as my own) with tools like ChatGPT, Codeium, etc.
But the main thing that got me interested in Pinecone was their outstanding technical content, and the very talented team behind it.
My first introduction was James Briggs's outstanding YouTube videos, which led me to Pinecone's own learning center.
As I got deeper into their examples, ran some of them and experimented, I was blown away at the quality and amount of deep technical content that Pinecone was giving away for free.
If you've read my work before, you know I'm fanatical about open sourcing things
and sharing learning for free with other developers.
I've also written about my initial, very positive, experiences with Vercel, and, in addition to James's technical content, I learned that another very talented staff developer advocate, Roie Schwaber-Cohen, was shipping outstanding
Vercel templates that implemented full-stack GenAI chatbots complete with Retrieval Augmented Generation (RAG).
If you want a great intro to what vector databases are and how embeddings can give AI models accurate long-term memories (and prevent hallucinations), check out this great intro by my
colleague Roie.
After considering a few other opportunities, I found that I was most excited about joining this team and going deep on Generative AI use cases and vector databases.
Why pivot to Developer Advocacy?
In short, I've figured out that I love technical storytelling as much as I love building systems and applications.
I've always been a writer, but since January of 2023, I've found myself increasingly itching to not only produce technical tutorials and deep-dives, but also share them with my fellow developers.
I also enhanced my production pipeline, modernizing my portfolio site and migrating to Vercel for more rapid iteration, as I wrote about here.
My game plan was simple: continue to do my day job as an open source developer, continue to learn in public, but also capture my best learnings, insights, favorite new tech stacks and tools via my blog, YouTube channel and even Twitch!
Learning in public has brought me a lot of personal joy, especially when I share my learnings with others and get feedback that it helped even one other developer.
When I found the opening at Pinecone for a developer advocate that would combine:
development of applications to demonstrate the latest and most effective patterns around applications that leverage the latest AI technologies
open source development to assist the overall community and make it easier to build AI applications
technical storytelling
I knew the time had come to jump in, join the team, and go deep on a completely new technology stack, after spending the last 3 and a half years immersed in Infrastructure as Code, SDK and tooling development and deployment automation.
Why does this mean so much to me?
I am a self-taught developer.
When I was starting out, and to this day, the work of others who have been generous enough to write up their insights, code blocks, special hacks, and secret fixes to that one weird error message, and share them with the world, has helped me immensely in my quest.
I do the same to give back to the developer community and broader technically-minded community.
Pinecone is hiring!
If you're interested in joining the Pinecone team, check out this page! |
|
Write an article about "Talk @" | ;
On December 6th, 2023, Pinecone and Cohere held a joint meetup at the Andreessen Horowitz offices in downtown San Francisco.
I was one of the speakers, along with Jay Alammar from Cohere and Kat Morgan from Pulumi.
I gave a talk titled, "Navigating from Jupyter Notebooks to production" that announced the Pinecone AWS Reference Architecture with Pulumi.
I used an extended mountaineering metaphor to compare getting the kernel of your application working in a Jupyter Notebook to arriving at base camp.
In some ways, you're so close to production and yet still so far away, because you need to come up with good solutions for secrets, logging, monitoring, autoscaling, networking, and on and on.
I then announced the Pinecone AWS Reference Architecture using Pulumi and talked through what it was and how it can help folks get into production with high-scale use cases for Pinecone faster.
Using Pulumi, I was able to build the initial version of the Pinecone AWS Reference Architecture in about a month, start to finish.
I gave a brief introduction to Infrastructure as Code (IaC) and explained why it's a very powerful tool for working with cloud providers such as AWS or GCP.
I also candidly explained the pitfalls of introducing IaC to a team that doesn't have much cloud development experience, and explained how easy it is for cognitive load and overall complexity to skyrocket past the point of the offering being useful.
I related my experiences around mitigating the pitfall of added complexity by intentionally cross-training your entire team in working with IaC and deploying and destroying any architectures you may produce together.
The default is for this knowledge to become siloed across one or two developers who originally worked on the system, but the key is to ensure everyone on the team is equally confident in working with IaC and cloud providers, via cross-training and practice.
The event was great and we had nearly 125 folks show up to network, eat and listen to our talks.
A huge thank you to the folks at Andreessen Horowitz for their hospitality, support and excellent venue. |
|
Write an article about "GitHub Copilot review" | GitHub Copilot has immense potential, but continues to underwhelm
When I signed up to try out GitHub Copilot, I was delighted to find that GitHub gifted me a free license to use it based on my being an active open-source developer.
Initially, I configured it for use with Neovim, my preferred code editor, but have also used it with VSCode.
Here's my unvarnished opinion after giving it several chances over the course of several months.
The potential is there but the performance is not
GitHub Copilot struggles to make relevant and high-quality code completion suggestions.
I don't do comment-driven development, where you specify what you want in a large block of code comments and cross your
fingers that Copilot can figure it out and generate your desired code correctly, but even when I did this to put Copilot through its paces, it still underwhelmed me.
As other developers have noted, unfortunately Copilot manages best when you've already defined the overall structure of your current file, and already have a similar function that Copilot can reference.
In these cases, Copilot can usually be trusted to handle completing the boilerplate code for you.
ChatGPT4 upgrade, product page teasers and developer rage
GitHub Copilot X might be the promised land, but the launch and teaser has been handled poorly
For many months now, GitHub's Copilot product page has teased the fact that the next version of GitHub Copilot, known currently as Copilot X, will use ChatGPT4 under the hood and will handle way more than code completions.
Copilot X will also help you with pull request descriptions, adding tests to existing code bases, and it will have a chat feature that allows you to ask detailed questions about your codebase and theoretically get back
useful answers.
It's worth noting that Sourcegraph's Cody has been able to do this (albeit with some bugs) for many months now thanks to its powerful approach of marrying graph-based knowledge of your codebase with embeddings (vectors) of your
code files which allows its supporting large language model (LLM), Anthropic's Claude, to return useful responses about your code for your natural language queries.
The main axe I have to grind with GitHub's product team is the level of vagueness and "I guess we'll see!" that their product page has communicated to developers who might otherwise be interested in giving Copilot X a spin.
One of the FAQ's is about price and upgrades for GitHub Copilot base model users.
Will Copilot X be free?
Will it cost a premium subscription?
"Who knows!
We're still trying to figure that out ourselves".
The sign-up and waiting list user experience has also been deeply lacking, because apparently each of Copilot X's main features: pull request description generation, test generation, chat, etc are separate waiting lists that you
need to sign-up for and wait on individually. This seems like a giant miss.
There are open-source and free competitors who continue to build developer mindshare
Meanwhile, competitors such as codeium have been far more transparent with their developer audience and have been working well for many users the entire time that Copilot X has been inscrutable and vague about
whether it will even be available to individual developers or only accessible to those at companies large enough to foot the bill for a team license with multiple developer seats.
Codeium is not the only horse in town.
Many developers, myself included, are still deriving tremendous benefit and acceleration from keeping a browser tab to OpenAI's ChatGPT4 open as we code, for talking through architectural decisions,
generating boilerplate code and for assistance debugging complex stack traces and TypeScript errors, to name a few use cases.
In the long term, developer experience and UX will win the game, and developers will coalesce around the tools that most reliably deliver them acceleration, peace of mind, and enhanced abilities to tackle additional scope and more ambitious
projects.
GitHub Copilot X would do well to take a more open approach, state their intentions clearly and be transparent about their plans for developer experience, because developers in the market for AI-assisted tooling are falling in love
with their many competitors in the meantime. |
|
Write an article about "AI-powered and built with...JavaScript?" | export const href = "https://www.pinecone.io/learn/javascript-ai"
This was the second article I published while working at Pinecone:
Read article |
|
Write an article about "My morning routine" | ;
My morning routine
One of my productivity "hacks" is to get up several hours before work starts and go into the office. I've been doing this since my last job - roughly four years now.
Here's what I tend to do:
Hack on things that are important to me
The longer the day goes on, the more obligations and competing responsibilities I have tugging on me.
Setting this time aside allows me to give some focus to projects I wouldn't otherwise get to ship or learn from.
Write
I'll write blog posts and newsletters.
I'll write to old friends whom I haven't spoken to in a while or to new connections I just met.
I'll write notes about my ideas so they're properly captured for later.
I'll return emails if they're important.
Write more
Sometimes, I'll write out my thoughts in a raw format if I'm trying to think through or process something personal or professional.
I prefer to use Obsidian for my second brain because the sync functionality works so well across my development machines and phone.
In the past, I drew
I used to alternate pretty well between art and computer stuff, but lately, computer stuff has been winning out for several reasons.
Luckily, I can use computers to generate imagery, which I quite enjoy.
But doing art in the morning can be good.
I'll say that it CAN BE good because regardless of the medium I'm working in during my morning sessions, I'll either get a huge rush and boost from getting something I like done before work or go into work feeling pretty frustrated that I was spinning my wheels on something.
Clean up my office
And my desk and floor.
Take out the trash.
Restock something that's out.
Ensure I have water, snacks, caffeine, tools, a notebook, and pens.
Open the window, turn on the fan - wipe down my desk.
Remove the keys that are sticking and clean them.
Read
Outside of showing up to work every single day and trying to learn something or improve, this habit has probably had the greatest total impact on my life and career.
I'll read technical books and business books within and without my comfort zone, and I'll read about health, the body, or the personal habits of famous artists and writers.
I went a little deeper into reading and why it helped me so much in this post, but I'm due to create an updated reading list on a blog post somewhere.
Start now if you're not reading a lot and want to boost your career.
Meditate
I have to return to this one constantly and restart repeatedly (which is okay). But it greatly helps my general outlook, mood, and focus.
Reflect
As I get older, I try to focus on my inputs more - the regularity of showing up every day despite whatever insults or injuries have me moving slowly when I first wake up.
More of the creating and publishing and less of the checking analytics...
But I'm human. Having a crappy start to the day development-wise still bums me out. Fortunately, I am energized most of the time by whatever I am working on that morning.
What's your morning routine? Let me know in the comments below 👇. |
|
Write an article about "Autocomplete is not all you" | ;
I've spoken with many investors and analysts over the past year and a half about AI assisted developer tooling. With some exceptions, their questions tend to frame developer tooling
as a horse race.
They want to know if Codeium has any chance of beating GitHub's Copilot, given GitHub's resources and superior distribution story (how many millions of developers can they email whenever they want?).
As someone who codes and builds systems every day, I've had enough of these conversations to see that folks on the outside looking in are trying to understand what makes one tool better or more effective than the other.
But that's not the most important question. The most important question is what comes next. Today, you can already talk to your laptop to build applications and systems. What will we have tomorrow?
Specifically, I'm talking about interfaces. What is the ideal near-future interface for developers? Will they be typing in code or instructions? Speaking to an agent or group of agents?
A year and a half ago, I started experimenting with Codeium, and then essentially every other tool I could get my hands on.
Codeium, GitHub Copilot (in its original form), and many other tools help developers code faster, by looking at what they've written and making suggestions.
Autocomplete tools run where developers are coding and make predictions so that hopefully
the developer need only accept the suggestion and continue typing.
And that's all well and good. And it became table stakes for developer tooling in about the first 8 months of the GenAI boom.
Cursor and Zed are two tools that are pushing the boundaries of what's possible in developer tooling by changing the input from typing to plain English.
By the way, Cursor has its own autocomplete built-in, although folks who tend to adopt Cursor happily end up typing in less and less code (maybe more prose).
I wrote about my experience adopting Cursor here. Initially, I chafed against VSCode's UI and behavior, but once I had used Cursor for several weekend and evening hacking sessions, I simply
could not unsee my own productivity gains.
In fact, using Cursor has only pushed my expectations higher - and I suspect this year I'll be able to build an app by speaking into my phone while walking my dog.
If I were just an autocomplete tool, I'd be extremely worried. |
|
Write an article about "How to Run background jobs on Vercel without a queue" | ;
Table of contents
Have you ever wanted to return a response immediately from a Vercel API route while still performing long-running tasks in the background?
Would you rather not pay for a queue or add another component to your architecture?
In this post I'll demonstrate how you can keep your job-accepting API route snappy while still performing long-running or resource intensive processing in the background - without impacting your end-users.
The problems
I recently needed to solve the following problems with my Next.js application deployed on Vercel:
1. I needed my job-accepting API route to return a response quickly.
1. I had expensive and slow work to perform in the background for each new job.
1. Time is not unlimited. Vercel capped function invocation timeouts at 5 minutes for Pro plan users. I didn't want to risk doing so much work in one API route that it was likely I'd hit the timeout. I wanted to divvy up the work.
The solution
The solution is to use a fire and forget function. Forget, in the sense that we're not awaiting the response from the long-running process.
Here's what my fire-and-forget function looks like in my app Panthalia:
javascript
export async function startBackgroundJobs(post: Post) {
  const baseUrl = process.env.VERCEL_URL ? `https://${process.env.VERCEL_URL}` : 'http://localhost:3000';
  try {
    // Fire and forget: kick off the long-running work without blocking on its completion
    await fetch(`${baseUrl}/api/jobs`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      // Pass along the new post so the jobs route has everything it needs
      body: JSON.stringify(post),
    }).then(() => {
      console.log(`Finished updating existing post via git`)
    })
  } catch (error) {
    console.log(`error: ${error}`);
  }
}
This is a React Server Component (RSC) by default because it does not include the 'use client' directive at the top of the file, and because Next.js routes default to being server components.
Because this is a server component, we actually have to determine the ${baseUrl} for our API call - whereas you may be familiar with calling fetch on a partial API route like so:
await fetch('/api/jobs', {})
In our server component, we must supply a fully qualified URL to fetch.
Calling functions that call other API routes
The startBackgroundJobs function is really just an API call to a separate route, /api/jobs, which POSTs to that route the information about the new post, including its ID: everything that other route needs to start processing work in a separate invocation.
Meanwhile, the startBackgroundJobs call itself is quick because it's making a request and returning.
This means the API route can immediately return a response to the client after accepting the new task for processing:
javascript
// This route accepts new blog posts created by the Panthalia client.
// It calls a function, startBackgroundJobs, which itself calls a separate
// API route, passing the information necessary to continue processing
// long-running jobs that will operate on the post data.
// Meanwhile, the client receives a response immediately and is freed up to
// create more posts.
export async function POST(request: Request) {
  try {
    const session = await getServerSession(authOptions)
    if (!session) {
      return NextResponse.json({ error: "Unauthorized" }, { status: 401 })
    }

    const formData = await request.json()
    const {
      title,
      slug,
      summary,
      content,
      ...formImagePrompts
    } = formData

    // Query to insert new blog post into the database
    const result = await sql`
      INSERT INTO posts(
        title,
        slug,
        summary,
        content,
        status
      )
      VALUES(
        ${title},
        ${slug},
        ${summary},
        ${content},
        'drafting'
      )
      RETURNING *;
    `;

    // Save the postId so we can use it to update the record with the pull request URL once it's available
    const newPost: Post = {
      id: result.rows[0].id, // the inserted row comes back via RETURNING * (assumes the sql client exposes rows on result.rows)
      title,
      slug,
      summary,
      content,
    }

    const promptsToProcess = formImagePrompts.imagePrompts as imagePrompt[]

    // Query to insert images into the database
    for (const promptToProcess of promptsToProcess) {
      const imgInsertResult = await sql`
        INSERT INTO images(
          post_id,
          prompt_text
        )
        VALUES(
          ${newPost.id},
          ${promptToProcess.text}
        )
      `
    }

    // Fire and forget the initial post setup (git operations) and the image generation tasks
    startBackgroundJobs(newPost);

    // Because we're not awaiting the response from the long-running job, we can immediately return a response to the client
    return NextResponse.json({ result, success: true }, { status: 200 });

  } catch (error) {
    console.log(`error: ${error}`);
    return NextResponse.json({ error }, { status: 500 });
  }
}
Wrapping up
And there you have it.
Using this pattern, you can return a response to your client immediately to keep your application quick and responsive, while simultaneously handling longer-running jobs in a separate execution context. |
|
Write an article about "Wisdomseeker" | Wikipedia is known to be structured in such a way that many articles ultimately lead to Philosophy - if you click the first main body link of each article in succession.
Wisdom Seeker performs this task for you automatically and reports on the path it took to reach Philosophy.
It also knows when it is stuck in a loop - and will report loops as well as meandering paths that will never arrive at Philosophy.
To try it out, paste a Wikipedia link - Wisdom Seeker's report includes the full links to each page, so you can follow along manually.
Screenshots
Easy one-step form accepts any Wikipedia link
Key Features
Visits Requested Page and Pathfinds to Philosophy Article
Is Aware of Being Stuck in a Loop
Reports Loops and Meandering Paths That Will Never Lead to Philosophy
Reports Full Path with Links So You Can Follow Manually
Technical Details & Takeaways
Wisdom Seeker was a fun app that explored an interesting quirk of Wikipedia that I was not familiar with previously.
The biggest challenge in building this app was selecting the exact first link within the body paragraph, given that each Wikipedia page has so many links.
Ultimately, a combination of aggressive jQuery style selectors and regex filtering of retrieved HTML did the trick. |
|
Write an article about "Testing Pinecone Serverless at Scale with the AWS Reference Architecture" | export const href = "https://www.pinecone.io/learn/scaling-pinecone-serverless/"
In this tutorial, I walk readers through how the Pinecone AWS Reference Architecture's autoscaling policies work, and how to
use the tools provided in the repository to generate test records to flex the system under load.
This is the eighth article I wrote while working at Pinecone:
Read article |
|
Write an article about "Teatutor Deep Dive" | View and clone the project here
In this post, I’ll do a deep-dive on my recently published open-source app, Tea Tutor.
This application allows you to quickly define and serve colorful and interactive quizzes on any topic over SSH, and it was written using Charm’s Bubbletea and Wish libraries.
What is Bubbletea and why should I care?
Charm has built a truly impressive suite of terminal user interface (TUI) libraries and tools.
For Golang developers seeking to build rich applications that approach the interactivity of HTML in the terminal, solutions like Bubbletea, complemented by Glamour and Lipgloss, are some of the best options today.
Introducing the Bubbletea blog post series
This article is the first in a series about building interactive terminal applications with Bubbletea. The next posts in the series are:
Introducing the Tea Tutor program and my Bubbletea blog post series
This post — A deep dive into the Tea Tutor program, including tips and tricks I figured out along the way
COMING SOON — How to serve your Bubbletea program over SSH with Wish
COMING SOON — Packaging your Bubbletea program using Infrastructure as Code techniques to make deploying it easier
COMING SOON — Future enhancements planned for Tea Tutor
Why did I build this app?
My usual answer: for fun and practice.
I knew I wanted to build something non-trivial with Bubbletea.
Ever since I stumbled across the Charm repositories, I was impressed with their examples and demos and the synergy between their open-source projects.
I also build a lot of command line interfaces (CLIs) in my day job as an infrastructure / automation / DevOps engineer, so I’m always keeping an eye out for better ways of doing things.
Practice, practice, practice
My initial forays into working with Bubbletea were painful, as I regularly encountered crashes or couldn’t quite get things wired up and behaving the exact way I wanted.
In the end, I really just needed to fill the gaps in my own knowledge, and change my usual habits of approaching a Golang program until I was able to make reliable progress.
Anyone who has written or maintained highly evented Javascript applications, or used Backbone.js before, is likely going to find Bubbletea’s patterns familiar, and the Bubbletea maintainers have included two great tutorials, and are publishing content regularly across YouTube and Twitch.
My difficulty stemmed from my own incorrect mental models, and from finicky pieces like panics due to trying to access an out of bounds index in a slice — which can commonly occur when you’re navigating through some collection in your terminal UI (TUI).
I’ll share some tips on how I made this behavior more reliable in my app a little later on.
Bubble Tea is based on the functional design paradigms of The Elm Architecture, which happens to work nicely with Go. It’s a delightful way to build applications.
I never doubted making it to the other side was worth it.
Now that I have my head around the general operating model of a Bubbletea program, adding new functionality to my existing program is indeed more straightforward, easier to keep organized and yes, even, at times, delightful.
Thinking through my desired experience
To begin with, I started thinking through the core app experience.
I knew my core use case was to provide a very low friction, yet effective study aid for folks preparing to take an AWS certification exam.
I wanted anyone studying for AWS certifications, who finds themselves with a few spare minutes, to be able to fire off a simple
ssh quiz.<somedomain>.com
command and immediately be served their own private instance of a familiar slide-deck-like experience that asks them multiple-choice AWS certification questions and records all their answers.
When the user submits their final answer, their score should be calculated by looking through all their submissions and comparing them to the correct answers.
The user should get an easy to scan report showing them each question, their response and whether or not their response was correct.
This formed the kernel of my app idea.
Modeling the data
Now that I knew what I wanted to build, it was time to figure out the data model.
The data model for questions is quite straightforward.
Once I had done enough mocking by hardcoding a few questions as variables in main.go to be able to figure out the general flow I wanted for the UI, I knew I wanted to represent my questions in a separate YAML file that could be loaded at runtime.
There are a couple advantages to unlock by separating the data from the source code, out into a separate file:
You could imagine wanting to load different questions.yml files on different servers so that you could easily provision different subject-specific quiz servers
By having the core set of initial questions defined in a flat YAML file in the root of the project, anyone could come along and contribute to the project by opening a pull request adding a bunch of high quality questions across a variety of topics — even if that person weren’t necessarily a developer
It’s a lot easier to work on application / UI code that is agnostic about the actual data it is displaying.
Otherwise, you end up codifying some of your actual data in your source code, making it more difficult, brittle and tedious to change in the future.
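To make the YAML idea concrete, here's a minimal sketch of loading questions from a questions.yml file at runtime; the field names and the yaml.v3 dependency are assumptions for illustration and may differ from the real Tea Tutor code.
go
// A sketch of loading quiz questions from a flat YAML file at runtime.
// Field names here are illustrative and may not match Tea Tutor exactly.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type Question struct {
	Prompt   string   `yaml:"prompt"`
	Choices  []string `yaml:"choices"`
	Answer   int      `yaml:"answer"` // index into Choices
	Category string   `yaml:"category"`
}

type QuestionBank struct {
	Questions []Question `yaml:"questions"`
}

func loadQuestions(path string) (QuestionBank, error) {
	var bank QuestionBank
	data, err := os.ReadFile(path)
	if err != nil {
		return bank, err
	}
	if err := yaml.Unmarshal(data, &bank); err != nil {
		return bank, err
	}
	return bank, nil
}

func main() {
	bank, err := loadQuestions("questions.yml")
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d questions\n", len(bank.Questions))
}
With the questions in a flat file like this, swapping in a different questions.yml per server becomes a deployment detail rather than a code change.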
Handling resize events like a pro
Bubbletea makes it easy to handle WindowSizeMsg events, which contain information about the size of the current terminal window.
This is very handy not just for ensuring your app looks great even when a user resizes their terminal, but can also help you render custom experiences and even trigger re-paints as needed.
Here’s the WindowSizeMsg handler in my app. As you can see, we’re actually handling two different re-sizable entities in this case within our Update method:
We size the pager according to the latest width and height of the user's terminal.
This pager is a scroll-able buffer where I store the contents of the user’s results printout, so that the user can easily scroll up and down through their test results
We set the width of the progress bar that shows the user how far along through the current quiz they are according to the latest width and height of the user’s terminal
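Here's a minimal sketch of what that case can look like; the field names (pager, progress) and the exact sizing math are illustrative assumptions rather than the exact names used in Tea Tutor.
go
// A sketch of handling tea.WindowSizeMsg to resize a viewport-backed pager and
// a progress bar. Field names are illustrative, not the exact Tea Tutor names.
package main

import (
	"github.com/charmbracelet/bubbles/progress"
	"github.com/charmbracelet/bubbles/viewport"
	tea "github.com/charmbracelet/bubbletea"
)

type model struct {
	pager    viewport.Model
	progress progress.Model
}

func (m model) Init() tea.Cmd { return nil }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.WindowSizeMsg:
		// Size the scroll-able results pager to the latest terminal dimensions,
		// leaving a couple of rows for a header and footer.
		m.pager.Width = msg.Width
		m.pager.Height = msg.Height - 2

		// Make the quiz progress bar span the full terminal width.
		m.progress.Width = msg.Width
	}
	return m, nil
}

func (m model) View() string { return m.pager.View() }

func main() {
	m := model{pager: viewport.New(80, 24), progress: progress.New()}
	if _, err := tea.NewProgram(m, tea.WithAltScreen()).Run(); err != nil {
		panic(err)
	}
}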
You can also use WindowSizeMsg to trigger UI re-paints
In the case of Tea Tutor, I render the user’s final score in a Viewport “Bubble”:
However, I was noticing an unfortunate bug whenever I ran my app over SSH and connected to it as a client:
Although everything worked perfectly if I just ran go run main.go , whenever I ran my app on a server and connected to it over SSH, my final results were rendered, but not at the correct full height and width of the user’s current terminal.
I knew I had logic in my Update function’s tea.WindowSizeMsg handler that would update the Viewport’s size based on the latest values of the user’s terminal size — so I really just wanted to trigger that logic just as the Viewport was being rendered for the user.
Therefore I decided to implement a separate tea.Cmd here, which is to say a function that returns a tea.Msg , to sleep for half a second and then use the latest terminal size values to render the user’s final report in a way that correctly takes up the full height and width of the current terminal.
It looks like this, and I return this command when the app reaches its displayingResults phase:
go
// Sleep briefly, then re-read the terminal size and emit a fresh WindowSizeMsg
func sendWindowSizeMsg() tea.Msg {
	time.Sleep(500 * time.Millisecond)
	width, height, _ := term.GetSize(0)
	return tea.WindowSizeMsg{
		Width:  width,
		Height: height,
	}
}
The delay is barely perceptible to the user, if at all, but the result is that the user’s final score report is rendered within the Viewport correctly — taking up the whole available height and width of the current terminal.
Creating separate navigation functions
I mentioned above that, when I started working with Bubbletea, I ran into frequent crashes due to my mishandling of a slice data type that backed a collection the UI would iterate through.
For example, you can imagine rendering a simple list in a Bubbletea program, and allowing the user to move their “cursor” up and down the list to select an item.
Imagine the model’s cursor field is an int , and that it is incremented each time the user presses the up button or the letter k .
Imagine that you have wired up the enter button to select the item in the backing slice at the index of whatever value cursor is currently set to.
In this scenario, it’s easy to accidentally advance the cursor beyond the bounds of the model’s backing slice, leading to a panic when you press enter, because you’re trying to access an index in the slice that is out of bounds.
I tackled this problem by creating separate methods on the model for each of the navigational directions:
go
case "down", "j":
m = m.SelectionCursorDown()
case "up", "k":
m = m.SelectionCursorUp()
case "left", "h":
m = m.PreviousQuestion()
case "right", "l":
m = m.NextQuestion()
Within each of these directional helper methods, I encapsulate all of the logic to safely increment the internal value for cursor — including re-setting it to a reasonable value if it should somehow exceed the bounds of its backing slice:
Here’s the example implementation of SelectionCursorUp :
go
func (m model) SelectionCursorDown() model {
	if m.playingIntro {
		return m
	}
	m.cursor++
	if m.categorySelection {
		// Navigating the category list: wrap back to the top if we run past the end
		if m.cursor >= len(m.categories) {
			m.cursor = 0
		}
	} else {
		// Navigating a question's answer choices: same wrap-around protection
		if m.cursor >= len(m.QuestionBank[m.current].Choices) {
			m.cursor = 0
		}
	}
	return m
}
If we somehow end up with a cursor value that exceeds the actual length of the backing slice, we just set the cursor to 0.
The inverse logic is implemented for all other directional navigation functionality.
Split your View method into many sub-views
As you can see here in my View method, I’m returning several different sub-views depending on the “mode” my Bubbletea app is running in.
There are several boolean values the model has to represent whether a particular phase of the app is running or not, and all the toggling between event states happens in the Update function’s appropriate cases.
I found that when working with multiple views, it’s nice to have your sub-views split out into separate functions that you can then conditionally return depending on your own app’s requirements.
go
func (m model) View() string {
	// s accumulates the output of whichever sub-view applies to the current app mode
	var s strings.Builder
	if m.displayingResults {
		s.WriteString(m.RenderResultsView())
	} else if m.playingIntro {
		s.WriteString(m.RenderIntroView())
	} else if m.categorySelection {
		s.WriteString(m.RenderCategorySelectionView())
	} else {
		s.WriteString(m.RenderQuizView())
		s.WriteString(m.RenderQuizProgressView())
	}
	return s.String()
}
This would also work well with a switch statement.
That’s all for this post! Thanks for reading and keep an eye out for the next post in the series! |
|
Write an article about "Making it easier to maintain open-source projects with CodiumAI and Pinecone" | export const href = "https://pinecone.io/blog/codiumai-pinecone-similar-issues"
This was the fifth article I published while working at Pinecone:
Read article |
|
Write an article about "The Giant List of AI-Assisted Developer Tools Compared and Reviewed" | Introduction
Here's a comprehensive comparison of AI-assisted developer tools, including code autocompletion, intelligent terminals/shells, and video editing tools. Reviews are linked when available.
Table of Contents
Code Autocompletion
Intelligent Terminals / Shells
Video Editing
Mutation Testing
Enhanced IDE
Tools and reviews
| Tool | Category | Review | Homepage
|------|------|------|------|
| GitHub Copilot | Code Autocompletion | 📖 Review | https://github.com/features/copilot |
| Warp | Intelligent Terminals / Shells | 📖 Review | https://www.warp.dev/ |
| Descript | Video Editing | Coming soon | https://www.descript.com/ |
| Codeium | Code Autocompletion | 📖 Review | https://codeium.com |
| Kapwing | Video Editing | Coming soon | https://www.kapwing.com |
| Aider | Intelligent Terminals / Shells | Coming soon | https://aider.chat |
| Cody | Code Autocompletion | Coming soon | https://sourcegraph.com/cody |
| Mods | Enhanced IDE | Coming soon | https://github.com/charmbracelet/mods |
| Zed | Code Autocompletion | Coming soon | https://zed.dev |
| Mutahunter | Mutation Testing | Coming soon | https://github.com/codeintegrity-ai/mutahunter |
| Cursor | Code Autocompletion | 📖 Review | https://cursor.sh |
| OpusClip | Video Editing | Coming soon | https://www.opus.pro |
| Tabnine | Code Autocompletion | 📖 Review | https://www.tabnine.com |
| MutableAI | Code Autocompletion | 📖 Review | https://mutable.ai |
| CodiumAI | Code Autocompletion | Coming soon | https://www.codium.ai |
| Grit.io | Code Autocompletion | 📖 Review | https://www.grit.io |
| Adrenaline AI | Code Autocompletion | Coming soon | https://useadrenaline.com |
| Amazon CodeWhisperer | Code Autocompletion | Coming soon | https://aws.amazon.com/codewhisperer/ |
| Figstack | Code Autocompletion | Coming soon | https://www.figstack.com |
Code Autocompletion
Code autocompletion tools save developers time and effort by automatically suggesting and completing code snippets based on the context of the code being written.
Open source
| Tool | Client | Backend | Model |
|------|-|-|-|
| GitHub Copilot | ❌ | ❌ | ❌ |
| Codeium | ✅ | ❌ | ❌ |
| Cody | ✅ | ❌ | ❌ |
| Zed | ✅ | ❌ | ❌ |
| Cursor | ❌ | ❌ | ❌ |
| Tabnine | ❌ | ❌ | ❌ |
| MutableAI | ❌ | ❌ | ❌ |
| CodiumAI | ❌ | ❌ | ❌ |
| Grit.io | ❌ | ❌ | ❌ |
| Adrenaline AI | ❌ | ❌ | ❌ |
| Amazon CodeWhisperer | ❌ | ❌ | ❌ |
| Figstack | ❌ | ❌ | ❌ |
IDE support
| Tool | Vs Code | Jetbrains | Neovim | Visual Studio | Vim | Emacs | Intellij |
|------|-|-|-|-|-|-|-|
| GitHub Copilot | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ |
| Codeium | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ |
| Cody | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Zed | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Cursor | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Tabnine | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| MutableAI | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| CodiumAI | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Grit.io | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ |
| Adrenaline AI | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ |
| Amazon CodeWhisperer | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ |
| Figstack | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
Pricing
| Tool | Model | Tiers |
|------|-------|------|
| GitHub Copilot | subscription | Individual: $10 per month, Team: $100 per month |
| Codeium | subscription | Individual: $10 per month, Team: $100 per month |
| Cody | free | Standard: Free, Pro: Subscription based - contact for pricing |
| Zed | subscription | Free: Free, Pro: $20 per month, Team: $50 per month |
| Cursor | free | Hobby: Free, Pro: $20 per month, Business: $40 per month |
| Tabnine | subscription | Basic: Free, Pro: $12 per month per seat, Enterprise: $39 per month per seat |
| MutableAI | subscription | Basic: $2 per month and up, Premium: $15 per month, Enterprise: Custom |
| CodiumAI | subscription | Developer: Free, Teams: $19 per user/month, Enterprise: Let's talk |
| Grit.io | subscription | CLI: Free, Team: $39 / contributor, Enterprise: Book a demo |
| Adrenaline AI | subscription | Free Trial: Free, limited time, Pro: $10 per month |
| Amazon CodeWhisperer | free | Individual: Free |
| Figstack | freemium | Free: Free, Starter: $9 per month, Unlimited: $29 per month |
Free tier
| Tool | Free tier |
|------|------|
| GitHub Copilot | ❌ |
| Codeium | ✅ |
| Cody | ✅ |
| Zed | ✅ |
| Cursor | ✅ |
| Tabnine | ✅ |
| MutableAI | ❌ |
| CodiumAI | ✅ |
| Grit.io | ✅ |
| Adrenaline AI | ✅ |
| Amazon CodeWhisperer | ✅ |
| Figstack | ✅ |
Chat interface
| Tool | Chat interface |
|------|------|
| GitHub Copilot | ❌ |
| Codeium | ✅ |
| Cody | ✅ |
| Zed | ✅ |
| Cursor | ✅ |
| Tabnine | ✅ |
| MutableAI | ❌ |
| CodiumAI | ✅ |
| Grit.io | ❌ |
| Adrenaline AI | ❌ |
| Amazon CodeWhisperer | ✅ |
| Figstack | ❌ |
Creator
| Tool | Creator |
|------|------|
| GitHub Copilot | GitHub |
| Codeium | Codeium |
| Cody | Sourcegraph |
| Zed | Zed Industries |
| Cursor | Anysphere |
| Tabnine | Codota |
| MutableAI | MutableAI Corp |
| CodiumAI | CodiumAI |
| Grit.io | Grit Inc |
| Adrenaline AI | Adrenaline Tech |
| Amazon CodeWhisperer | Amazon Web Services |
| Figstack | Figstack Inc |
Language support
| Tool | Python | Javascript | Java | Cpp |
|------|-|-|-|-|
| GitHub Copilot | ✅ | ✅ | ✅ | ✅ |
| Codeium | ✅ | ✅ | ✅ | ✅ |
| Cody | ✅ | ✅ | ✅ | ✅ |
| Zed | ✅ | ✅ | ✅ | ✅ |
| Cursor | ✅ | ✅ | ❌ | ❌ |
| Tabnine | ✅ | ✅ | ✅ | ✅ |
| MutableAI | ✅ | ✅ | ❌ | ❌ |
| CodiumAI | ✅ | ✅ | ✅ | ✅ |
| Grit.io | ✅ | ✅ | ✅ | ❌ |
| Adrenaline AI | ✅ | ✅ | ✅ | ✅ |
| Amazon CodeWhisperer | ✅ | ✅ | ✅ | ❌ |
| Figstack | ✅ | ✅ | ✅ | ✅ |
Supports local model
| Tool | Supports local model |
|------|------|
| GitHub Copilot | ❌ |
| Codeium | ❌ |
| Cody | ❌ |
| Zed | ❌ |
| Cursor | ✅ |
| Tabnine | ❌ |
| MutableAI | ❌ |
| CodiumAI | ❌ |
| Grit.io | ❌ |
| Adrenaline AI | ❌ |
| Amazon CodeWhisperer | ❌ |
| Figstack | ❌ |
Supports offline use
| Tool | Supports offline use |
|------|------|
| GitHub Copilot | ❌ |
| Codeium | ❌ |
| Cody | ❌ |
| Zed | ❌ |
| Cursor | ✅ |
| Tabnine | ❌ |
| MutableAI | ❌ |
| CodiumAI | ❌ |
| Grit.io | ❌ |
| Adrenaline AI | ❌ |
| Amazon CodeWhisperer | ❌ |
| Figstack | ❌ |
Intelligent Terminals / Shells
Intelligent terminals and shells enhance the command-line experience with features like command completion, advanced history search, and AI-powered assistance.
Open source
| Tool | Client | Backend | Model |
|------|-|-|-|
| Warp | ❌ | ❌ | ❌ |
| Aider | ❌ | ❌ | ❌ |
Pricing
| Tool | Model | Tiers |
|------|-------|------|
| Warp | subscription | Free: Free, Pro: $15 per month, Team: $22 per month, Enterprise: Custom |
| Aider | subscription | Free: Free and open source |
Free tier
| Tool | Free tier |
|------|------|
| Warp | ✅ |
| Aider | ✅ |
Chat interface
| Tool | Chat interface |
|------|------|
| Warp | ✅ |
| Aider | ✅ |
Command completion
| Tool | Command completion |
|------|------|
| Warp | ✅ |
| Aider | ✅ |
Advanced history
| Tool | Advanced history |
|------|------|
| Warp | ✅ |
| Aider | ❌ |
Supports local model
| Tool | Supports local model |
|------|------|
| Warp | ❌ |
| Aider | ❌ |
Supports offline use
| Tool | Supports offline use |
|------|------|
| Warp | ❌ |
| Aider | ❌ |
Video Editing
AI-assisted video editing tools simplify the video editing process by offering features like automatic transcription, editing via transcription, and intelligent suggestions.
Open source
| Tool | Client | Backend | Model |
|------|-|-|-|
| Descript | ❌ | ❌ | ❌ |
| Kapwing | ❌ | ❌ | ❌ |
| OpusClip | ❌ | ❌ | ❌ |
Pricing
| Tool | Model | Tiers |
|------|-------|------|
| Descript | subscription | Creator: $12 per month, Pro: $24 per month |
| Kapwing | subscription | Free: Free, Pro: $16 per month, Business: $50 per month, Enterprise: Custom |
| OpusClip | subscription | Free: Free, limited features, Starter: $15 per month, Pro: $29 per month |
Free tier
| Tool | Free tier |
|------|------|
| Descript | ❌ |
| Kapwing | ✅ |
| OpusClip | ✅ |
Works in browser
| Tool | Works in browser |
|------|------|
| Descript | ✅ |
| Kapwing | ✅ |
| OpusClip | ✅ |
Supports autotranscribe
| Tool | Supports autotranscribe |
|------|------|
| Descript | ✅ |
| Kapwing | ✅ |
| OpusClip | ✅ |
Edit via transcription
| Tool | Edit via transcription |
|------|------|
| Descript | ✅ |
| Kapwing | ✅ |
| OpusClip | ✅ |
Mutation Testing
Mutation testing tools enhance software quality by injecting faults into the codebase and verifying if the test cases can detect these faults.
Open source
| Tool | Client | Backend | Model |
|------|-|-|-|
| Mutahunter | ✅ | ✅ | ❌ |
Language support
| Tool | Python | Javascript | Java | Cpp |
|------|-|-|-|-|
| Mutahunter | ✅ | ✅ | ✅ | ❌ |
Supports local model
| Tool | Supports local model |
|------|------|
| Mutahunter | ✅ |
Supports offline use
| Tool | Supports offline use |
|------|------|
| Mutahunter | ❌ |
Pricing
| Tool | Model | Tiers |
|------|-------|------|
| Mutahunter | free | Free: Free |
Enhanced IDE
Enhanced IDE tools provide advanced features and integrations to improve the development experience within integrated development environments.
Remember to bookmark and share
This page will be updated regularly with new information, revisions and enhancements. Be sure to share it and check back frequently. |
|
Write an article about "How to build a React.js and Lambda app with Git push continuous deployment" | Read the article |
|
Write an article about "Updated Codeium analysis and review" | ;
Codeium is still my favorite AI-assisted autocomplete tool. Here's a look at why, and at the company behind it.
Table of contents
Reviewing Codeium almost a year later
I initially reviewed Codeium here, almost one year ago, when it became my daily-driver code completion tool.
Now, I’m revisiting Codeium. I’ll explain why I still prefer Codeium over other tools and what improvements the Codeium team has made in the past year.
How I code and what I work on
While I have VSCode and Cursor installed and will occasionally jump between IDEs, for the most part I code in Neovim, and run the AstroNvim project.
AstroNvim is an open-source project that composes Neovim plugins together using a Lua API to create a full fledged IDE experience in your terminal. And Codeium's Neovim integration works well here.
AstroNvim is an open-source community-maintained configuration of Neovim plugins and associated config to turn stock Neovim into an IDE similar to VSCode.
These days, I primarily work with the Next.js framework, and ship JavaScript, Typescript, Python, and more while working on full stack applications, data science Jupyter Notebooks, and large scale distributed systems defined via Infrastructure as Code.
Codeium's dashboard keeps track of your completions and streaks.
Why Codeium is Still My Favorite Tool
High-quality code completion
Codeium offers precise, contextually appropriate code completion suggestions.
This feature speeds up the development process and reduces errors, making coding more efficient.
Proprietary context-aware model leads to higher accuracy
Codeium’s proprietary model (ContextModule) takes into account:
The list of files you currently have open
The repo-wide context of every file in your project
In addition to ContextModule, designed to determine the most relevant inputs and state to present to the LLM, Codeium employs reranking and prompt building to achieve high-quality suggestions.
Reranking uses Codeium's secret sauce and precomputed embeddings (vector representations of natural language and code) to determine the relative relevance of each candidate piece of context. Finally, in the prompt building step, Codeium supplies carefully crafted prompts alongside the reranked context, ensuring highly accurate code completion suggestions.
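To make the retrieval, reranking, and prompt building flow easier to picture, here is a deliberately generic Python sketch of the reranking step: score candidate context snippets against an embedding of the current editing context and keep the top results. This is purely illustrative, not Codeium's actual code; the candidate structure and the top_k value are assumptions.
python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(context_embedding: np.ndarray, candidates: list, top_k: int = 3) -> list:
    # Each candidate is assumed to be a dict with "snippet" and precomputed "embedding" keys
    scored = [
        (cosine_similarity(context_embedding, c["embedding"]), c["snippet"])
        for c in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # The top-k snippets would then be packed into the prompt sent to the completion model
    return [snippet for _, snippet in scored[:top_k]]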
Widest support for different file types
A standout feature of Codeium is its versatility. It seamlessly works across various file types—from Dockerfiles to application code and even prose, providing support wherever needed.
At the time of writing, Codeium supports over seventy different file types, whereas GitHub Copilot supports far fewer.
Support for Neovim
Codeium's early support for Neovim is noteworthy, because most AI-assisted developer tooling entrants initially introduce support for VSCode only.
Neovim users often find that AI-assisted tools lag in compatibility.
Still, Codeium's integration here is robust and effective, enhancing the development experience for a broader audience and winning the Codeium team some hacker street-cred.
Codeium - Key Features and Updates
AI-Powered Autocomplete
Codeium offers a sophisticated autocomplete feature that significantly accelerates coding by predicting the next lines of code based on the current context.
This feature supports over 70 programming languages, ensuring wide applicability across different development environments.
This feature is the core value add for Codeium, and it continues to work very well.
Intelligent Code Search
Codeium is teasing the addition of AI-assisted code search to their offerings. There’s currently a landing page to sign-up here.
I haven’t been able to try this out yet.
If this works anything like GitHub’s relatively recently overhauled code search feature, I’ll be very interested in trying it out.
That said, I do already have shortcuts configured in my preferred IDE, Neovim, for finding symbols, navigating projects efficiently, going from method or function to the call sites referencing them, etc.
I wouldn't consider this a must-have feature as a result, but I can see it increasing the overall value proposition of Codeium.
AI-Driven Chat Functionality
Another innovative feature of Codeium is its AI-powered chat, which acts like a coding assistant within the IDE.
This tool can assist with code refactoring, bug fixes, and even generating documentation, making it a versatile helper for both seasoned developers and those new to a language or framework.
I tend not to lean on this feature very much when I’m coding - preferring to accept the code completion suggestions Codeium makes as I go.
That said, when I last evaluated Codeium’s and Copilot’s chat features they were quite similar.
Codeium’s AI-assisted chat functionality enables you to:
Generate code in response to a natural language request
Explain existing code that you may be unfamiliar with
Refactor existing code in response to a natural language request
Translate existing code from one language to another.
Integration with Major IDEs
Codeium prides itself on its broad compatibility with major Integrated Development Environments (IDEs), including Visual Studio Code, JetBrains, and even unique setups like Chrome extensions.
This ensures that developers can leverage Codeium’s capabilities within their preferred tools without needing to switch contexts.
Recent Updates and Enhancements
As part of its commitment to continuous improvement, Codeium regularly updates its features and expands its language model capabilities.
Recent updates may include enhanced machine learning models for more accurate code suggestions and additional tools for code management and review processes.
Codeium documents all of its changes as Changeset blog posts on its official blog: https://codeium.com
Series B
Codeium announced a $65M Series B round, making them one of the best funded companies in the Gen-AI for software development space.
Partnerships
Codeium announced a set of strategic partnerships at the beginning of 2024:
Dell (on-premise infrastructure)
Atlassian (Source code management and knowledge stores)
MongoDB (Developer communities with domain-specific languages (DSLs))
Termium - code completion in the terminal
A.K.A “Codeium in the terminal”. Termium is the application of the same proprietary model to a new environment where developers could use some support: the command line prompt.
The terminal is used by developers to access remote machines, manage deployments of anywhere from one to thousands of servers, issue cloud environment commands, run database migrations and much more.
This prototype feature puts Codeium in direct competition with the Warp AI-enhanced terminal offering.
The Codeium team are being rightfully cautious about announcing and promoting this integration, because making mistakes in the terminal could be exceedingly costly for developers and the companies that employ them.
That said, because Codeium was intentionally built as a layer between the terminal and the end-user developer, intercepting and intelligently debugging stack traces before they even make it back to the developer does seem like a winning prospect.
Codeium Live: free, forever up-to-date in-browser chat
Codeium released a prototype feature called Codeium Live, which allows developers to index and chat with external libraries from their browser.
This is a useful feature for any developer attempting to ramp up on a new dependency, which is something most working developers are doing regularly.
Codeium Live solves a common problem of hallucination in foundation models.
Even though models such as OpenAI’s ChatGPT4 are amongst the best in terms of overall intelligence and accuracy of responses, they still suffer from out of date training data that doesn’t capture massive upgrades projects such as Next.js have undergone.
This means that asking for code samples or explanations from ChatGPT4 is likely to result in hallucinations - whereas Codeium Live is regularly updated with fresh source code for the dependencies and libraries it has indexed.
Multi repository context awareness for self-hosted customers
This will allow developers to reference code in non-active private repositories as part of their Chat experience, as well as allow the Codeium context awareness engine to retrieve context automatically from other, potentially highly relevant, repositories at inference time.
Usability and Developer Experience
User Testimonials, Reviews and Awards
Codeium was the only AI code assistant to be included in Forbe’s AI 50 list of the top private AI companies in the world.
Codeium was highlighted for:
Being the first AI assistant to GA in-IDE chat, context awareness, and more capabilities
Near 5-star ratings on all extension marketplaces
Broadest IDE and language availability out of any AI assistant
Unique personalization capabilities that tailor Codeium's system to every company's and individual's specific codebases, semantics & syntax, and best practices
Working with the leading companies in every vertical, including regulated industries like finance, defense, and healthcare, thanks to its self-hosted deployment option and compliance guarantees
Case studies with Anduril, Dell, Clearwater Analytics, and Vector Informatik to highlight the broad appeal
Dell, Atlassian, MongoDB, CodeSandbox, and more as strategic partners
Feedback from users highlights Codeium's impact on improving coding efficiency and workflow.
Many report that its autocomplete and search features significantly reduce the time spent on coding and debugging, which is especially valuable in large projects.
The positive reception is evident in numerous reviews where developers express satisfaction with the seamless integration and time-saving capabilities of Codeium.
Codeium has been generally well-received on Product Hunt, earning high praise for its extensive features and ease of use, particularly among users of JetBrains IDEs like Goland and PyCharm.
It's also noted for being effective across various IDEs, including VSCode, and for supporting a wide range of programming languages.
Users appreciate its ability to significantly enhance coding efficiency and the fact that it offers these powerful tools for free.
It holds a rating of 4.75 out of 5 based on 28 reviews, indicating strong user satisfaction.
In comparison, GitHub Copilot also receives positive feedback but tends
to be more appreciated within the GitHub community and those who heavily use Visual Studio Code.
Copilot is praised for its context-aware code suggestions and seamless integration with GitHub's ecosystem, making it a favored choice for developers already embedded in that environment.
However, Copilot is a paid service, which contrasts with Codeium’s free offering, and this could be a deciding factor for individual developers or startups operating with limited budgets.
Pricing and Accessibility
Free Access for Individual Developers
Codeium is uniquely positioned in the market by offering its full suite of features for free to individual developers.
This approach not only democratizes access to advanced coding tools but also allows a broad spectrum of developers to enhance their coding capabilities without financial barriers.
By contrast, GitHub’s Copilot starts at $10 per month for individuals. GitHub may grant active open-source maintainers free access to GitHub Copilot on a discretionary basis.
Enterprise Solutions
For teams and businesses, Codeium offers tailored plans that provide additional features such as deployment in a virtual private cloud (VPC), advanced security options, and dedicated support.
These plans are priced per seat, which allows businesses to scale their usage based on team size and needs.
Security and Privacy
Data Privacy
One of Codeium’s strongest selling points is its commitment to privacy.
Unlike some competitors, Codeium does not train its AI models on user data.
This approach addresses growing concerns about data privacy within the development community.
Security Features
Codeium provides robust security features, including end-to-end encryption for data transmission and storage.
For enterprise users, there are additional security measures such as role-based access controls and compliance with industry-standard security protocols.
Transparency and Trust
Codeium’s transparent approach to handling user data and its proactive communication about privacy practices have helped it build trust within the developer community.
This is particularly important in an era where data breaches and privacy violations are common. |
|
Write an article about "How are embeddings models trained for RAG?" | ;
You can ask your RAG pipeline, "What line is the bug on?", and it will tell you the answer almost instantly. How?
Embeddings models are the secret sauce that makes RAG work so well. How are they trained in this "asking questions of documents" use case?
In this blog post we'll unpack how embeddings models like OpenAI's text-embedding-3-large are trained to support this document retrieval and chat use case.
Table of contents
Training Data and Labels
In the context of training an embedding model for RAG, the training data usually consists of pairs of queries and documents.
The labels aren't traditional categorical labels; instead, they are used to create pairs of similar (positive) and dissimilar (negative) document embeddings.
Positive pairs: Queries and their relevant documents.
Negative pairs: Queries and irrelevant documents (often sampled randomly or using hard negatives).
Here's what a simple pre-training example might look like in Python:
python
import random

# Sample data
queries = ["What is RAG?", "Explain embeddings.", "How to train a model?"]
documents = [
    "RAG stands for Retrieval-Augmented Generation.",
    "Embeddings are vector representations of text.",
    "Model training involves adjusting weights based on loss."
]

# Function to create positive and negative pairs
# (In real training data, the positive document would actually be relevant to the query;
# here we pick at random just to illustrate the pair structure.)
def create_pairs(queries, documents):
    pairs = []
    labels = []
    for query in queries:
        # Positive pair (query and relevant document)
        positive_doc = random.choice(documents)
        pairs.append((query, positive_doc))
        labels.append(1)

        # Negative pair (query and irrelevant document)
        negative_doc = random.choice([doc for doc in documents if doc != positive_doc])
        pairs.append((query, negative_doc))
        labels.append(0)
    return pairs, labels

pairs, labels = create_pairs(queries, documents)

for pair, label in zip(pairs, labels):
    print(f"Query: {pair[0]} \nDocument: {pair[1]} \nLabel: {label}\n")
Model Architecture
Many modern embedding models are based on transformer architectures, such as BERT, RoBERTa, or specialized models like Sentence-BERT (SBERT). These models typically output token-level embeddings.
Token-level embeddings: Each token (word or subword) in the input sequence gets its own embedding vector. I built a demo showing what word and subword tokens look like here.
Pooling mechanism: Embeddings for each token are useful, but how do we roll those up into something more meaningful?
To get a single vector representation of the entire document or query, a pooling mechanism is applied to the token-level embeddings.
Pooling Mechanism
Pooling mechanisms are used to get an embedding that represents an entire document or query. How can we condense the token-level embeddings into a single vector? There are several common approaches:
Mean Pooling
Mean pooling involves averaging the embeddings of all tokens in the sequence.
This method takes the mean of each dimension across all token embeddings, resulting in a single embedding vector that represents the average contextual information of the entire input.
This approach provides a smooth and balanced representation by considering all tokens equally. For example:
python
import torch

# Example token embeddings (batch_size x seq_length x embedding_dim)
token_embeddings = torch.randn(1, 10, 768)

# Mean pooling: average across the sequence dimension
mean_pooled_embedding = torch.mean(token_embeddings, dim=1)
print(mean_pooled_embedding.shape)  # Output shape: (1, 768)
[CLS] Token Embedding
In models like BERT, a special [CLS] token is added at the beginning of the input sequence.
The embedding of this [CLS] token, produced by the final layer of the model, is often used as a representation of the entire sequence.
The [CLS] token is designed to capture the aggregated information of the entire input sequence.
This approach provides a strong, contextually rich representation due to its position and function.
python
from transformers import BertModel, BertTokenizer

# Initialize BERT model and tokenizer
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Example input text
text = "Example input text for BERT model."

# Tokenize input
inputs = tokenizer(text, return_tensors='pt')

# Get token embeddings from BERT model
outputs = model(**inputs)

# Extract [CLS] token embedding (the first token of the final hidden state)
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # Output shape: (1, 768)
Max Pooling
Max pooling selects the maximum value from each dimension across all token embeddings.
This method highlights the most significant features in each dimension, providing a single vector representation that emphasizes the most prominent aspects of the input.
This method captures the most salient features, and can be useful in scenarios where the most significant feature in each dimension is important. For example:
python
import torch

# Example token embeddings (batch_size x seq_length x embedding_dim)
token_embeddings = torch.randn(1, 10, 768)

# Max pooling: take the maximum value in each dimension across the sequence
max_pooled_embedding = torch.max(token_embeddings, dim=1)[0]
print(max_pooled_embedding.shape)  # Output shape: (1, 768)
In summary:
Mean Pooling: Averages all token embeddings to get a balanced representation.
[CLS] Token Embedding: Uses the embedding of the [CLS] token, which is designed to capture the overall context of the sequence.
Max Pooling: Selects the maximum value from each dimension to emphasize the most significant features.
These pooling mechanisms transform the token-level embeddings into a single vector that represents the entire input sequence, making it suitable for downstream tasks such as similarity comparisons and document retrieval.
Loss Functions
The training objective is to learn embeddings such that queries are close to their relevant documents in the vector space and far from irrelevant documents.
Common loss functions include:
Contrastive loss: Measures the distance between positive pairs and minimizes it, while maximizing the distance between negative pairs. See also Geoffrey Hinton's paper on Contrastive Divergence.
Triplet loss: Involves a triplet of (query, positive document, negative document) and aims to ensure that the query is closer to the positive document than to the negative document by a certain margin.
This paper on FaceNet describes using triplets, and this repository has code samples.
Cosine similarity loss: Maximizes the cosine similarity between the embeddings of positive pairs and minimizes it for negative pairs.
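To make one of these objectives concrete, here is a minimal sketch of a triplet loss computed directly on embedding vectors. It is illustrative only; the margin value and the randomly generated embeddings are assumptions, and in real training the embeddings would come from the model being trained.
python
import torch
import torch.nn.functional as F

def triplet_loss(query_emb, positive_emb, negative_emb, margin: float = 0.5):
    # Distance from the query to the relevant (positive) and irrelevant (negative) documents
    pos_dist = F.pairwise_distance(query_emb, positive_emb)
    neg_dist = F.pairwise_distance(query_emb, negative_emb)
    # Penalize cases where the positive is not closer than the negative by at least `margin`
    return torch.clamp(pos_dist - neg_dist + margin, min=0.0).mean()

# Toy example with random 768-dimensional embeddings (batch of 4)
query = torch.randn(4, 768)
positive = torch.randn(4, 768)
negative = torch.randn(4, 768)
print(triplet_loss(query, positive, negative))
PyTorch also provides a built-in torch.nn.TripletMarginLoss that implements the same idea.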
Training Procedure
The training process involves feeding pairs of queries and documents through the model, obtaining their embeddings, and then computing the loss based on the similarity or dissimilarity of these embeddings.
Input pairs: Query and document pairs are fed into the model.
Embedding generation: The model generates embeddings for the query and document.
Loss computation: The embeddings are used to compute the loss (e.g., contrastive loss, triplet loss).
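Putting those three steps together, a single training step might look like the following sketch. The tiny feed-forward encoder, the random input features, and the hyperparameters are all assumptions made for illustration; a real pipeline would encode text with a transformer.
python
import torch
import torch.nn as nn

# Stand-in encoder; in practice this would be a transformer producing pooled embeddings
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-4)
loss_fn = nn.CosineEmbeddingLoss(margin=0.2)

# 1. Input pairs: stand-in features for queries and documents, with labels (1 = relevant, -1 = irrelevant)
query_features = torch.randn(8, 128)
doc_features = torch.randn(8, 128)
labels = torch.tensor([1, -1, 1, -1, 1, -1, 1, -1], dtype=torch.float)

# 2. Embedding generation
query_embeddings = encoder(query_features)
doc_embeddings = encoder(doc_features)

# 3. Loss computation and parameter update
loss = loss_fn(query_embeddings, doc_embeddings, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss.item())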
Embedding Extraction
After training, the model is often truncated to use only the layers up to the point where the desired embeddings are produced.
For instance:
Final layer embeddings: In many cases, the embeddings from the final layer of the model are used.
Intermediate layer embeddings: Sometimes, embeddings from an intermediate layer are used if they are found to be more useful for the specific task.
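As a sketch of what extraction can look like with a Hugging Face transformer, you can request all hidden states and choose which layer to pool. The choice of bert-base-uncased, of mean pooling, and of layer 8 as the "intermediate" layer are assumptions for illustration.
python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

inputs = tokenizer("What is RAG?", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Final-layer token embeddings, mean-pooled into a single vector
final_layer_embedding = outputs.hidden_states[-1].mean(dim=1)

# An intermediate layer (layer 8 here), pooled the same way
intermediate_embedding = outputs.hidden_states[8].mean(dim=1)

print(final_layer_embedding.shape, intermediate_embedding.shape)  # (1, 768) (1, 768)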
Let's consider a real example
Sentence-BERT (SBERT) is a good example of a model specifically designed for producing sentence-level embeddings.
Model architecture: Based on BERT, but with a pooling layer added to produce a fixed-size vector for each input sentence.
Training data: Uses pairs of sentences with a label indicating if they are similar or not.
Training objective: Uses a Siamese network structure and contrastive loss to ensure that similar sentences have embeddings close to each other and dissimilar sentences have embeddings far apart.
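In practice you rarely train such a model from scratch; a pretrained SBERT-style model can be used directly through the sentence-transformers library. The model name below is one common choice rather than a requirement, and is assumed here for illustration.
python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

query_embedding = model.encode("What is RAG?")
doc_embedding = model.encode("RAG stands for Retrieval-Augmented Generation.")

# Cosine similarity between the query and document embeddings
print(util.cos_sim(query_embedding, doc_embedding))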
Summary
Training an embedding model for Retrieval Augmented Generation use cases requires a few key components:
Training data: Pairs of queries and documents (positive and negative).
Model output: Typically token-level embeddings pooled to create sentence/document-level embeddings.
Loss functions: Contrastive loss, triplet loss, cosine similarity loss.
Embedding extraction: Uses final or intermediate layer embeddings after training. |
|
Write an article about "How to securely store secrets in BitWarden CLI and load them into your shell when needed" | Read the article |
|
Write an article about "Git operations in JavaScript for pain and profit" | ;
What do you do if you need to run git in a context where you cannot install whatever you want? What if you need to run git
in a serverless environment that only supports JavaScript?
In How to run background jobs on Vercel without a queue, I wrote about how to keep the API route that accepts new jobs snappy and responsive using fire-and-forget functions.
In this post, I'll demonstrate how you can use isomorphic-git even where you can't install git via a traditional package manager like apt, ship a new Docker image or update the build script.
In this case, we're running
git within our Vercel serverless functions.
Table of contents
The problem
I'm developing a tool for myself to streamline my blog post authoring workflow, called Panthalia.
The mobile client allows me to author new blog posts rapidly, because I tend to have several new post ideas at one time. This means the name of the game is capture speed.
The workflow that Panthalia performs to make changes to my website that hosts my blog
As you can see in the above flow diagram, Panthalia makes changes to my site via git.
It clones my site, adds new posts, pushes changes up on a new branch and uses a GitHub token to open pull requests.
Running git where you can't run git
Git is required in this flow. Sure, there are some funky ways I could leverage GitHub's API to modify content out of band, but for reliability and atomic operations, I wanted to use git.
My portfolio site which hosts my blog, and Panthalia, which modifies my site via git, are both Next.js projects deployed on Vercel.
This means that any API routes I create in these two apps will be automatically split out into Vercel serverless functions upon deployment.
I can't modify their build scripts. I can't use apt to install git. I can't update a Docker image to include the tools I need. It's JavaScript or bust.
Enter the impressive isomorphic-git project. You can npm install or pnpm add isomorphic-git like any other package to install it.
isomorphic-git allows you to clone, fetch, stage and commit changes and operate on branches using a familiar API.
Git operations in JavaScript
Let's take a look at the method that clones my site and creates a new branch based on the new blog post title:
javascript
// Convenience method to clone my portfolio repo and checkout the supplied branch
export async function cloneRepoAndCheckoutBranch(branchName: string) {
  try {
    // Wipe away previous clones and re-clone
    await freshClone();

    // Check if the branch already exists locally, if not, fetch it
    const localBranches = await git.listBranches({ fs, dir: clonePath });
    if (!localBranches.includes(branchName)) {
      console.log(`Branch ${branchName} not found locally. Fetching...`);
      await git.fetch({
        fs,
        http,
        dir: clonePath,
        ref: branchName,
      });
    }

    // Checkout the existing branch
    await git.checkout({
      fs,
      dir: clonePath,
      ref: branchName,
    });

    console.log(`Successfully checked out branch: ${branchName}`);
  } catch (err) {
    console.log(`cloneRepoAndCheckoutBranch: Error during git operations: ${err}`);
    return null;
  }
}
Here are the convenience methods I created to simplify wiping any pre-existing clones and starting afresh.
javascript
// Convenience method to wipe previous repo and re-clone it fresh
export async function freshClone() {
  // Blow away any previous clones
  await wipeClone();
  // Clone the repo
  await cloneRepo();
}

// Convenience method to clone my portfolio repo and checkout the main branch
export async function cloneRepo() {
  await git.clone({
    fs,
    http,
    dir: clonePath,
    url: 'https://github.com/zackproser/portfolio',
    ref: 'main',
    singleBranch: true, // only grab the main branch...
    depth: 1,           // ...and only the latest commit (shallow clone)
  });
  console.log('Repo successfully cloned.');
}

// Wipe the clone directory
export async function wipeClone() {
  if (fs.existsSync(clonePath)) {
    await rmdir(process.env.CLONE_PATH, { recursive: true });
    console.log('Previously existing clone directory removed.');
  } else {
    console.log('No clone directory to remove...');
  }
}
Git in the time of serverless
Vercel's serverless functions do expose a filesystem that you can write to. I set a CLONE_PATH environment variable that defaults to /tmp/repo.
There are also timing considerations. Vercel functions can run for up to 5 minutes on the Pro plan, so I don't want any particular API route's work to be
terminated due to a timeout.
That's why I perform a shallow clone - meaning only the latest commit - and I configure the singleBranch option to only grab the main branch.
Given that my portfolio site has a decent amount of history and a lot of images, these two changes cut down
cloning time from a few minutes to about 30 seconds.
There's also the asynchronous nature of the compute environment Vercel functions are running in.
One of the reasons I do a full wipe and clone is that
I always want my operations to be idempotent, but I can't be sure whether a given serverless function invocation will or will not have access to a previously cloned repo
in the temporary directory.
Authentication considerations with JavaScript-based git
My portfolio site, github.com/zackproser/portfolio is open-source and public, so I don't need any credentials to clone it.
However, once Panthalia has cloned my repo, created a new branch and committed the changes to add my new blog post, it does need to present a
GitHub token when pushing my changes.
You present a token using the onAuth callback in isomorphic-git. I recommend storing your GitHub token as an environment variable locally and in Vercel:
javascript
// Push the commit
await git.push({
  fs,
  http,
  dir: clonePath,
  remote: 'origin',
  ref: branchName,
  // Present the GitHub token stored in an environment variable
  onAuth: () => ({ username: process.env.GITHUB_TOKEN }),
}).then(() => {
  console.log('Successfully git pushed changes.');
})
Wrapping up
I knew that I wanted to ship Next.js and deploy it on Vercel because the developer experience is so delightful (and fast). This meant that I couldn't use the traditional git client
in the JavaScript serverless function context.
The isomorphic-git project allows me to implement a complete programmatic git workflow in my Next.js app running on Vercel, and it works very well. |
|
Write an article about "Can ChatGPT-4 and GitHub Copilot help me produce a more complete side project more quickly?" | Chat GPT-4 pointing out one of my bugs while I work
Over the years, I have embarked on countless weekend side projects to explore new technologies, refine my skills, and simply enjoy the creative process.
As a Golang developer with a penchant for keyboard-driven development, I have learned to combine command line tools, IDEs, and text editors to maximize productivity and deliver results efficiently.
This weekend, I decided to see what, if any, speed boost I could unlock by weaving in ChatGPT-4 to my developer workflow and using it alongside GitHub Copilot, which I have been experimenting with for several months now.
By the end of this post, you'll gain insights into the current state of these tools and how a Senior Software Engineer found them to be helpful, and in some cases, very helpful.
First off, I needed a simple project idea that could be tackled quickly.
Enter sizeof, the CLI project I developed during this experiment.
Drawing from my experience with command line tools and efficient development practices, sizeof is designed to help developers build an intuitive understanding of the relative sizes of various inputs, strings, web pages, and files.
It leverages the popular charmbracelet bubbletea library to create an interactive TUI (Terminal User Interface) that displays this information in a visually appealing manner.
The sizeof project is open-source and available at https://github.com/zackproser/sizeof.
sizeof is really simple. Here's a quick demo gif that shows how it works:
One of the key strengths of large language models (LLMs) like ChatGPT-4 is their ability to generate reasonable starting points for various tasks.
For instance, they excel at producing CircleCI configurations and READMEs, as they've essentially "seen" all possible variations.
Leveraging this capability, I requested a starting point for the command line interface, which enabled me to dive into the development process quickly and start iterating on the initial code.
Here I am asking ChatGPT for the initial README, and because I am pretty sure it's seen most of my writing already, I can ask it to do its best to write it the way I would.
As you can see in its response, it is going so far to include the HTML badges I tend to include in my open source projects.
It also reflects my tendency to include a Table of Contents in any document I create!
It goes without saying, this is very useful and something I would love to be able to do in an even more streamlined manner.
I still edited the final README by hand in Neovim and made all my own edits and tweaks, but I will have landed this side project much faster for not having to do everything from scratch each time.
Likewise, I essentially described the blog post and asked ChatGPT-4 for the first draft.
By asking ChatGPT-4 to generate a first draft based on my desired talking points, I obtained a solid foundation to extend and edit.
This greatly accelerated the writing process and allowed me to focus on refining the content.
My impressions of the quality of its responses is generally favorable.
I experience about zero friction in explaining myself to ChatGPT 4 - and getting it to do exactly what I want.
Because it is aware of the previous context, I can ask it insanely useful things like this:
The sweet spot - generative, tedious tasks
ChatGPT 4 really shines when asked to perform tedious tasks that you would normally do yourself or hand to your copywriter when preparing a post for publishing online.
Here I am asking ChatGPT 4 to effectively save me a chunk of time.
Its first response was excellent, so getting the copy I needed for LinkedIn and Twitter took me about as long as it would have taken to type or speak my description of the task into being.
ChatGPT 4 generated:
The initial CLI scaffold, and then immediately tweaked it to include bubbletea in order to implement a TUI at my request
The initial README for the project
When asked, the mermaid.js diagram featured in the README of the sizeof CLI project
The first draft of this blog post
The LinkedIn post copy to announce the blog post
The Twitter post to announce the blog post
In all these cases, I ended up slightly modifying the results myself, and in all these cases, these are artifacts I can and have produced for myself, but there's no doubt generating these starting points saved me a lot of time.
Where is GitHub Copilot in all of this?
Nowhere, essentially.
For the most part, while working with ChatGPT4, there was nothing of value that GitHub Copilot had to add, and I find its code suggestions painfully clumsy and incorrect, which is more a testament to the speed at which these LLM models are currently being developed and advanced.
Copilot X, which is forthcoming at the time of this writing and leverages ChatGPT 4 under the hood, will be of significantly more interest to me depending on its pricing scheme.
The UX sticking points
Despite the clear advantages, there were some challenges when using ChatGPT-4.
One of these was the developer experience.
I want a seamless integration of this, and likely, a bunch of other current and forthcoming models right in my terminal and likely right in Neovim.
While open-source developers have created an impressive array of neovim and vim plugins, as well as experimental web applications that integrate ChatGPT into development workflows, achieving a first-class developer experience remains elusive, at least for me.
I have found a couple of open source developers putting forth some really awesome tools very quickly. Here are some of the best things I have found so far, which is by no means exhaustive:
YakGPT - Allows you to run a local server that hits the OpenAI API directly (bypassing their UI) and allowing you to do speech to text and text to speech.
It's the closest thing to hands-free ChatGPT that I've seen so far.
ChatGPT.nvim - ChatGPT in Neovim
Context switching is the current main friction point I experienced during my experiment.
Using ChatGPT-4 effectively required a fair amount of transitioning between my terminal driven development environment and the web browser for a clunky, often buggy UI experience a la Open AI's native web client.
As someone who strives for optimized workflows and minimized friction, I found this cumbersome and disruptive to my work.
What I really want is close to what YakGPT is already making possible: I want a super-intelligent daemon that I can task with background research or questions that require massive information digestion, or tedious things like figuring out a valid starting point for a CircleCI configuration for my new repository.
It seems I have not yet discovered the magic key to integrate these tools seamlessly into my workflow, but at the rate things are advancing and given the amount of attention this space is currently getting, I expect this to continue to evolve very rapidly.
I can imagine the workflow that at this point is not far off, where I could extend my own capabilities with several AI agents or models.
I have not yet found the Neovim plugin that I want to roll forward with and tweak to my liking.
In several of the Neovim plugins for ChatGPT I experimented with, I have noticed issues with ChatGPT API status needing to be reflected somehow within the terminal or current buffer: essentially, signaling to the user that the ChatGPT plugin is not dead, just waiting on data.
I fully expect all of these minor UX quirks to dissipate rapidly, and I expect to be leveraging some kind of LLM model regularly within my personal workflow in the months to come.
Finally, the more that I came to see what I could accomplish with a streamlined LLM experience in my preferred code editor, the more that I realized I am probably going to want some kind of AI interface in a variety of different contexts.
As a developer and writer, I use various applications, such as Obsidian for building my second brain.
Leveraging my experience in combining different tools, I am now eager to see similar AI-powered interfaces integrated into these other contexts to further streamline my productivity and creative processes.
I wonder if we will end up wanting or seeing "AI wallets" or model multiplexers that allow you to securely or privately host a shared backend or database?
If I want to run ChatGPT 4 in my terminal for coding, but also in my browser to help me edit text, and then later on my phone, how would I ideally share context amongst those different access points? |
|
Write an article about "Introduction to embeddings (vectors" | Table of contents
In the rapidly evolving world of artificial intelligence (AI) and machine learning, there's a concept that's revolutionizing the way machines understand and process data: embeddings.
Embeddings, also known as vectors, are floating-point numerical representations of the "features" of a given piece of data.
These powerful tools allow machines to achieve a granular "understanding" of the data we provide, enabling them to process and analyze it in ways that were previously impossible.
In this comprehensive guide, we'll explore the basics of embeddings, their history, and how they're revolutionizing various fields.
Extracting Features with Embedding Models
At the heart of embeddings lies the process of feature extraction.
When we talk about "features" in this context, we're referring to the key characteristics or attributes of the data that we want our machine learning models to learn and understand.
For example, in the case of natural language data (like text), features might include the semantic meaning of words, the syntactic structure of sentences, or the overall sentiment of a document.
To obtain embeddings, you feed your data to an embedding model, which uses a neural network to extract these relevant features.
The neural network learns to map the input data to a high-dimensional vector space, where each dimension represents a specific feature.
The resulting vectors, or embeddings, capture the essential information about the input data in a compact, numerical format that machines can easily process and analyze.
There are various embedding models available, ranging from state-of-the-art models developed by leading AI research organizations like OpenAI and Google, to open-source alternatives like Word2Vec and GloVe.
Each model has its own unique architecture and training approach, but they all share the common goal of learning meaningful, dense representations of data.
A Detailed Example
To better understand how embeddings work in practice, let's consider a concrete example. Suppose we have the following input data:
"The quick brown fox jumps over the lazy dog"
When we pass this string of natural language into an embedding model, the model uses its learned neural network to analyze the text and extract its key features.
The output of this process is a dense vector, or embedding, that looks something like this:
bash
[0.283939734973434, -0.119420836293, 0.0894208490832, ..., -0.20392492842, 0.1294809231993, 0.0329842098324]
Each value in this vector is a floating-point number, typically ranging from -1 to 1.
These numbers represent the presence or absence of specific features in the input data.
For example, one dimension of the vector might correspond to the concept of "speed," while another might represent "animal." The embedding model learns to assign higher values to dimensions that are more strongly associated with the input data, and lower values to dimensions that are less relevant.
So, in our example, the embedding vector might have a high value in the "speed" dimension (capturing the concept of "quick"), a moderate value in the "animal" dimension (representing "fox" and "dog"), and relatively low values in dimensions that are less relevant to the input text (like "technology" or "politics").
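If you want to try this yourself, here is a small sketch using the open-source sentence-transformers library, which is just one of many embedding models you could pick. The model name is an assumption, and the actual output values will differ from the illustrative vector shown above.
python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')

embedding = model.encode("The quick brown fox jumps over the lazy dog")
print(embedding.shape)  # The dimensionality depends on the model you choose
print(embedding[:5])    # The first few floating-point feature values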
High dimensional vector space - each point is a vector and their distance from one another represents their similarity.
The true power of embeddings lies in their ability to capture complex relationships and similarities between different pieces of data.
By representing data as dense vectors in a high-dimensional space, embedding models can learn to group similar items together and separate dissimilar items.
This enables machines to perform tasks like semantic similarity analysis, clustering, and classification with remarkable accuracy and efficiency.
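As a quick illustration of that grouping behavior, you can compare a few sentences pairwise with cosine similarity, continuing with the same assumed model as above.
python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

sentences = [
    "The quick brown fox jumps over the lazy dog",
    "A fast auburn fox leaps above a sleepy canine",
    "Quarterly revenue exceeded analyst expectations",
]
embeddings = model.encode(sentences)

# Pairwise cosine similarities: the two fox sentences should score much closer
# to each other than either does to the finance sentence
print(util.cos_sim(embeddings, embeddings))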
Applications of Embeddings
The potential applications of embeddings are vast and diverse, spanning across multiple domains and industries.
Some of the most prominent areas where embeddings are making a significant impact include:
Natural Language Processing (NLP) In the field of NLP, embeddings have become an essential tool for a wide range of tasks, such as:
Text classification
Embedding models can learn to represent text documents as dense vectors, capturing their key semantic features.
These vectors can then be used as input to machine learning classifiers, enabling them to automatically categorize text into predefined categories (like "spam" vs.
"not spam," or "positive" vs.
"negative" sentiment).
Sentiment analysis
By learning to map words and phrases to sentiment-specific embeddings, models can accurately gauge the emotional tone and opinion expressed in a piece of text.
This has powerful applications in areas like social media monitoring, customer feedback analysis, and brand reputation management.
Named entity recognition
Embeddings can help models identify and extract named entities (like people, places, organizations, etc.) from unstructured text data.
By learning entity-specific embeddings, models can disambiguate between different entities with similar names and accurately label them in context.
Machine translation
Embedding models have revolutionized the field of machine translation by enabling models to learn deep, semantic representations of words and phrases across different languages.
By mapping words in the source and target languages to a shared embedding space, translation models can capture complex linguistic relationships and produce more accurate, fluent translations.
Image and Video Analysis
Embeddings are not limited to textual data – they can also be applied to visual data like images and videos. Some key applications in this domain include:
Object detection
By learning to map image regions to object-specific embeddings, models can accurately locate and classify objects within an image.
This has many practical applications.
Face recognition
Embedding models can learn to represent faces as unique, high-dimensional vectors, capturing key facial features and enabling accurate face identification and verification.
This technology is used in a variety of settings, from mobile device unlocking to law enforcement and security systems.
Scene understanding
By learning to embed entire images or video frames, models can gain a holistic understanding of the visual scene, including object relationships, spatial layouts, and contextual information.
This enables applications like image captioning, visual question answering, and video summarization.
Video recommendation
Embeddings can capture the semantic content and style of videos, allowing recommendation systems to suggest similar or related videos to users based on their viewing history and preferences.
Recommendation Systems
Embeddings play a crucial role in modern recommendation systems, which aim to provide personalized content and product suggestions to users. Some key applications include:
Product recommendations
By learning to embed user preferences and product features into a shared vector space, recommendation models can identify meaningful similarities and suggest relevant products to users based on their past interactions and behavior.
Content personalization
Embedding models can learn to represent user profiles and content items (like articles, videos, or songs) as dense vectors, enabling personalized content ranking and filtering based on individual user preferences.
Collaborative filtering
Embeddings enable collaborative filtering approaches, where user and item embeddings are learned jointly to capture user-item interactions.
This allows models to make accurate recommendations based on the preferences of similar users, without requiring explicit feature engineering.
Anomaly Detection
Embeddings can also be used to identify unusual or anomalous patterns in data, making them a valuable tool for tasks like:
Fraud detection
By learning normal behavior patterns and embedding them as reference vectors, models can flag transactions or activities that deviate significantly from the norm, potentially indicating fraudulent behavior.
Intrusion detection
In the context of network security, embeddings can help models learn the typical patterns of network traffic and user behavior, enabling them to detect and alert on anomalous activities that may signal a security breach or intrusion attempt.
System health monitoring
Embeddings can capture the normal operating conditions of complex systems (like industrial equipment or software applications), allowing models to identify deviations or anomalies that may indicate potential failures or performance issues.
Leveraging the power of embeddings, developers and data scientists can build more intelligent and efficient systems that can better understand and process complex data across a wide range of domains and applications.
A Brief History of Embeddings
The concept of embeddings has its roots in the field of natural language processing, where researchers have long sought to represent words and phrases in a way that captures their semantic meaning and relationships.
One of the earliest and most influential works in this area was the Word2Vec model, introduced by Tomas Mikolov and his colleagues at Google in 2013.
Word2Vec revolutionized NLP by demonstrating that neural networks could be trained to produce dense vector representations of words, capturing their semantic similarities and relationships in a highly efficient and scalable manner.
The key insight behind Word2Vec was that the meaning of a word could be inferred from its context – that is, the words that typically appear around it in a sentence or document.
By training a shallow neural network to predict the context words given a target word (or vice versa), Word2Vec was able to learn highly meaningful word embeddings that captured semantic relationships like synonymy, antonymy, and analogy.
For example, the embedding for the word "king" would be more similar to the embedding for "queen" than to the embedding for "car," reflecting their semantic relatedness.
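A minimal sketch of training a Word2Vec model with the gensim library looks something like the following. The toy corpus and hyperparameters are assumptions chosen purely for illustration; on a corpus this small the similarity numbers are noisy, but the API is the same as for a real training run.
python
from gensim.models import Word2Vec

# A tiny toy corpus; real training uses millions of sentences
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "car", "drives", "on", "the", "road"],
]

# sg=1 selects the skip-gram objective: predict context words from a target word
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

# With enough data, related words like "king" and "queen" end up with higher similarity
print(model.wv.similarity("king", "queen"))
print(model.wv.similarity("king", "car"))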
The success of Word2Vec sparked a wave of research into neural embedding models, leading to the development of more advanced techniques like GloVe (Global Vectors for Word Representation) and FastText.
These models built upon the core ideas of Word2Vec, incorporating additional information like global word co-occurrence statistics and subword information to further improve the quality and robustness of the learned embeddings.
In recent years, the power of embeddings has been further amplified by the advent of deep learning and the availability of large-scale training data.
State-of-the-art embedding models like BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) have pushed the boundaries of what's possible with neural embeddings, achieving remarkable results on a wide range of NLP tasks like question answering, text summarization, and sentiment analysis.
At the same time, the success of embeddings in NLP has inspired researchers to apply similar techniques to other domains, such as computer vision and recommender systems.
This has given rise to new types of embedding models, like CNN-based image embeddings and graph embeddings for social networks, which have opened up exciting new possibilities for AI and machine learning.
As the field of AI continues to evolve at a rapid pace, embeddings will undoubtedly play an increasingly important role. By providing a powerful and flexible framework for representing and analyzing data, embeddings are poised to unlock new frontiers in artificial intelligence and transform the way we interact with technology.
The Future of Embeddings
As we look to the future, it's clear that embeddings will continue to play a central role in the development of more intelligent and capable AI systems.
Some of the key areas where we can expect to see significant advancements in the coming years include:
Multimodal Embeddings
One of the most exciting frontiers in embedding research is the development of multimodal embedding models that can learn joint representations across different data modalities, such as text, images, audio, and video.
By combining information from multiple sources, these models can potentially achieve a more holistic and nuanced understanding of the world, enabling new applications like cross-modal retrieval, multimodal dialogue systems, and creative content generation.
Domain-Specific Embeddings
While general-purpose embedding models like Word2Vec and BERT have proven highly effective across a wide range of tasks and domains, there is growing interest in developing more specialized embedding models that are tailored to the unique characteristics and requirements of particular industries or applications.
For example, a medical embedding model might be trained on a large corpus of clinical notes and medical literature, learning to capture the complex relationships between diseases, symptoms, treatments, and outcomes.
Similarly, a financial embedding model could be trained on news articles, company reports, and stock market data to identify key trends, risks, and opportunities in the financial markets.
By leveraging domain-specific knowledge and training data, these specialized embedding models have the potential to achieve even higher levels of accuracy and utility compared to their general-purpose counterparts.
Explainable Embeddings
As AI systems become increasingly complex and opaque, there is a growing need for embedding models that are more interpretable and explainable.
While the high-dimensional vectors learned by current embedding models can capture rich semantic information, they are often difficult for humans to understand or reason about directly.
To address this challenge, researchers are exploring new techniques for learning more interpretable and transparent embeddings, such as sparse embeddings that rely on a smaller number of active dimensions, or factorized embeddings that decompose the learned representations into more meaningful and human-understandable components.
By providing more insight into how the embedding model is making its decisions and predictions, these techniques can help to build greater trust and accountability in AI systems, and enable new forms of human-machine collaboration and interaction.
Efficient Embedding Learning
Another key challenge in the development of embedding models is the computational cost and complexity of training them on large-scale datasets.
As the size and diversity of available data continue to grow, there is a need for more efficient and scalable methods for learning high-quality embeddings with limited computational resources and training time.
To this end, researchers are exploring techniques like few-shot learning, meta-learning, and transfer learning, which aim to leverage prior knowledge and pre-trained models to accelerate the learning process and reduce the amount of labeled data required.
By enabling the rapid development and deployment of embedding models in new domains and applications, these techniques could greatly expand the impact and accessibility of AI and machine learning in the real world.
Learning More About Embeddings
If you're excited about the potential of embeddings and want to dive deeper into this fascinating field, there are many excellent resources available to help you get started.
Here are a few recommended readings and educational materials:
Research Papers
"Efficient Estimation of Word Representations in Vector Space" by Tomas Mikolov, et al. (Word2Vec)
"GloVe: Global Vectors for Word Representation" by Jeffrey Pennington, et al.
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Jacob Devlin, et al.
Books
"Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (chapters on representation learning and embeddings)
"Mining of Massive Datasets" by Jure Leskovec, Anand Rajaraman, and Jeff Ullman (chapters on dimensionality reduction and embeddings)
Online demos
Embeddings demo
By investing time in learning about embeddings and experimenting with different techniques and models, you'll be well-equipped to harness their power in your own projects and contribute to the exciting field of AI and machine learning.
Wrapping Up
Embeddings are a fundamental building block of modern artificial intelligence, enabling machines to understand and reason about complex data in ways that were once thought impossible.
By learning dense, continuous vector representations of the key features and relationships in data, embedding models provide a powerful framework for a wide range of AI applications, from natural language processing and computer vision to recommendation systems and anomaly detection.
As we've seen in this post, the concept of embeddings has a rich history and a bright future, with ongoing research pushing the boundaries of what's possible in terms of multimodal learning, domain specialization, interpretability, and efficiency. |
|
Write an article about "ggshield can save you from yourself. Never accidentally commit secrets again" | A developer watching the API key they accidentally committed to GitHub migrating throughout the internet - thinking of the Slack messages they're about to have to send...
Committing a secret, like an API key or service token, to version control is a painful mistake, which I've done myself and seen plenty of others do, too. Fixing it usually involves
at least some degree of emotional pain - you need to announce the incident and work whatever incident process your organization might have.
You may need to ask others for assistance, either in generating a new secret or helping you rotate the compromised one if you don't have sufficient access yourself.
Engineers from the Ops, API and frontend teams attempting to excise an exposed secret from Git history, while the director of engineering sobs in the background
Depending on the size of your org, this could involve filing
some tickets, starting a couple of different conversations, pulling other colleagues off what they were focusing on, etc.
It sucks - and it's worth investing in a process that can help
you by preventing you from committing anything that looks like a secret in the first place.
ggshield is an amazing tool from GitGuardian
You may have heard of GitGuardian - a service that runs against public GitHub repositories, scans them for secrets (API keys, tokens, passwords, etc) which may have been accidentally committed, and
then emails you an alert and some helpful tips on remediating the issue.
I was delighted to discover they also have a command line interface (CLI) that anyone can use to scan their local working directory to ensure no secrets are hardcoded or otherwise exposed.
Even more powerfully, you can integrate ggshield with a git pre-commit hook in order to ensure that every single time you attempt to commit code from your machine, you automatically get a sanity check
ensuring you're not about to leak something sensitive.
What does it look like when ggshield saves your ass?
Here's a screenshot of a test I ran against a local repository.
First, I used openssl to generate a certificate that was just sitting in my local repository.
I then ran ggshield secret scan repo, and the tool
correctly found my secret, before I committed it and ruined everyone's day.
It's even better when combined with git hooks
Git hooks allow you to "hook" into different git lifecycle events in order to run your own code. For example, the pre-commit hook allows you to run your own code before you create a new git commit.
This is the perfect
place to scan for secrets, and ggshield supports this workflow out of the box.
GitGuardian has great documentation for all the ways you can use ggshield alongside git hooks, but I personally wanted to run it as a global pre-commit hook.
This means that for any and every repository I'm working with, ggshield will scan my local git changes for secrets.
This is a nice way to automate away needing to worry about letting a random token or certificate slip through when you're in
a hurry.
Of course it's critical to perform code reviews, and ask your team for a second set of eyes to ensure you're keeping your secrets out of version control, but having this extra layer of automated scanning works very
well in concert with these other best practices.
ggshield preventing a secret from escaping |
|
Write an article about "My first book credit! My Horrible Career" | I've written before about how one of the best things I ever did for my career was to hire
John Arundel as a Golang mentor.
In the time that I've known John, he started out as my Golang programming coach and then became more of a general software development and career mentor.
Today, I feel fortunate to call him a friend.
As we'd have our conversations back and forth on Slack and Zoom calls, I'd pick his brain about advancing in one's career, finding meaningful work and, as John elegantly puts it, crafting a job that you don't need a vacation from.
The more we discussed this, the more John realized there was probably a book here, and that book is now available to the world and titled, "My Horrible Career".
I played a very small role in prompting him with some of the questions I was most curious to know the answers to.
John is a very talented writer, an excellent teacher and mentor and he's generously made this book free, so be sure to head over and download it now!
Download My Horrible Career |
|
Write an article about "Retrieval Augmented Generation (RAG" | export const href = "https://www.pinecone.io/learn/retrieval-augmented-generation"
This was the first article I published while working at Pinecone:
Read article |