{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ETL to get the text data from the playlist\n", "\n", "This notebook builds the corpus of transcripts from the YouTube playlist.\n", "\n", "**Extract**: Pull the transcript of each video from YouTube.  \n", "**Transform**: Attach metadata to the transcript segments and batch them into overlapping chunks.  \n", "**Load**: Load the chunks into the database they will later be retrieved from. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from models import etl\n", "import json" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we load the video information, which includes the video IDs and titles." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "with open('data/single_video.json') as f:\n", "    video_info = json.load(f)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we extract the transcripts with the YouTube Transcript API, iterating over all of the videos. This yields a list of timestamped transcript segments per video.  \n", "We then format each transcript by adding metadata so that the segments are easy to identify for retrieval later.  \n", "Since the original segments are short, they are batched into overlapping chunks to preserve semantic meaning: with `batch_size=5` and `overlap=2`, each chunk spans five segments and shares two of them with the previous chunk." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "get_video_transcript took 0.84 seconds.\n", "Transcript for video 5sLYAQS9sWQ fetched.\n" ] } ], "source": [ "videos = []\n", "for video in video_info:\n", "    video_id = video[\"id\"]\n", "    video_title = video[\"title\"]\n", "    transcript = etl.get_video_transcript(video_id)\n", "    print(f\"Transcript for video {video_id} fetched.\")\n", "    if transcript:\n", "        formatted_transcript = etl.format_transcript(transcript, video_id, video_title, batch_size=5, overlap=2)\n", "        videos.extend(formatted_transcript)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last step is to load the data into a database. We will use a ChromaDB database.  \n", "The embedding function, `MyEmbeddingFunction`, wraps an embedding model from HuggingFace."
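, "\n", "As a rough illustration, the next cell sketches how such a wrapper can be written as a ChromaDB-compatible embedding function around a HuggingFace `sentence-transformers` model. The class and model name here are placeholder assumptions; the project's actual implementation lives in `utils/embedding_utils.py`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: a ChromaDB-compatible embedding function\n", "# wrapping a HuggingFace model. The model name is a placeholder, not\n", "# necessarily the one used by utils.embedding_utils.MyEmbeddingFunction.\n", "from chromadb import Documents, EmbeddingFunction, Embeddings\n", "from sentence_transformers import SentenceTransformer\n", "\n", "\n", "class SketchEmbeddingFunction(EmbeddingFunction):\n", "    def __init__(self, model_name: str = \"sentence-transformers/all-MiniLM-L6-v2\"):\n", "        # Downloads the model from the HuggingFace Hub on first use.\n", "        self._model = SentenceTransformer(model_name)\n", "\n", "    def __call__(self, input: Documents) -> Embeddings:\n", "        # Encode a batch of documents into dense vectors, one per document.\n", "        return self._model.encode(list(input)).tolist()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We create the collection with cosine distance for the HNSW index, since cosine similarity is the usual choice for comparing text embeddings."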
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Database created at data/single_video.db\n" ] } ], "source": [ "# Initialize the database\n", "from utils.embedding_utils import MyEmbeddingFunction\n", "import chromadb\n", "\n", "embed_text = MyEmbeddingFunction()\n", "\n", "db_path = \"data/single_video.db\"\n", "client = chromadb.PersistentClient(path=db_path)\n", "\n", "client.create_collection(\n", "    name=\"huberman_videos\",\n", "    embedding_function=embed_text,\n", "    metadata={\"hnsw:space\": \"cosine\"}\n", ")\n", "\n", "print(f\"Database created at {db_path}\")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data loaded to database at data/single_video.db.\n" ] } ], "source": [ "# Add the data to the database\n", "client = chromadb.PersistentClient(path=db_path)\n", "\n", "# Pass the embedding function again so the collection does not fall back\n", "# to ChromaDB's default embedding model.\n", "collection = client.get_collection(\n", "    name=\"huberman_videos\", embedding_function=embed_text\n", ")\n", "\n", "documents = [segment['text'] for segment in videos]\n", "metadata = [segment['metadata'] for segment in videos]\n", "ids = [segment['metadata']['segment_id'] for segment in videos]\n", "\n", "collection.add(\n", "    documents=documents,\n", "    metadatas=metadata,\n", "    ids=ids\n", ")\n", "\n", "print(f\"Data loaded to database at {db_path}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is some of the data:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of segments: 26\n" ] }, { "data": { "text/plain": [
"              ids                                         embeddings                                          metadatas                                          documents  uris  data\n", "0   5sLYAQS9sWQ__0  [-0.11489544063806534, -0.03262839838862419, -...  {'segment_id': '5sLYAQS9sWQ__0', 'source': 'ht...  GPT, or Generative Pre-trained Transformer, is...  None  None\n", "1  5sLYAQS9sWQ__12  [0.094169981777668, -0.10430295020341873, 0.02...  {'segment_id': '5sLYAQS9sWQ__12', 'source': 'h...  Now foundation models are pre-trained on large...  None  None\n", "2  5sLYAQS9sWQ__15  [0.042587604373693466, -0.061460819095373154, ...  {'segment_id': '5sLYAQS9sWQ__15', 'source': 'h...  I'm talking about things like code. Now, large...  None  None\n", "3  5sLYAQS9sWQ__18  [-0.0245895367115736, -0.058405470103025436, -...  {'segment_id': '5sLYAQS9sWQ__18', 'source': 'h...  these models can be tens of gigabytes in size ...  None  None\n", "4  5sLYAQS9sWQ__21  [0.05348338559269905, -0.016104578971862793, -...  {'segment_id': '5sLYAQS9sWQ__21', 'source': 'h...  So to put that into perspective, a text file t...  None  None\n", "5  5sLYAQS9sWQ__24  [0.07004527002573013, -0.08996045589447021, -0...  {'segment_id': '5sLYAQS9sWQ__24', 'source': 'h...  A lot of words just in one Gb. And how many gi...  None  None\n", "6  5sLYAQS9sWQ__27  [0.0283487681299448, -0.11020224541425705, -0....  {'segment_id': '5sLYAQS9sWQ__27', 'source': 'h...  Yeah, that's truly a lot of text. And LLMs are...  None  None\n", "7   5sLYAQS9sWQ__3  [-0.0700172707438469, -0.061202701181173325, -...  {'segment_id': '5sLYAQS9sWQ__3', 'source': 'ht...  And I've been using GPT in its various forms f...  None  None\n", "8  5sLYAQS9sWQ__30  [-0.04904637485742569, -0.1277533322572708, -0...  {'segment_id': '5sLYAQS9sWQ__30', 'source': 'h...  and the more parameters a model has, the more ...  None  None\n", "9  5sLYAQS9sWQ__33  [0.03286760300397873, -0.041724931448698044, 0...  {'segment_id': '5sLYAQS9sWQ__33', 'source': 'h...  All right, so how do they work? Well, we can t...  None  None" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Inspect the stored segments\n", "import pandas as pd\n", "\n", "data = collection.get(include=[\"embeddings\", \"metadatas\", \"documents\"])\n", "print(f\"Number of segments: {len(data['ids'])}\")\n", "pd.DataFrame(data).head(10)" ] }
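, { "cell_type": "markdown", "metadata": {}, "source": [ "As a final sanity check, we can query the collection and confirm that retrieval works end to end. This is a minimal sketch: the query text and `n_results=3` are arbitrary example choices." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Query the collection; the query text is embedded with the same\n", "# embedding function that was attached to the collection above.\n", "results = collection.query(\n", "    query_texts=[\"What is a large language model?\"],\n", "    n_results=3\n", ")\n", "\n", "# Results are returned per query, most similar first.\n", "for doc, meta in zip(results[\"documents\"][0], results[\"metadatas\"][0]):\n", "    print(meta[\"segment_id\"], \"->\", doc[:80])" ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 4 }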