---
title: YouTube Summary AI
emoji: 🎥
colorFrom: pink
colorTo: red
sdk: docker
sdk_version: 1.46.0
app_file: app.py
pinned: false
---
# YouTube Summary AI
Transform YouTube videos into concise notes and summaries using fully local AI processing. This application runs entirely on your machine with no external API calls, ensuring complete privacy and security of your data.

## Key Features
- **100% Local Processing**: All AI operations run on your machine
  - No API keys required
  - No data sent to external servers
  - Complete privacy and security
  - Runs on CPU or GPU
- **Offline Capable**: Once models are downloaded, works without internet
- **Fast Processing**: Direct local inference without API latency
- Easy YouTube video URL input
- Advanced audio extraction using yt-dlp
- Local transcription using Whisper
- Local AI summarization using LLaMA, Gemma, or other LLMs
- Shareable summary links
- Clean and intuitive user interface
## How It Works
1. **Download**: Downloads YouTube video audio locally using yt-dlp
2. **Transcribe**: Processes audio using local Whisper model
3. **Summarize**: Generates a summary using a local LLM (LLaMA by default) served by Ollama
**All data stays on your machine!**
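The three steps above can be sketched in a few functions. This is a minimal illustration, not the app's actual `summarizer.py`; it assumes the `yt-dlp`, `openai-whisper`, and `ollama` Python packages are installed, and the output path, prompt wording, and model name are placeholders.

```python
def download_audio(url, out="audio"):
    """Fetch only the audio track and convert it to mp3 via yt-dlp/FFmpeg."""
    import yt_dlp  # imported lazily so the pure helpers below need no extra deps
    opts = {
        "format": "bestaudio/best",
        "outtmpl": f"{out}.%(ext)s",
        "postprocessors": [
            {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}
        ],
    }
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download([url])
    return f"{out}.mp3"

def transcribe(audio_path):
    """Transcribe locally with the Whisper base model (cached after first run)."""
    import whisper
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

def build_prompt(transcript):
    """Wrap the transcript in a summarization instruction for the LLM."""
    return (
        "Summarize the following video transcript into concise notes:\n\n"
        + transcript
    )

def summarize(transcript, model="llama3.1"):
    """Send the prompt to the local Ollama server and return its reply."""
    import ollama
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": build_prompt(transcript)}],
    )
    return response["message"]["content"]

def video_to_summary(url):
    """Full pipeline: download -> transcribe -> summarize, all on this machine."""
    return summarize(transcribe(download_audio(url)))
```

Nothing here talks to an external API: yt-dlp writes the audio to disk, Whisper runs inference locally, and `ollama.chat` only contacts the Ollama server on `localhost`.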
## Prerequisites
Before running the application, make sure you have the following installed:
- Python 3.8 or higher
- FFmpeg
- Ollama with a LLaMA model pulled (see Installation)
## Installation
1. Clone the repository
```bash
git clone https://github.com/Shivp1413/youtube-summary-ai.git
cd youtube-summary-ai
```
2. Create a virtual environment (recommended)
```bash
python -m venv venv
source venv/bin/activate # On Windows, use: venv\Scripts\activate
```
3. Install the required packages
```bash
pip install -r requirements.txt
```
4. Install Ollama and pull a LLaMA model
```bash
# Install Ollama from https://ollama.ai
ollama pull llama3.1
```
## First-Time Setup
When you first run the application, it will:
1. Download the Whisper base model (~150MB) for local transcription
2. Use your local LLaMA model for summarization
3. All subsequent runs will use these local models
## Usage
1. Start the Streamlit application:
```bash
streamlit run app.py
```
2. Open your web browser and navigate to `http://localhost:8501`
3. Enter a YouTube URL and click "Generate Summary"
4. Share the summary using the generated link
## Project Structure
```
youtube-summary-ai/
├── app.py             # Main Streamlit application
├── summarizer.py      # Video processing and local AI logic
├── requirements.txt   # Project dependencies
├── assets/            # Project assets
│   └── demo.gif       # Application demo
└── README.md          # Project documentation
```
## Security Features
- ✅ No API keys needed
- ✅ No cloud services required
- ✅ All processing happens locally
- ✅ No data leaves your machine
- ✅ Full control over your data
- ✅ Works offline after initial setup
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.