---
title: Audio-Transcriptor
emoji: ⚡
colorFrom: green
colorTo: red
sdk: gradio
sdk_version: 5.3.0
pinned: false
short_description: Audio transcription with speaker diarization using Whisper.
---
# Audio Transcription and Diarization Tool

## Overview
This project provides a robust set of tools for transcribing audio files using the Whisper model and performing speaker diarization with PyAnnote. Users can process audio files, record audio, and save transcriptions with speaker identification.
## Features
- Transcription: Convert audio files in various formats to text (automatically converts to WAV).
- Speaker Diarization: Identify different speakers in the audio.
- Speaker Retrieval: Name speakers during transcription.
- Audio Recording: Record audio directly from a microphone.
- Audio Preprocessing: Includes resampling, format conversion, and audio enhancement.
- Multiple Model Support: Choose from various Whisper model sizes.
## Supported Whisper Models

This tool supports various Whisper model sizes, allowing you to balance accuracy and computational resources:
- `tiny`: Fastest, lowest accuracy
- `base`: Fast, good accuracy
- `small`: Balanced speed and accuracy
- `medium`: High accuracy, slower
- `large`: High accuracy, resource-intensive
- `large-v1`: Improved large model
- `large-v2`: Further improved large model
- `large-v3`: Latest and most accurate
- `large-v3-turbo`: Optimized for faster processing
Specify the model size when initializing the Transcriptor:

```python
transcriptor = Transcriptor(model_size="base")
```

The default model size is `"base"` if not specified.
## Requirements
To run this project, you need Python 3.7+ and the following packages:
- openai-whisper
- pyannote.audio
- librosa
- tqdm
- python-dotenv
- termcolor
- pydub
- SpeechRecognition
- pyaudio
- tabulate
- soundfile
- torch
- numpy
- transformers
- gradio
Install the required packages using:

```shell
pip install -r requirements.txt
```
## Setup

1. Clone the repository:

   ```shell
   git clone https://github.com/your-username/audio-transcription-tool.git
   cd audio-transcription-tool
   ```

2. Install the required packages:

   ```shell
   pip install -r requirements.txt
   ```

3. Set up your environment variables:
   - Create a `.env` file in the root directory.
   - Add your Hugging Face token:

     ```
     HF_TOKEN=your_hugging_face_token_here
     ```
## Usage

### Basic Example

Here's how to use the Transcriptor class to transcribe an audio file:
```python
from pyscript import Transcriptor

# Initialize the Transcriptor
transcriptor = Transcriptor()

# Transcribe an audio file
transcription = transcriptor.transcribe_audio("/path/to/audio")

# Interactively name speakers
transcription.get_name_speakers()

# Save the transcription
transcription.save()
```
### Audio Processing Example

Use the AudioProcessor class to preprocess your audio files:
```python
from pyscript import AudioProcessor

# Load an audio file
audio = AudioProcessor("/path/to/audio.mp3")

# Display audio details
audio.display_details()

# Convert to WAV format and resample to 16000 Hz
audio.convert_to_wav()

# Display updated audio details
audio.display_changes()
```
### Transcribing an Existing Audio File or Recording

To transcribe an audio file, or to record and transcribe audio, use the demo application provided in `demo.py`:

```shell
python demo.py
```
## Key Components

### Transcriptor

The `Transcriptor` class (in `pyscript/transcriptor.py`) is the core of the transcription process. It handles:
- Loading the Whisper model
- Setting up the diarization pipeline
- Processing audio files
- Performing transcription and diarization
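How transcription and diarization results are merged is not shown in this README. As a rough illustration, Whisper's timestamped segments can be matched to PyAnnote's speaker turns by maximal time overlap; this is a hedged sketch with made-up data structures, not the actual internals of `pyscript/transcriptor.py`:

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(segments, turns):
    """Label each (start, end, text) segment with the speaker whose
    (start, end, speaker) turn overlaps it the most."""
    labeled = []
    for s_start, s_end, text in segments:
        best = max(turns, key=lambda t: overlap(s_start, s_end, t[0], t[1]))
        labeled.append((best[2], text))
    return labeled

# Illustrative data: two Whisper segments, two PyAnnote speaker turns
segments = [(0.0, 2.0, "Hello there."), (2.0, 4.5, "Hi, how are you?")]
turns = [(0.0, 2.1, "SPEAKER_00"), (2.1, 4.5, "SPEAKER_01")]
print(assign_speakers(segments, turns))
# → [('SPEAKER_00', 'Hello there.'), ('SPEAKER_01', 'Hi, how are you?')]
```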
### AudioProcessor

The `AudioProcessor` class (in `pyscript/audio_processing.py`) manages audio file preprocessing, including:
- Loading audio files
- Resampling
- Converting to WAV format
- Displaying audio file details and changes
- Audio enhancement (noise reduction, voice enhancement, volume boost)
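Resampling to Whisper's expected 16 kHz is typically done with `librosa.resample`; the standalone sketch below shows the underlying idea with plain NumPy linear interpolation. The function name and approach are illustrative, not the class's actual code:

```python
import numpy as np

def resample_linear(samples: np.ndarray, orig_sr: int, target_sr: int) -> np.ndarray:
    """Naive linear-interpolation resampler (librosa.resample is the real tool)."""
    duration = len(samples) / orig_sr
    n_target = int(duration * target_sr)
    x_old = np.linspace(0.0, duration, num=len(samples), endpoint=False)
    x_new = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(x_new, x_old, samples)

# One second of a 440 Hz tone at 44.1 kHz, downsampled to 16 kHz
tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100, endpoint=False))
resampled = resample_linear(tone, 44100, 16000)
print(len(resampled))  # → 16000
```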
### AudioRecording

The `audio_recording.py` module provides functions for recording audio from a microphone, checking input devices, and saving audio files.
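Saving captured frames to disk can be done with the standard-library `wave` module. In this sketch the `save_wav` helper and the synthetic tone are illustrative stand-ins for the module's real recording path, which captures frames with `pyaudio`:

```python
import math
import struct
import wave

def save_wav(path, frames: bytes, sample_rate=16000, channels=1, sampwidth=2):
    """Write raw 16-bit PCM frames to a WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(channels)
        wf.setsampwidth(sampwidth)  # 2 bytes = 16-bit PCM
        wf.setframerate(sample_rate)
        wf.writeframes(frames)

# One second of a 440 Hz tone as stand-in "recorded" frames
frames = b"".join(
    struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * n / 16000)))
    for n in range(16000)
)
save_wav("tone.wav", frames)

with wave.open("tone.wav", "rb") as wf:
    print(wf.getnframes())  # → 16000
```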
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a new branch:

   ```shell
   git checkout -b feature-branch-name
   ```

3. Make your changes and commit them:

   ```shell
   git commit -m 'Add some feature'
   ```

4. Push to the branch:

   ```shell
   git push origin feature-branch-name
   ```

5. Submit a pull request
## Acknowledgments
- OpenAI for the Whisper model
- PyAnnote for the speaker diarization pipeline
- All contributors and users of this project