Model Card for Fine-Tuned Pegasus Summary Generator
Model Details
Model Description
This model is a fine-tuned version of the Pegasus model for text summarization, specifically optimized for generating structured summaries from transcripts. The model has been trained to capture key points, remove redundant information, and maintain coherence in summaries.
- Developed by: Akshay Choudhary
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: Transformer-based summarization model
- Language(s) (NLP): English
- License: [More Information Needed]
- Finetuned from model [optional]: google/pegasus-large
Model Sources [optional]
- Repository: https://huggingface.co/akshay9125/Transcript_Summerizer/
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
The model can be used directly for transcript summarization in a range of applications (see the pipeline sketch after this list), including:
- Meeting and lecture transcript summarization
- Podcast and interview summarization
- Summarization of long-form text data
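As a minimal sketch of direct use, the checkpoint can be loaded through the standard Transformers summarization pipeline (the transcript text below is invented for illustration):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint behind the standard summarization pipeline
summarizer = pipeline("summarization", model="akshay9125/Transcript_Summerizer")

transcript = "Speaker 1: Let's start the meeting. Speaker 2: First item is the budget review."
print(summarizer(transcript)[0]["summary_text"])
```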
Downstream Use [optional]
The model can be fine-tuned further for:
- Domain-specific summarization (e.g., medical, legal, or educational transcripts)
- Integration into AI-powered note-taking tools
Out-of-Scope Use
- Generating highly creative or fictional content
- Summarizing extremely noisy or low-quality transcripts
- Generating precise legal or medical documentation without expert verification
Bias, Risks, and Limitations
The model may exhibit biases or limitations stemming from:
- The dataset used for fine-tuning
- The quality and clarity of input transcripts
- Potential loss of nuanced context during summarization
Recommendations
Users should:
- Validate summaries for critical use cases
- Avoid using the model for tasks requiring absolute accuracy without human verification
- Be aware of potential biases in the generated summaries
How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = PegasusTokenizer.from_pretrained("akshay9125/Transcript_Summerizer")
model = PegasusForConditionalGeneration.from_pretrained("akshay9125/Transcript_Summerizer")

def summarize_text(text):
    # Tokenize the transcript, truncating to the model's maximum input length
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="longest")
    # Generate the summary with the model's default generation settings
    summary_ids = model.generate(**inputs)
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```
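For example (the transcript text and the explicit generation settings below are illustrative assumptions, not values from this card):

```python
transcript = (
    "Speaker 1: Welcome everyone, today we are reviewing the Q3 roadmap. "
    "Speaker 2: The main blockers right now are hiring and the API migration."
)

# Using the helper defined above
print(summarize_text(transcript))

# Or with explicit generation settings (beam search and a length cap;
# common defaults rather than the author's tuned values)
inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```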
Training Details
Training Data
- Dataset: Collected and preprocessed transcript datasets
- Preprocessing: Removal of noise, speaker labels, and unnecessary pauses (a rough sketch of this kind of cleanup follows below)
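The exact preprocessing pipeline is not published. The following is a rough sketch of the kind of cleanup described above, assuming speaker labels of the form "Name:" at the start of a line and bracketed pause markers such as "[pause]" (both are assumptions, not documented details):

```python
import re

def clean_transcript(raw: str) -> str:
    # Assumed format: drop leading "Name:" speaker labels on each line
    text = re.sub(r"(?m)^\s*[A-Z][\w .]*:\s*", "", raw)
    # Assumed format: drop bracketed annotations such as [pause] or [inaudible]
    text = re.sub(r"\[[^\]]*\]", " ", text)
    # Collapse the whitespace left over from the removals
    return re.sub(r"\s+", " ", text).strip()
```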
Training Procedure
- Preprocessing: Tokenization with the Pegasus tokenizer
- Training regime: FP16 mixed precision (an illustrative configuration follows below)
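The card only states that FP16 mixed precision was used. A minimal Seq2SeqTrainingArguments configuration consistent with that might look like the following; every hyperparameter value other than fp16=True is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

# fp16=True matches the stated training regime; everything else is illustrative
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-transcript-summarizer",  # assumed name
    fp16=True,                      # mixed-precision training, as stated above
    per_device_train_batch_size=2,  # assumed value
    learning_rate=5e-5,             # assumed value
    num_train_epochs=3,             # assumed value
    predict_with_generate=True,
)
```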
Speeds, Sizes, Times [optional]
- Model size: ~568M parameters
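As a sanity check, the ~568M figure (the size of google/pegasus-large) can be confirmed on the model loaded in the quick-start snippet above:

```python
# Count the parameters of the loaded model
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M parameters")
```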
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
Akshay Choudhary
Model Card Contact
[More Information Needed]