# 🧠 CarsonTalk v0.1
A conversational chatbot fine-tuned on real Discord server messages to emulate casual, chaotic, human-like dialogue. Based on distilgpt2.
## 📦 Model Info
- Base Model: distilgpt2
- Training Data: 2,000 anonymized Discord messages (loaded from CSV)
- Max Token Length: 128
- Training Time: ~10 minutes on a T4 GPU
- Epochs: 2
- Batch Size: 8
- Eval Loss: 4.71
- Date: 2025-04-09
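
The exact training script isn't published here; the sketch below shows one way to reproduce the listed hyperparameters (2 epochs, batch size 8, max length 128) with the Hugging Face `Trainer`. The CSV filename and its `message` column are assumptions, not part of this repo.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Hypothetical file/column names: one Discord message per row in "message".
dataset = load_dataset("csv", data_files="discord_messages.csv")["train"]

def tokenize(batch):
    return tokenizer(batch["message"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="carsontalk",
    num_train_epochs=2,
    per_device_train_batch_size=8,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```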
## 📊 Results
| Metric | Value |
|---|---|
| Eval Loss | 4.71 |
| Avg. Message Length | ~3.36 words |
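
For reference, the average message length can be recomputed from the raw CSV along these lines (the file name and `message` column are assumptions):

```python
import pandas as pd

# Count whitespace-separated words per message, then average.
df = pd.read_csv("discord_messages.csv")
avg_words = df["message"].astype(str).str.split().str.len().mean()
print(f"Avg message length: {avg_words:.2f} words")
```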
## ✨ Sample Outputs
- Prompt: "hey what's up" → hey what's up with her rlly?
- Prompt: "did you see that meme" → did you see that meme? i didnt read the full description of how it is in our lives?
- Prompt: "i'm not mad but like" → i'm not mad but like… you want to ur be in the match...
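
To try prompts like these yourself, a minimal generation sketch using the `transformers` pipeline is shown below; the model path is a placeholder for the actual Hub id or a local checkpoint.

```python
from transformers import pipeline

# "path/to/carsontalk" is a placeholder; point it at the published
# checkpoint or the output directory of your own fine-tune.
chat = pipeline("text-generation", model="path/to/carsontalk")

out = chat(
    "hey what's up",
    max_new_tokens=30,
    do_sample=True,    # sampling keeps the chaotic Discord flavor
    temperature=0.9,
    top_p=0.95,
)
print(out[0]["generated_text"])
```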
## 🧪 Intended Use
- Fun chatbot experiments
- Data-efficient casual dialogue modeling
- Benchmarking micro-finetunes on chaotic social data
## 📄 License
MIT
Trained and auto-logged with ❤️ in Google Colab.