# Chatbot Auto Model
This model is trained on a custom dataset to perform multiple-choice classification for chatbot use cases.
## Model Description

The model is based on the `google-bert/bert-base-uncased` architecture and fine-tuned on the `razaque/Processed_dataset` dataset. It is designed to select the best answer from a list of options for a given question.
## Use Cases
- Chatbot systems
- Job application evaluations
- Interactive FAQs
## Training Data
The model was trained on a custom dataset of 25,112 rows, where each row contains:
- A question
- Multiple answer choices
- A label indicating the correct answer
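To make the row layout concrete, here is a hypothetical example of a single training row; the field names (`question`, `choices`, `label`) are assumptions for illustration, and the actual column names in `razaque/Processed_dataset` may differ:

```python
# Hypothetical dataset row (field names are assumed, not taken from the
# actual dataset schema).
row = {
    "question": "How would you rate the candidate's communication skills?",
    "choices": [
        "Excellent: clear and persuasive in all settings.",
        "Good: communicates effectively in most situations.",
        "Fair: occasionally unclear, needs prompting.",
        "Poor: struggles to convey ideas.",
    ],
    "label": 1,  # index into `choices` marking the correct answer
}

# The label must be a valid index into the choices list.
assert 0 <= row["label"] < len(row["choices"])
```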
## Training Details

- Base Model: `google-bert/bert-base-uncased`
- Epochs: 3
- Batch Size: 4
- Learning Rate: 2e-5
- Optimizer: AdamW
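The hyperparameters above map directly onto `transformers.TrainingArguments`; a minimal configuration sketch might look like the following (the `output_dir` value is a placeholder, and dataset loading/preprocessing are omitted):

```python
from transformers import TrainingArguments

# Training configuration matching the values listed above.
# "chatbotauto" is a placeholder output directory.
args = TrainingArguments(
    output_dir="chatbotauto",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=2e-5,
    optim="adamw_torch",  # AdamW, the Trainer default
)
```

These arguments would then be passed to a `Trainer` along with the model and the preprocessed multiple-choice dataset.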
## How to Use
You can use the model for inference with the Hugging Face Transformers library:
```python
from transformers import AutoModelForMultipleChoice, AutoTokenizer
import torch

model_name = "razaque/chatbotauto"  # Replace with your model name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

question = "How would you rate the candidate's overall job performance?"
choices = [
    "Excellent: Consistently exceeded expectations, delivered outstanding results.",
    "Good: Met expectations, performed reliably and effectively.",
    "Fair: Frequently needed supervision and guidance, achieved minimal outcomes.",
    "Poor: Did not meet expectations, often failed to complete tasks."
]

# Tokenize the question paired with each choice, then add the batch
# dimension that AutoModelForMultipleChoice expects:
# (batch_size, num_choices, seq_len).
inputs = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}

with torch.no_grad():
    outputs = model(**inputs)

# The highest logit marks the predicted choice.
predicted_choice = outputs.logits.argmax(dim=-1).item()
print(f"Predicted choice: {choices[predicted_choice]}")
```