StoryTell AI
This model, built with Hugging Face's Transformers library, is designed for educational storytelling in computer science. It helps users learn CS concepts through interactive narratives: users request a story about a specific topic, such as loops, and the model generates informative, engaging content complete with assessments.
Model Details
Model Description
This model is a tool for teaching fundamental computer science concepts through educational storytelling. It generates interactive stories tailored to the CS topic a user requests, such as algorithms or programming basics, and incorporates assessments to reinforce learning and engagement.
- Developed by: Ranam Hamoud & George Kanaan
- Model type: PEFT (LoRA) adapter for Meta's Llama 2 7B
- Language(s) (NLP): English
- License: MIT License
- Finetuned from model: meta-llama/Llama-2-7b-hf
Model Sources
- Repository: Data Generation Scripts
- Paper: Not provided
- Demo: StorytellAI on Hugging Face Spaces
Uses
Direct Use
The model is designed for direct use via an interactive interface where users request stories about specific computer science topics. It is suited to educational settings, particularly as a supplementary learning tool.
Downstream Use
While primarily designed for educational storytelling, the model could potentially be adapted for other educational applications or interactive learning tools that require narrative generation.
Out-of-Scope Use
The model is not intended for high-stakes decision-making, nor should it be used as a sole resource for learning computer science. It is not designed to handle topics outside its training scope, such as non-CS content.
Bias, Risks, and Limitations
The model might inherit biases from its training data, which was generated using prompts from OpenAI's GPT-3.5-Turbo/4. It may also exhibit limitations in understanding and generating accurate technical content if the input deviates significantly from the data seen during training.
Recommendations
Users should be aware of the potential for inherited biases and should use this model as a supplementary educational tool alongside other resources. Further evaluation and monitoring are recommended to identify and mitigate any emergent biases or inaccuracies.
How to Get Started with the Model
A general framework for initializing and running the model is detailed in the repository and demo linked above; consult those links for the exact implementation.
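As a starting point, the adapter can typically be attached to the base model with the PEFT library. The model IDs below come from this card, but the prompt template and helper names (`build_prompt`, `load_storytell`) are illustrative assumptions, not the authors' exact code; see the linked repository and demo for specifics.

```python
# Hedged sketch: load the StoryTell LoRA adapter on top of Llama-2-7b
# and build a story request. Helper names and the prompt wording are
# assumptions for illustration only.

BASE_MODEL = "meta-llama/Llama-2-7b-hf"
ADAPTER = "ranamhamoud/storytell"

def build_prompt(topic: str) -> str:
    """Wrap a CS topic in a Llama-2-style instruction prompt (assumed format)."""
    instruction = (
        f"Tell an educational story that teaches {topic}, "
        "ending with a short quiz."
    )
    return f"[INST] {instruction} [/INST]"

def load_storytell():
    """Load the base model and attach the LoRA adapter (needs GPU + weight access)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER)  # apply the PEFT adapter
    return model, tokenizer

# Example usage (not run here):
# model, tok = load_storytell()
# inputs = tok(build_prompt("for loops"), return_tensors="pt").to(model.device)
# print(tok.decode(model.generate(**inputs, max_new_tokens=512)[0]))
```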
Training Details
Training Data
The model was trained on a custom dataset generated specifically for this project, aimed at creating educational content related to computer science topics. The data generation scripts and datasets are available at the linked GitHub repository.
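Illustratively, generated stories can be packed into instruction-tuning records for a Llama 2 fine-tune. The field names and template below are assumptions based on the common Llama-2 chat format, not the project's actual schema; the real scripts are in the linked GitHub repository.

```python
# Hypothetical sketch of converting one GPT-generated (topic, story) pair
# into a Llama-2-style instruction-tuning record. Template is an assumption.

def to_training_record(topic: str, story: str) -> str:
    """Format one (topic, story) pair using the standard Llama-2 chat template."""
    prompt = f"Tell me a story about {topic}."
    return f"<s>[INST] {prompt} [/INST] {story} </s>"

record = to_training_record(
    "recursion",
    "Once upon a time, a function called itself until it reached a base case.",
)
```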
Training Hyperparameters
The model was fine-tuned on an NVIDIA A100 using quantization to reduce memory usage, with LoRA adapters applied to Meta's Llama 2 7B base model. The exact training arguments are specified in the linked repository.
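A setup of the kind described above might look like the following. The specific rank, scaling factor, target modules, and bit width here are assumptions for illustration, not the values used to train this model.

```python
# Hedged sketch of a quantized LoRA fine-tuning configuration.
# All numeric choices below are assumptions, not this model's settings.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 on the A100
)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Both config objects would then be passed to `from_pretrained` and `get_peft_model` respectively before launching the trainer.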
Evaluation
Testing Data, Factors & Metrics
Further details on testing data and evaluation metrics will be provided.
Results
Results of the training and subsequent evaluations will be provided to understand the effectiveness of the model in educational storytelling.
Environmental Impact
- Hardware Type: NVIDIA A100
- Hours used: 8
- Cloud Provider: RunPod
- Carbon Emitted: Estimates not provided
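Although no official estimate is given, a back-of-envelope figure can be derived from the reported hardware and hours. The power draw and grid carbon intensity below are assumptions (a single A100 near its 400 W TDP, and 0.4 kgCO2eq/kWh, which varies widely by region), so the result is indicative only.

```python
# Rough carbon estimate, not an official figure. Every input except the
# 8 reported hours is an assumption.
gpu_power_kw = 0.4           # assumed single-A100 draw in kW (~TDP)
hours = 8                    # from this card
grid_intensity = 0.4         # assumed kgCO2eq per kWh

energy_kwh = gpu_power_kw * hours           # 3.2 kWh
emissions_kg = energy_kwh * grid_intensity  # roughly 1.3 kgCO2eq
```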
Framework versions
- PEFT 0.10.0
Model tree for ranamhamoud/storytell
- Base model: meta-llama/Llama-2-7b-hf