---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This model is a quantized and fine-tuned version of the `google/gemma-2b-it` instruction-tuned model.

- **Developed by:** Core&Outline
- **Finetuned from model:** google/gemma-2b-it

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

To use this model via the Inference API or otherwise, please follow Google's Gemma prompt formatting: wrap your prompt in chat turns, i.e. precede it with `<start_of_turn>user`, and follow it with `<end_of_turn>` and `<start_of_turn>model`.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed] (a hedged quickstart sketch is included at the end of this card)

## Training Details

### Training Data

Fine-tuning was done using an internal dataset hosted here: https://huggingface.co/datasets/core-outline/llama-2-7b-chat-hf

### Training Procedure

Training was run manually on an NVIDIA T4 instance. The code for the training procedure is available here: https://github.com/Core-Outline/core-transformer-network/blob/master/GemmaFineTuning.ipynb
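The notebook linked above is not reproduced here. As a rough orientation only, the sketch below shows what a quantized LoRA (QLoRA-style) fine-tuning run of `google/gemma-2b-it` on a single T4 typically looks like, assuming the `transformers`, `peft`, `trl`, `bitsandbytes`, and `datasets` libraries. The hyperparameters, the `text` column name, and the output directory are illustrative assumptions, not the values used in the actual notebook.

```python
# Illustrative QLoRA-style sketch; hyperparameters, column names and paths are
# assumptions, not the settings from GemmaFineTuning.ipynb.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "google/gemma-2b-it"

# 4-bit (NF4) quantization keeps the 2B model well within T4 memory (16 GB).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters on the attention projections; rank and targets are illustrative.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Dataset linked under "Training Data" above; access may be restricted, and the
# text column is assumed to be named "text".
dataset = load_dataset("core-outline/llama-2-7b-chat-hf", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    args=SFTConfig(
        output_dir="gemma-2b-it-finetuned",  # illustrative output path
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        fp16=True,
        logging_steps=10,
    ),
)
trainer.train()
```

On a T4, a small per-device batch size with gradient accumulation is the usual way to stay within memory; the resulting LoRA adapters can be published on their own or merged back into the base model before upload.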
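As referenced in the "How to Get Started" section, a minimal inference sketch is shown below. The repository id is a placeholder (this card does not state where the fine-tuned weights are published), and the example relies on `tokenizer.apply_chat_template`, which applies the Gemma `<start_of_turn>` formatting described under "Uses" automatically.

```python
# Minimal inference sketch. "core-outline/<model-id>" is a placeholder, not a
# real repository id; replace it with this model's actual repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "core-outline/<model-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# apply_chat_template wraps the message in Gemma's <start_of_turn>user ...
# <start_of_turn>model turns, so no manual prompt formatting is needed.
messages = [{"role": "user", "content": "Summarize what this model was fine-tuned for."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```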