---
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- cerebras
- LLM
inference: false
---

# Instruction-tuned Cerebras GPT 111M

The smallest of the [Cerebras-GPT models](https://huggingface.co/cerebras), with only 111M parameters, instruction fine-tuned.

## Model Description

Instruction fine-tuned [Cerebras-GPT-111M](https://huggingface.co/cerebras/Cerebras-GPT-111M).

## Training data

The model was fine-tuned on the following data: [alpaca_gpt4_data](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM/blob/main/data/alpaca_gpt4_data.json) (data generated by GPT-4 from Alpaca prompts for fine-tuning LLMs) and [alpaca_data_cleaned](https://github.com/tloen/alpaca-lora/blob/a3027fea37c2087b8b0131b21a4cd948bbdcd9e0/alpaca_data_cleaned.json).

## Prompt template

Fine-tuning was performed with the prompt template from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca):

```python
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}
```

## Usage

For best results, format the input according to the prompt template above during inference.
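
As a rough illustration, the sketch below loads the model with the standard `transformers` API and wraps an instruction in the `prompt_no_input` template before generating. The model ID, the example instruction, and the generation settings are placeholders, not values taken from this repository; replace the ID with this model's path on the Hugging Face Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder: substitute this repository's Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example instruction (hypothetical); wrap it in the Alpaca "prompt_no_input" template.
instruction = "Describe the structure of an atom."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```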