---
language: en
license: mit
tags:
- phi-2
- peft
- lora
- fine-tuned
- neuroscience
---

# Neuroscience Fine-tuned Phi-2 Model

## Model Description

This is a fine-tuned version of Microsoft's Phi-2 model, adapted to neuroscience-domain content via a LoRA adapter.

## Training Procedure

- **Base Model**: Microsoft Phi-2 (2.7B parameters)
- **Training Type**: LoRA fine-tuning
- **Training Dataset**: BrainGPT/train_valid_split_pmc_neuroscience_2002-2022_filtered_subset
- **Training Duration**: 3+ epochs
- **Parameter-Efficient Fine-Tuning**: LoRA with r=16, alpha=32

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")

# Attach the fine-tuned LoRA adapter
model = PeftModel.from_pretrained(base_model, "alaamostafa/Microsoft-Phi-2")

# Load the tokenizer (shared with the base model)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Generate text
input_text = "Recent advances in neuroscience suggest that"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
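After loading the adapter, you can optionally merge the LoRA weights into the base model with `model = model.merge_and_unload()` (a standard PEFT method for LoRA adapters). This removes the adapter indirection so the merged model behaves like a plain `transformers` model at inference time.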
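## Fine-Tuning Configuration (Illustrative Sketch)

The snippet below is a minimal sketch of how a LoRA setup matching the hyperparameters above could be expressed with PEFT. Only r=16 and alpha=32 come from this card; the dropout value and `target_modules` list are assumptions for illustration, not the exact configuration used for this adapter.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model (2.7B parameters)
base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")

# LoRA configuration: r and lora_alpha follow this card;
# lora_dropout and target_modules are assumed values for illustration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,  # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed Phi-2 projection layers
)

# Wrap the base model so only the low-rank adapter weights are trainable
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```

With a configuration like this, only the adapter parameters (a small fraction of the 2.7B total) are updated during training, which is what makes the fine-tuning parameter-efficient.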