Model Summary
Finetuned Model - Manoj21k/microsoft-phi-2-finetuned
Instruction Fine-Tuning on Alpaca Datasets
We are pleased to introduce Manoj21k/microsoft-phi-2-finetuned, which has been fine-tuned on Alpaca datasets with instructional objectives. This process aims to improve the model's ability to understand and generate responses to specific instructions. Key details about the fine-tuned model follow:
Fine-Tuning Details:
Datasets Used:
The model has been fine-tuned using Alpaca datasets, which are curated for instructional objectives. These datasets provide diverse examples and scenarios to improve the model's ability to follow instructions accurately.
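For reference, records in the Stanford Alpaca dataset follow an instruction/input/output schema. The record below is illustrative only, not an actual record from this model's training data:

# Illustrative Alpaca-style record (instruction/input/output schema);
# not taken from this model's actual training data
example_record = {
    "instruction": "Summarize the customer's complaint in one sentence.",
    "input": "The package arrived two weeks late and the box was crushed.",
    "output": "The customer received a late, damaged delivery.",
}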
Instructional Objectives:
The fine-tuning process emphasizes the model's proficiency in understanding and responding to prompts provided in an instructional format. This includes scenarios where explicit instructions are given, allowing the model to generate more contextually relevant and task-specific outputs.
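The exact prompt template used for this fine-tune is not documented here. As a rough sketch, an Alpaca-style record might be flattened into a single training prompt along these lines; the Instruct:/Output: markers mirror the example in the Code Format section below and are an assumption:

# Hypothetical template; the exact prompt format used for this fine-tune
# is not documented here, so adjust to match the training setup
def build_prompt(record: dict) -> str:
    """Flatten an Alpaca-style record into a single instruction prompt."""
    if record.get("input"):
        return (f"Instruct: {record['instruction']}\n"
                f"Input: {record['input']}\n"
                f"Output:")
    return f"Instruct: {record['instruction']}\nOutput:"

print(build_prompt({"instruction": "List three uses of a paperclip.", "input": ""}))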
Intended Use Cases:
Instruction-Based Tasks:
The fine-tuned model is particularly well suited for tasks that involve providing instructions in the prompt, such as generating detailed responses, following specific guidelines, or addressing instructional queries.
Enhanced Controllability:
Users can expect improved controllability when using this model, making it a valuable asset for applications where precise instruction adherence is crucial.
Code Format:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
finetuned_model = AutoModelForCausalLM.from_pretrained("Manoj21k/microsoft-phi-2-finetuned")
tokenizer = AutoTokenizer.from_pretrained("Manoj21k/microsoft-phi-2-finetuned")

# Tokenize an instruction-style prompt and generate a response
input_text = "Instruct: Issue with the delivered product"
inputs = tokenizer(input_text, return_tensors="pt", return_attention_mask=False)
output = finetuned_model.generate(**inputs, max_length=200)

# Decode and print the generated text
decoded_output = tokenizer.batch_decode(output)[0]
print(decoded_output)
The model generates its response as a continuation of the instruction prompt.
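For tighter control over the output, generation parameters can be adjusted. The sketch below enables sampling and decodes only the newly generated tokens; the parameter values are illustrative starting points, not recommended settings from the model author:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Manoj21k/microsoft-phi-2-finetuned"
finetuned_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Instruct: Issue with the delivered product",
                   return_tensors="pt", return_attention_mask=False)

# Sampling-based decoding; these values are starting points to experiment with
output = finetuned_model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Keep only the tokens generated after the prompt
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))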
Notes:
* The fine-tuned model is specialized for instruction-based tasks and may outperform the base Phi-2 model in scenarios that require adherence to explicit instructions.
* Users are encouraged to experiment with various instructional prompts to leverage the model's capabilities effectively.
* As always, we appreciate user feedback to continue refining and improving the model for a wide range of applications.
* Ensure that you are using transformers>=4.36.0, and always load the model with trust_remote_code=True to prevent side effects.
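Concretely, pass the flag to both from_pretrained calls:

from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True allows the repository's custom modeling code to run
finetuned_model = AutoModelForCausalLM.from_pretrained(
    "Manoj21k/microsoft-phi-2-finetuned", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "Manoj21k/microsoft-phi-2-finetuned", trust_remote_code=True
)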
Limitations of Phi-2
Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
Limited Scope for Code: The majority of Phi-2's training data is Python-based and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses; a helper for surfacing a script's imports is sketched below.
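One lightweight way to support that verification (a sketch using only the standard library, not an official tool) is to parse a generated script and list the modules it imports for manual review:

import ast

def list_imports(code: str) -> set[str]:
    """Return the top-level module names imported by a Python snippet."""
    modules = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

generated_code = "import numpy as np\nfrom collections import Counter\n"
print(list_imports(generated_code))  # {'numpy', 'collections'}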
Unreliable Responses to Instruction: The base Phi-2 model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to ensure training data safety. There is a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
Verbosity: As a base model, Phi-2 often produces irrelevant or extra text and responses following its first answer to a user prompt within a single turn. This is due to its training dataset consisting primarily of textbooks, which results in textbook-like responses.
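One simple mitigation is to truncate the decoded continuation at the first sign of a follow-on turn. The sketch below assumes the extra text begins with a repeated marker such as "Instruct:"; the marker list is an assumption and may need adjusting for your prompt format:

def truncate_first_answer(generated: str,
                          markers=("Instruct:", "Exercise:", "<|endoftext|>")) -> str:
    """Cut a decoded continuation at the earliest follow-on marker.

    Apply this to the generated continuation only (not the full prompt),
    since the prompt itself begins with "Instruct:".
    """
    cut = len(generated)
    for marker in markers:
        idx = generated.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut].rstrip()

print(truncate_first_answer("We will replace the item.\nInstruct: Another question"))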
License
The model is licensed under the MIT license.