Model Summary

CodePhi2 is a fine-tune of Microsoft's Phi-2 LLM (2.7 billion parameters). It was fine-tuned on TokenBender's code_instructions_122k_alpaca_style dataset, with the goal of improving Phi-2's coding ability while teaching it the Alpaca instruction format.

Instruction Format (Alpaca)

CodePhi2 was fine-tuned on the Alpaca instruction format, and should therefore be prompted as shown below:

Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}

### Response:
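
The snippet below is a minimal sketch of assembling a prompt in this format; the build_prompt helper and the example instruction are illustrative, not part of the model repository.

```python
# Illustrative helper for building an Alpaca-style prompt for CodePhi2.
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
```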

Notes

If you are using transformers>=4.36.0, always load the model with trust_remote_code=True to avoid unexpected side effects.
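
The snippet below is a minimal loading and generation sketch using Hugging Face transformers under that setting; dtype/device handling, generation parameters, and the example instruction are assumptions to adapt to your setup.

```python
# Minimal sketch: load CodePhi2 and generate from an Alpaca-style prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TitleOS/CodePhi2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "Write a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```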
