phi-2-alpaca-cleaned

This model is an instruction-tuned version of microsoft/phi-2, fine-tuned on the yahma/alpaca-cleaned dataset.

Training used full-parameter fine-tuning of phi-2; LoRA was not used.

Text Format

The model expects prompts in the Alpaca instruction format, for example:
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Based on the information provided, rewrite the sentence by changing its tense from past to future.

### Input:
She played the piano beautifully for hours and then stopped as it was midnight.

### Response:
She will play the piano beautifully for hours and then stop as it will be midnight.
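
For reference, below is a minimal inference sketch (not part of the original card). It assumes the checkpoint loads through the standard transformers auto classes and reuses the example prompt above; the generation settings are illustrative only, and the newline placement follows the example as shown.

```python
# Minimal inference sketch (illustrative; not from the original card).
# Assumes the checkpoint loads via the standard transformers auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ohashi56225/phi-2-alpaca-cleaned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# Build the Alpaca-style prompt; newline placement follows the example above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "Based on the information provided, rewrite the sentence by changing "
    "its tense from past to future.\n\n"
    "### Input:\n"
    "She played the piano beautifully for hours and then stopped as it was "
    "midnight.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```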

Training

  • GPUs: 8 × A6000 48GB
  • per_device_train_batch_size: 8
  • gradient_accumulation_steps: 8
  • per_device_eval_batch_size: 8
  • num_train_epochs: 3
  • learning_rate: 2e-5
  • warmup_ratio: 0.03
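
These settings give an effective global batch size of 8 GPUs × 8 sequences × 8 accumulation steps = 512 sequences per optimizer step. Below is a hedged sketch of how they map onto transformers.TrainingArguments; the output directory and DeepSpeed config path are hypothetical, as the card does not publish the training script.

```python
# Hypothetical reconstruction of the listed hyperparameters as
# transformers.TrainingArguments; the actual training script is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-alpaca-cleaned",  # hypothetical output path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,      # 8 GPUs x 8 x 8 = 512 global batch
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.03,
    deepspeed="ds_config.json",         # hypothetical DeepSpeed config file
)
```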

Software

  • pytorch: 2.1.2
  • transformers: 4.38.0.dev0
  • accelerate: 0.26.1
  • deepspeed: 0.13.1
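
To reproduce the environment, a quick version check (a sketch, not from the card) can confirm that the installed libraries match the versions listed above:

```python
# Environment sanity check (illustrative): prints installed versions so they
# can be compared against the ones listed in this card.
import torch
import transformers
import accelerate
import deepspeed

print("pytorch:", torch.__version__)              # card lists 2.1.2
print("transformers:", transformers.__version__)  # card lists 4.38.0.dev0
print("accelerate:", accelerate.__version__)      # card lists 0.26.1
print("deepspeed:", deepspeed.__version__)        # card lists 0.13.1
```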

Model Details

  • Parameters: 2.78B
  • Tensor type: F32 (safetensors)
