---
datasets:
  - Anthropic/hh-rlhf
  - ehartford/dolphin
  - conceptofmind/t0_submix_original
  - conceptofmind/niv2_submix_original
language:
  - en
pipeline_tag: text-generation
---

# Mac Llama 13B

## Model Description

Mac Llama 13B is an experimental Llama 2 13B model fine-tuned on an Orca-style dataset.

## Usage

Mac Llama 13B should be used with the following prompt format:

```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Mac Llama 13B
```
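The template above can be assembled with a small helper like the one below. This is a minimal sketch (the function name and the exact whitespace between sections are assumptions, not part of the model card); the resulting string can be passed to any generation API, e.g. a `transformers` text-generation pipeline.

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the ### System / ### User / ### Assistant
    format expected by Mac Llama 13B. The blank line between sections
    is an assumption about the template's spacing."""
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{user}\n\n"
        "### Assistant:\n"
    )

prompt = build_prompt(
    "This is a system prompt, please behave and help the user.",
    "Summarize supervised fine-tuning in one sentence.",
)
```

The generated text that follows the trailing `### Assistant:` line is the model's reply.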

## Model Details

### Training Procedure

The model is trained via supervised fine-tuning on the datasets listed above, in mixed precision (BF16), and optimized with AdamW, using the following hyperparameters:

| Dataset         | Batch Size | Learning Rate | Learning Rate Decay | Warm-up | Weight Decay | Betas       |
|-----------------|------------|---------------|---------------------|---------|--------------|-------------|
| Orca pt1 packed | 256        | 3e-5          | Cosine to 3e-6      | 100     | 1e-6         | (0.9, 0.95) |
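The learning-rate schedule from the table (warm-up for 100 steps to a peak of 3e-5, then cosine decay to 3e-6) can be sketched as a pure function of the step index. The linear shape of the warm-up and the total step count are assumptions; the card only specifies the peak, the floor, and the warm-up length.

```python
import math

def lr_at_step(step: int, total_steps: int, warmup_steps: int = 100,
               peak_lr: float = 3e-5, final_lr: float = 3e-6) -> float:
    """Learning rate at a given optimizer step: linear warm-up to
    peak_lr (warm-up shape is an assumption), then cosine decay
    down to final_lr over the remaining steps."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```

In a real training loop this would typically be realized with a scheduler such as `transformers.get_cosine_schedule_with_warmup` wrapped around the AdamW optimizer.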

## Ethical Considerations and Limitations

Mac Llama 13B is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the model's potential outputs cannot be predicted in advance, and it may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application of Mac Llama 13B, developers should perform safety testing and tuning tailored to their specific application of the model.