---
license: apache-2.0
datasets:
- stingning/ultrachat
- TIGER-Lab/MathInstruct
- ise-uiuc/Magicoder-Evol-Instruct-110K
- OpenAssistant/oasst2
- teknium/openhermes
- bigcode/commitpackft
- Open-Orca/SlimOrca
- ise-uiuc/Magicoder-OSS-Instruct-75K
language:
- en
library_name: transformers
base_model:
- mllmTeam/PhoneLM-0.5B
---
# PhoneLM-0.5B-Instruct

PhoneLM-0.5B-Instruct is a 0.5 billion parameter decoder-only language model, instruction-tuned from mllmTeam/PhoneLM-0.5B.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = 'mllmTeam/PhoneLM-0.5B-Instruct'

question = "Hello, who are you?"
prompt = [{"role": "user", "content": question}]

# Load the model and tokenizer; trust_remote_code is required for the custom PhoneLM architecture.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='cuda', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the prompt with the model's chat template and move the inputs to the GPU.
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
inp = tokenizer(input_text, return_tensors="pt")
inp = {k: v.to('cuda') for k, v in inp.items()}

# Sample a response.
out = model.generate(**inp,
                     max_length=256,
                     do_sample=True,
                     temperature=0.7,
                     top_p=0.7
                     )
text = tokenizer.decode(out[0], skip_special_tokens=True)
print(text)
```
## Model Details
- Developed by: mllmTeam
- Model type: PhoneLM 0.5B models are auto-regressive language models based on the transformer decoder architecture.
- Language(s): English
- Paper: [PhoneLM Technical Report](https://arxiv.org/abs/2411.05046)
- Library: PhoneLM
## Model Architecture
The model is a decoder-only transformer architecture with the following modifications:
| Hidden Size | Layers | Heads | Sequence Length |
|-------------|--------|-------|------------------|
| 1024        | 24     | 16    | 2048             |
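A quick way to confirm these dimensions is to read them from the published config. The sketch below assumes standard `transformers` attribute names (`hidden_size`, `num_hidden_layers`, etc.); the remote PhoneLM config may expose different field names.

```python
from transformers import AutoConfig

# Load the remote config; trust_remote_code is needed for the custom architecture.
config = AutoConfig.from_pretrained('mllmTeam/PhoneLM-0.5B-Instruct', trust_remote_code=True)

print(config.hidden_size)               # expected: 1024
print(config.num_hidden_layers)         # expected: 24
print(config.num_attention_heads)       # expected: 16
print(config.max_position_embeddings)   # expected: 2048
```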
- Position Embeddings: Rotary Position Embeddings (Su et al., 2021) applied to the first 25% of head embedding dimensions for improved throughput, following Black et al. (2022). PhoneLM additionally quantizes the sin and cos values of the rotary embeddings to 8-bit integers. A minimal sketch of this partial rotary scheme is given after this list.
- Normalization: LayerNorm (Ba et al., 2016) with learned bias terms as opposed to RMSNorm (Zhang & Sennrich, 2019).
- Biases: We remove all bias terms from the feed-forward networks and multi-head self-attention layers, except for the biases of the query, key, and value projections (Bai et al., 2023).
- ReLU Activation Function: ReLU (Glorot et al., 2011) activation functions are adopted in the feed-forward networks.
- Tokenizer: We use the SmolLM (Allal et al., 2024) tokenizer with a vocabulary size of 49,152.
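The following is a minimal, self-contained sketch of partial rotary position embeddings in plain PyTorch, assuming a head dimension of 64 (1024 hidden size / 16 heads) and a 25% rotary fraction. It is an illustration of the idea only, not PhoneLM's actual modeling code (which also quantizes the sin/cos tables to 8-bit integers).

```python
import torch

def build_rope_cache(seq_len, rotary_dim, base=10000.0):
    # Standard RoPE frequencies, computed only over the rotary sub-dimension.
    inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2).float() / rotary_dim))
    positions = torch.arange(seq_len).float()
    freqs = torch.outer(positions, inv_freq)          # (seq_len, rotary_dim / 2)
    return freqs.cos(), freqs.sin()

def apply_partial_rope(x, cos, sin, rotary_dim):
    # x: (batch, heads, seq_len, head_dim); rotate only the first rotary_dim dims.
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]
    rotated = torch.stack([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], dim=-1).flatten(-2)
    # The remaining 75% of each head's dimensions pass through unchanged.
    return torch.cat([rotated, x_pass], dim=-1)

head_dim, seq_len = 64, 8
rotary_dim = head_dim // 4                            # first 25% of head dims
cos, sin = build_rope_cache(seq_len, rotary_dim)
q = torch.randn(1, 16, seq_len, head_dim)
q_rope = apply_partial_rope(q, cos, sin, rotary_dim)
print(q_rope.shape)                                   # torch.Size([1, 16, 8, 64])
```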
## License
- This repository is released under the Apache-2.0 License.
## Citation
```bibtex
@misc{yi2024phonelmanefficientcapablesmall,
      title={PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training},
      author={Rongjie Yi and Xiang Li and Weikai Xie and Zhenyan Lu and Chenghua Wang and Ao Zhou and Shangguang Wang and Xiwen Zhang and Mengwei Xu},
      year={2024},
      eprint={2411.05046},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.05046},
}
```