A draft model for the Llama 3.1/3.2/3.3 series, specialized in Python coding. It is finetuned from the first 4 layers of facebook/layerskip-llama3.2-1B.
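
Because the draft is finetuned from facebook/layerskip-llama3.2-1B, it shares the Llama 3 tokenizer with the target models and can therefore act as an assistant model for speculative decoding. The snippet below is a minimal sketch using the generic `transformers` assisted-generation interface (the Sequoia framework cited below provides its own tree-based decoding); the target checkpoint and the draft repo id are placeholders to replace with the models you actually run.

```python
# Minimal sketch: pairing this 4-layer draft model with a Llama 3.x target
# via Hugging Face assisted generation (speculative decoding).
# Repo ids below are placeholders, not fixed recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-8B-Instruct"  # any Llama 3.1/3.2/3.3 target (example choice)
draft_id = "path/to/this-draft-model"            # placeholder for this repo's id

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# The draft model proposes tokens; the target model verifies them in parallel.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```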
## Citation

```bibtex
@article{chen2024sequoia,
  title={Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding},
  author={Chen, Zhuoming and May, Avner and Svirschevski, Ruslan and Huang, Yuhsun and Ryabinin, Max and Jia, Zhihao and Chen, Beidi},
  journal={arXiv preprint arXiv:2402.12374},
  year={2024}
}
```