Update README.md
README.md
CHANGED
@@ -18,7 +18,7 @@ widget:
# LiteLlama: Reduced-Scale Llama

-
+We present an open-source reproduction of Meta AI's [LLaMa 2](https://ai.meta.com/llama/) at a significantly reduced scale: [LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) has 460M parameters and was trained on 1T tokens.

## Dataset and Tokenization

@@ -81,7 +81,7 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le

## Contact
-This model
+This model was developed by [Xiaotian Han](https://ahxt.github.io/) of the DATA Lab at Texas A&M University under the supervision of Prof. [Xia "Ben" Hu](https://cs.rice.edu/~xh37/index.html), and it is released under the MIT License.
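As a usage note for the updated card: the sketch below shows one way the [LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) checkpoint could be loaded with Hugging Face `transformers`. The prompt and generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal sketch (not part of this commit): loading the LiteLlama-460M-1T
# checkpoint referenced in the README with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahxt/LiteLlama-460M-1T"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt and generation settings are assumptions for illustration.
prompt = "Q: What is the largest bird?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```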