Pooja Ganesh committed
Commit: 76bd16b
1 Parent(s): 16a1299
Update README.md

README.md CHANGED
@@ -18,4 +18,21 @@ tags:
- meta
- llama-3
- llama
---

# meta-llama/Llama-3.1-8B

- ## Introduction
  This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) with calibration samples from the Pile dataset, and then applying the [onnxruntime-genai model builder](https://github.com/microsoft/onnxruntime-genai/tree/main/src/python/py/models) to convert the model to ONNX.
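
  The Quark recipe itself is not included in this card; purely as an illustration, the export step with the onnxruntime-genai model builder usually looks like the sketch below. The output folder and the `int4` precision flag are assumptions, since the actual quantization settings are listed as TBD in the next section.

  ```python
  # Hedged sketch of the ONNX export step only; the Quark quantization step is
  # not documented in this card. Folder name and precision flag are assumptions.
  import subprocess
  import sys

  subprocess.run(
      [
          sys.executable, "-m", "onnxruntime_genai.models.builder",
          "-m", "meta-llama/Llama-3.1-8B",  # source Hugging Face model
          "-o", "./llama-3.1-8b-onnx",      # hypothetical output folder
          "-p", "int4",                     # assumed precision; actual setting is TBD below
          "-e", "cpu",                      # execution provider (cpu, cuda, dml, ...)
      ],
      check=True,
  )
  ```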

- ## Quantization Strategy
  - ***Quantized Layers***: TBD
  - ***Weight***: TBD
- ## Quick Start
  For a quick start, refer to AMD [RyzenAI-SW-EA](https://account.amd.com/en/member/ryzenai-sw-ea.html).
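
  The EA package documents the supported flow end to end. Purely as a hedged sketch of what running the exported ONNX folder with the onnxruntime-genai Python API can look like, here is a minimal generation loop; the model path is hypothetical and the method names follow the 0.4-era onnxruntime-genai examples, so they may differ in other releases.

  ```python
  # Minimal sketch of running the exported model with onnxruntime-genai.
  # The folder path is hypothetical; API details vary between releases.
  import onnxruntime_genai as og

  model = og.Model("./llama-3.1-8b-onnx")   # folder produced by the model builder
  tokenizer = og.Tokenizer(model)
  stream = tokenizer.create_stream()

  params = og.GeneratorParams(model)
  params.set_search_options(max_length=128)
  params.input_ids = tokenizer.encode("What is quantization?")

  generator = og.Generator(model, params)
  while not generator.is_done():
      generator.compute_logits()
      generator.generate_next_token()
      print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
  ```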

#### Evaluation scores
The perplexity measurement is run on the wikitext-2-raw-v1 (raw data) dataset provided by Hugging Face. The perplexity score measured for a prompt length of 2k is 6.861743.
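
The card does not name the evaluation harness, so the sketch below only illustrates one common way to compute such a perplexity on wikitext-2-raw-v1 with non-overlapping 2k-token windows; it runs the Hugging Face float checkpoint as a stand-in, and the windowing details are assumptions rather than the procedure behind the 6.861743 figure.

```python
# Hedged sketch of a 2k-context perplexity measurement on wikitext-2-raw-v1.
# This evaluates the Hugging Face float checkpoint; the ONNX model would need
# its own runner, and the exact harness used for this card is not specified.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
).eval()

# Concatenate the raw test split and tokenize it once.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

seq_len = 2048  # "prompt length 2k"
nlls = []
for start in range(0, ids.size(1) - seq_len + 1, seq_len):  # non-overlapping windows
    chunk = ids[:, start : start + seq_len]
    with torch.no_grad():
        # With labels == inputs, the returned loss is the mean next-token NLL.
        nlls.append(model(chunk, labels=chunk).loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```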

#### License
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
37 |
+
|
38 |
+
license: llama3.1
|