korywat committed on
Commit 21cb7b6
1 Parent(s): 50ecce6

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +91 -32
README.md CHANGED
@@ -17,9 +17,11 @@ tags:

 Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency.
 
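To get a rough sense of why 4-bit weights matter for on-device deployment: weight storage scales linearly with bit width, so quantizing from fp16 to w4a16 cuts the weight footprint roughly fourfold. A minimal sketch, where the 8-billion parameter count comes from the model name and the calculation ignores the w8a16 portion and runtime overhead:

```python
def weight_storage_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (weights only; ignores activations and overhead)."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 8e9  # Llama 3 8B
fp16_gb = weight_storage_gb(n_params, 16)  # ~16 GB: too large for most phones
w4_gb = weight_storage_gb(n_params, 4)     # ~4 GB: feasible on recent flagship devices
```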
-This is based on the implementation of Llama-v3-8B-Chat found
-[here]({source_repo}). More details on model performance
-across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).

 ### Model Details

@@ -38,14 +40,6 @@ accross various devices, can be found [here](https://aihub.qualcomm.com/models/l
 - Token generator output: 1 output token + KV cache outputs
 - Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.

- | Model | Device | Chipset | Target Runtime | Response Rate (Tokens/Second) | Time To First Token (TTFT) Range (Seconds) | Evaluation |
- |---|---|---|---|---|---|---|
- | Llama-v3-8B-Chat | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 66.14 | (0.028, 0.92) | -- |
- | Llama-v3-8B-Chat | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 66.14 | (0.028, 0.92) | -- |
- | Llama-v3-8B-Chat | Samsung Galaxy S23 Ultra | Snapdragon® 8 Gen 2 | QNN | 66.14 | (0.028, 0.92) | -- |
- | Llama-v3-8B-Chat | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 66.14 | (0.028, 0.92) | -- |
- | Llama-v3-8B-Chat | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 66.14 | (0.028, 0.92) | -- |
-
 ## Deploying Llama 3 on-device
 Please follow [this tutorial](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llama)
 to compile QNN binaries and generate bundle assets to run [ChatApp on Windows](https://github.com/quic/ai-hub-apps/tree/main/apps/windows/cpp/ChatApp) and on Android powered by QNN-Genie.
@@ -73,6 +67,90 @@ Response: Superposition is a fundamental concept in quantum mechanics, which is

 ## License
 * The license for the original implementation of Llama-v3-8B-Chat can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
 * The license for the compiled assets for on-device deployment can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
@@ -86,26 +164,7 @@ Response: Superposition is a fundamental concept in quantum mechanics, which is

 ## Community
- * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).

- ## Usage and Limitations
-
- Model may not be used for or in connection with any of the following applications:
-
- - Accessing essential private and public services and benefits;
- - Administration of justice and democratic processes;
- - Assessing or recognizing the emotional state of a person;
- - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- - Education and vocational training;
- - Employment and workers management;
- - Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- - General purpose social scoring;
- - Law enforcement;
- - Management and operation of critical infrastructure;
- - Migration, asylum and border control management;
- - Predictive policing;
- - Real-time remote biometric identification in public spaces;
- - Recommender systems of social media platforms;
- - Scraping of facial images (from the internet or otherwise); and/or
- - Subliminal manipulation
 
 Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations) and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output length specified below, the time to first token is Llama-PromptProcessor-Quantized's latency and the average time per additional token is Llama-TokenGenerator-Quantized's latency.

+ This model is an implementation of Llama-v3-8B-Chat found [here]({source_repo}).
+ This repository provides scripts to run Llama-v3-8B-Chat on Qualcomm® devices.
+ More details on model performance across various devices can be found
+ [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).
+
 
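The latency model above (time to first token from the prompt processor, then a steady per-token rate from the token generator) gives a simple end-to-end estimate: TTFT plus (N − 1) tokens at the measured response rate. A minimal sketch; the numbers below are illustrative, taken from the response-rate table this commit removed:

```python
def estimated_response_seconds(ttft_s: float, tokens_per_s: float, n_tokens: int) -> float:
    """Rough total latency: time to first token, then steady-state decoding."""
    if n_tokens < 1:
        return 0.0
    return ttft_s + (n_tokens - 1) / tokens_per_s

# Illustrative: 0.92 s worst-case TTFT, 66.14 tokens/s, 128-token reply (~2.8 s total).
total = estimated_response_seconds(0.92, 66.14, 128)
```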
 ### Model Details
 - Token generator output: 1 output token + KV cache outputs
 - Use: Initiate conversation with prompt-processor and then token generator for subsequent iterations.
 
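The prompt-processor / token-generator split above can be sketched as a two-stage loop: the prompt processor consumes the whole prompt once and seeds the KV cache, then the token generator produces one token per call. The two callables below are hypothetical stand-ins, not the qai-hub-models API:

```python
def generate(prompt_tokens, max_new_tokens, prompt_processor, token_generator):
    # Stage 1: prompt processor consumes the full prompt, returning the first
    # output token plus the KV cache (hypothetical stand-in signature).
    token, kv_cache = prompt_processor(prompt_tokens)
    output = [token]
    # Stage 2: token generator emits one token per call, reusing/updating the cache.
    for _ in range(max_new_tokens - 1):
        token, kv_cache = token_generator(token, kv_cache)
        output.append(token)
    return output
```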
 ## Deploying Llama 3 on-device
 Please follow [this tutorial](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llama)
 to compile QNN binaries and generate bundle assets to run [ChatApp on Windows](https://github.com/quic/ai-hub-apps/tree/main/apps/windows/cpp/ChatApp) and on Android powered by QNN-Genie.
 
+ | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+ |---|---|---|---|---|---|---|---|
+
+ ## Installation
+
+ This model can be installed as a Python package via pip.
+
+ ```bash
+ pip install "qai-hub-models[llama_v3_8b_chat_quantized]"
+ ```
+ ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
+
+ Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
+ Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
+
+ With this API token, you can configure your client to run models on cloud-hosted
+ devices.
+ ```bash
+ qai-hub configure --api_token API_TOKEN
+ ```
+ Navigate to the [docs](https://app.aihub.qualcomm.com/docs/) for more information.
+ ## Demo off target
+
+ The package contains a simple end-to-end demo that downloads pre-trained
+ weights and runs this model on a sample input.
+
+ ```bash
+ python -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
+ ```
+ The above demo runs a reference implementation of pre-processing, model
+ inference, and post-processing.
+
+ **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
+ environment, please add the following to your cell (instead of the above).
+ ```
+ %run -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
+ ```
+ ### Run model on a cloud-hosted device
+
+ In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
+ device. This script does the following:
+ * Runs a performance check on a cloud-hosted device
+ * Downloads compiled assets that can be deployed on-device for Android
+ * Runs an accuracy check between PyTorch and on-device outputs
+
+ ```bash
+ python -m qai_hub_models.models.llama_v3_8b_chat_quantized.export
+ ```
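The accuracy check compares reference PyTorch outputs against on-device outputs. One common way to summarize such a comparison is peak signal-to-noise ratio (PSNR), where higher means closer agreement; the sketch below is illustrative only, not the qai-hub-models implementation:

```python
import math

def psnr(reference, candidate, peak=1.0):
    """PSNR between two equal-length sequences of floats; higher is closer."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, candidate)) / len(reference)
    if mse == 0.0:
        return float("inf")  # identical outputs
    return 10.0 * math.log10(peak ** 2 / mse)
```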
+ ## Deploying compiled model to Android
+
+ The models can be deployed using multiple runtimes:
+ - TensorFlow Lite (`.tflite` export): [This
+ tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
+ guide to deploy the .tflite model in an Android application.
+
+ - QNN (`.so` export): This [sample
+ app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
+ provides instructions on how to use the `.so` shared library in an Android application.
+
+ ## View on Qualcomm® AI Hub
+ Get more details on Llama-v3-8B-Chat's performance across various devices [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).
+ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
 ## License
 * The license for the original implementation of Llama-v3-8B-Chat can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
 * The license for the compiled assets for on-device deployment can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
 
 ## Community
+ * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:[email protected]).