amanpreetsingh459 committed
Commit 2ee211e · Parent: 33c304c

Update README.md

Files changed (1):
1. README.md +2 -2
README.md CHANGED
```diff
@@ -4,9 +4,9 @@ license: mit
 
 # llama-2-7b-chat_q4_quantized_cpp
 - This model contains the 4-bit quantized version of the [llama2](https://github.com/facebookresearch/llama) model in C++.
-- This can be run on a local CPU system as a C++ module *(instructions are given below)*
+- This can be run on a local CPU system as a C++ module *(instructions are given below)*.
 - The model has been tested on `Linux (Ubuntu)` with `12 GB RAM` and a `Core i5` processor.
-
+- Performance is roughly **907.46 ms per token** (**1.10 tokens per second**).
 # Usage:
 1. Clone the llama.cpp repository from GitHub:<br>
 `git clone https://github.com/ggerganov/llama.cpp.git`
```
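As a companion to the Usage step shown in the diff, below is a minimal sketch of the remaining build-and-run steps for a 4-bit model with llama.cpp on CPU. The weights filename `llama-2-7b-chat.ggmlv3.q4_0.bin`, the thread count, and the prompt are illustrative assumptions, not values from this commit; note also that newer llama.cpp revisions ship the binary as `llama-cli` rather than `main`.

```bash
# Minimal sketch: build llama.cpp and run a 4-bit quantized model on CPU.
# NOTE: the weights filename is an assumption -- substitute the actual file
# from this repository after downloading it into ./models/.
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make                # CPU-only build (produces the `main` example binary)

# -m: path to the quantized weights, -n: max tokens to generate, -t: CPU threads
./main -m ./models/llama-2-7b-chat.ggmlv3.q4_0.bin \
       -n 128 -t 4 \
       -p "What is 4-bit quantization?"
```

At the throughput quoted above (907.46 ms per token, i.e. 1000 / 907.46 ≈ 1.10 tokens per second), a 128-token generation like this one would take roughly two minutes on comparable hardware.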