hyunsikc committed
Commit ee1f451 · verified · 1 parent: 50a4aaa

Updated the model list

Files changed (1): README.md (+3 −4)
README.md CHANGED
@@ -30,11 +30,10 @@ for more information and learn more about RNGD at https://furiosa.ai/rngd
 | [furiosa-ai/Llama-3.1-8B-Instruct](https://huggingface.co/furiosa-ai/Llama-3.1-8B-Instruct) | BF16 | [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | >= 2025.2 |
 | [furiosa-ai/Llama-3.1-8B-Instruct-FP8](https://huggingface.co/furiosa-ai/Llama-3.1-8B-Instruct-FP8) | FP8 quantized | [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | >= 2025.2 |
 | [furiosa-ai/Llama-3.3-70B-Instruct](https://huggingface.co/furiosa-ai/Llama-3.3-70B-Instruct) | BF16 | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | >= 2025.3 |
-| [furiosa-ai/Qwen2.5-32B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-32B-Instruct) | BF16 | [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | >= 2025.3 |
+| [furiosa-ai/Llama-3.3-70B-Instruct-INT8](https://huggingface.co/furiosa-ai/Llama-3.3-70B-Instruct-INT8) | INT8 weight quantization | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | >= 2025.3 |
+| [furiosa-ai/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-32B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | >= 2025.3 |
+
 
-<!--
-| [furiosa-ai/Llama-3.3-70B-Instruct-FP8](https://huggingface.co/furiosa-ai/Llama-3.3-70B-Instruct-FP8) | FP8 weight quantization | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | >= 2025.3 |
--->
 
 ## Examples