hyunsikc committed on
Commit a58a7d0 · verified · 1 Parent(s): d39a436

Remove the hotfix version from supported version

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -36,11 +36,11 @@ Please check out the collection of models at https://huggingface.co/furiosa-ai/c
  | [furiosa-ai/Llama-3.1-8B-Instruct-FP8](https://huggingface.co/furiosa-ai/Llama-3.1-8B-Instruct-FP8) | FP8 quantized | [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | >= 2025.2 |
  | [furiosa-ai/Llama-3.3-70B-Instruct](https://huggingface.co/furiosa-ai/Llama-3.3-70B-Instruct) | BF16 | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | >= 2025.3 |
  | [furiosa-ai/Llama-3.3-70B-Instruct-INT8](https://huggingface.co/furiosa-ai/Llama-3.3-70B-Instruct-INT8) | INT8 weight quantization | [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | >= 2025.3 |
- | [furiosa-ai/Qwen2.5-7B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-7B-Instruct) | BF16 | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | >= 2025.3.1 |
- | [furiosa-ai/Qwen2.5-14B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-14B-Instruct) | BF16 | [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) | >= 2025.3.1 |
- | [furiosa-ai/Qwen2.5-32B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-32B-Instruct) | BF16 | [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | >= 2025.3.1 |
- | [furiosa-ai/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-7B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | >= 2025.3.1 |
- | [furiosa-ai/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-14B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) | >= 2025.3.1 |
+ | [furiosa-ai/Qwen2.5-7B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-7B-Instruct) | BF16 | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) | >= 2025.3 |
+ | [furiosa-ai/Qwen2.5-14B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-14B-Instruct) | BF16 | [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) | >= 2025.3 |
+ | [furiosa-ai/Qwen2.5-32B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-32B-Instruct) | BF16 | [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) | >= 2025.3 |
+ | [furiosa-ai/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-7B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) | >= 2025.3 |
+ | [furiosa-ai/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-14B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) | >= 2025.3 |
  | [furiosa-ai/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/furiosa-ai/Qwen2.5-Coder-32B-Instruct) | BF16 | [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) | >= 2025.3 |
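The substantive change is relaxing the minimum supported version for the Qwen2.5 rows from the 2025.3.1 hotfix to the 2025.3 release. As an illustration only (not part of this repository), a minimal sketch of how a ">= X.Y" constraint from the table could be checked against an installed calendar-version string, comparing components numerically so that 2025.3 is treated as 2025.3.0:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Split a calendar version like '2025.3.1' into an integer tuple."""
    return tuple(int(part) for part in v.strip().split("."))


def satisfies(installed: str, constraint: str) -> bool:
    """Return True if `installed` meets a '>= X.Y' constraint from the table."""
    required = constraint.replace(">=", "").strip()
    a, b = parse_version(installed), parse_version(required)
    # Pad the shorter tuple with zeros so '2025.3' compares as '2025.3.0'.
    width = max(len(a), len(b))
    a += (0,) * (width - len(a))
    b += (0,) * (width - len(b))
    return a >= b


# After this commit, the Qwen2.5 rows require ">= 2025.3": the plain 2025.3
# release qualifies, and the 2025.3.1 hotfix still does as well.
print(satisfies("2025.3", ">= 2025.3"))    # True
print(satisfies("2025.3.1", ">= 2025.3"))  # True
print(satisfies("2025.2", ">= 2025.3"))    # False
```

This mirrors the effect of the diff: under the old ">= 2025.3.1" constraint, a plain 2025.3 install would have been reported as unsupported.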