Commit 61f9d2f · AurelioAguirre committed · Parent(s): 01a871f

Fixed req file v3

Files changed:
- main/logs/llm_api.log +22 -0
- requirements.txt +1 -1
main/logs/llm_api.log CHANGED
@@ -29,3 +29,25 @@
 2025-01-09 17:11:38,299 - llm_api - INFO - Loading model from source: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
 2025-01-09 17:11:38,487 - llm_api - ERROR - Failed to initialize generation model huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated: Using `bitsandbytes` 8-bit quantization requires the latest version of bitsandbytes: `pip install -U bitsandbytes`
 2025-01-09 17:11:38,487 - api_routes - ERROR - Error initializing model: Using `bitsandbytes` 8-bit quantization requires the latest version of bitsandbytes: `pip install -U bitsandbytes`
+2025-01-09 17:12:48,606 - hf_validation - WARNING - No .env file found. Fine if you're on Huggingface, but you need one to run locally on your PC.
+2025-01-09 17:12:48,606 - hf_validation - ERROR - No HF_TOKEN found in environment variables
+2025-01-09 17:12:48,606 - main - INFO - Starting LLM API server
+2025-01-09 17:12:48,606 - llm_api - INFO - Initializing LLM API
+2025-01-09 17:12:48,606 - llm_api - INFO - LLM API initialized successfully
+2025-01-09 17:12:48,606 - api_routes - INFO - Router initialized with LLM API instance
+2025-01-09 17:12:48,608 - main - INFO - FastAPI application created successfully
+2025-01-09 17:12:59,453 - api_routes - INFO - Received request to initialize model: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:12:59,453 - llm_api - INFO - Initializing generation model: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:12:59,453 - llm_api - INFO - Loading model from source: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:12:59,628 - llm_api - ERROR - Failed to initialize generation model huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
+2025-01-09 17:12:59,628 - api_routes - ERROR - Error initializing model: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
+2025-01-09 17:14:44,390 - api_routes - INFO - Received request to initialize model: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:14:44,390 - llm_api - INFO - Initializing generation model: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:14:44,390 - llm_api - INFO - Loading model from source: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
+2025-01-09 17:14:53,032 - llm_api - ERROR - Failed to initialize generation model huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
+2025-01-09 17:14:53,032 - api_routes - ERROR - Error initializing model: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
+2025-01-09 17:15:14,956 - api_routes - INFO - Received request to initialize model: microsoft/phi-4
+2025-01-09 17:15:14,956 - llm_api - INFO - Initializing generation model: microsoft/phi-4
+2025-01-09 17:15:14,956 - llm_api - INFO - Loading model from local path: main/models/phi-4
+2025-01-09 17:15:14,965 - llm_api - ERROR - Failed to initialize generation model microsoft/phi-4: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
+2025-01-09 17:15:14,965 - api_routes - ERROR - Error initializing model: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
requirements.txt CHANGED
@@ -1,7 +1,7 @@
 accelerate==1.2.1
 annotated-types==0.7.0
 anyio==4.8.0
-bitsandbytes
+bitsandbytes @ https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl#sha256=66deda2b99cee0d4e52a183d9bac5c8e8618cd9b4d4933ccf23b908622d6b879
 certifi==2024.12.14
 charset-normalizer==3.4.1
 click==8.1.8
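The errors in the log all stem from requesting bitsandbytes 8-bit quantization on a machine without CUDA. Besides pinning the multi-backend wheel as the diff does, the server could skip quantization when no GPU is present. A minimal, stdlib-only sketch (the helper names are illustrative, not from this repo; it uses the presence of `nvidia-smi` as a rough proxy for CUDA availability rather than importing torch):

```python
import shutil


def cuda_likely_available() -> bool:
    # Rough proxy: the default bitsandbytes build needs CUDA. If the
    # NVIDIA driver tools aren't on PATH, assume a CPU-only machine.
    return shutil.which("nvidia-smi") is not None


def model_load_kwargs() -> dict:
    # Hypothetical helper: only request 8-bit quantization when a GPU
    # seems available, otherwise load unquantized to avoid the
    # "CUDA is required but not available for bitsandbytes" error.
    if cuda_likely_available():
        return {"load_in_8bit": True}
    return {}
```

The returned dict could then be spread into the model-loading call (e.g. `from_pretrained(model_id, **model_load_kwargs())`); note that recent transformers versions prefer passing a `BitsAndBytesConfig` via `quantization_config` over the bare `load_in_8bit` flag.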