specify gpu build for llama cpp python
requirements.txt CHANGED (+1 -1)
@@ -2,7 +2,7 @@ wheel
 jieba @ https://www.piwheels.org/simple/jieba/jieba-0.42.1-py3-none-any.whl
 docopt @ https://github.com/GoogleCloudPlatform/gcloud-python-wheels/raw/refs/heads/master/wheelhouse/docopt-0.6.2-py2.py3-none-any.whl
 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
-llama-cpp-python
+llama-cpp-python --no-binary=:all: --global-option=build_ext --global-option="-DLLAMA_USE_CUBLAS=1"
 streamlit
 duckduckgo_search
 gradio
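
For context: the --extra-index-url line above points at the CPU-only wheel index, so without extra options pip would satisfy the requirement with a prebuilt CPU wheel; the edited line instead forces a source build with the cuBLAS flag passed to the build step. Whether the GPU is actually used is still decided at runtime when the model is loaded. A minimal sketch of that, assuming a GPU-enabled build; the model path here is a placeholder, not a file from this Space:

from llama_cpp import Llama

# Placeholder GGUF path -- the model actually used by the Space is not part of this diff.
llm = Llama(
    model_path="models/example-model.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU; has no effect in a CPU-only build
    n_ctx=2048,
)

# Simple completion call to confirm the model loads and generates.
out = llm("Q: What does llama.cpp do? A:", max_tokens=64)
print(out["choices"][0]["text"])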