---
base_model: amd/AMD-Llama-135m-code
datasets:
- cerebras/SlimPajama-627B
- manu/project_gutenberg
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# ApprikatAI/AMD-Llama-135m-code-FP16-GGUF
This model was converted to GGUF format from [`amd/AMD-Llama-135m-code`](https://huggingface.co/amd/AMD-Llama-135m-code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/amd/AMD-Llama-135m-code) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
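Once installed, you can sanity-check that the binaries are on your PATH (the exact version string will vary with your llama.cpp build):
```bash
llama-cli --version
```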
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ApprikatAI/AMD-Llama-135m-code-FP16-GGUF --hf-file amd-llama-135m-code-fp16.gguf -p "The meaning to life and the universe is"
```
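Since the underlying model is trained for code, a code-style prompt may be a better fit than the generic one above; this is just an illustrative invocation, where `-n` caps the number of generated tokens:
```bash
llama-cli --hf-repo ApprikatAI/AMD-Llama-135m-code-FP16-GGUF --hf-file amd-llama-135m-code-fp16.gguf -p "def fibonacci(n):" -n 128
```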
### Server:
```bash
llama-server --hf-repo ApprikatAI/AMD-Llama-135m-code-FP16-GGUF --hf-file amd-llama-135m-code-fp16.gguf -c 2048
```
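Once the server is running, you can send completion requests to it over HTTP. The sketch below assumes the default listen address of `127.0.0.1:8080` (configurable with `--host` and `--port`):
```bash
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def quicksort(arr):", "n_predict": 128}'
```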