---
base_model: 01-ai/Yi-6B-200K
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
widget:
- example_title: Yi-34B-Chat
  text: hi
  output:
    text: ' Hello! How can I assist you today?'
- example_title: Yi-34B
  text: >-
    There's a place where time stands still. A place of breath taking wonder,
    but also
  output:
    text: >2-
       an eerie sense that something is just not right…
      Between the two worlds lies The Forgotten Kingdom - home to creatures
      long since thought extinct and ancient magic so strong it defies belief!
      Only here can you find what has been lost for centuries: An Elixir Of
      Life which will restore youth and vitality if only those who seek its
      power are brave enough to face up against all manner of dangers lurking
      in this mysterious land! But beware; some say there may even exist
      powerful entities beyond our comprehension whose intentions towards
      humanity remain unclear at best ---- they might want nothing more than
      destruction itself rather then anything else from their quest after
      immortality (and maybe someone should tell them about modern medicine)?
      In any event though – one thing remains true regardless : whether or not
      success comes easy depends entirely upon how much effort we put into
      conquering whatever challenges lie ahead along with having faith deep
      down inside ourselves too ;) So let’s get started now shall We?
---
# Angel367/Yi-6B-200K-Q4_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-6B-200K) for more details on the model.
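If you download the quantized file manually rather than letting `--hf-repo` fetch it for you, a quick sanity check is to look at the file's magic bytes: every valid GGUF file begins with the 4-byte ASCII magic `GGUF`. A minimal sketch (the helper name and the example filename are illustrative, not part of llama.cpp):

```bash
# Report whether a file carries the GGUF magic header.
# Assumption: the file path is passed as the first argument.
check_gguf() {
  if [ "$(head -c 4 "$1")" = "GGUF" ]; then
    echo "looks like GGUF"
  else
    echo "not a GGUF file"
  fi
}

# Example (assumes the file has already been downloaded):
# check_gguf yi-6b-200k-q4_k_m.gguf
```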
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Angel367/Yi-6B-200K-Q4_K_M-GGUF --hf-file yi-6b-200k-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Angel367/Yi-6B-200K-Q4_K_M-GGUF --hf-file yi-6b-200k-q4_k_m.gguf -c 2048
```
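Once the server is up, it exposes an OpenAI-compatible HTTP API. A sketch of a chat request, assuming the default port 8080 (the prompt and `max_tokens` value are arbitrary examples):

```bash
# JSON body for llama-server's OpenAI-compatible chat endpoint.
PAYLOAD='{"messages":[{"role":"user","content":"hi"}],"max_tokens":64}'

# POST only if a server is actually listening; otherwise just print the body.
if curl -sf http://localhost:8080/health > /dev/null 2>&1; then
  curl -s http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "$PAYLOAD"
fi
```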
Note: You can also use this checkpoint directly through the usage steps listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Angel367/Yi-6B-200K-Q4_K_M-GGUF --hf-file yi-6b-200k-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Angel367/Yi-6B-200K-Q4_K_M-GGUF --hf-file yi-6b-200k-q4_k_m.gguf -c 2048
```
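The base model's name advertises a 200K-token context window; the `-c 2048` above is just a conservative default. Raising the context is a matter of the `-c` flag, at the cost of more memory (the value below is an illustrative setting, not a recommendation, and a Q4_K_M quant on modest hardware may not fit the full 200K):

```bash
./llama-server --hf-repo Angel367/Yi-6B-200K-Q4_K_M-GGUF --hf-file yi-6b-200k-q4_k_m.gguf -c 32768
```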