shaikehsan committed (verified)
Commit cc044d1 · 1 parent: 5e587a0

Upload README.md with huggingface_hub

Files changed (1)
1. README.md (+32 -0)
README.md ADDED

---
license: apache-2.0
language:
- en
base_model: imsanjoykb/sqlCoder-Qwen2.5-8bit
new_version: imsanjoykb/sqlCoder-Qwen2.5-8bit
pipeline_tag: text-generation
library_name: adapter-transformers
tags:
- unsloth
- pytorch
- inference-endpoint
- sql-code-generation
- llama-cpp
- gguf-my-lora
---

# shaikehsan/sqlCoder-Qwen2.5-8bit-F16-GGUF

This LoRA adapter was converted to GGUF format from [`imsanjoykb/sqlCoder-Qwen2.5-8bit`](https://huggingface.co/imsanjoykb/sqlCoder-Qwen2.5-8bit) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/imsanjoykb/sqlCoder-Qwen2.5-8bit) for more details.

## Use with llama.cpp

```bash
# With the CLI
llama-cli -m base_model.gguf --lora sqlCoder-Qwen2.5-8bit-f16.gguf (...other args)

# With the server
llama-server -m base_model.gguf --lora sqlCoder-Qwen2.5-8bit-f16.gguf (...other args)
```
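
As a more concrete sketch (the adapter filename follows the commands above; `base_model.gguf` is a placeholder for a GGUF build of the base model that you supply yourself, and the prompt is purely illustrative):

```bash
# Download the GGUF LoRA adapter from this repo
# (filename assumed to match the one used in the commands above)
huggingface-cli download shaikehsan/sqlCoder-Qwen2.5-8bit-F16-GGUF \
  sqlCoder-Qwen2.5-8bit-f16.gguf --local-dir .

# Run a one-shot generation with the adapter applied on top of the base model.
# base_model.gguf is a placeholder: provide your own GGUF conversion of the base checkpoint.
llama-cli -m base_model.gguf \
  --lora sqlCoder-Qwen2.5-8bit-f16.gguf \
  -p "Write a SQL query that returns the top 5 customers by total order value." \
  -n 256
```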

For more details on LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
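
For the server route, a minimal sketch of querying the `/completion` endpoint described in that documentation (the port and prompt are illustrative; `base_model.gguf` is again a placeholder):

```bash
# Start llama-server with the adapter applied
llama-server -m base_model.gguf --lora sqlCoder-Qwen2.5-8bit-f16.gguf --port 8080

# In another shell, send a completion request
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a SQL query listing all orders placed in 2024.", "n_predict": 128}'
```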