---
license: other
inference: false
---
# Dromedary-65B-LoRA GGML
These files are the result of merging the delta weights of IBM's Dromedary 65B LoRA with the original Llama 65B model.
This repo contains GGML files for CPU inference using llama.cpp.
## Repositories available
**REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!**

llama.cpp recently made a breaking change to its quantisation methods. I have re-quantised the GGML files in this repo, so you will need llama.cpp compiled on May 12th or later (commit b9fd7ee or later) to use them.

The previous files, which will still work in older versions of llama.cpp, can be found in the branch `previous_llama`.
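If you need to update, a minimal build sketch (assuming a Linux or macOS machine with `git` and `make` installed) would be:

```bash
# Clone llama.cpp and build from a commit that includes the new quantisation formats
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b9fd7ee   # or any later commit
make
```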
## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| dromedary-lora-65B.ggml.q4_0.bin | q4_0 | 4bit | 40.8GB | 43GB | 4-bit. Smallest size and lowest RAM requirement; fastest inference, at some cost in accuracy. |
| dromedary-lora-65B.ggml.q5_0.bin | q5_0 | 5bit | 44.9GB | 47GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| dromedary-lora-65B.ggml.q5_1.bin | q5_1 | 5bit | 49GB | 51GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
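As an illustration, a command along these lines runs CPU inference with the q4_0 file. The flags follow llama.cpp's `main` example binary; the thread count and prompt are placeholders to adjust for your system:

```bash
# Basic CPU inference with llama.cpp's compiled `main` binary
./main -t 10 -m dromedary-lora-65B.ggml.q4_0.bin \
  -n 256 --color -p "Write a story about llamas:"
```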
# Original Dromedary Model Card
See https://github.com/IBM/Dromedary#model-weights for instructions.
## Model details

**Model type:**
Dromedary is an open-source self-aligned language model trained with minimal human supervision. The base language model is LLaMA-65b, based on the transformer architecture.

**Model date:**
Dromedary was trained between April 2023 and May 2023, but its knowledge only goes up until September 2021.

**Organizations developing the model:**
The Dromedary team as a joint effort between CMU and IBM.

**Paper or resources for more information:**
https://mitibmdemos.draco.res.ibm.com/dromedary

**License:**
LLaMA's Non-commercial bespoke license

**Where to send questions or comments about the model:**
https://github.com/IBM/Dromedary/issues
## Intended use

**Primary intended uses:**
The primary use of Dromedary is research on the alignment of large language models.

**Primary intended users:**
The primary intended users of the model are researchers in artificial intelligence.
## Delta weights

We use the following configuration for the LoRA weights:

```
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
```
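For context, flags like these are typically passed to a LoRA fine-tuning script. The invocation below is purely illustrative: the script name `finetune.py` and the remaining arguments are hypothetical, not Dromedary's actual training command:

```bash
# Hypothetical fine-tuning invocation; only the two LoRA flags above
# come from this model card, everything else is illustrative
python finetune.py \
    --base_model 'llama-65b' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --output_dir './dromedary-lora-65b'
```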
## Training dataset

Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning).
## Evaluation dataset
We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.