Model Card for dragon-llama-answer-tool

dragon-llama-answer-tool is a 4_K_M GGUF quantized version of DRAGON Llama 7B, providing a fast, small implementation for inference on CPUs.

dragon-llama-7b is a fact-based question-answering model, optimized for complex business documents.

To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/dragon-llama-answer-tool",
                  local_dir="/path/on/your/machine/",
                  local_dir_use_symlinks=False)
```
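Once the snapshot completes, the quantized weights will be on disk as a `.gguf` file (the standard extension for GGUF models). A small helper for locating it (the `find_gguf_files` name is illustrative, not part of the huggingface_hub or llmware APIs):

```python
from pathlib import Path

def find_gguf_files(local_dir: str) -> list:
    """Return every .gguf file under local_dir, sorted by path.

    GGUF model files use the .gguf extension; this simply walks the
    download directory looking for them.
    """
    return sorted(Path(local_dir).rglob("*.gguf"))

# Point this at the same local_dir passed to snapshot_download:
# for f in find_gguf_files("/path/on/your/machine/"):
#     print(f.name, f.stat().st_size)
```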

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# query is your question; text_sample is the source passage to ground the answer
model = ModelCatalog().load_model("dragon-llama-answer-tool")
response = model.inference(query, add_context=text_sample)
```

Note: please review config.json in the repository for prompt wrapping information, details on the model, and the full test set.
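If you load the model in your own GGUF engine rather than through llmware, you will need to apply the prompt wrapper yourself. As an illustration only, llmware's DRAGON models typically frame the passage and question in a `<human>:` / `<bot>:` dialog template; the exact template here is an assumption, so confirm it against config.json before relying on it:

```python
def wrap_prompt(context: str, question: str) -> str:
    """Illustrative DRAGON-style prompt wrapper.

    Assumes the "<human>: ... <bot>:" framing with the source passage
    placed before the question -- verify against config.json.
    """
    return f"<human>: {context}\n{question}\n<bot>:"

prompt = wrap_prompt("The invoice total is $4,250.",
                     "What is the invoice total?")
```

The model then generates its answer as the continuation after the `<bot>:` marker.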

Model Description

  • Developed by: llmware
  • Model type: GGUF
  • Language(s) (NLP): English
  • License: Llama 2 Community License
  • Quantized from model: llmware/dragon-llama

Model Card Contact

Darren Oberst & llmware team

Model Details

  • Model size: 6.74B parameters
  • Architecture: llama