OLMo 7B-Instruct-GGUF

For more details on OLMo-7B-Instruct, refer to Allen AI's OLMo-7B-Instruct model card.

OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo base models are trained on the Dolma dataset. The Instruct version is trained on the cleaned version of the UltraFeedback dataset.

OLMo 7B Instruct is fine-tuned for better question answering and demonstrates the performance gains that OLMo base models can achieve with existing fine-tuning techniques.

This version of the model is derived from ssec-uw/OLMo-7B-Instruct-hf and converted to GGUF, a binary format optimized for quick loading and saving of models, making it highly efficient for inference.

In addition to being converted to GGUF, the model has been quantized to reduce the computational and memory costs of running inference. We are currently working on adding all of the quantization types.
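To illustrate why quantization matters, here is a back-of-envelope size estimate for a 6.89B-parameter model at different bit widths. The ~4.85 bits/weight figure for Q4_K_M is an approximation, and real GGUF files also carry metadata, so treat the numbers as rough:

```python
def approx_model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size estimate: parameters * bits per weight, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 6.89e9  # OLMo 7B parameter count

# Full 16-bit weights vs. roughly 4.85 bits/weight for Q4_K_M (approximate)
print(f"fp16:   {approx_model_size_gb(N_PARAMS, 16):.1f} GB")
print(f"Q4_K_M: {approx_model_size_gb(N_PARAMS, 4.85):.1f} GB")
```

The 4-bit file is roughly a third of the half-precision size, which is what makes CPU inference on ordinary hardware practical.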

These files are designed for use with GGML and executors based on GGML such as llama.cpp.

Get Started

To get started with one of the GGUF files, you can simply use llama-cpp-python, a Python binding for llama.cpp.

  1. Install llama-cpp-python v0.2.70 or later with pip. The following command installs a pre-built wheel with basic CPU support. For other installation methods, see the llama-cpp-python installation docs.

    pip install 'llama-cpp-python>=0.2.70' --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
    
  2. Download one of the GGUF files. In this example, we download OLMo-7B-Instruct-Q4_K_M.gguf by clicking its link.

  3. Open a Python interpreter and run the following commands. For example, we can ask it: What is a solar system?

    You will need to change the model_path argument to wherever the GGUF model is saved on your system.

    from llama_cpp import Llama

    # Load the quantized model; point model_path at the downloaded GGUF file
    llm = Llama(
        model_path="path/to/OLMo-7B-Instruct-Q4_K_M.gguf"
    )

    # Ask a question via the chat-completion API
    result_dict = llm.create_chat_completion(
        messages=[
            {
                "role": "user",
                "content": "What is a solar system?"
            }
        ]
    )

    # The reply text is nested under choices[0].message.content
    print(result_dict['choices'][0]['message']['content'])
    
  4. That's it, you should see the result fairly quickly! Have fun! 🤖
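The dict returned by create_chat_completion follows the OpenAI chat-completion shape, which is why the print statement above indexes into choices[0]. A small helper makes that explicit (extract_reply is our own hypothetical name, and the sample dict below is illustrative, not real model output):

```python
def extract_reply(result_dict: dict) -> str:
    """Pull the assistant's message text out of a chat-completion result dict."""
    return result_dict["choices"][0]["message"]["content"]

# Illustrative result in the shape llama-cpp-python returns
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "A solar system is ..."}}
    ]
}
print(extract_reply(sample))  # A solar system is ...
```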

Contact

For errors in this model card, contact Don or Anant, {landungs, anmittal} at uw dot edu.

Acknowledgement

We would like to thank the hardworking folks at Allen AI for providing the original model.

Additionally, the work to convert and quantize the model was done by the University of Washington Scientific Software Engineering Center (SSEC), as part of the Schmidt Sciences Virtual Institute for Scientific Software (VISS).

Model Details

Model size: 6.89B params
Architecture: olmo
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
