---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
---
Here are a few GGUF (v2) quantizations of the model [conceptofmind/Open-LLongMA-3b](https://huggingface.co/conceptofmind/Open-LLongMA-3b).

**Based on:** [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
Open LLongMA 3B is a language model trained for a context size of 8192 tokens using linear RoPE scaling with a factor of 0.25. Loading it with a scale of 1.0 will make it output gibberish.