---
base_model:
- abacusai/Dracarys-Llama-3.1-70B-Instruct
- Sao10K/L3-70B-Euryale-v2.1
- gbueno86/Cathallama-70B
- sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
- nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
- Fizzarolli/L3.1-70b-glitz-v0.2
- cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
- abacusai/Dracarys-Llama-3.1-70B-Instruct
- Sao10K/L3-70B-Euryale-v2.1
- gbueno86/Cathallama-70B
- sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
- nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
- Fizzarolli/L3.1-70b-glitz-v0.2
- cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
---
# KaraKaraWitch/L3.1-70b-Inori

Inori is the second 70B I've played around with this weekend.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/c7e_f5q8wCZfXgy9Hh2e1.png)

Learning from the previous model, I yeeted Hermes into the atmosphere and used Glitz as the base.

Inori takes a different approach by using Model Stock as the merge method (a toy sketch of the idea follows the component list below).
- Dracarys (I just threw it in, but it can be useful for code)
- Euryale (You all know it!)
- Cathallama (Athene + turboderp_cat)
- New Dawn (I heard people like it, so I added it in)
- Celeste (RP)
- Japanese-Instruct (enhances the Japanese language side, for the weebs out there)

No Hermes was harmed in the making of this model stock merge.
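
For those wondering how Model Stock differs from a plain average: per weight tensor, it measures how much the fine-tuned models' task vectors (their deltas from the base) agree via cosine similarity, and pulls the averaged weights back toward the base more strongly when they disagree. The snippet below is a toy sketch of that idea from the Model Stock paper, not mergekit's actual implementation; `model_stock_tensor` is a made-up name for illustration.

```python
# Toy sketch of the Model Stock idea (not mergekit's real code).
import numpy as np

def model_stock_tensor(base, finetuned):
    """base: flat weight tensor; finetuned: list of N flat tensors fine-tuned from it."""
    deltas = [w - base for w in finetuned]  # task vectors
    n = len(deltas)
    # Average pairwise cosine similarity between task vectors.
    sims = [
        np.dot(deltas[i], deltas[j])
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]) + 1e-8)
        for i in range(n)
        for j in range(i + 1, n)
    ]
    cos_theta = float(np.mean(sims))
    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    # Strong agreement -> t near 1 (trust the average); weak agreement -> stay near the base.
    t = n * cos_theta / (1.0 + (n - 1) * cos_theta)
    return t * np.mean(finetuned, axis=0) + (1.0 - t) * base
```

In practice this means components that pull the weights in very different directions get damped instead of averaged in blindly.
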
L3.1-70b-Inori is a merge of the following models, using [Fizzarolli/L3.1-70b-glitz-v0.2](https://huggingface.co/Fizzarolli/L3.1-70b-glitz-v0.2) as the base and [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) to do the merging:
* [abacusai/Dracarys-Llama-3.1-70B-Instruct](https://huggingface.co/abacusai/Dracarys-Llama-3.1-70B-Instruct)
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [gbueno86/Cathallama-70B](https://huggingface.co/gbueno86/Cathallama-70B)
* [sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1](https://huggingface.co/sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1)
* [nothingiisreal/L3.1-70B-Celeste-V0.1-BF16](https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16)
* [cyberagent/Llama-3.1-70B-Japanese-Instruct-2407](https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407)
## General Thoughts

- This model has some odd censorship issues. Sometimes it refuses to generate explicit text, and sometimes it doesn't.
- For that reason, I don't recommend using this model.
## Yap / Chat Format

L3 Instruct.
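
If you are wiring the model into your own frontend, the simplest way to get the exact prompt layout right is to let the bundled tokenizer render it rather than hand-writing the special tokens. A minimal sketch:

```python
# Minimal sketch: render an L3 Instruct prompt with the model's own chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KaraKaraWitch/L3.1-70b-Inori")
messages = [
    {"role": "system", "content": "You are Inori, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # shows the <|start_header_id|>...<|eot_id|> framing L3 Instruct expects
```
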
## Quants & Hosts | |
[![GGUF](https://img.shields.io/badge/mradermacher%2FL3.1--70b--Inori--GGUF-Dummy?style=flat&label=GGUF)](https://huggingface.co/mradermacher/L3.1-70b-Inori-GGUF) | |
[![GGUF-i1](https://img.shields.io/badge/mradermacher%2FL3.1--70b--Inori--i1--GGUF-Dummy?style=flat&label=GGUF-i1)](https://huggingface.co/mradermacher/L3.1-70b-Inori-i1-GGUF) | |
[![Featherless](https://img.shields.io/badge/KaraKaraWitch%2FL3.1--70b--Inori-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/KaraKaraWitch/L3.1-70b-Inori) | |
## 🧩 Configuration

```yaml
models:
  - model: Fizzarolli/L3.1-70b-glitz-v0.2
  - model: cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
  - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
  - model: gbueno86/Cathallama-70B
  - model: abacusai/Dracarys-Llama-3.1-70B-Instruct
merge_method: model_stock
base_model: Fizzarolli/L3.1-70b-glitz-v0.2
parameters:
  normalize: true
dtype: bfloat16
```
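
To reproduce the merge locally, this config can be fed to mergekit, either via its `mergekit-yaml` CLI or its Python API. The snippet below is a sketch based on mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); exact option names can drift between mergekit versions, so treat it as a starting point rather than gospel.

```python
# Sketch: run the config above (saved as config.yaml) with mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./L3.1-70b-Inori",  # hypothetical output directory
    options=MergeOptions(
        cuda=True,            # do the tensor arithmetic on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```
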
## 💻 Usage

```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "KaraKaraWitch/L3.1-70b-Inori"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the prompt with the model's chat template (L3 Instruct).
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
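
Note that a 70B model in 16-bit precision needs on the order of 140 GB of accelerator memory for the weights alone, so for single-GPU or CPU setups the GGUF quants linked above are the more practical option.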