---
base_model: MarsupialAI/Monstral-123B
language:
- en
license: other
license_name: mrl
pipeline_tag: text-generation
tags:
- chat
quantized_by: Liedichi
---

# 8bpw exl2 quantization of MarsupialAI/Monstral-123B
A Mistral-Large merge
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a531bc7ec6af0f95c707b1/qyyXvD91Ua6KRF6CB1oyG.png)

This model is a slerp merge of Behemoth and Magnum V4.  The intention was to moisten up Behemoth a bit and give it some of that
Claude flavor, but without being nearly as *thirsty* as Magnum.  I feel it succeeds in both areas.

Mergefuel: 
- TheDrummer/Behemoth-123B-v1
- anthracite-org/magnum-v4-123b
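
For anyone curious about the mechanics: a slerp merge interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve more of each parent's character than plain averaging. The snippet below is only an illustrative sketch of that interpolation on a single tensor; it is not the actual merge recipe or configuration used for this model.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two same-shaped weight tensors."""
    # Normalize copies to measure the angle between the flattened tensors
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(v0_n * v1_n), -1.0, 1.0)

    # Nearly colinear tensors: fall back to ordinary linear interpolation
    if abs(dot) > 0.9995:
        return (1.0 - t) * v0 + t * v1

    omega = np.arccos(dot)      # angle between the two weight vectors
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * v0 + \
           (np.sin(t * omega) / sin_omega) * v1

# e.g. merged_tensor = slerp(0.5, behemoth_tensor, magnum_tensor)
```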

This model is uncensored and perfectly capable of generating objectionable material.  It is far less likely to return NSFW content
for SFW prompts than Magnum V4, but you should still exercise caution.  As with any LLM, no factual claims 
made by the model should be taken at face value.  You know that boilerplate safety disclaimer that most professional models have?  
Assume this has it too.  This model is for entertainment purposes only. 

Original: https://huggingface.co/MarsupialAI/Monstral-123B
 
GGUFs:  https://huggingface.co/MarsupialAI/Monstral-123B_iMat_GGUF

EXL2:  https://huggingface.co/MarsupialAI/Monstral-123B_4.0bpw_EXL2
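
To run this EXL2 quant directly from Python, the sketch below follows the usual exllamav2 loading pattern. It is a minimal example that assumes you have downloaded the quant locally and have enough VRAM across your GPUs; the model path and sampler values are placeholders.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Monstral-123B-exl2"  # placeholder: local path to this quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache while layers load
model.load_autosplit(cache)                # split the 123B across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.9
settings.top_p = 0.9

# Mistral-style prompt; see Prompt Format below
prompt = "[INST] Write a short scene set during a thunderstorm. [/INST]"
print(generator.generate_simple(prompt, settings, 300))
```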


# Prompt Format
Mistral or Metharme
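
For reference, the two templates look roughly like this (exact special tokens and spacing can vary between frontends, so treat this as a sketch rather than a spec):

```
Mistral:
<s>[INST] {user message} [/INST]{assistant reply}</s>

Metharme:
<|system|>{system prompt}<|user|>{user message}<|model|>{assistant reply}
```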