# Mnemosyne-7B

Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models oriented toward informative and educational use. The merge aims to combine the strengths of its component models into a single, broadly knowledgeable and comprehensive LLM.

GGUF: https://huggingface.co/mradermacher/Mnemosyne-7B-GGUF
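
For local inference on the GGUF files, a minimal sketch using llama-cpp-python is shown below. The quant filename is an assumption; check the repo linked above for the files actually published.

```python
# Minimal sketch using llama-cpp-python; the GGUF filename below is an
# assumption -- check mradermacher/Mnemosyne-7B-GGUF for the actual files.
from llama_cpp import Llama

llm = Llama(
    model_path="Mnemosyne-7B.Q4_K_M.gguf",  # assumed quant filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

# Mistral-Instruct-style chat prompt, since the base model is
# mistralai/Mistral-7B-Instruct-v0.2.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain photosynthesis in two sentences."}],
)
print(output["choices"][0]["message"]["content"])
```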

## Important Note

This is an experimental model, and its performance and capabilities are not guaranteed. Further testing and evaluation are required to assess its effectiveness.

## 🧩 Configuration

```yaml
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```

Mnemosyne-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit)'s model_stock method, with [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as the base model:

* [MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2)
* [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto)
* [Weyaxi/Newton-7B](https://huggingface.co/Weyaxi/Newton-7B)
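
To reproduce the merge locally, a sketch along the lines of mergekit's documented Python entry point should work, assuming the YAML configuration above has been saved as `config.yaml`:

```python
# Sketch of reproducing the merge with mergekit's Python API; assumes the
# YAML configuration above is saved to config.yaml and that enough disk
# space is available to download all four source models.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Mnemosyne-7B",  # output directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if present
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)
```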

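## 💻 Usage

To run the merged model with 🤗 Transformers, a minimal sketch follows; the loading pattern is standard, and the prompt and generation settings are illustrative rather than recommendations from the model author.

```python
# Minimal sketch for running bunnycore/Mnemosyne-7B with transformers;
# generation settings here are illustrative, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Mnemosyne-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

# The base model is Mistral-7B-Instruct-v0.2, so its chat template applies.
messages = [{"role": "user", "content": "Give a short overview of Newton's laws."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```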