---

license: llama3
language:
- en
tags:
  - roleplay
  - llama3
  - sillytavern
  - idol

---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)


# QuantFactory/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF
This is quantized version of [aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K) created using llama.cpp

# Original Model Card

# This is the final Llama 3.0 release; the next iteration will start from Llama 3.1.
# Special Thanks:
 - Thanks to Lewdiculous for the superb GGUF version and for the conscientious, responsible dedication.
 - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
 - Thanks to mradermacher for the superb GGUF versions and for the conscientious, responsible dedication.
 - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-i1-GGUF
 - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF

# These are my own quantizations (updated almost daily).
The difference from normal quantizations is that the output and embedding tensors are kept at f16,
and the remaining tensors are quantized to q5_k, q6_k, or q8_0.
This produces models that are barely degraded, or not degraded at all, at a smaller size.
They run at about 3-6 t/s on CPU only using llama.cpp,
and obviously faster on machines with potent GPUs.
- the fast cat at [ZeroWw/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-32K-GGUF)
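As a sketch, this mixed-precision layout can be reproduced with llama.cpp's `llama-quantize` tool; the file names below are placeholders, and flag availability depends on your llama.cpp build:

```shell
# Keep output and token-embedding tensors at f16, quantize the rest to q5_k.
# (Placeholder paths; requires a local llama.cpp build.)
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  model-f16.gguf model-q5_k-f16emb.gguf q5_k
```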

# Model Description:
The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- Saving money (Llama 3)
- Tested in English only
- Input: text only. Output: text and code only.
- Uncensored
- Quick response
- The underlying model is winglian/Llama-3-8b-64k-PoSE (64k context is theoretically supported, but I have only tested up to 32k. :)
- Scholarly responses akin to a thesis. (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis. :)
- DarkIdol: roles that you can imagine and those that you cannot imagine.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
- For more, see the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)
![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K/resolve/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.png)

## Virtual Idol Twitter
- https://x.com/aifeifei799

# Questions
- The model's responses are for reference only; please do not fully trust them.


# Stop Strings
```python
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    "  //:",
    "</s>",
    "<3```",
    "### Note:",
    "### Input:",
    "### Response:",
    "### Emoticons:",
]
```
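If your frontend does not support stop strings natively, a minimal sketch (a hypothetical helper, not part of any library) that truncates generated text at the earliest stop string:

```python
# Stop strings recommended above for this model.
STOP_STRINGS = [
    "## Instruction:", "### Instruction:", "<|end_of_text|>",
    "  //:", "</s>", "<3```", "### Note:", "### Input:",
    "### Response:", "### Emoticons:",
]

def truncate_at_stop(text: str, stops=STOP_STRINGS) -> str:
    """Cut `text` at the first occurrence of any stop string."""
    cut = len(text)
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

This is a post-processing fallback; passing the list directly to your backend's stop parameter is preferable, since it halts generation early instead of trimming afterwards.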
# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I'll recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
- LM Studio https://lmstudio.ai/
- Please test again using the Default LM Studio Windows preset.
- llama.cpp https://github.com/ggerganov/llama.cpp
- Backyard AI https://backyard.ai/
- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required. No censorship. Complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf?download=true
- more gguf at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
# character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot
### If you want to use vision functionality:
 * You must use the latest versions of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).
 
### To use the multimodal capabilities of this model and use **vision** you need to load the specified **mmproj** file, this can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
 
 * You can load the **mmproj** by using the corresponding section in the interface:
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
### Thank you:
 To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- MLP-KTLim
- rinna
- hfl
- Rupesh2
- stephenlzc
- theprint
- Sao10K
- turboderp
- TheBossLevel123
- winglian
- .........
---
# llama3-8B-DarkIdol-2.3-Uncensored-32K

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using ./llama3-8B-DarkIdol-2.3b as a base.

### Configuration

The following YAML configurations (three sequential merge steps) were used to produce this model:

```yaml
# Merge step 1
models:
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: winglian/Llama-3-8b-64k-PoSE
merge_method: model_stock
base_model: winglian/Llama-3-8b-64k-PoSE
dtype: bfloat16
---
# Merge step 2
models:
  - model: maldv/badger-writer-llama-3-8b
  - model: underwoods/writer-8b
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/TheSalt-RP-L3-8b-DPO-v0.3.2-e0.15.2
  - model: ./llama3-8B-DarkIdol-2.3a
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3a
dtype: bfloat16
---
# Merge step 3
models:
  - model: Rupesh2/Meta-Llama-3-8B-abliterated
  - model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  - model: vicgalle/Unsafe-Llama-3-8B
  - model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
  - model: ./llama3-8B-DarkIdol-2.3b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3b
dtype: bfloat16
```
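Assuming the three configurations above are saved as separate files, each step might be run with mergekit's `mergekit-yaml` CLI. The file names and intermediate output directory names below are inferred from the `base_model` paths in the configs, not stated by the author:

```shell
# Run the three merge steps in order; each later config references the
# previous step's output by local path (directory names inferred).
pip install mergekit
mergekit-yaml step1.yml ./llama3-8B-DarkIdol-2.3a
mergekit-yaml step2.yml ./llama3-8B-DarkIdol-2.3b
mergekit-yaml step3.yml ./llama3-8B-DarkIdol-2.3-Uncensored-32K
```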