Natkituwu committed on
Commit 79c82c7
1 Parent(s): be8059f

Update README.md

Files changed (1):
  1. README.md +14 -4
README.md CHANGED
@@ -1,16 +1,26 @@
  ---
- base_model: []
  library_name: transformers
  tags:
  - mergekit
  - merge
  license: cc-by-nc-4.0
  ---
- # Kunokukulemonchini-7b

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- Here is an 4.1bpw exl2 quant [Kunokukulemonchini-7b-4.1bpw-exl2](https://huggingface.co/icefog72/Kunokukulemonchini-7b-4.1bpw-exl2) for people like me with 6gb vram.
  ## Merge Details

  Slightly edited kukulemon-7B config.json before merge to get at least ~32k context window.

  ---
+ base_model:
+ - grimjim/kukulemon-7B
+ - Nitral-AI/Kunocchini-7b-128k-test
  library_name: transformers
  tags:
  - mergekit
  - merge
+ - mistral
+ - alpaca
  license: cc-by-nc-4.0
  ---

+ # Kunokukulemonchini-7b-7.1bpw-exl2
+
+ This is a 7.1 bpw exl2 quant of the merge [icefog72/Kunokukulemonchini-7b](https://huggingface.co/icefog72/Kunokukulemonchini-7b).
+
+ I wanted to replicate what IceFog did for 6 GB cards (long context with good quality), but scaled up to 8 GB cards.
+
+ It works well for people with 8 GB of VRAM who want both long context and quality.
+
+ With a 4060 8GB I get 16k context and better-quality responses than with the 6.5bpw version.
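The 8 GB fit can be sanity-checked with quick arithmetic; a minimal sketch, assuming Mistral-7B's roughly 7.24e9 parameters (weights only, KV cache and activations excluded):

```python
def quant_size_gib(n_params: float, bpw: float) -> float:
    """Approximate weight size in GiB at a given bits-per-weight."""
    return n_params * bpw / 8 / 1024**3

# At 7.1 bpw the weights alone take most of an 8 GiB card, leaving
# limited headroom for the KV cache, consistent with landing at ~16k
# context rather than the full 32k.
print(round(quant_size_gib(7.24e9, 7.1), 2))
```

This prints roughly 5.98, i.e. about 6 GiB of weights before any cache is allocated.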

  ## Merge Details

  Slightly edited kukulemon-7B config.json before merge to get at least ~32k context window.
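The exact config.json edit is not spelled out, but on a Mistral-architecture model the advertised context window is governed by the `max_position_embeddings` field. A hedged sketch of the kind of edit involved (the original value shown is a hypothetical stand-in, not taken from the actual merge):

```python
import json

# Hypothetical stand-in for the loaded kukulemon-7B config.json contents.
cfg = {"max_position_embeddings": 8192}

# Raise the advertised context window to at least ~32k before merging.
cfg["max_position_embeddings"] = 32768

print(json.dumps(cfg))
```

In practice the edited dict would be written back to config.json before running mergekit.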