---
base_model:
- prithivMLmods/Phi-4-o1
- prithivMLmods/Megatron-Opus-14B-2.0
- Delta-Vector/Hamanasu-15B-Instruct
- pankajmathur/orca_mini_phi-4
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- experimental
language:
- ru
- en
---
# merge

This is a merge of pre-trained language models.

This merge was made to test how Phi-4 reacts to merging.
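The actual mergekit recipe is not published in this card, so as a rough illustration only, a merge over the parent models listed in the metadata might be defined like this (the merge method, choice of base model, and dtype below are assumptions, not the real configuration):

```yaml
# Hypothetical mergekit config -- the merge method, base model,
# and dtype are illustrative assumptions, not the published recipe.
models:
  - model: prithivMLmods/Phi-4-o1
  - model: prithivMLmods/Megatron-Opus-14B-2.0
  - model: pankajmathur/orca_mini_phi-4
merge_method: model_stock
base_model: Delta-Vector/Hamanasu-15B-Instruct
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yaml ./output-dir` to produce the merged checkpoint.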

It reacted well. Overall intellectual capabilities are good, roleplay language is rich, and ERP is supported, though not as well as hoped. Replies tend to be short. In Russian it performs better than its parent models, and in English it is good as well.

Keep in mind that this merge is experimental and was tested on only a small number of replies (~100).