---
license: cc-by-nc-4.0
language:
- en
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
---

L3-70B-Euryale-v2.1

![Euryale](https://images7.alphacoders.com/921/921311.jpg)

**She's back!**

Stheno's Sister Model, a healthier and chonkier dose this time.
|
```
- Same Dataset used as Stheno v3.2 -> See notes there.
- LoRA Fine-Tune -> FFT is simply too expensive. (A rough sketch of the setup follows below.)
- Trained on 8x H100 SXMs and then some more.
```
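
For the curious, here is a minimal sketch of what a LoRA fine-tune over a Llama 3 70B base can look like with Hugging Face transformers + peft. The base model id, adapter rank, and target modules are illustrative assumptions, not the actual training recipe:

```python
# Minimal LoRA setup sketch -- illustrative assumptions, not the real recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "meta-llama/Meta-Llama-3-70B-Instruct"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard across available GPUs (e.g. 8x H100 SXM)
)

# LoRA trains small low-rank adapter matrices on top of frozen base weights,
# which is what makes tuning a 70B affordable where a full fine-tune (FFT) is not.
lora_config = LoraConfig(
    r=64,                     # adapter rank -- illustrative value
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the small trainable fraction
```

Actual training would add a dataset and a trainer loop on top of this; only the adapter matrices update, the 70B base stays frozen.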

Testing Notes
|
```
- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats. (example below)
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays.
- Feels like a big-brained version of Stheno.
```
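
If you want to poke at the formatting adherence yourself, here is a minimal inference sketch with Hugging Face transformers. The repository id and sampling settings are assumptions for illustration, not official recommendations:

```python
# Minimal inference sketch -- repo id and sampling settings are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "Sao10K/L3-70B-Euryale-v2.1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO, torch_dtype=torch.bfloat16, device_map="auto"
)

# A custom reply format in the system prompt -- the kind of formatting
# the notes above say the model follows well.
messages = [
    {"role": "system", "content": "Roleplay as Euryale. Actions in *asterisks*, speech in quotes."},
    {"role": "user", "content": "The tavern door creaks open."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,        # resampling here is what produces different swipes
    temperature=1.0,       # illustrative sampling settings
    top_p=0.95,
)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Swap the system prompt for your own reply format to test how closely the model sticks to it.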

*Likely due to it being a 70B model instead of an 8B. Similar vibes to the Llama 2 era, where 70B models were simply much more 'aware' of the subtler areas and contexts that the 7B and 13B models of the time could not handle.*

***

As per usual, support me here:

Ko-fi: https://ko-fi.com/sao10k

```
Art by wada_kazu / わだかず (pixiv page private?)
```