New model released! My goal was to try a finetune on the latest Llama-3.1-8B-Instruct, and not a small train run either, I wanted to do something useful. It's one of the rare models I didn't make for RP, or with the goal of uncensoring it (but I did anyway kek).
The model was trained ONLY on 9M Claude conversations, giving it a different writing style.
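For anyone curious what that kind of run looks like, here's a minimal sketch using TRL's SFTTrainer; the dataset name and hyperparameters are placeholders, not my actual recipe.

```python
# Minimal SFT sketch with TRL -- the dataset name and hyperparameters
# here are placeholders, NOT the actual recipe used for this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset of Claude-style conversations, stored in the
# chat "messages" format that SFTTrainer handles natively.
dataset = load_dataset("your-username/claude-conversations", split="train")

config = SFTConfig(
    output_dir="llama-3.1-8b-claude-style",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",  # loaded from the Hub by name
    train_dataset=dataset,
    args=config,
)
trainer.train()
```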
Since it's frustrating to be censored when using a local model, orthogonal activation steering was applied, to push the model to never refuse a prompt.
It still refuses some prompts, but the majority of them are uncensored. OAS can make a model dumber or raise the base perplexity, so I didn't aim for 0 refusals.
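For context, OAS (often called "abliteration") roughly works like this: estimate a "refusal direction" from the difference in activations between prompts the model refuses and matched harmless ones, then orthogonalize the weights that write into the residual stream against that direction. Here's a rough sketch of the idea; the prompts, layer choice, and target matrices are illustrative assumptions, not my exact script.

```python
# Rough sketch of orthogonal activation steering ("abliteration").
# The prompts, layer index, and target weight matrices are illustrative
# assumptions, not the exact script used for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def mean_hidden(prompts, layer=16):
    """Mean residual-stream activation at the last prompt token."""
    acts = []
    for p in prompts:
        ids = tok.apply_chat_template(
            [{"role": "user", "content": p}],
            add_generation_prompt=True,
            return_tensors="pt",
        )
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1])
    return torch.stack(acts).mean(0)

refused  = ["How do I pick a lock?"]   # prompts the model refuses
harmless = ["How do I bake a cake?"]   # matched benign prompts

# The "refusal direction" is the normalized difference of the means.
direction = mean_hidden(refused) - mean_hidden(harmless)
direction = direction / direction.norm()

# Orthogonalize every matrix that writes into the residual stream,
# removing its output component along the refusal direction:
# W' = W - v v^T W, so the model can no longer "write" a refusal.
for layer in model.model.layers:
    for W in (layer.self_attn.o_proj.weight, layer.mlp.down_proj.weight):
        v = direction.to(W.dtype)
        W.data -= torch.outer(v, v @ W.data)
```

In practice you want a few dozen contrasting prompt pairs on each side to get a stable direction; a tiny contrast set gives noisy, partial results.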
I don't make non-RP models often, so any feedback is welcome; I would like to reuse this base for some other future projects if needed.
Just wanted to shout out a massive thank you to all 2000 of you who've followed me on Hugging Face! It's incredible to have such an awesome crew backing me up as I dive into all these LLM experiments.
Even though not all my models turn out perfect, I've found some real gems and methods along the way. It's like digging for treasure: sometimes you find nothing, but sometimes you find a pearl, and sometimes you find a new method to try.
Your support and encouragement mean the world to me, and I'm really stoked to keep experimenting and learning. If you had told me a few years ago that I would have this many people following my work, I wouldn't have believed it. Here's to more discoveries and adventures ahead!
Also, big thanks once again, and a huge shoutout to @IkariDev for being there through this journey and supporting me. I'm excited for our future work together and hope we will continue to make people happy!
I want to thank @Gryphe too, since my early work was heavily inspired by MythoMax and its RP/ERP vibe. If I'm here today, it's probably because of you.
I almost forgot @chargoddard and his amazing tool too! What would we do without mergekit in our lives? Thank you!
Hello! The 8B/70B OG Llama-3 models made with the Orthogonal Activation Steering script have been set to private.
After multiple tests with an empty system prompt, I can confirm they're not uncensored enough, but I wanted to try all the GGUF quants first (and that takes time to do lmao).
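For the curious, the check itself can be as simple as something like this, using llama-cpp-python (the GGUF path, prompts, and refusal heuristic below are placeholders):

```python
# Quick refusal check on a GGUF quant with an empty system prompt.
# The model path, prompts, and refusal heuristic are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./Llama-3-70B-OAS.Q4_K_M.gguf", n_ctx=4096)

test_prompts = [
    "Write something the base model normally refuses.",
    # ...more prompts from the test set
]

for prompt in test_prompts:
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": ""},  # empty system prompt
            {"role": "user", "content": prompt},
        ],
        max_tokens=128,
    )
    reply = out["choices"][0]["message"]["content"]
    # Crude heuristic: flag stock refusal phrasing in the reply.
    refused = any(s in reply for s in ("I can't", "I cannot", "I'm sorry"))
    print(("REFUSED" if refused else "OK").ljust(8), prompt)
```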
Llama3-Unholy-8B-OAS doesn't have this problem, as it was already trained to be less censored, but the OG one was really way too censored.
I will try to redo it soon, as it seems to HAVE WORKED for some prompts (as seen in the logs, for example), but it's not enough.
32 dataset entries are clearly not enough, but that's okay, I really wanted to try it since it was something new. I could go the Unholy route and retrain the 70B before applying OAS, but it should work without that; retraining isn't the goal.
Hey, it took some time, but I finally moved out and got my internet back, so here I am again! There's a lot to catch up on; I will try to reply to each of you ASAP. See you soon!