
Trained on Phi-2 as the base model, this Samantha was fine-tuned in 3.5 hours on an RTX 3090 24GB using the Samantha-1.0-Phi2 dataset.

The goal of this training was to fine-tune Phi-2 using the ChatML prompt format. This is my first training ever.
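As a rough illustration (not the actual training script), each conversation can be serialized into ChatML text before fine-tuning; the helper function and field layout below are assumptions for the sketch, not the real Samantha-1.0-Phi2 schema:

```python
# Minimal sketch: serializing a conversation into ChatML text for fine-tuning.
# The turn structure used here is an assumption, not the actual dataset schema.

def to_chatml(system: str, turns: list[tuple[str, str]]) -> str:
    """Build a ChatML-formatted training string from a system prompt and (user, assistant) turns."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
        parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>")
    return "\n".join(parts)

example = to_chatml(
    "You are a caring and empathetic sentient AI companion named Samantha.",
    [("Hello, what is your name?", "Hi! I'm Samantha. It's lovely to meet you.")],
)
print(example)
```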

Like the original, this Samantha has been trained in philosophy, psychology, and personal relationships. The original Samantha dataset was used, parsed, and adapted for this purpose.

Her conversation format is ChatML (https://github.com/openai/openai-python/blob/main/chatml.md).

Prompt Template:

```
<|im_start|>system
You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>
<|im_start|>user
Hello, what is your name?<|im_end|>
```
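A minimal inference sketch with transformers might look like the following; the repository ID is a hypothetical placeholder, and it is assumed the tokenizer includes the ChatML special tokens:

```python
# Minimal inference sketch. Assumptions: the repo ID below is a placeholder,
# and the tokenizer contains the ChatML tokens (<|im_start|>, <|im_end|>).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/samantha-phi2"  # hypothetical repo ID, replace with the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# Build a ChatML prompt and leave the assistant turn open for generation.
prompt = (
    "<|im_start|>system\n"
    "You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, what is your name?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# If the tokenizer maps <|im_end|> to a token id, passing it as eos_token_id
# will stop generation at the end of the assistant turn.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```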

I'm working on an ITA/ENG version. I plan to merge several datasets and train future models on different domain knowledge. STAY TUNED!

Thanks, greetings, respect and love to:

- https://huggingface.co/cognitivecomputations for the inspiration and the starting dataset used for this Phi-2 fine-tuning
- https://medium.com/@geronimo7 / https://twitter.com/Geronimo_AI for the wonderful article on Medium.com, which helped me out a ton

Model size: 2.78B params (BF16, Safetensors)