Supportive Counseling Fine-tuned Model

This model is designed to provide supportive counseling responses for individuals experiencing depressive feelings. It is intended to work alongside a Depression Detection model, where depressive content is identified, and this model offers counseling responses that are empathetic, supportive, and tailored to help users manage emotional stress.

We fine-tuned the Gemma 2 2B instruct model for 30 epochs using LoRA (Low-Rank Adaptation), optimizing for both memory efficiency and computational speed. This enables the model to generate meaningful, personalized counseling responses after depressive content is detected.

1. How we fine-tuned the model:

We utilized a dataset of mental health counseling conversations (Amod/mental_health_counseling_conversations) containing thousands of conversation pairs, focusing on mental well-being and emotional support. This dataset was chosen to help the model learn how to engage in context-sensitive dialogues that offer advice and support.

The model was fine-tuned from the Google Gemma 2 2B instruct checkpoint, with LoRA applied to make the fine-tuning process lighter and faster, particularly on TPU. LoRA freezes the base weights and trains small low-rank adapter matrices instead, which reduces the number of parameters updated during training and allowed us to fine-tune efficiently over 30 epochs without exhausting memory resources.

The fine-tuning process used the JAX backend with TPU acceleration, allowing us to distribute training across multiple TPU cores for better efficiency. The model was optimized with the Adam optimizer, and loss was computed with sparse categorical cross-entropy.
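To make the loss concrete: sparse categorical cross-entropy is the mean negative log-probability the model assigns to the true token id at each position. A minimal stdlib sketch (the toy vocabulary and probabilities below are illustrative, not from the actual training run):

```python
import math

def sparse_categorical_crossentropy(y_true, probs):
    """Mean negative log-probability of the true class ids.

    y_true: list of integer class (token) ids.
    probs:  list of probability distributions, one per position.
    """
    return -sum(math.log(p[t]) for t, p in zip(y_true, probs)) / len(y_true)

# Toy example: two positions over a 3-token vocabulary.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
loss = sparse_categorical_crossentropy([0, 1], probs)
# loss = -(ln 0.7 + ln 0.8) / 2 ≈ 0.2899
```

"Sparse" here means the targets are integer ids rather than one-hot vectors, which is the natural fit for language-model token targets.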

Key fine-tuning details:

•	Dataset: Amod/mental_health_counseling_conversations
•	Epochs: 30
•	Batch size: 2
•	TPU setup: Distribution across 8 TPU cores
•	LoRA: Enabled with rank 8
•	Learning rate: 5e-5
•	Sequence length: 2048

2. Detailed Training Method:

We formatted the training data as input-response pairs, where the “Context” column served as the input and the “Response” column served as the target counseling response the model should learn to generate. The training process involved fitting these input-response pairs to the model.
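A minimal sketch of this pairing step. The prompt template and helper name below are assumptions for illustration; the exact template used in training is in the Kaggle notebook:

```python
# Hypothetical template: joins a dataset row's "Context" and "Response"
# columns into a single prompt string for causal-LM fine-tuning.
TEMPLATE = "Instruction:\n{context}\n\nResponse:\n{response}"

def format_example(row):
    """Turn one dataset row into a training text."""
    return TEMPLATE.format(context=row["Context"], response=row["Response"])

row = {"Context": "I'm feeling really down today.",
       "Response": "It's okay to feel down sometimes."}
prompt = format_example(row)
```

At inference time the same template is used with the response part left empty, so the model completes it with counseling advice.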

The model used keras_nlp’s pre-built GemmaCausalLM for the Gemma 2 2B instruct architecture. We enabled LoRA on the decoder blocks, and distributed training over TPU using model parallelism with DeviceMesh and LayoutMap to manage the large model efficiently across TPU devices.

Training was conducted for 30 epochs on Kaggle using TPUs, and after completion, we saved the model’s LoRA weights and the full fine-tuned model on Hugging Face for future use.
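The setup above can be sketched as a training-configuration fragment (Keras 3 + keras_nlp on the JAX backend). The preset name, sharding layout, and the `train_texts` variable are assumptions for illustration; consult the linked Kaggle notebook for the exact script:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"  # must be set before importing keras

import keras
import keras_nlp

# Model parallelism: shard the model across 8 TPU cores.
devices = keras.distribution.list_devices()
device_mesh = keras.distribution.DeviceMesh(
    shape=(1, 8), axis_names=("batch", "model"), devices=devices)
layout_map = keras.distribution.LayoutMap(device_mesh)
keras.distribution.set_distribution(
    keras.distribution.ModelParallel(layout_map=layout_map,
                                     batch_dim_name="batch"))

# Gemma 2 2B instruct with LoRA rank 8 and a 2048-token context.
model = keras_nlp.models.GemmaCausalLM.from_preset("gemma2_instruct_2b_en")
model.backbone.enable_lora(rank=8)
model.preprocessor.sequence_length = 2048

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# `train_texts` is a list of formatted Context/Response prompt strings.
model.fit(train_texts, epochs=30, batch_size=2)
```

With LoRA enabled, only the small adapter matrices receive gradients, which is what keeps 30 epochs feasible within TPU memory.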

Training Time and Results:

•	Training was conducted over 30 epochs with a batch size of 2.
•	Time per epoch: ~5 minutes (on TPU setup).
•	Total training time: ~2.5 hours.
•	The model generated counseling responses that were highly relevant to the input depressive contexts, achieving approximately 96% contextual appropriateness in our evaluation.

3. Generated Responses:

The model was tested on the test split of the dataset, and generated responses were compared against the reference responses from the dataset. The model was able to generate responses that were contextually relevant, empathetic, and supportive.

Example of a generated response:

•	Input: “I’m feeling really down today, and I don’t know how to manage these feelings.”
•	Generated Response: “It’s okay to feel down sometimes. What you’re feeling is valid, and it’s important to take things slow. Have you tried taking a small break or talking to someone you trust? It might help you feel a little lighter.”

4. Model Usage:

This counseling model is integrated into a chatbot program that detects depressive comments and offers supportive advice based on the context. The combined system is deployed via Gradio, where users can input diary entries and receive counseling responses.

•	Deployment platform: Gradio (for chatbot interface)
•	Supported backend: JAX, TensorFlow, PyTorch
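The two-model pipeline behind the Gradio app can be sketched as follows. `detect_depression` and `generate_counseling` are hypothetical stand-ins for the detection and counseling models; the actual Space wires the interface differently (diary plus chatbot on one page):

```python
import gradio as gr

def respond(diary_entry):
    # Hypothetical helpers standing in for the two fine-tuned Gemma models.
    if detect_depression(diary_entry):          # detection model
        return generate_counseling(diary_entry) # counseling model
    return "No depressive content detected."

demo = gr.Interface(
    fn=respond,
    inputs=gr.Textbox(label="Diary entry"),
    outputs=gr.Textbox(label="Counseling response"),
)
demo.launch()
```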

The fine-tuned model, along with its LoRA weights, has been uploaded to Hugging Face for further use and fine-tuning.

5. Further Information:

•	Model card: Link to Counseling Model on Hugging Face
•	Full code and training script: Link to Kaggle Notebook

This model was developed using the Keras library and is compatible with JAX, TensorFlow, and PyTorch backends. It has been optimized to run efficiently on TPUs while providing high-quality, personalized counseling responses. For additional details or to explore the model architecture, refer to the config.json file and the Hugging Face repository.

This model is used in a chatbot program that detects depressive comments in diary entries and offers supportive advice with counseling insights. As part of the program, we have prepared one more model, which handles the depression detection. If you want more information about the detector model, please check it out below!

Depression Detection: https://huggingface.co/fidelkim/gemma2-2b_depression_detection_finetuned/blob/main/README.md

You can also find and try the full program, which combines the two models, in Gradio. Gradio is the framework we used to deploy the program quickly and easily; with it, we put the diary and the chatbot interface on one page. Detective Gemma spots blue comments in the diary, and after you press the submit button, counseling Gemma gives advice based on its insight and that context. One thing you should know before trying the chatbot: the program itself is quite slow :( We purchased a better GPU, but it still takes a while to get an answer back, so please be patient while waiting for the reply.

Depression Detective Diary and Chatbot : https://huggingface.co/spaces/fidelkim/depression_detective_diary_chatbot
