---
tags:
- autotrain
- text-generation
- pytorch
- text-generation-inference
- transformers
widget:
- text: 'I love AutoTrain because '
license: apache-2.0
datasets:
- Amod/mental_health_counseling_conversations
library_name: peft
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain and is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the [mental_health_counseling_conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) dataset. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

## Model description

A Mistral-7B-Instruct-v0.2 model fine-tuned on a corpus of mental health conversations between a psychologist and a user. The intention was to create a mental health assistant, "Connor", that answers user questions in the style of a psychologist's responses.

## Training data

The model is fine-tuned on a corpus of mental health conversations between a psychologist and a client, formatted as context-response pairs. The dataset is a collection of questions and answers sourced from two online counseling and therapy platforms. The questions cover a wide range of mental health topics, and the answers are provided by qualified psychologists.

The dataset is available here:
* [Kaggle](https://www.kaggle.com/datasets/thedevastator/nlp-mental-health-conversations)
* [Hugging Face](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations)

### Training hyperparameters

The following hyperparameters were used during training: TODO

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "GRMenon/mental-mistral-7b-instruct-autotrain"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",   # place the model on available GPU(s), falling back to CPU
    torch_dtype="auto",  # use the dtype the checkpoint was saved in
).eval()

device = "cuda" if torch.cuda.is_available() else "cpu"

# Prompt content:
messages = [
    {"role": "user", "content": "Hey Connor! I have been feeling a bit down lately. I could really use some advice on how to feel better?"}
]

# Format the conversation with Mistral's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(device)
output_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=512,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # Mistral has no pad token; reuse EOS (id 2)
)
response = tokenizer.batch_decode(output_ids.detach().cpu().numpy(), skip_special_tokens=True)

# Model response:
print(response[0])
```
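
To keep chatting with "Connor", append the generated reply to `messages` as an `assistant` turn, add the next `user` turn, and re-apply the chat template. The snippet below is a minimal sketch that reuses `model`, `tokenizer`, `device`, `messages`, `input_ids`, and `output_ids` from the example above; the follow-up question is only illustrative.

```python
# Extract only the newly generated tokens (generate() returns prompt + completion).
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)

# Extend the chat history with the assistant reply and a new user turn.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Thanks, Connor. Could you suggest one small thing I can try today?"})

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(device)
output_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=512,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```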
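
The metadata lists `library_name: peft`, which suggests the repository may store LoRA adapter weights produced by AutoTrain rather than a fully merged checkpoint. Under that assumption, the adapter can also be loaded explicitly with the `peft` library; this is a sketch, not the card's canonical loading path.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_path = "GRMenon/mental-mistral-7b-instruct-autotrain"

# Downloads the base Mistral-7B-Instruct-v0.2 weights and applies the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_path,
    device_map="auto",
    torch_dtype="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained(adapter_path)

# Optional: fold the LoRA weights into the base model for slightly faster inference.
model = model.merge_and_unload()
```

Merging with `merge_and_unload()` removes the adapter indirection at inference time; skip it if you want to keep the adapter separable from the base weights.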