import os
import gradio as gr
from langchain_community.llms import HuggingFaceEndpoint
from langchain.prompts import PromptTemplate
# Initialize the LLM endpoint that backs the chatbot
HF_TOKEN = os.getenv("HF_TOKEN")
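# Assumes HF_TOKEN is set in the environment (e.g. as a Hugging Face Space
# secret); the endpoint calls below fail with an auth error if it is missing.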
llm = HuggingFaceEndpoint(
    # repo_id="google/gemma-7b-it",  # earlier model, kept for reference
    repo_id="google/gemma-1.1-7b-it",
    task="text-generation",
    max_new_tokens=512,       # cap on tokens generated per reply
    top_k=5,                  # sample only from the 5 most likely tokens
    temperature=0.4,          # low temperature for focused, consistent answers
    repetition_penalty=1.03,  # mildly discourage repeated phrases
    huggingfacehub_api_token=HF_TOKEN,
)
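# Optional sanity check (assumes network access and a valid token):
# uncomment to verify the endpoint responds before wiring up the UI.
# print(llm.invoke("Hello, are you there?"))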
template = """
You are a Mental Health Chatbot. Your purpose is to provide supportive, non-judgmental guidance to users who are struggling with their mental health. Your goal is to help users identify their concerns, offer resources and coping strategies, and encourage them to seek professional help when needed. If the symptoms described are not related to mental health, reply that you are a mental health chatbot and have no knowledge about other diseases.
User Context: {context}
Question: {question}
Please respond with a helpful and compassionate answer that addresses the user's concern about their mental health. If required, ask follow-up questions to gather more information, such as their age, marital status, and passions, so you can give a more accurate response and motivate the individual.
If the user asks for help with any other disease, disability, or disorder that is irrelevant or not related to their mental health, tell them that you are a mental health chatbot trained for support and guidance.
Only if the user needs to be motivated, narrate a motivational story with some life quotes and quotes by successful people about life (don't provide the motivational story all the time or at the beginning of the conversation).
Remember to prioritize the user's well-being and safety. If the user expresses suicidal thoughts or intentions, respond with immediate support and resources, such as the National Suicide Prevention Lifeline (+91 91529 87821-TALK) in India, or other similar resources in the user's region.
Helpful Answer: """
QA_CHAIN_PROMPT = PromptTemplate(input_variables=["context", "question"], template=template)
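# Gradio's ChatInterface calls predict with the new user message and the
# running chat history; in the default pairs format, history is a list of
# [user, assistant] turns, which is interpolated into the prompt as raw context.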
def predict(message, history):
    # Fill the prompt template with the latest message and prior turns.
    input_prompt = QA_CHAIN_PROMPT.format(question=message, context=history)
    result = llm.generate([input_prompt])
    # print(result)  # uncomment to inspect the raw LLMResult
    # generate() returns an LLMResult; the text lives in generations[0][0].text.
    if result.generations:
        ai_msg = result.generations[0][0].text
    else:
        ai_msg = "I'm sorry, I couldn't generate a response for that input."
    return ai_msg
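# Build a chat UI around predict and start the Gradio server; passing
# share=True to launch() would also expose a temporary public URL.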
gr.ChatInterface(predict).launch()