vishal00812 committed on
Commit e704f88 · verified · 1 Parent(s): 29c0e7c

Upload 3 files

Files changed (3):
  1. app.py +1102 -0
  2. requirements.txt +19 -0
  3. utils.py +311 -0

app.py ADDED
@@ -0,0 +1,1102 @@
+ import streamlit as st
+ from langchain.text_splitter import RecursiveCharacterTextSplitter
+ from langchain.chains.combine_documents import create_stuff_documents_chain
+ from langchain_core.prompts import ChatPromptTemplate
+ from langchain.chains import create_retrieval_chain
+ from langchain_community.vectorstores import FAISS
+ from langchain_community.document_loaders import PyPDFDirectoryLoader
+ from dotenv import load_dotenv
+ import os
+ import time
+ from langchain_cohere import CohereEmbeddings
+ from langchain_community.llms import Cohere
+ import asyncio
+ from utils import speak
+ import nest_asyncio
+ from PIL import Image
+ from langchain_groq import ChatGroq
+ from langchain import LLMChain, PromptTemplate
+ import google.generativeai as genai
+ from utils import takeCommand
+ from youtube_transcript_api import YouTubeTranscriptApi
+ from utils import class_9_subjects, class_10_subjects, class_11_subjects, class_12_subjects
+ import re
+ from datetime import datetime
+
+ # Reuse the running event loop if there is one; otherwise create and set a new one
+ nest_asyncio.apply()
+ try:
+     loop = asyncio.get_running_loop()
+ except RuntimeError:
+     loop = asyncio.new_event_loop()
+     asyncio.set_event_loop(loop)
+
+ st.markdown(
+     """
+     <style>
+     /* Background settings */
+     body {
+         background-color: #f7f9fc;
+         font-family: 'Open Sans', sans-serif;
+     }
+
+     /* Center the title */
+     .title {
+         text-align: center;
+         font-size: 3rem;
+         font-weight: bold;
+         color: #2A9D8F;
+         margin-bottom: 30px;
+     }
+
+     /* Style for header sections */
+     .header {
+         color: #ffffff;
+         font-size: 1.5rem;
+         font-weight: bold;
+         margin-top: 20px;
+     }
+
+     /* Custom buttons */
+     .stButton button {
+         background-color: #2A9D8F;
+         color: white;
+         border-radius: 10px;
+         border: none;
+         padding: 10px 20px;
+         font-size: 1.2rem;
+         cursor: pointer;
+     }
+
+     .stButton button:hover {
+         background-color: #21867a;
+     }
+
+     /* Styling file uploader */
+     .stFileUploader button {
+         background-color: #e76f51;
+         color: white;
+         border-radius: 8px;
+         font-size: 1rem;
+         padding: 10px 15px;
+         cursor: pointer;
+     }
+
+     /* Footer styles */
+     .footer {
+         position: fixed;
+         bottom: 0;
+         left: 0;
+         width: 100%;
+         background-color: #264653;
+         color: white;
+         text-align: center;
+         padding: 10px;
+         font-size: 1rem;
+         display: flex;
+         justify-content: center;  /* Centers horizontally */
+         align-items: center;      /* Centers vertically */
+     }
+
+     /* Image styling */
+     .uploaded-image {
+         border-radius: 15px;
+         box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
+     }
+
+     /* Transcript text area */
+     .transcript {
+         background-color: #ffffff;
+         border: 1px solid #2A9D8F;
+         border-radius: 10px;
+         padding: 15px;
+         margin-top: 15px;
+         font-size: 1rem;
+         color: #333;
+         line-height: 1.5;
+     }
+     </style>
+     """,
+     unsafe_allow_html=True,
+ )
+
+ st.sidebar.markdown(
+     "<h2 style='text-align: center; color: white; font-weight: bold;'>WELCOME TO EDUAI</h2>",
+     unsafe_allow_html=True
+ )
+
+ st.sidebar.image("logo.png", width=160, use_column_width=False, output_format="auto", caption="Let AI Educate You")
+
+ option = st.sidebar.selectbox(
+     "Choose an option:",
+     ["Get Solution from Image", "Chat with Your Book", "Transcript Youtube Video ", "Genrate Practice MCQ", "Self Assesment"],
+ )
+
+ if option=="Chat with Your Book":
+     st.markdown('<h1 class="title">Chat with Your Book</h1>', unsafe_allow_html=True)
+     load_dotenv()
+
+     # Retrieve the Groq API key and initialize the LLM
+     Groq_API_KEY = os.getenv("GROQ_API_KEY")
+     llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
+
+     class_options = [9, 10, 11, 12]
+     selected_class = st.selectbox("Select Your Class", ["Select Class"] + class_options)
+
+     # Every class shares the same flow; only the FAISS index path and the
+     # per-class button key differ, so one parameterized block handles all four.
+     if selected_class in class_options:
+         st.markdown(f'<h2 class="header">Chatting with Class {selected_class}</h2>', unsafe_allow_html=True)
+
+         # Load the prebuilt FAISS index for the selected class
+         def vector_embedding():
+             if "vectors" not in st.session_state:
+                 index_file = f"faiss_index_{selected_class}"
+                 if os.path.exists(index_file):
+                     st.session_state.vectors = FAISS.load_local(
+                         index_file,
+                         CohereEmbeddings(model="multilingual-22-12"),
+                         allow_dangerous_deserialization=True,
+                     )
+
+         # Define prompt template
+         prompt = ChatPromptTemplate.from_template("""
+         Answer the questions based on the provided context only.
+         Please provide the most accurate response based on the question.
+         Provide only the answer, without any additional text.
+         <context>
+         {context}
+         </context>
+         Questions: {input}
+         """)
+
+         # Initialize session state for storing chat history
+         if "messages" not in st.session_state:
+             st.session_state.messages = []
+
+         for message in st.session_state.messages:
+             with st.chat_message(message["role"]):
+                 st.markdown(message["content"])
+
+         button_key = f"button_clicked_c{selected_class}"
+         if button_key not in st.session_state:
+             st.session_state[button_key] = False
+
+         # Show the button only until it has been clicked
+         if not st.session_state[button_key]:
+             if st.button("Read My Book"):
+                 vector_embedding()
+                 st.success("Book Read Successfully")
+                 st.session_state[button_key] = True
+
+         st.write("-----")  # Separator before the input row at the bottom
+
+         col1, col2 = st.columns([4, 1])
+         with col1:
+             prompt1 = st.chat_input("What do you want to know?")
+         with col2:
+             Speak = st.button("Speak 🎙️")
+             if Speak:
+                 st.write("Please speak....")
+                 prompt1 = takeCommand()
+                 st.write(f"You said: {prompt1}")
+
+         # Handle user input and AI response
+         if prompt1:
+             if "vectors" in st.session_state:
+                 document_chain = create_stuff_documents_chain(llm, prompt)
+                 retriever = st.session_state.vectors.as_retriever()
+                 retrieval_chain = create_retrieval_chain(retriever, document_chain)
+                 start = time.process_time()
+                 response = retrieval_chain.invoke({'input': prompt1})
+                 response_time = time.process_time() - start
+
+                 st.chat_message("user").markdown(prompt1)
+                 # Add user message to chat history
+                 st.session_state.messages.append({"role": "user", "content": prompt1})
+                 st.write("Response Time:", response_time)
+                 answer = response['answer']
+                 reply = f"Assistant: {answer}"
+                 # Display assistant response in chat message container
+                 with st.chat_message("assistant"):
+                     st.markdown(reply)
+                 # Add assistant response to chat history
+                 st.session_state.messages.append({"role": "assistant", "content": reply})
+                 st.audio(speak(answer), format="audio/wav")
+             else:
+                 st.error("Please let me read the book by clicking 'Read My Book'.")
+
+ elif option=="Get Solution from Image":
+     st.markdown('<h1 class="title">Get Solution from Image</h1>', unsafe_allow_html=True)
+     load_dotenv()
+     genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
+
+     ## Function to query the Gemini model and get a response
+     def get_gemini_response(user_input, image):
+         model = genai.GenerativeModel('gemini-1.5-flash')
+         if user_input != "":
+             response = model.generate_content([user_input, image])
+         else:
+             response = model.generate_content(image)
+         return response.text
+
+     user_input = st.text_input("What do you want to know?")
+     uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "jpeg", "png"])
+     image = ""
+     if uploaded_file is not None:
+         image = Image.open(uploaded_file)
+         st.image(image, caption="Uploaded Image.", use_column_width=True)
+
+     submit = st.button("Provide Me A Solution")
+
+     ## If the ask button is clicked
+     if submit:
+         response = get_gemini_response(user_input, image)
+         st.subheader("The Response is")
+         st.write(response)
+
+ elif option=="Transcript Youtube Video ":
+     st.markdown('<h1 class="title">Transcript Youtube Video</h1>', unsafe_allow_html=True)
+
+     prompt = """You are a highly knowledgeable AI specialized in text summarization for educational content creation.
+     Generate a summary of exactly {number} words based on the following text: "{text}".
+     Ensure the summary captures the key points and is concise and informative.
+     """
+
+     prompt_template = PromptTemplate(input_variables=["number", "text"], template=prompt)
+
+     ## Get the transcript data from a YouTube video
+     def extract_transcript_details(youtube_video_url):
+         try:
+             video_id = youtube_video_url.split("=")[1]
+             transcript_text = YouTubeTranscriptApi.get_transcript(video_id)
+             transcript = ""
+             for i in transcript_text:
+                 transcript += " " + i["text"]
+             return transcript
+         except Exception as e:
+             raise e
+
+     ## Summarize the transcript (uses Groq's Llama3, not Gemini)
+     def generate_summary(prompt_template, number, text):
+         Groq_API_KEY = os.getenv("GROQ_API_KEY")
+         llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
+         chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
+         response = chain.run({"number": number, "text": text})
+         return response
+
+     youtube_link = st.text_input("Enter YouTube Video Link:")
+     number = st.slider("Select the number of words for the summary", 50, 1000, 50)
+     if youtube_link:
+         video_id = youtube_link.split("=")[1]
+         st.image(f"http://img.youtube.com/vi/{video_id}/0.jpg", use_column_width=True)
+
+     if st.button("Get Detailed Notes"):
+         transcript_text = extract_transcript_details(youtube_link)
+         if transcript_text:
+             summary = generate_summary(prompt_template, number, transcript_text)
+             st.markdown("## Detailed Notes:")
+             st.write(summary)
+             st.audio(speak(summary), format="audio/wav")
+
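The transcript section above extracts the video id with `split("=")[1]`, which fails for `youtu.be` short links and for URLs carrying extra query parameters. A more robust extraction could look like the following sketch (`extract_video_id` is a hypothetical helper, not part of the uploaded file):

```python
import re


def extract_video_id(url: str):
    """Pull the 11-character video id out of common YouTube URL shapes.

    Handles watch?v=ID, youtu.be/ID, and embed/ID forms; returns None
    when no id is found instead of raising an IndexError.
    """
    match = re.search(r"(?:v=|youtu\.be/|embed/)([A-Za-z0-9_-]{11})", url)
    return match.group(1) if match else None
```

Feeding the result into `YouTubeTranscriptApi.get_transcript` only when it is not `None` would also give the user a clearer error than the bare `split` raising an exception.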
+ elif option=="Genrate Practice MCQ":
+     st.markdown('<h1 class="title">Generate Practice MCQ</h1>', unsafe_allow_html=True)
+     load_dotenv()
+     Groq_API_KEY = os.getenv("GROQ_API_KEY")
+     llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
+
+     PROMPT_TEMPLATE_STRING = """
+     You are a highly knowledgeable AI specialized in educational content creation.
+     Generate {number} multiple-choice question(s) on the topic: {topic} with the following details:
+     - Difficulty level: {difficulty}
+     - Each question should be challenging and provide four answer choices, with only one correct answer.
+
+     Format each question as follows:
+     Question: [The generated question text]
+
+     A) [Answer choice 1]
+     B) [Answer choice 2]
+     C) [Answer choice 3]
+     D) [Answer choice 4]
+
+     Correct Answer: [The letter corresponding to the correct answer]
+
+     Make sure that the correct answer is clearly indicated.
+     """
+
+     # Create a PromptTemplate instance with the string template
+     prompt_template = PromptTemplate(input_variables=["number", "topic", "difficulty"], template=PROMPT_TEMPLATE_STRING)
+
+     # Input for the topic of the MCQ
+     topic = st.text_input("Enter the topic for the MCQ:")
+
+     # Select difficulty level
+     difficulty = st.selectbox("Select difficulty level:", ["easy", "medium", "hard"])
+
+     number = st.selectbox("Select number of questions:", [5, 10, 15, 20])
+
+     if st.button("Generate MCQ"):
+         if topic:
+             with st.spinner("Generating MCQ..."):
+                 # Initialize LLMChain with the prompt and LLM
+                 mcq_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
+                 mcq = mcq_chain.run({"number": number, "topic": topic, "difficulty": difficulty})
+             if mcq:
+                 st.write(mcq)
+             else:
+                 st.error("Failed to generate MCQ. Please try again.")
+         else:
+             st.error("Please enter a topic.")
+
+ elif option=="Self Assesment":
+     st.markdown('<h1 class="title">Self Assessment</h1>', unsafe_allow_html=True)
+     class_options = ['Class 9', 'Class 10', 'Class 11', 'Class 12']
+     selected_class = st.selectbox("Select Your Class", ['Select Class'] + class_options)
+
+     if selected_class == "Class 9":
+         subject_option = st.selectbox("Select Subject", ["Select"] + list(class_9_subjects.keys()))
+         if subject_option != "Select":
+             chapters = class_9_subjects.get(subject_option, [])
+             chapter_option = st.selectbox("Select Chapter", ["Select"] + chapters)
+             if chapter_option != "Select":
+                 st.write(f"You have selected: **{chapter_option}**")
+
+                 # Prompts
+                 PROMPT_TEMPLATE_STRING = """
+                 Based on the CBSE Class 9, generate a question in the form of a complete sentence:
+                 Create a question about the following chapter: {Chapter}
+
+                 Provide only the question as the output, with no additional text.
+                 """
+                 prompt_template = PromptTemplate(input_variables=["Chapter"], template=PROMPT_TEMPLATE_STRING)
+
+                 PROMPT_TEMPLATE_STRING2 = """
+                 Evaluate the provided answer based strictly on CBSE Class 9 standards, assigning marks out of 1.
+                 Deduct marks for any inaccuracies, even minor ones. If the final score is 0.6 or higher, round it up to 1. Provide only the marks as the output.
+
+                 Question: {input}
+                 Answer: {answer}
+
+                 Output the marks out of 1.
+                 """
+                 prompt_template2 = PromptTemplate(input_variables=["input", "answer"], template=PROMPT_TEMPLATE_STRING2)
+
+                 PROMPT_TEMPLATE_STRING3 = """
+                 Provide a clear and accurate answer to the following question based on CBSE Class 9 standards.
+
+                 Question: {question}
+
+                 Output the answer only.
+                 """
+                 prompt_template3 = PromptTemplate(input_variables=["question"], template=PROMPT_TEMPLATE_STRING3)
+
+                 load_dotenv()
+                 Groq_API_KEY = os.getenv("GROQ_API_KEY")
+                 llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
+                 prompt1 = chapter_option
+                 num_questions = st.number_input("Enter the number of questions you want", min_value=1, max_value=15, step=1)
+
+                 # Initialize session state variables
+                 if "marks" not in st.session_state:
+                     st.session_state.marks = []
+                 if "questions" not in st.session_state:
+                     st.session_state.questions = []
+                 if "answers" not in st.session_state:
+                     st.session_state.answers = {}
+                 if "generated_questions" not in st.session_state:
+                     st.session_state.generated_questions = set()
+                 if "suggestion" not in st.session_state:
+                     st.session_state.suggestion = []
+                 if "submitted" not in st.session_state:
+                     st.session_state.submitted = False  # Track if the submit button has been pressed
+                 if "deleted" not in st.session_state:
+                     st.session_state.deleted = False  # Track if the file has been deleted
+
+                 # Generate questions until the requested count is reached;
+                 # regenerate inside the loop so a duplicate can be retried
+                 if prompt1:
+                     question_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
+                     while len(st.session_state.questions) < num_questions:
+                         question = question_chain.run({"Chapter": prompt1})
+                         if question not in st.session_state.generated_questions:
+                             st.session_state.questions.append(question)
+                             st.session_state.generated_questions.add(question)
+
+                 # Display questions and text areas for answers
+                 for i in range(num_questions):
+                     if i < len(st.session_state.questions):
+                         st.write(f"### Question {i + 1}: {st.session_state.questions[i]}")
+                         answer_key = f"answer_{i}"
+                         answer = st.text_area(f"Enter your answer for Question {i + 1}", key=answer_key,
+                                               value=st.session_state.answers.get(answer_key, ""))
+                         st.session_state.answers[answer_key] = answer
+
+                 # Single submit button
+                 if not st.session_state.submitted and st.button("Submit All Answers"):
+                     st.session_state.submitted = True  # Mark as submitted
+
+                     # Process answers, generate marks and suggestions
+                     for i in range(num_questions):
+                         if i < len(st.session_state.questions):
+                             answer = st.session_state.answers.get(f"answer_{i}", "")
+                             if answer:
+                                 # Evaluate the answer and generate a suggested answer
+                                 response_chain = LLMChain(llm=llm, prompt=prompt_template2, verbose=True)
+                                 response = response_chain.run({"input": st.session_state.questions[i], 'answer': answer})
+                                 suggested_chain = LLMChain(llm=llm, prompt=prompt_template3, verbose=True)
+                                 suggestion = suggested_chain.run({"question": st.session_state.questions[i]})
+
+                                 # Parse marks: first number in the reply, rounded up to 1 at 0.6 or above
+                                 match = re.search(r'\d+(\.\d+)?', response)
+                                 marks = 1 if match and float(match.group()) >= 0.6 else 0
+                                 st.session_state.marks.append(marks)
+                                 st.session_state.suggestion.append(suggestion)
+
+                                 # Display suggestions and marks
+                                 st.write(f"Suggested Answer {i + 1}: {suggestion}")
+                                 st.write(f"Marks for Question {i + 1}: {marks}/1")
+
+                     # Calculate total marks
+                     if st.session_state.marks:
+                         total_marks = sum(st.session_state.marks)
+                         st.write(f"### Total Marks: {total_marks} out of {num_questions}")
+
+     elif selected_class == "Class 10":
+         subject_option = st.selectbox("Select Subject", ["Select"] + list(class_10_subjects.keys()))
+         if subject_option != "Select":
+             chapters = class_10_subjects.get(subject_option, [])
+             chapter_option = st.selectbox("Select Chapter", ["Select"] + chapters)
+             if chapter_option != "Select":
+                 st.write(f"You have selected: **{chapter_option}**")
+
+                 # Prompts
+                 PROMPT_TEMPLATE_STRING = """
+                 Based on the CBSE Class 10, generate a question in the form of a complete sentence:
+                 Create a question about the following chapter: {Chapter}
+
+                 Provide only the question as the output, with no additional text.
+                 """
+                 prompt_template = PromptTemplate(input_variables=["Chapter"], template=PROMPT_TEMPLATE_STRING)
+
+                 PROMPT_TEMPLATE_STRING2 = """
+                 Evaluate the provided answer based strictly on CBSE Class 10 standards, assigning marks out of 1.
+                 Deduct marks for any inaccuracies, even minor ones. If the final score is 0.6 or higher, round it up to 1. Provide only the marks as the output.
+
+                 Question: {input}
+                 Answer: {answer}
+
+                 Output the marks out of 1.
+                 """
+                 prompt_template2 = PromptTemplate(input_variables=["input", "answer"], template=PROMPT_TEMPLATE_STRING2)
+
+                 PROMPT_TEMPLATE_STRING3 = """
+                 Provide a clear and accurate answer to the following question based on CBSE Class 10 standards.
+
+                 Question: {question}
+
+                 Output the answer only.
+                 """
+                 prompt_template3 = PromptTemplate(input_variables=["question"], template=PROMPT_TEMPLATE_STRING3)
+
+                 load_dotenv()
+                 Groq_API_KEY = os.getenv("GROQ_API_KEY")
+                 llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
+                 prompt1 = chapter_option
+                 num_questions = st.number_input("Enter the number of questions you want", min_value=1, max_value=15, step=1)
+
+                 # Initialize session state variables
+                 if "marks" not in st.session_state:
+                     st.session_state.marks = []
+                 if "questions" not in st.session_state:
+                     st.session_state.questions = []
+                 if "answers" not in st.session_state:
+                     st.session_state.answers = {}
+                 if "generated_questions" not in st.session_state:
+                     st.session_state.generated_questions = set()
+                 if "suggestion" not in st.session_state:
+                     st.session_state.suggestion = []
+                 if "submitted" not in st.session_state:
+                     st.session_state.submitted = False  # Track if the submit button has been pressed
+                 if "deleted" not in st.session_state:
+                     st.session_state.deleted = False  # Track if the file has been deleted
+
+                 # Generate questions until the requested count is reached;
+                 # regenerate inside the loop so a duplicate can be retried
+                 if prompt1:
+                     question_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
+                     while len(st.session_state.questions) < num_questions:
+                         question = question_chain.run({"Chapter": prompt1})
+                         if question not in st.session_state.generated_questions:
+                             st.session_state.questions.append(question)
+                             st.session_state.generated_questions.add(question)
+
+                 # Display questions and text areas for answers
+                 for i in range(num_questions):
+                     if i < len(st.session_state.questions):
+                         st.write(f"### Question {i + 1}: {st.session_state.questions[i]}")
+                         answer_key = f"answer_{i}"
+                         answer = st.text_area(f"Enter your answer for Question {i + 1}", key=answer_key,
+                                               value=st.session_state.answers.get(answer_key, ""))
+                         st.session_state.answers[answer_key] = answer
+
+                 # Single submit button
+                 if not st.session_state.submitted and st.button("Submit All Answers"):
853
+ st.session_state.submitted = True # Mark as submitted
854
+
855
+ # Process answers, generate marks and suggestions
856
+ for i in range(num_questions):
857
+ if i < len(st.session_state.questions):
858
+ answer = st.session_state.answers.get(f"answer_{i}", "")
859
+ if answer:
860
+ # Evaluate answer
861
+ response_chain = LLMChain(llm=llm, prompt=prompt_template2, verbose=True)
862
+ response = response_chain.run({"input": st.session_state.questions[i], 'answer': answer})
863
+ suggested_chain = LLMChain(llm=llm, prompt=prompt_template3, verbose=True)
864
+ suggestion = suggested_chain.run({"question": st.session_state.questions[i]})
865
+
866
+ # Parse marks
867
+ match = re.search(r'\d+(\.\d+)?', response)
868
+ marks = 1 if match and float(match.group()) >= 0.6 else 0
869
+ st.session_state.marks.append(marks)
870
+ st.session_state.suggestion.append(suggestion)
871
+
872
+ # Display suggestions and marks
873
+ st.write(f"Suggested Answer {i + 1}: {suggestion}")
874
+ st.write(f"Marks for Question {i + 1}: {marks}/1")
875
+
876
+ # Calculate total marks
877
+ if st.session_state.marks:
878
+ total_marks = sum(st.session_state.marks)
879
+ st.write(f"### Total Marks: {total_marks} out of {num_questions}")
880
+
881
+
882
+ elif selected_class == "Class 11":
883
+ subject_option = st.selectbox("Select Subject", ["Select"] + list(class_11_subjects.keys()))
884
+ if subject_option != "Select":
885
+ chapters = class_11_subjects.get(subject_option, [])
886
+ chapter_option = st.selectbox("Select Chapter", ["Select"] + chapters)
887
+ if chapter_option != "Select":
888
+ st.write(f"You have selected: **{chapter_option}**")
889
+
890
+ # Prompts
891
+ PROMPT_TEMPLATE_STRING = """
892
+ Based on the CBSE Class 11, generate a question in the form of a complete sentence:
893
+ Create a question about the following chapter: {Chapter}
894
+
895
+ Provide only the question as the output, with no additional text.
896
+ """
897
+ prompt_template = PromptTemplate(input_variables=["Chapter"], template=PROMPT_TEMPLATE_STRING)
898
+
899
+ PROMPT_TEMPLATE_STRING2 = """
900
+ Evaluate the provided answer based strictly on CBSE Class 11 standards, assigning marks out of 1.
901
+ Deduct marks for any inaccuracies, even minor ones. If the final score is 0.6 or higher, round it up to 1. Provide only the marks as the output.
902
+
903
+ Question: {input}
904
+ Answer: {answer}
905
+
906
+ Output the marks out of 1.
907
+ """
908
+ prompt_template2 = PromptTemplate(input_variables=["input", "answer"], template=PROMPT_TEMPLATE_STRING2)
909
+
910
+ PROMPT_TEMPLATE_STRING3 = """
911
+ Provide a clear and accurate answer to the following question based on CBSE Class 11 standards.
912
+
913
+ Question: {question}
914
+
915
+ Output the answer only.
916
+ """
917
+ prompt_template3 = PromptTemplate(input_variables=["question"], template=PROMPT_TEMPLATE_STRING3)
918
+
919
+ load_dotenv()
920
+
921
+ Groq_API_KEY = os.getenv("GROQ_API_KEY")
922
+ llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
923
+ prompt1 = chapter_option
924
+ num_questions = st.number_input("Enter the number of questions you want", min_value=1, max_value=15, step=1)
925
+
926
+ # Initialize session state variables
927
+ if "marks" not in st.session_state:
928
+ st.session_state.marks = []
929
+ if "questions" not in st.session_state:
930
+ st.session_state.questions = []
931
+ if "answers" not in st.session_state:
932
+ st.session_state.answers = {}
933
+ if "generated_questions" not in st.session_state:
934
+ st.session_state.generated_questions = set()
935
+ if "suggestion" not in st.session_state:
936
+ st.session_state.suggestion = []
937
+ if "submitted" not in st.session_state:
938
+ st.session_state.submitted = False # Track if the submit button has been pressed
939
+ if "deleted" not in st.session_state:
940
+ st.session_state.deleted = False # Track if the file has been deleted
941
+
942
+ # Generate questions
943
+ if prompt1:
944
+ question_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
945
+ question = question_chain.run({"Chapter": prompt1})
946
+ while len(st.session_state.questions) < num_questions:
947
+ if question not in st.session_state.generated_questions:
948
+ st.session_state.questions.append(question)
949
+ st.session_state.generated_questions.add(question)
950
+ if len(st.session_state.questions) >= num_questions:
951
+ break
952
+
953
+ # Display questions and text areas for answers
954
+ for i in range(num_questions):
955
+ if i < len(st.session_state.questions):
956
+ st.write(f"### Question {i + 1}: {st.session_state.questions[i]}")
957
+ answer_key = f"answer_{i}"
958
+ answer = st.text_area(f"Enter your answer for Question {i + 1}", key=answer_key,
959
+ value=st.session_state.answers.get(answer_key, ""))
960
+ st.session_state.answers[answer_key] = answer
961
+
962
+ # Single submit button
963
+ if not st.session_state.submitted and st.button("Submit All Answers"):
964
+ st.session_state.submitted = True # Mark as submitted
965
+
966
+ # Process answers, generate marks and suggestions
967
+ for i in range(num_questions):
968
+ if i < len(st.session_state.questions):
969
+ answer = st.session_state.answers.get(f"answer_{i}", "")
970
+ if answer:
971
+ # Evaluate answer
972
+ response_chain = LLMChain(llm=llm, prompt=prompt_template2, verbose=True)
973
+ response = response_chain.run({"input": st.session_state.questions[i], 'answer': answer})
974
+ suggested_chain = LLMChain(llm=llm, prompt=prompt_template3, verbose=True)
975
+ suggestion = suggested_chain.run({"question": st.session_state.questions[i]})
976
+
977
+ # Parse marks
978
+ match = re.search(r'\d+(\.\d+)?', response)
979
+ marks = 1 if match and float(match.group()) >= 0.6 else 0
980
+ st.session_state.marks.append(marks)
981
+ st.session_state.suggestion.append(suggestion)
982
+
983
+ # Display suggestions and marks
984
+ st.write(f"Suggested Answer {i + 1}: {suggestion}")
985
+ st.write(f"Marks for Question {i + 1}: {marks}/1")
986
+
987
+ # Calculate total marks
988
+ if st.session_state.marks:
989
+ total_marks = sum(st.session_state.marks)
990
+ st.write(f"### Total Marks: {total_marks} out of {num_questions}")
991
+
992
+ elif selected_class == "Class 12":
993
+ subject_option = st.selectbox("Select Subject", ["Select"] + list(class_12_subjects.keys()))
994
+ if subject_option != "Select":
995
+ chapters = class_12_subjects.get(subject_option, [])
996
+ chapter_option = st.selectbox("Select Chapter", ["Select"] + chapters)
997
+ if chapter_option != "Select":
998
+ st.write(f"You have selected: **{chapter_option}**")
999
+
1000
+ # Prompts
1001
+ PROMPT_TEMPLATE_STRING = """
1002
+ Based on the CBSE Class 12, generate a question in the form of a complete sentence:
1003
+ Create a question about the following chapter: {Chapter}
1004
+
1005
+ Provide only the question as the output, with no additional text.
1006
+ """
1007
+ prompt_template = PromptTemplate(input_variables=["Chapter"], template=PROMPT_TEMPLATE_STRING)
1008
+
1009
+ PROMPT_TEMPLATE_STRING2 = """
1010
+ Evaluate the provided answer based strictly on CBSE Class 12 standards, assigning marks out of 1.
1011
+ Deduct marks for any inaccuracies, even minor ones. If the final score is 0.6 or higher, round it up to 1. Provide only the marks as the output.
1012
+
1013
+ Question: {input}
1014
+ Answer: {answer}
1015
+
1016
+ Output the marks out of 1.
1017
+ """
1018
+ prompt_template2 = PromptTemplate(input_variables=["input", "answer"], template=PROMPT_TEMPLATE_STRING2)
1019
+
1020
+ PROMPT_TEMPLATE_STRING3 = """
1021
+ Provide a clear and accurate answer to the following question based on CBSE Class 12 standards.
1022
+
1023
+ Question: {question}
1024
+
1025
+ Output the answer only.
1026
+ """
1027
+ prompt_template3 = PromptTemplate(input_variables=["question"], template=PROMPT_TEMPLATE_STRING3)
1028
+
1029
+ load_dotenv()
1030
+
1031
+ Groq_API_KEY = os.getenv("GROQ_API_KEY")
1032
+ llm = ChatGroq(groq_api_key=Groq_API_KEY, model_name="Llama3-8b-8192")
1033
+ prompt1 = chapter_option
1034
+ num_questions = st.number_input("Enter the number of questions you want", min_value=1, max_value=15, step=1)
1035
+
1036
+ # Initialize session state variables
1037
+ if "marks" not in st.session_state:
1038
+ st.session_state.marks = []
1039
+ if "questions" not in st.session_state:
1040
+ st.session_state.questions = []
1041
+ if "answers" not in st.session_state:
1042
+ st.session_state.answers = {}
1043
+ if "generated_questions" not in st.session_state:
1044
+ st.session_state.generated_questions = set()
1045
+ if "suggestion" not in st.session_state:
1046
+ st.session_state.suggestion = []
1047
+ if "submitted" not in st.session_state:
1048
+ st.session_state.submitted = False # Track if the submit button has been pressed
1049
+ if "deleted" not in st.session_state:
1050
+ st.session_state.deleted = False # Track if the file has been deleted
1051
+
1052
+ # Generate questions
1053
+ if prompt1:
1054
+ question_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)
1055
+ question = question_chain.run({"Chapter": prompt1})
1056
+ while len(st.session_state.questions) < num_questions:
1057
+ if question not in st.session_state.generated_questions:
1058
+ st.session_state.questions.append(question)
1059
+ st.session_state.generated_questions.add(question)
1060
+ if len(st.session_state.questions) >= num_questions:
1061
+ break
1062
+
1063
+ # Display questions and text areas for answers
1064
+ for i in range(num_questions):
1065
+ if i < len(st.session_state.questions):
1066
+ st.write(f"### Question {i + 1}: {st.session_state.questions[i]}")
1067
+ answer_key = f"answer_{i}"
1068
+ answer = st.text_area(f"Enter your answer for Question {i + 1}", key=answer_key,
1069
+ value=st.session_state.answers.get(answer_key, ""))
1070
+ st.session_state.answers[answer_key] = answer
1071
+
1072
+ # Single submit button
1073
+ if not st.session_state.submitted and st.button("Submit All Answers"):
1074
+ st.session_state.submitted = True # Mark as submitted
1075
+
1076
+ # Process answers, generate marks and suggestions
1077
+ for i in range(num_questions):
1078
+ if i < len(st.session_state.questions):
1079
+ answer = st.session_state.answers.get(f"answer_{i}", "")
1080
+ if answer:
1081
+ # Evaluate answer
1082
+ response_chain = LLMChain(llm=llm, prompt=prompt_template2, verbose=True)
1083
+ response = response_chain.run({"input": st.session_state.questions[i], 'answer': answer})
1084
+ suggested_chain = LLMChain(llm=llm, prompt=prompt_template3, verbose=True)
1085
+ suggestion = suggested_chain.run({"question": st.session_state.questions[i]})
1086
+
1087
+ # Parse marks
1088
+ match = re.search(r'\d+(\.\d+)?', response)
1089
+ marks = 1 if match and float(match.group()) >= 0.6 else 0
1090
+ st.session_state.marks.append(marks)
1091
+ st.session_state.suggestion.append(suggestion)
1092
+
1093
+ # Display suggestions and marks
1094
+ st.write(f"Suggested Answer {i + 1}: {suggestion}")
1095
+ st.write(f"Marks for Question {i + 1}: {marks}/1")
1096
+
1097
+ # Calculate total marks
1098
+ if st.session_state.marks:
1099
+ total_marks = sum(st.session_state.marks)
1100
+ st.write(f"### Total Marks: {total_marks} out of {num_questions}")
1101
+
1102
+
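The grading logic in each class block pulls a numeric score out of the model's free-text reply with `re.search(r'\d+(\.\d+)?', response)` and awards the mark when the value reaches 0.6. A minimal, self-contained sketch of that parsing step in isolation (the sample replies are hypothetical):

```python
import re

def parse_marks(response: str) -> int:
    """Extract the first number from an LLM grading reply; award 1 mark at >= 0.6."""
    match = re.search(r'\d+(\.\d+)?', response)
    return 1 if match and float(match.group()) >= 0.6 else 0

print(parse_marks("Marks: 0.8"))   # 1
print(parse_marks("0.5"))          # 0
print(parse_marks("no score"))     # 0
```

Note that the regex takes the *first* number in the reply, so a chatty response such as "Question 1 gets 0.2" would parse the question number, not the score; keeping the prompt's "output only the marks" instruction is what makes this parsing reliable.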
requirements.txt ADDED
@@ -0,0 +1,20 @@
+ langchain
+ langchain_core
+ streamlit
+ langchain_community
+ pypdf
+ faiss-cpu
+ langchainhub
+ sentence_transformers
+ PyPDF2
+ langchain-objectbox
+ langchain_cohere
+ langchain_google_genai
+ python-dotenv
+ langchain_groq
+ SpeechRecognition
+ pyaudio
+ youtube_transcript_api
+ ipykernel
+ pyttsx3
+ gTTS
utils.py ADDED
@@ -0,0 +1,311 @@
+ import time
+ import sqlite3
+ from io import BytesIO
+
+ import speech_recognition as sr
+ from gtts import gTTS
+
+
+ def speak(text):
+     """Convert text to speech with gTTS and return the MP3 audio as bytes."""
+     audio_bytes = BytesIO()
+     tts = gTTS(text=text, lang="en")
+     tts.write_to_fp(audio_bytes)
+     audio_bytes.seek(0)
+     return audio_bytes.read()
+
+
+ def takeCommand():
+     """Take microphone input from the user and return it as a string.
+
+     Returns the literal string "None" when recognition fails.
+     """
+     r = sr.Recognizer()
+     with sr.Microphone() as source:
+         r.pause_threshold = 1
+         audio = r.listen(source)
+
+     try:
+         query = r.recognize_google(audio, language='en-in')
+     except Exception:
+         return "None"
+     return query
+
+
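`takeCommand()` signals a failed recognition by returning the literal string `"None"` rather than the `None` object, so callers must compare against that sentinel. A small sketch of how a caller might guard on it (the `handle_query` helper is hypothetical, not part of the upload):

```python
def handle_query(query: str) -> str:
    # takeCommand() returns the literal string "None" on failure,
    # so compare against that sentinel, not the None object.
    if query == "None":
        return "Sorry, I did not catch that."
    return query.lower()

print(handle_query("None"))
print(handle_query("What Is Gravitation"))
```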
+ class_9_subjects = {
+     "Science": [
+         "Chapter 1: Matter in Our Surroundings",
+         "Chapter 2: Is Matter Around Us Pure",
+         "Chapter 3: Atoms and Molecules",
+         "Chapter 4: Structure of the Atom",
+         "Chapter 5: The Fundamental Unit of Life",
+         "Chapter 6: Tissues",
+         "Chapter 7: Diversity in Living Organisms",
+         "Chapter 8: Motion",
+         "Chapter 9: Force and Laws of Motion",
+         "Chapter 10: Gravitation",
+         "Chapter 11: Work and Energy",
+         "Chapter 12: Sound",
+         "Chapter 13: Why Do We Fall Ill",
+         "Chapter 14: Natural Resources",
+         "Chapter 15: Improvement in Food Resources"
+     ],
+     "English": [
+         "Beehive - Chapter 1: The Fun They Had",
+         "Beehive - Chapter 2: The Sound of Music",
+         "Beehive - Chapter 3: The Little Girl",
+         "Beehive - Chapter 4: A Truly Beautiful Mind",
+         "Beehive - Chapter 5: The Snake and the Mirror",
+         "Beehive - Chapter 6: My Childhood",
+         "Beehive - Chapter 7: Packing",
+         "Beehive - Chapter 8: Reach for the Top",
+         "Beehive - Chapter 9: The Bond of Love",
+         "Beehive - Chapter 10: Kathmandu",
+         "Beehive - Chapter 11: If I Were You",
+         "Moments - Chapter 1: The Lost Child",
+         "Moments - Chapter 2: The Adventure of Toto",
+         "Moments - Chapter 3: Iswaran the Storyteller",
+         "Moments - Chapter 4: In the Kingdom of Fools",
+         "Moments - Chapter 5: The Happy Prince",
+         "Moments - Chapter 6: Weathering the Storm in Ersama",
+         "Moments - Chapter 7: The Last Leaf",
+         "Moments - Chapter 8: A House is Not a Home",
+         "Moments - Chapter 9: The Accidental Tourist",
+         "Moments - Chapter 10: The Beggar"
+     ],
+     "History": [
+         "Chapter 1: The French Revolution",
+         "Chapter 2: Socialism in Europe and the Russian Revolution",
+         "Chapter 3: Nazism and the Rise of Hitler",
+         "Chapter 4: Forest Society and Colonialism",
+         "Chapter 5: Pastoralists in the Modern World",
+         "Chapter 6: Peasants and Farmers",
+         "Chapter 7: History and Sport: The Story of Cricket",
+         "Chapter 8: Clothing: A Social History"
+     ],
+     "Geography": [
+         "Chapter 1: India - Size and Location",
+         "Chapter 2: Physical Features of India",
+         "Chapter 3: Drainage",
+         "Chapter 4: Climate",
+         "Chapter 5: Natural Vegetation and Wildlife",
+         "Chapter 6: Population"
+     ],
+     "Civics": [
+         "Chapter 1: What is Democracy? Why Democracy?",
+         "Chapter 2: Constitutional Design",
+         "Chapter 3: Electoral Politics",
+         "Chapter 4: Working of Institutions",
+         "Chapter 5: Democratic Rights"
+     ],
+     "Economics": [
+         "Chapter 1: The Story of Village Palampur",
+         "Chapter 2: People as Resource",
+         "Chapter 3: Poverty as a Challenge",
+         "Chapter 4: Food Security in India"
+     ]
+ }
+
+ class_10_subjects = {
+     "Science": [
+         "Chapter 1: Chemical Reactions and Equations",
+         "Chapter 2: Acids, Bases, and Salts",
+         "Chapter 3: Metals and Non-Metals",
+         "Chapter 4: Carbon and Its Compounds",
+         "Chapter 5: Periodic Classification of Elements",
+         "Chapter 6: Light",
+         "Chapter 7: Human Eye and the Colourful World",
+         "Chapter 8: Electricity",
+         "Chapter 9: Magnetic Effects of Current",
+         "Chapter 10: Sources of Energy",
+         "Chapter 11: Life Processes",
+         "Chapter 12: Control and Coordination",
+         "Chapter 13: How do Organisms Reproduce?",
+         "Chapter 14: Heredity and Evolution",
+         "Chapter 15: Our Environment",
+         "Chapter 16: Management of Natural Resources"
+     ],
+     "English": [
+         "First Flight - Chapter 1: A Letter to God",
+         "First Flight - Chapter 2: Nelson Mandela: Long Walk to Freedom",
+         "First Flight - Chapter 3: Two Stories about Flying",
+         "First Flight - Chapter 4: From the Diary of Anne Frank",
+         "First Flight - Chapter 5: The Hundred Dresses - I",
+         "First Flight - Chapter 6: The Hundred Dresses - II",
+         "First Flight - Chapter 7: Glimpses of India",
+         "First Flight - Chapter 8: Mijbil the Otter",
+         "First Flight - Chapter 9: Madam Rides the Bus",
+         "First Flight - Chapter 10: The Sermon at Benares",
+         "First Flight - Chapter 11: The Proposal",
+         "Footprints Without Feet - Chapter 1: A Triumph of Surgery",
+         "Footprints Without Feet - Chapter 2: The Thief’s Story",
+         "Footprints Without Feet - Chapter 3: The Midnight Visitor",
+         "Footprints Without Feet - Chapter 4: A Question of Trust",
+         "Footprints Without Feet - Chapter 5: The Book That Saved the Earth",
+         "Footprints Without Feet - Chapter 6: The Drop of Blood",
+         "Footprints Without Feet - Chapter 7: The Making of a Scientist",
+         "Footprints Without Feet - Chapter 8: The Beggar"
+     ],
+     "History": [
+         "Chapter 1: The Rise of Nationalism in Europe",
+         "Chapter 2: Nationalism in India",
+         "Chapter 3: The Making of a Global World",
+         "Chapter 4: The Age of Industrialization",
+         "Chapter 5: Print Culture and the Modern World",
+         "Chapter 6: Novels, Society, and History"
+     ],
+     "Geography": [
+         "Chapter 1: Resources and Development",
+         "Chapter 2: Forest and Wildlife Resources",
+         "Chapter 3: Water Resources",
+         "Chapter 4: Agriculture",
+         "Chapter 5: Minerals and Energy Resources",
+         "Chapter 6: Manufacturing Industries",
+         "Chapter 7: Life Lines of National Economy"
+     ],
+     "Civics": [
+         "Chapter 1: Power Sharing",
+         "Chapter 2: Federalism",
+         "Chapter 3: Political Parties",
+         "Chapter 4: Democratic Rights"
+     ],
+     "Economics": [
+         "Chapter 1: Development",
+         "Chapter 2: Sectors of the Indian Economy",
+         "Chapter 3: Money and Credit",
+         "Chapter 4: Globalization and the Indian Economy",
+         "Chapter 5: Consumer Rights"
+     ]
+ }
+
+ class_11_subjects = {
+     "Biology": [
+         "Chapter 1: Diversity in Living World",
+         "Chapter 2: Structural Organisation in Animals and Plants",
+         "Chapter 3: Cell Structure and Function",
+         "Chapter 4: Plant Physiology",
+         "Chapter 5: Human Physiology",
+         "Chapter 6: Reproduction",
+         "Chapter 7: Genetics and Evolution",
+         "Chapter 8: Biology and Human Welfare",
+         "Chapter 9: Biotechnology and its Applications",
+         "Chapter 10: Ecology and Environment"
+     ],
+     "English": [
+         "Hornbill - Chapter 1: The Portrait of a Lady",
+         "Hornbill - Chapter 2: We're Not Afraid to Die... if We Can All Be Together",
+         "Hornbill - Chapter 3: Discovering Tut: The Saga Continues",
+         "Hornbill - Chapter 4: Landscape of the Soul",
+         "Hornbill - Chapter 5: The Ailing Planet: The Green Movement's Role",
+         "Hornbill - Chapter 6: The Browning Version",
+         "Hornbill - Chapter 7: The Adventure",
+         "Hornbill - Chapter 8: Silk Road",
+         "Snapshots - Chapter 1: The Summer of the Beautiful White Horse",
+         "Snapshots - Chapter 2: The Address",
+         "Snapshots - Chapter 3: Ranga's Marriage",
+         "Snapshots - Chapter 4: Albert Einstein at School",
+         "Snapshots - Chapter 5: Mother’s Day",
+         "Snapshots - Chapter 6: The Ghat of the Only World",
+         "Snapshots - Chapter 7: A House is Not a Home",
+         "Snapshots - Chapter 8: The Book That Saved the Earth"
+     ],
+     "Physics": [
+         "Chapter 1: Physical World",
+         "Chapter 2: Units and Measurements",
+         "Chapter 3: Motion in a Straight Line",
+         "Chapter 4: Motion in a Plane",
+         "Chapter 5: Laws of Motion",
+         "Chapter 6: Work, Energy, and Power",
+         "Chapter 7: System of Particles and Rotational Motion",
+         "Chapter 8: Gravitation",
+         "Chapter 9: Properties of Bulk Matter",
+         "Chapter 10: Thermodynamics",
+         "Chapter 11: Behaviour of Perfect Gas and Kinetic Theory",
+         "Chapter 12: Oscillations and Waves"
+     ],
+     "Chemistry": [
+         "Chapter 1: Some Basic Concepts of Chemistry",
+         "Chapter 2: Structure of Atom",
+         "Chapter 3: Classification of Elements and Periodicity in Properties",
+         "Chapter 4: Chemical Bonding and Molecular Structure",
+         "Chapter 5: States of Matter: Gases and Liquids",
+         "Chapter 6: Thermodynamics",
+         "Chapter 7: Equilibrium",
+         "Chapter 8: Redox Reactions",
+         "Chapter 9: Hydrogen",
+         "Chapter 10: s-Block Element (Alkali and Alkaline earth metals)",
+         "Chapter 11: Some p-Block Elements",
+         "Chapter 12: Organic Chemistry - Some Basic Principles and Techniques",
+         "Chapter 13: Hydrocarbons",
+         "Chapter 14: Environmental Chemistry"
+     ]
+ }
+
+ class_12_subjects = {
+     "Biology": [
+         "Chapter 1: Reproduction",
+         "Chapter 2: Genetics and Evolution",
+         "Chapter 3: Biology and Human Welfare",
+         "Chapter 4: Biotechnology and Its Applications",
+         "Chapter 5: Ecology and Environment",
+         "Chapter 6: Biotechnology: Principles and Processes",
+         "Chapter 7: Human Health and Disease",
+         "Chapter 8: Strategies for Enhancement in Food Production",
+         "Chapter 9: Microbes in Human Welfare",
+         "Chapter 10: Biodiversity and Conservation",
+         "Chapter 11: Biotechnology and Its Applications",
+         "Chapter 12: Organisms and Populations",
+         "Chapter 13: Ecosystem",
+         "Chapter 14: Environmental Issues"
+     ],
+     "English": [
+         "Flamingo - Chapter 1: The Last Lesson",
+         "Flamingo - Chapter 2: Lost Spring: Stories of Stolen Childhood",
+         "Flamingo - Chapter 3: Deep Water",
+         "Flamingo - Chapter 4: The Rattrap",
+         "Flamingo - Chapter 5: Indigo",
+         "Flamingo - Chapter 6: Going Places",
+         "Vistas - Chapter 1: The Third Level",
+         "Vistas - Chapter 2: The Tiger King",
+         "Vistas - Chapter 3: Journey to the End of the Earth",
+         "Vistas - Chapter 4: The Enemy",
+         "Vistas - Chapter 5: Should Wizard Hit Mommy?",
+         "Vistas - Chapter 6: On the Face of It",
+         "Vistas - Chapter 7: Evans Tries an O-Level",
+         "Vistas - Chapter 8: Memories of Childhood"
+     ],
+     "Physics": [
+         "Chapter 1: Electric Charges and Fields",
+         "Chapter 2: Electrostatic Potential and Capacitance",
+         "Chapter 3: Current Electricity",
+         "Chapter 4: Moving Charges and Magnetism",
+         "Chapter 5: Magnetism and Matter",
+         "Chapter 6: Electromagnetic Induction",
+         "Chapter 7: Alternating Currents",
+         "Chapter 8: Electromagnetic Waves",
+         "Chapter 9: Optics",
+         "Chapter 10: Wave Optics",
+         "Chapter 11: Dual Nature of Radiation and Matter",
+         "Chapter 12: Atoms",
+         "Chapter 13: Nuclei",
+         "Chapter 14: Semiconductor Electronics",
+         "Chapter 15: Communication Systems"
+     ],
+     "Chemistry": [
+         "Chapter 1: The Solid State",
+         "Chapter 2: Solutions",
+         "Chapter 3: Electrochemistry",
+         "Chapter 4: Chemical Kinetics",
+         "Chapter 5: Surface Chemistry",
+         "Chapter 6: General Principles and Processes of Isolation of Elements",
+         "Chapter 7: p-Block Elements",
+         "Chapter 8: d and f Block Elements",
+         "Chapter 9: Coordination Compounds",
+         "Chapter 10: Haloalkanes and Haloarenes",
+         "Chapter 11: Alcohols, Phenols, and Ethers",
+         "Chapter 12: Aldehydes, Ketones, and Carboxylic Acids",
+         "Chapter 13: Organic Compounds Containing Nitrogen",
+         "Chapter 14: Biomolecules",
+         "Chapter 15: Polymers",
+         "Chapter 16: Chemistry in Everyday Life"
+     ]
+ }
+
+
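These subject dictionaries feed the cascading selectboxes in `app.py`, where the chapter list is looked up with `.get(subject_option, [])` so an unknown subject degrades to an empty list instead of raising a `KeyError`. A compact sketch of that lookup flow with a trimmed-down dictionary (the two-entry dict here is illustrative, not the full table above):

```python
class_10_subjects = {
    "Science": ["Chapter 1: Chemical Reactions and Equations"],
    "Economics": ["Chapter 1: Development"],
}

def chapter_options(subjects: dict, subject: str) -> list:
    # Mirror of the app's selectbox flow: a "Select" sentinel first, then the
    # chapters; an unknown subject falls back to an empty chapter list via .get().
    return ["Select"] + subjects.get(subject, [])

print(chapter_options(class_10_subjects, "Science"))
print(chapter_options(class_10_subjects, "History"))   # ['Select']
```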