Update app.py
app.py CHANGED
@@ -145,6 +145,23 @@ To interact with the Custon GPT Green Data City design tool click the button bel
toggle the Explanation of Custom GPT "Create Green Data City" button.
"""

+query2 = """ ***Global Citizen***
+
+Elian and Dr. Maya Lior journeyed to the Cultural Center, a beacon of sustainability and technological integration.
+Equipped with cutting-edge environmental monitoring sensors, occupancy detectors, and smart lighting systems,
+the center is a hub for innovation in resource management and climate action.
+There, they were greeted by Mohammad, a dedicated environmental scientist who, despite the language barrier,
+shared their passion for creating a sustainable future. Utilizing the Cohere translator, they engaged in a profound dialogue,
+seamlessly bridging the gap between languages. Their conversation, rich with ideas and insights on global citizenship and
+collaborative efforts to tackle climate change and resource scarcity, underscored the imperative of unity and innovation in
+facing the challenges of our time. This meeting, a melting pot of cultures and disciplines, symbolized the global
+commitment required to sustain our planet.
+
+As Elian uses the Cohere translator, he wonders how best to use it efficiently. He studies a Custom GPT called Conversation Analyzer.
+It translates a small portion of the message you're sending so you can be comfortable that the essence of what you are saying is being
+sent, and it aids in learning the language. Its mantra is that language is not taught but caught. To try out the Custom GPT Conversation
+Analyzer, click the button below. Additionally, to see how it was built, toggle the Explanation of Custom GPT "Conversation Analyzer" button.
+"""

# Function to draw the grid with optional highlighting

@@ -320,13 +337,13 @@ with tab2:

    # Displaying the image in the left column
    with col1:
-       image = Image.open('./data/
-       st.image(image, caption='
+       image = Image.open('./data/global_image.jpg')
+       st.image(image, caption='Cultural Center Cohere Translator')

    # Displaying the text above on the right
    with col2:

-       st.markdown(
+       st.markdown(query2)

    # Displaying the audio player below the text
    voice_option = st.selectbox(
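The hunk above shows only the lines that changed inside the two-column layout, and the removed lines are truncated in this view. Below is a minimal, self-contained sketch of the pattern the tab appears to follow; the `st.columns(2)` call and the text assigned to `query2` are assumptions (they sit outside the hunk), while the image path and caption come from the added lines.

```python
import streamlit as st
from PIL import Image

# Assumed: query2 holds the "Global Citizen" narrative added at the top of the file (abridged here).
query2 = "***Global Citizen*** ..."

# Assumed: the tab lays content out in two columns created earlier in the file.
col1, col2 = st.columns(2)

# Image on the left (path and caption taken from the added lines in the hunk above).
with col1:
    image = Image.open('./data/global_image.jpg')
    st.image(image, caption='Cultural Center Cohere Translator')

# Narrative text on the right.
with col2:
    st.markdown(query2)
```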
@@ -336,12 +353,12 @@ with tab2:


    if st.button('Convert to Speech'):
-       if
+       if query2:
            try:
                response = oai_client.audio.speech.create(
                    model="tts-1",
                    voice=voice_option,
-                   input=
+                   input=query2,
                )

                # Stream or save the response as needed
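Because the removed lines are cut off in this view, the complete "Convert to Speech" flow is easier to follow as one piece. The sketch below is a guess at how the surrounding code fits together: `oai_client` is assumed to be an `openai.OpenAI` client created elsewhere in app.py, the voice list is the standard set accepted by the `tts-1` model, and `speech.mp3` stands in for the app's real `audio_file_path`, which is not shown in the diff.

```python
import streamlit as st
from openai import OpenAI

oai_client = OpenAI()  # assumes OPENAI_API_KEY is available in the environment

query2 = "***Global Citizen*** ..."  # abridged narrative text added in this commit

# Voice picker feeding the TTS request (the app's actual label and options are truncated in the diff).
voice_option = st.selectbox('Voice', ['alloy', 'echo', 'fable', 'onyx', 'nova', 'shimmer'])

if st.button('Convert to Speech'):
    if query2:
        try:
            # Request speech synthesis for the narrative text.
            response = oai_client.audio.speech.create(
                model="tts-1",
                voice=voice_option,
                input=query2,
            )
            # Save the binary response locally, then play it back in the app.
            audio_file_path = "speech.mp3"  # illustrative; the real path is outside this hunk
            response.stream_to_file(audio_file_path)
            st.audio(audio_file_path, format='audio/mp3')
            st.success("Conversion successful!")
        except Exception as e:
            st.error(f"An error occurred: {e}")
    else:
        st.error("Please enter some text to convert.")
```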
@@ -353,76 +370,34 @@ with tab2:
                st.audio(audio_file_path, format='audio/mp3')
                st.success("Conversion successful!")

-
            except Exception as e:
                st.error(f"An error occurred: {e}")
        else:
            st.error("Please enter some text to convert.")


-
-
-
-
    st.header("Custom GPT Engineering Tools")
-   st.link_button("
+   st.link_button("Conversation Analyzer", "https://chat.openai.com/g/g-XARuyBgpL-conversation-analyzer")

-   if st.button('Show/Hide Explanation of "
+   if st.button('Show/Hide Explanation of "Conversation Analyzer"'):
        # Toggle visibility
        st.session_state.show_instructions = not st.session_state.get('show_instructions', False)

    # Check if the instructions should be shown
    if st.session_state.get('show_instructions', False):
        st.write("""
-
-
-
-
-
-
-
-
-
-
-
-
-
-       **Step 4:** Roads:
-       Detail the roads' start and end coordinates, color, and sensors installed.
-       Ensure roads connect significant areas of the city, providing access to all buildings. Equip roads with sensors for traffic flow, smart streetlights, and pollution monitoring. MAKE SURE ALL BUILDINGS HAVE ACCESS TO A ROAD.
-
-       This test scenario would evaluate the model's ability to creatively assemble a smart city plan with diverse infrastructure and technology implementations, reflecting real-world urban planning challenges and the integration of smart technologies for sustainable and efficient city management.
-
-       Example:
-       {
-           "city": "City Name",
-           "population": "Population Size",
-           "size": {
-               "rows": "Number of Rows",
-               "columns": "Number of Columns"
-           },
-           "buildings": [
-               {
-                   "coords": ["X", "Y"],
-                   "type": "Building Type",
-                   "size": "Building Size",
-                   "color": "Building Color",
-                   "sensors": ["Sensor Types"]
-               }
-           ],
-           "roads": [
-               {
-                   "start": ["X Start", "Y Start"],
-                   "end": ["X End", "Y End"],
-                   "color": "Road Color",
-                   "sensors": ["Sensor Types"]
-               }
-           ]
-       }
-
-       **Step 5:** Finally create a Dalle image FOR EACH BUILDING in the JSON file depicting what a user will experience there in this green open data city including sensors. LABEL EACH IMAGE.
-
-
+       Upon clicking "Input Your Conversation", complete the following 9 steps:
+
+       1. Input Acquisition: Ask the user to input the text they would like analyzed.
+       2. Key Word Identification: Analyze the text and advise the user on the number of words they would need in order to ensure the purpose of the text is conveyed. This involves processing the text using natural language processing (NLP) techniques to detect words that are crucial to understanding the essence of the conversation. FIRST give the number of words needed and SECOND the words in a bulleted list.
+       3. Ask the user if they would like to use your number of words or reduce to a smaller optimized list designed to convey the most accurate amount of information possible given the reduced set.
+       4. Ask the user what language they would like to translate the input into.
+       5. For the newly optimized list of words, give the translated words FIRST and the original language of the input SECOND. Don't give the definition of the word.
+       6. Show the translated input and highlight the keywords by bolding them.
+       7. Give a distinct 100x100 image of each keyword. Try to put them in a single image so they can be cropped out when needed.
+       8. Allow the user to provide feedback on the analysis and the outputs, allowing for addition or reduction of words.
+       9. Give the final translation with highlighted words and provide an efficiency score: number of words chosen versus suggested words x 100.
+
        """)


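The `Show/Hide Explanation of "Conversation Analyzer"` button added above relies on a small session-state toggle so the explanation survives Streamlit's reruns. Isolated from the rest of the app, the pattern looks like the sketch below; the explanation text is abridged, and the default-hidden behaviour is the assumption implied by `.get(..., False)`.

```python
import streamlit as st

# Each click flips a boolean kept in session state; .get() defaults to False,
# so the explanation starts hidden on the first load.
if st.button('Show/Hide Explanation of "Conversation Analyzer"'):
    st.session_state.show_instructions = not st.session_state.get('show_instructions', False)

# Render the explanation only while the flag is set.
if st.session_state.get('show_instructions', False):
    st.write('Upon clicking "Input Your Conversation", complete the following 9 steps ...')  # abridged
```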
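Step 9 of the explanation above defines the efficiency score as the number of words the user ends up choosing versus the number the analyzer suggested, times 100. A tiny worked example of that arithmetic (the helper name and the counts are illustrative):

```python
# Efficiency score from step 9: (words chosen / words suggested) * 100.
def efficiency_score(words_chosen: int, words_suggested: int) -> float:
    return round(words_chosen / words_suggested * 100, 1)

# e.g. the analyzer suggests 12 key words and the user keeps 9 of them:
print(efficiency_score(9, 12))  # 75.0
```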