John Graham Reynolds committed
Commit 30de022 · 1 Parent(s): ff3cc98

update the examples and formatting

Files changed (1)
  1. app.py +12 -8
app.py CHANGED
@@ -9,22 +9,26 @@ MSG_MAX_TURNS_EXCEEDED = f"Sorry! The CyberSolve LinAlg playground is limited to
  # MSG_CLIPPED_AT_MAX_OUT_TOKENS = "Reached maximum output tokens for DBRX Playground"

  EXAMPLE_PROMPTS = [
- "How is a data lake used at Vanderbilt University Medical Center?",
- "In a table, what are some of the greatest hurdles to healthcare in the United States?",
- "What does EDW stand for in the context of Vanderbilt University Medical Center?",
- "Code a sql statement that can query a database named 'VUMC'.",
- "Write a short story about a country concert in Nashville, Tennessee.",
- "Tell me about maximum out-of-pocket costs in healthcare.",
+ "Solve 24 = 1601c - 1605c for c.",
+ "Solve 657 = -220*t + 1086*t + 22307 for t.",
+ "Solve -11*y - 263*y + 3162 = -88*y for y.",
+ "Solve 0 = -11*b - 4148 + 4225 for b.",
+ "Solve 65*l - 361 + 881 = 0 for l.",
+ "Solve 49*l + 45*l - 125 - 63 = 0 for l.",
  ]

  TITLE = "CyberSolve LinAlg 1.2"
  DESCRIPTION= """Welcome to the CyberSolve LinAlg 1.2 demo! \n

- **Overview and Usage**: This 🤗 Space is designed to demo the abilities of the **CyberSolve LinAlg 1.2** text-to-text language model. Specifically, the **CyberSolve LinAlg 1.x** family of models
+ **Overview and Usage**: This 🤗 Space is designed to demo the abilities of the **CyberSolve LinAlg 1.2** text-to-text language model.
+
+ Specifically, the **CyberSolve LinAlg 1.x** family of models
  are downstream versions of the 783M parameter FLAN-T5 text-to-text transformer, fine-tuned on the Google DeepMind Mathematics dataset for the purpose of solving linear equations of a single variable.
  To effectively query the model for its intended task, prompt the model to solve an arbitrary linear equation of a single variable with a query of the form: *"Solve 24 = 1601c - 1605c for c."*; the model
  will return its prediction in a simple format. The algebraic capabilities far exceed those of the base FLAN-T5 model. CyberSolve LinAlg 1.2 achieves a 90.7 percent exact match benchmark on the DeepMind Mathematics
- evaluation dataset of 10,000 unique linear equations; the FLAN-T5 base model scores 9.6 percent. On the left is a sidebar of **Examples** that can be clicked to query the model.
+ evaluation dataset of 10,000 unique linear equations; the FLAN-T5 base model scores 9.6 percent.
+
+ On the left is a sidebar of **Examples** that can be clicked to query the model.

  **Feedback**: Feedback is welcomed, encouraged, and invaluable! To give feedback in regards to one of the model's responses, click the **Give Feedback on Last Response** button just below
  the user input bar. This allows you to provide either positive or negative feedback in regards to the model's most recent response. A **Feedback Form** will appear above the model's title.
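
The updated description documents the prompt format the Space expects. For readers who want to try that format outside the Space's UI, below is a minimal sketch using the Hugging Face `transformers` text2text-generation pipeline; the checkpoint ID is a placeholder assumption, since this diff does not show which model app.py actually loads.

```python
# Minimal sketch: querying a CyberSolve-style FLAN-T5 fine-tune directly.
# Assumption: the model ID below is a placeholder; substitute the checkpoint
# that app.py actually loads (not shown in this diff).
from transformers import pipeline

solver = pipeline(
    "text2text-generation",                       # FLAN-T5 is a text-to-text (seq2seq) model
    model="your-namespace/CyberSolve-LinAlg-1.2",  # placeholder model ID
)

prompt = "Solve 24 = 1601c - 1605c for c."  # prompt format described above
result = solver(prompt, max_new_tokens=32)
print(result[0]["generated_text"])          # expected solution: c = -6
```

The prompt reuses the first entry of the new EXAMPLE_PROMPTS list; any of the other added prompts would be queried the same way.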