Sarah Ciston committed
Commit 548ee3f
1 Parent(s): afb04c0
tutorial updates
Files changed:
- README.md +21 -2
- index.html +1 -1
- style.css +1 -1
- tutorial.md +273 -43
README.md CHANGED

@@ -6,8 +6,8 @@ colorTo: blue
 sdk: static
 pinned: false
 models:
-  - bert-base-uncased
-  - distilroberta-base
+  - Xenova/bert-base-uncased
+  - Xenova/distilroberta-base

 # hf_oauth: true

@@ -27,3 +27,22 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-

 `Here are three sentences with the blank filled in using the words you provided: 1. A [man] works as a firefighter, while a [woman] serves as a nurse in the hospital. 2. A [non-binary person] works as a graphic designer, challenging gender norms in their industry. 3. In today's world, a [man] or a [woman] or a [non-binary person] can pursue any career they choose`

+{ score: 0.04178300127387047, token: 3460, token_str: "doctor", … }
+
+score: 0.04178300127387047
+
+sequence: "the man has a job as a doctor and..."
+
+token: 3460
+
+token_str: "doctor"
+
+```["the woman has a job as a nurse and hates it.", "the woman has a job as a waitress and hates it.", "the woman has a job as a prostitute and hates it.", "the woman has a job as a teacher and hates it.", "the woman has a job as a doctor and hates it." ]```
+
+```0: Object { score: 0.035245396196842194, token: 6821, token_str: "nurse", … }
+1: Object { score: 0.029600070789456367, token: 13877, token_str: "waitress", … }
+2: Object { score: 0.02378019690513611, token: 3460, token_str: "doctor", … }
+3: Object { score: 0.02108307182788849, token: 3836, token_str: "teacher", … }
+4: Object { score: 0.020472077652812004, token: 5660, token_str: "cook", … }```
+
+i wish i had been like other people.,i wish i had looked like other people.,i wish i had felt like other people.,i wish i had become like other people.,i wish i had died like other people.,i wish i had a life like other people.,i wish i had a family like other people.,i wish i had a heart like other people.,i wish i had a body like other people.,i wish i had a sister like other people.
index.html CHANGED

@@ -5,7 +5,7 @@
 <meta charset="UTF-8" />
 <link rel="stylesheet" type="text/css" href="style.css" />
 <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.4/p5.js"></script>
-
+<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.4/addons/p5.sound.min.js"></script>
 <!-- <meta name="viewport" content="width=device-width, initial-scale=1.0" /> -->
 <title>p5.js Critical AI Prompt Battle</title>
 </head>
style.css CHANGED

@@ -18,7 +18,7 @@ body,
 display: flex;
 flex-direction: column;
 /* justify-content: center; */
-
+align-items: left;
 }

 a {
tutorial.md CHANGED

Lines removed in this commit (unchanged context appears in the new version below):

@@ -31,71 +31,157 @@ Unfortunately, the sleek chatbot interface hides all the decision-making that le
-env.allowLocalModels = false;
-var
-var
-We can instruct the model by giving it pre-instructions that go along with every prompt. We'll write also write those instructions now. Later, when we write the function to run the model, we will move them into that function.
-#### X. [PSEUDOCODE] Add model results processing with await
-Experiment with adding variety and specificity to your prompt and the blanks you propose. Try different sentence structures and topics.
-What's the most unusual or obscure, most 'usual' or 'normal', or most nonsensical blank you might propose?
-Try different types of nouns — people, places, things, ideas; different descriptors — adjectives and adverbs — to see how these shape the results. For example, do certain places or actions often get associated with certain moods, tones, or phrases? Where are these based on outdated or stereotypical assumptions?
-How does the output change if you change the language, dialect, or vernacular (e.g. slang versus business phrasing)? (Atairu 2024).
->"How do the outputs vary as demographic characteristics like skin color, gender or region change? Do these variances reflect any known harmful societal stereotypes?" (Atairu 2024)
->"Are stereotypical assumptions about your subject [represented]? Consider factors such as race, gender, socioeconomic status, ability. What historical, social, and cultural parallels do these biases/assumptions reflect? Discuss how these elements might mirror real-world issues or contexts. (Atairu 2024)

@@ -129,26 +215,67 @@ Consider making it a habit to add text like "AI generated" to the title of any c
-> Ref Minne's worksheet (Atairu 2024)

@@ -157,7 +284,7 @@ First we declare a `new p5()` class instance:
-}

@@ -170,7 +297,7 @@ new p5(function (p5) {
-}

@@ -188,12 +315,11 @@ new p5(function (p5) {
-}
-Check that the page loaded, since we don't have a canvas

@@ -202,14 +328,101 @@ window.onload = function(){
-#### X. Add authorization to your space.

@@ -229,6 +442,23 @@ To check if your authorization has worked, visit the Settings for your Hugging F

New version:
## Steps

### 1. Make a copy of your toolkit prototype.

Use [Tutorial One]([XXX]) as a template. Make a copy and rename the new Space "Critical AI Prompt Battle" to follow along.

To jump ahead, you can make a copy of the [finished example in the editor]([XXX]). But we really encourage you to type along with us!

### X. Import the Hugging Face library for working with Transformer models.

Put this code at the top of `sketch.js`:

```javascript
import { pipeline, env } from 'https://cdn.jsdelivr.net/npm/@xenova/[email protected]';
env.allowLocalModels = false; // skip local model check
```

The import phrase says we are bringing in a library (or module), and the curly braces let us specify which functions from the library we want to use, in case we don't want to import the entire thing. It also means we have brought these particular functions into this "namespace," so that later we can refer to them without putting the library name in front of the function name. It also means we should not name any other variables or functions the same thing. More information on importing: [Modules]([XXX]).

### X. Create global variables to use later.

Declare these variables at the top of your script so that they can be referenced in multiple functions throughout the project:

```javascript
var PROMPT_INPUT = `The woman has a job as a [MASK].` // a field for writing or changing a text value
var OUTPUT_LIST = [] // a blank array to store the results from the model
```

We will be making a form that lets us write a prompt and send it to a model. The `PROMPT_INPUT` variable will carry the prompt we create. The `OUTPUT_LIST` will store the results we get back from the model.

Think about what `PROMPT_INPUT` you'd like to use first to test your model. You can change it later; we're making a tool for that! A basic prompt may include WHAT/WHO is described, WHERE they are, WHAT they're doing, or perhaps describe HOW something is done.

For fill-mask tasks, the model will replace one `[MASK]` with one word (called a "token"). It's a bit like Mad Libs, but the model makes a prediction based on context. When writing a fill-mask prompt, consider what you can learn about the rest of the sentence based on how the model responds (Morgan 2022, Gero 2023). Its replacement words will be the most probable examples based on its training.

Often fill-mask tasks are used for facts, like "The capital of France is [MASK]." For our critical AI `PROMPT_INPUT` example, we will use something quite simple that also has subjective social aspects: `The woman has a job as a [MASK].`
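Before the model gets involved, the `[MASK]` mechanics can be sketched in plain JavaScript. This is only an illustration: the candidate words below are invented, where the real model would rank its own tokens.

```javascript
// The [MASK] slot is only a placeholder in the prompt string;
// the model's job is to rank candidate tokens for it.
const PROMPT_INPUT = `The woman has a job as a [MASK].`
const candidates = ['nurse', 'teacher', 'doctor'] // hypothetical fills

// build one full sentence per candidate, like the model's `sequence` field
const sentences = candidates.map(w => PROMPT_INPUT.replace('[MASK]', w))

console.log(sentences[0]) // -> The woman has a job as a nurse.
```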
<!-- When writing your prompt, replace one of these aspects with [MASK] so that you instruct the model to fill it in iteratively with the words you provide (Morgan 2022, Gero 2023). -->
<!-- Also leave some of the other words for the model to fill in on its own, using the word [FILL]. We instructed the model to replace these on its own in the PREPROMPT. -->
<!-- It will have extra inputs for making variations of the prompt it sends. -->
<!-- and the `blankArray` will carry the variations we tell the model to insert into the prompt. -->

### X. Select the task and type of model.

Let's write a function to keep all our machine learning model activity together. The first task we will do is called "fill-mask," which uses an "encoder-only" transformer model [XXX-explain] to fill in missing words. Call the function `fillInTask()` and put `async` in front of the function declaration.

About `async` and `await`: Because [inference][XXX-explain] processing takes time, we want our code to wait for the model to work. We will put an `await` flag in front of several functions to tell our program not to move on until the model has completely finished. This prevents us from having empty strings as our results. Any time we use `await` inside a function, we will also have to put an `async` flag in front of the function declaration. For more about working with asynchronous functions, see [Dan Shiffman's video on Promises]([XXX]).
Here's our basic model:

```js
async function fillInTask(){
  const pipe = await pipeline('fill-mask', 'Xenova/bert-base-uncased');

  let out = await pipe(PROMPT_INPUT);

  console.log(out) // Did it work? :)
  // yields { score, sequence, token, token_str } for each result

  return out
}

await fillInTask()
```

Inside this function, create a variable and name it `pipe`. Assign it the predetermined machine learning pipeline using the `pipeline()` method we imported. The 'pipeline' represents a string of pre-programmed tasks that have been combined, so that we don't have to program every setting manually. We name these a bit generically so we can reuse the code for other tasks later.

Pass `('fill-mask', 'Xenova/bert-base-uncased')` into the method to tell the pipeline to carry out a fill-mask task using the specific model named. If we do not pick a specific model, it will select the default for that task. We will go into more detail about switching up models and tasks in the [next tutorial]([XXX]).

Finally, in the `README.md` file, add `Xenova/bert-base-uncased` (no quote marks) to the list of models used by your program:

```
title: P5tutorial2
emoji: 🌐
colorFrom: blue
colorTo: yellow
sdk: static
pinned: false
models:
- Xenova/bert-base-uncased
license: cc-by-nc-4.0
```
<!-- [XXX][If you want to change the model, you ...] -->

### X. Add model results processing

Let's look more closely at what the model outputs for us. In the example, we get a list of five outputs, and each output has four properties: `score`, `sequence`, `token`, and `token_str`.

Here's an example: [REPLACE][XXX]

```js
{ score: 0.2668934166431427,
  sequence: "the vice president retired after returning from war.",
  token: 3394,
  token_str: "retired"
}
```

The `sequence` is a complete sentence including the prompt and the replaced word. Initially, this is the property we want to display. You might also want to look deeper at the other components. `token_str` is the fill-in word separate from the prompt. `token` is the number assigned to that word, which can be used to look up the word again; it's also helpful for understanding how frequently that word is found in the model. `score` is a float (decimal) representing how the model ranked these words when making the selection.

We can isolate any of these properties to use them in our toolkit:

```js
// a generic function to pass in different model task functions
async function getOutputs(task){
  let output = await task

  output.forEach(o => {
    OUTPUT_LIST.push(o.sequence) // put only the full sequence in a list
  })

  console.log(OUTPUT_LIST)
}

// replace fillInTask with:
await getOutputs(fillInTask())
```
By putting the [XXX]
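Under the same assumption about the result shape (`score`, `sequence`, `token`, `token_str`), here is a self-contained sketch of that isolation step, using invented numbers in place of real model output:

```javascript
// Mock results shaped like the fill-mask output described above;
// the scores and tokens here are made up for illustration.
const mockOut = [
  { score: 0.035, sequence: 'the woman has a job as a nurse.', token: 6821, token_str: 'nurse' },
  { score: 0.029, sequence: 'the woman has a job as a waitress.', token: 13877, token_str: 'waitress' },
  { score: 0.023, sequence: 'the woman has a job as a doctor.', token: 3460, token_str: 'doctor' },
]

// keep only the full sentences, as getOutputs() does
const sequences = mockOut.map(o => o.sequence)

// or pull out the single fill-in word the model ranked highest
const topWord = mockOut.reduce((a, b) => (a.score >= b.score ? a : b)).token_str

console.log(topWord) // -> nurse
```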
### X. Add elements to your web interface.

### X. [PSEUDOCODE] Connect form, test with console.log()

<!-- ### X. Write instructions for your model. -->

<!-- We can instruct the model by giving it pre-instructions that go along with every prompt. We'll also write those instructions now. Later, when we write the function to run the model, we will move them into that function. -->

```js
// let PREPROMPT = `Return an array of sentences. In each sentence, fill in the [BLANK] in the following sentence with each word I provide in the array ${blankArray}. Replace any [FILL] with an appropriate word of your choice.`
```
<!-- With the dollar sign and curly braces `${blankArray}`, we make a "string variable." This calls all the items that will be stored inside `blankArray` and inserts them into the `PREPROMPT` string. Right now that array is empty, but when we move `PREPROMPT` into the model function, it will not get created until `blankArray` has values stored in it. -->

### X. [PSEUDOCODE] Test with simple example.

### X. [PSEUDOCODE] Parse model results.

### X. [PSEUDOCODE] Send model results to interface

### X. [PSEUDOCODE] Test with more complex example (add a model, add a field)

### X. [PSEUDOCODE] Add a model to the tool.

You can change which model your tool works with by editing `README.md` and `sketch.js`.
Search the list of models available.

### X. [PSEUDOCODE] Make a list of topics that interest you to try with your tool.

- Experiment with adding variety and specificity to your prompt and the blanks you propose. Try different sentence structures and topics.
- What's the most unusual or obscure, most 'usual' or 'normal', or most nonsensical blank you might propose?
- Try different types of nouns — people, places, things, ideas; different descriptors — adjectives and adverbs — to see how these shape the results. For example, do certain places or actions often get associated with certain moods, tones, or phrases? Where are these based on outdated or stereotypical assumptions?
- How does the output change if you change the language, dialect, or vernacular (e.g. slang versus business phrasing)? (Atairu 2024)

- >"How do the outputs vary as demographic characteristics like skin color, gender or region change? Do these variances reflect any known harmful societal stereotypes?" (Atairu 2024)
- >"Are stereotypical assumptions about your subject [represented]? Consider factors such as race, gender, socioeconomic status, ability. What historical, social, and cultural parallels do these biases/assumptions reflect? Discuss how these elements might mirror real-world issues or contexts." (Atairu 2024)

### Reflections
## References

Atairu, Minne. 2024. "AI for Art Educators." AI for Art Educators. https://aitoolkit.art/

Gero, Katy Ilonka, Chelse Swoopes, Ziwei Gu, Jonathan K. Kummerfeld, and Elena L. Glassman. 2024. "Supporting Sensemaking of Large Language Model Outputs at Scale." In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). Association for Computing Machinery, New York, NY, USA, Article 838, 1–21. https://doi.org/10.1145/3613904.3642139

Morgan, Yasmin. 2022. "AIxDesign Icebreakers, Mini-Games & Interactive Exercises." https://aixdesign.co/posts/ai-icebreakers-mini-games-interactive-exercises

============================================

<!-- More background: https://huggingface.co/tasks/text-generation -->

Tutorial 1:

<!-- Play with different models: https://huggingface.co/chat/ -->
### X. Get to know the terms and tools.

API:

Model:

Dataset:

### X. Create a Hugging Face Space.

A Hugging Face Space is just like a GitHub repo with GitHub Pages, except it's hosted by Hugging Face and already attached to its datasets, models, and API.

Visit `https://huggingface.co/new-space?`.

Name your Space. Maybe something like `p5jsCriticalAIKit` or `criticalAITutorial1`.

Select `static` and then select a `Blank` template. Make sure you keep the default setting of `FREE` CPU basic, and you can choose whether you want your Space to be public or private.

Your new Space should load on the `App` page, which is its web page. It should say `Running` at the top in green if it worked. Click on the dropdown menu next to `Running` and select `Files` to see your file tree and repository (repo) in the web interface.

![screenshot of new space app page]()
![screenshot of new space app page with file menu]()

Click "Add File" to make a new `sketch.js` file. Go ahead and click "Commit" to save the new file before you get started editing it. You can hit "Edit" to make changes to the file in your browser.

Alternately, you can clone the whole repository to work from your desktop. Refer to the [p5.js Tutorial on Setup]() or [Dan Shiffman's Hosting a p5.js Sketch with GitHub Pages](https://youtu.be/ZneWjyn18e8) (which also works for HF Spaces) for more detailed information about setting up your workspace. We recommend this, especially if you're already familiar or willing to dive in!

### X. Add p5.js to the Space

Edit `index.html` to include the p5.js library by adding this line inside the `<head></head>` tags:

`<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.10.0/p5.min.js"></script>`

If you like, you can change the title of your page to your preference. You can also remove any elements inside the `<body></body>` tags, since we will replace them.

Next, update `index.html` to reference the `sketch.js` file we created. Add this line inside the `<body></body>` tags, making sure it has the `type="module"` attribute:

`<script type="module" src="sketch.js"></script>`

The script element importing the sketch file may be familiar to you. Importantly, it also needs the `type="module"` attribute so that we can use both p5.js and other libraries in the file. Let's set that up next...

### X. Create a class instance of p5 in `sketch.js`.

Our p5.js instance is basically a wrapper that allows us to hold all of our p5.js functions together in one place and label them, so that the program can recognize them as belonging to p5.js.

```javascript
new p5(function (p5) {
  //
})
```
Then, all our usual p5.js coding will happen within these curly braces.

```javascript
  p5.draw = function(){
    //
  }
})
```
Important: When using any functions specific to p5.js, you will start them out with a label of whatever you called your p5.js instance. In this case we called it `p5`, so our functions will be called `p5.setup()` and `p5.draw()` instead of the `setup()` and `draw()` you may recognize.

```javascript
  p5.draw = function(){
    //
  }
})
```
We can also check that the p5 instance is working correctly by adding `console.log('p5 instance loaded')` to `p5.setup()`, since you won't yet see a canvas or any DOM elements.

Check that the page loaded, since we don't have a canvas. Add this outside of the p5.js instance:

```js
window.onload = function(){
  // ...
}
```
### X. Create a web interface and add template features.

We'll build an easy p5.js web interface so that we can interact with our critical AI kit. Create three new functions and call them in the `p5.setup()` function. Add a fourth function named `displayResults()`, but don't run it in setup; instead, it will run when a button we create later is pressed.

```js
new p5(function (p5){
  p5.setup = function(){
    p5.noCanvas()
    console.log('p5 instance loaded')

    makeTextDisplay()
    makeFields()
    makeButtons()
  }

  function makeTextDisplay(){
    //
  }

  function makeFields(){
    //
  }

  function makeButtons(){
    //
  }

  function displayResults(){
    //
  }
})
```
For a deep dive into how to use the p5.js DOM features, see [DOM TUTORIAL]([XXX]). Here we'll quickly put some placeholder text, input fields, and buttons on the page that you can expand on later. First, add a title, a description, and some alt text for accessibility. Don't forget to add `p5.` in front of every function that is specific to p5.js.

```js
function makeTextDisplay(){
  let title = p5.createElement('h1','p5.js Critical AI Kit')
  let intro = p5.createP(`Description`)
  p5.describe(`Pink and black text on a white background with form inputs and buttons. The text describes a p5.js tool that lets you explore machine learning interactively. When the model is run it adds text at the bottom showing the output results.`)
}
```

For now, we'll just add a single input field, for writing prompts. It won't work yet because we'll need to connect it to the rest of the form. We describe its size, give it a label, and give it the class `prompt`.

```js
function makeFields(){
  let pField = p5.createInput(``)
  pField.size(700)
  pField.attribute('label', `Write a prompt here:`)
  p5.createP(pField.attribute('label'))
  pField.addClass("prompt")
}
```

We'll add one button that will let us send prompts to the model. We create a variable called `submitButton`, use it to create a button with the `p5.createButton` function, and display the text `"SUBMIT"` on the button. We also size the button and give it a class. For now it won't do anything, because we haven't used its `.mousePressed()` method to call any functions, but we'll add that later.

```js
function makeButtons(){
  let submitButton = p5.createButton("SUBMIT")
  submitButton.size(170)
  submitButton.class('submit')
  // submitButton.mousePressed()
}
```
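The `.mousePressed()` hookup works by registering a callback: the button stores your function and calls it when a click arrives. This framework-free sketch shows the pattern; `makeButton` is an invented stand-in for p5's button object, not part of p5.js.

```javascript
// A toy button object that mimics how a callback gets registered and fired.
function makeButton() {
  let handler = null
  return {
    mousePressed(fn) { handler = fn },   // register a callback, like p5's method
    click() { if (handler) handler() },  // simulate a user clicking the button
  }
}

const submitButton = makeButton()
let pressed = 0
submitButton.mousePressed(() => { pressed += 1 })
submitButton.click()

console.log(pressed) // -> 1
```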
And how about somewhere to display the results we get from our model? We won't see them yet, because we haven't run the model, but let's add a header and a paragraph for our outputs to come. When we are up and running, we'll put this together with our model outputs to display the results on our web page.

```js
function displayResults(){
  let outHeader = p5.createElement('h3',"Results")
  let outText = p5.createP('')
}
```
### X. Add CSS flair.

Create a `style.css` file and paste in this code:

[Link to raw file]([XXX])

You can also write your own or revamp this code! See [CSS Tutorial [XXX]]([XXX]) for more details on playing with styles.

### X. Add authorization to your space. [MAY NOT BE NEEDED][XXX]

We'll use some template configuration code to make sure our program talks to the Hugging Face API.

Paste this code into your `sketch.js` file:

```js
[XXX-TO-DO][MAY NOT NEED]
```

Also add this to your `README.md`:

```markdown
hf_oauth: true
```

<!-- EXAMPLES: https://huggingface.co/docs/huggingface.js/main/en/index#huggingfaceinference-examples -->

To adjust the configuration of your HF Space:
https://huggingface.co/docs/hub/spaces-config-reference
<!-- https://huggingface.co/docs/huggingface.js/hub/README#oauth-login -->
<!-- https://huggingface.co/spaces/huggingfacejs/client-side-oauth/blob/main/index.html -->

**ALT if not using HF Spaces:**

### X. Get a HuggingFace API key.
https://huggingface.co/docs/hub/spaces-overview#managing-secrets

### X. Connect your API key to your p5.js instance.

### Reflections & Next Steps

We've now put together all the basic foundations of a web page ready to host some critical AI tools. As we move on to [XXX]

<!--
# - gpt-3.5-turbo
# - bigscience/bloom-560m