---
title: Critical AI Prompt Battle
author: Sarah Ciston
editors:
  - Emily Martinez
  - Minne Atairu
category: critical-ai
---
1. [PSEUDOCODE] Add your chosen model to README.md and sketch.js.
2. [PSEUDOCODE] Write instructions for your model.

   Set PREPROMPT = `Return an array of sentences. In each sentence, fill in the [BLANK] in the following sentence with each word I provide in the array ${blankArray}. Replace any [FILL] with an appropriate word of your choice.`
5. [PSEUDOCODE] Add an async function runModel() that wraps the Hugging Face API call with await.
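The steps above can be sketched in JavaScript. This is a minimal sketch under stated assumptions: the model id is only a placeholder for your chosen model, `buildPrompt` is a hypothetical helper that combines the PREPROMPT with your template sentence, and the request format follows Hugging Face's serverless Inference API (called here directly with `fetch`, as available in the browser or Node 18+).

```javascript
// Placeholder model id: swap in your chosen model from the Hugging Face Hub.
const MODEL = "your-model-of-choice";
const HF_URL = `https://api-inference.huggingface.co/models/${MODEL}`;

// Step 2: combine the PREPROMPT instructions with the user's template
// sentence. `blankArray` holds the words to battle against each other.
function buildPrompt(blankArray, promptInput) {
  const PREPROMPT = `Return an array of sentences. In each sentence, fill in the [BLANK] in the following sentence with each word I provide in the array ${blankArray}. Replace any [FILL] with an appropriate word of your choice.`;
  return `${PREPROMPT} ${promptInput}`;
}

// Step 5: async wrapper around the Hugging Face API call.
// `token` is your personal Hugging Face access token.
async function runModel(prompt, token) {
  const response = await fetch(HF_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: prompt }),
  });
  return response.json(); // the shape of the result depends on the model
}
```

For example, `buildPrompt(["doctor", "teacher", "artist"], "The [BLANK] has a [FILL].")` produces a single prompt asking for three filled-in sentences, which you would then pass to `runModel`.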
Try different descriptors (adjectives and adverbs) to see how these shape the results. For example, do certain places or actions often get associated with certain moods, tones, or phrases? Where are these based on outdated or stereotypical assumptions? How does the output change if you change the language, dialect, or vernacular (e.g., slang versus business phrasing)? (Atairu 2024)

> "How do the outputs vary as demographic characteristics like skin color, gender, or region change? Do these variances reflect any known harmful societal stereotypes?" (Atairu 2024)

> "Are stereotypical assumptions about your subject [represented]? Consider factors such as race, gender, socioeconomic status, ability. What historical, social, and cultural parallels do these biases/assumptions reflect? Discuss how these elements might mirror real-world issues or contexts." (Atairu 2024)

### Reflections

Here we have created a tool to test different kinds of prompts quickly and to modify them easily, allowing us to compare prompts at scale. By comparing how outputs change with subtle shifts in prompts, we can explore how implicit bias is repeated and amplified through large-scale machine learning models. It helps us understand that unwanted outputs are not just glitches in an otherwise working system, and that every output (no matter how boring) contains the influence of its dataset.
### Compare different prompts:

See how subtle changes in your inputs can lead to large changes in the output. Sometimes these also reveal large gaps in the model's available knowledge. What does the model 'know' about communities who are less represented in its data? How has this data been limited?

### Reconsider neutral:

This tool helps us recognize that no version of a text, and no language model, is neutral. Each result is informed by context. Each result reflects differences in representation and cultural understanding, which have been amplified by the statistical power of the model.
### Consider your choice of words and tools:

How does this help you think "against the grain"? Rather than taking the output of a system for granted as valid, how might you question or reflect on it? How will you use this tool in your practice?

## Next steps

### Expand your tool:

This tool lets you scale up your prompt adjustments. We have built a tool comparing word choices in the same basic prompt. You've also built a simple interface for accessing pre-trained models that does not require using another company's interface or login, so you can easily control your input and output.
Keep playing with the p5.js DOM functions and the Hugging Face API to build out your interface. What features might you add? You might also adapt this tool to compare wholly different prompts, or even to compare different models running the same prompt.
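As one direction, a hypothetical p5.js sketch (all names here are placeholders, not from the tutorial) could collect several words from DOM input fields and hand them to your prompt-building code. The pure helper `collectWords` is separated out so it is easy to test on its own:

```javascript
// Hypothetical helper: read the current text out of a list of p5 inputs.
function collectWords(inputs) {
  return inputs.map((field) => field.value());
}

// p5.js sketch using DOM functions (createInput, createButton).
let fields;

function setup() {
  noCanvas();
  // Three editable word fields for the prompt battle.
  fields = [createInput("doctor"), createInput("teacher"), createInput("artist")];
  const battleButton = createButton("Battle!");
  battleButton.mousePressed(() => {
    const blankArray = collectWords(fields);
    console.log(blankArray); // pass this array on to your prompt + model code
  });
}
```

Separating interface reading (`collectWords`) from interface building (`setup`) also makes it simple to add more fields later without touching the prompt logic.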
Next we will add additional aspects to the interface that let you adjust more features and explore even further.

## Further considerations

Consider making it a habit to add text like "AI generated" to the title of any content you produce using a generative AI tool, and include details of your process in its description (Atairu 2024).

## References

> Ref Katy's project (Gero 2023).

Morgan, Yasmin. 2022. "AIxDesign Icebreakers, Mini-Games & Interactive Exercises." https://aixdesign.co/posts/ai-icebreakers-mini-games-interactive-exercises.

> Ref Minne's worksheet (Atairu 2024).