sdk: gradio
pinned: false
---

# Meaning Alignment Institute (MAI)

### Mission
MAI works to align artificial intelligence with human values and meaning. It focuses on developing Wise AI, which integrates practical wisdom to make decisions that benefit humanity, and on exploring post-AGI futures centered on human flourishing.

### Research Highlights
- **Moral Graph Elicitation (MGE):** A process for crowdsourcing moral wisdom and fine-tuning models accordingly.
- **Post-AGI Mechanisms:** New institutions and LLM-driven systems that prioritize deeper human values over surface-level preferences.

### Get Involved
Visit [MAI's website](https://www.meaningalignment.org/) to explore its work or contribute.