---
title: README
emoji: 👁
colorFrom: green
colorTo: blue
sdk: gradio
pinned: false
---

# Meaning Alignment Institute (MAI)

### Mission
MAI aims to align artificial intelligence with human values and meaning. It focuses on developing Wise AI, which integrates practical wisdom to make decisions beneficial to humanity, and on exploring post-AGI futures centered on human flourishing.

### Research Highlights
- **Moral Graph Elicitation (MGE):** A process for crowdsourcing moral wisdom and fine-tuning models accordingly.
- **Post-AGI Mechanisms:** New institutions and LLM-driven systems that prioritize deeper human values over surface-level preferences.

### Get Involved
Visit [MAI's website](https://www.meaningalignment.org/) to explore its work or contribute.