We've all become experts at clicking "I agree" without a second thought. In my latest blog post, I explore why these traditional consent models are increasingly problematic in the age of generative AI.
In it, I identify three fundamental challenges:
- The scope problem: how can you know what you're agreeing to when AI could use your data in ways nobody anticipated?
- The temporality problem: once an AI system has learned from your data, good luck making it "unlearn" it.
- The autonomy trap: the data you share today could shape systems that pigeonhole you tomorrow.
Individual users shouldn't bear all the responsibility while big tech holds all the cards. We need better approaches to level the playing field, from collective advocacy and stronger technological safeguards to establishing "data fiduciaries" with a legal duty to protect our digital interests.
From ancient medical ethics to modern AI challenges, the journey of consent represents one of humanity's most fascinating ethical evolutions. In my latest blog post, I explore how we've moved from medical paternalism to a new frontier where AI capabilities force us to rethink consent.
The "consent gap" in AI is real: while we can approve initial data use, AI systems can generate countless unforeseen applications of our personal information. It's like signing a blank check without knowing all possible amounts that could be filled in.
Should we reimagine consent for the AI age? Perhaps we need dynamic consent systems that evolve alongside AI capabilities, similar to how healthcare transformed from physician-centered authority to patient autonomy.
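Since the post floats dynamic consent only as an idea, here is a minimal sketch of what such a consent record could look like in practice. Everything in it, the `ConsentGrant`/`DynamicConsent` names, the fields, the expiry mechanics, is a hypothetical illustration of the concept, not anything proposed in the post itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical sketch: consent that is scoped to a purpose, time-bound,
# and revocable, instead of a one-time blanket "I agree".

@dataclass
class ConsentGrant:
    purpose: str            # the specific use being authorized, e.g. "model fine-tuning"
    granted_at: datetime
    expires_at: datetime    # consent lapses unless actively renewed
    revoked: bool = False

    def is_valid(self, now: datetime) -> bool:
        return not self.revoked and now < self.expires_at

@dataclass
class DynamicConsent:
    grants: list[ConsentGrant] = field(default_factory=list)

    def grant(self, purpose: str, duration_days: int) -> None:
        now = datetime.now()
        self.grants.append(
            ConsentGrant(purpose, now, now + timedelta(days=duration_days))
        )

    def revoke(self, purpose: str) -> None:
        for g in self.grants:
            if g.purpose == purpose:
                g.revoked = True

    def allows(self, purpose: str) -> bool:
        # Each new use of the data requires its own, still-valid grant.
        now = datetime.now()
        return any(g.purpose == purpose and g.is_valid(now) for g in self.grants)
```

The design choice doing the work here is that every new purpose needs its own still-valid grant, and grants expire unless renewed; that is what would let consent evolve alongside a system's capabilities rather than being given once and forever.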
Curious to hear your thoughts: how can we balance technological innovation with meaningful user sovereignty over digital identity?
💫 ...And we're live! 💫

Seasonal newsletter from ethicsy folks at Hugging Face, exploring the ethics of "AI Agents": https://huggingface.co/blog/ethics-soc-7

Our analyses found:
- There's a spectrum of "agent"-ness
- *Safety* is a key issue, leading to many other value-based concerns

Read for details & what to do next! With @evijit, @giadap, and @sasha
🤗👤💻 Speaking of AI agents...
...is easier with the right words ;)
My colleagues @meg, @evijit, @sasha, and @giadap just published a wonderful blog post outlining the main relevant notions with their signature blend of value-informed analysis and risk-benefit framing. Go have a read!
🇪🇺 Policy Thoughts on the EU AI Act Implementation 🇪🇺
There is a lot to like in the first draft of the EU GPAI Code of Practice, especially regarding its transparency requirements. The Systemic Risks section, on the other hand, is concerning both for smaller developers and for external stakeholders.
I wrote more on this topic ahead of the next draft. TL;DR: more attention to immediate large-scale risks and to evidence-backed collaborative solutions can help everyone, as long as developers disclose sufficient information about their design choices and deployment contexts.