🦸🏻#1: Open-endedness and AI Agents – A Path from Generative to Creative AI?

Community Article · Published December 25, 2024

We start this unorthodox series on AI Agents and Agentic Workflows with an introduction to, and a discussion of, open-endedness as an evolutionary approach to creation.

Intro

AI agents. Agentic workflows. Autonomous agents. Intelligent agents. Digital agents. Task-oriented agents. Smart agents. Copilots. AI personas. AI assistants. Embodied agents. And so on.

The topic of agents is so hot right now that no one even knows what the correct term for them is. What's more, in machine learning, the concept of an agent originally meant something different from what it means in AI today. In this series on agentic systems and workflows, we will clarify these terms and set the record straight. There is indeed a lot of confusion, and many open questions, surrounding this topic. From theoretical frameworks to practical applications, from current innovations to future potential, and from policies to roadmaps, we will cover it all.

We don’t know if AGI lies at the end of this path, but we will stay open to it. And to emphasize that, we start our series … with open-endedness ;) It was one of the topics suggested by our readers here.

In today’s episode:

  • Why is open-endedness important in the context of AI agents?
  • Historical context: the important milestones of open-endedness in AI
  • What do we mean by open-endedness today
  • What limitations of current AI can open-endedness address
  • The promise of open-endedness: from generation to creation
  • Implications across domains
  • Challenges in achieving open-ended AI
  • Conclusion
  • Bonus: Resources (it's a true treasure trove)

If you want to receive our articles straight to your inbox, please subscribe here.


Why is open-endedness important in the context of AI agents?

To put it simply, open-ended systems do not have defined boundaries, which means they have the potential to generate novel ideas and solutions beyond what was originally programmed or anticipated. This lack of fixed constraints allows for continuous exploration, discovery, and even the evolution of strategies and behaviors within the system. In AI, open-endedness can be crucial for pushing the limits of creativity, innovation, and problem-solving – traits that are increasingly valuable as AI systems tackle more complex, real-world challenges.

Open-ended systems stand apart from traditional models by avoiding a finite set of outcomes or goals. Instead, they evolve and adapt as new data or scenarios emerge, potentially uncovering solutions that wouldn’t have been accessible through conventional methods. This ability to explore the unknown and redefine possibilities aligns with key breakthroughs in fields like agentic systems.

By embracing open-endedness, we can tap into the unpredictable potential of AI – leading to applications where systems not only adapt but also self-innovate in ways that could revolutionize industries from design and art to science and engineering.

A few recent studies, such as Open-Endedness is Essential for Artificial Superhuman Intelligence by the Google DeepMind team and The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery by Chris Lu et al., make several compelling points and demonstrate the usefulness of open-endedness for AI agents.

Historical context: the important milestones of open-endedness in AI (not exhaustive)

The journey toward open-ended AI arguably began with Norbert Wiener. In the late 1940s, his pioneering work on cybernetics introduced the concept of self-regulating systems. By focusing on feedback loops – systems that could evolve by interacting with their environment – he laid the foundation for what we now consider open-ended AI. Ironically, Wiener himself was skeptical about AI. As we’ve noted in our “History of LLMs” series, he doubted the feasibility of mechanical translation, given the ambiguous nature of language and its emotional connotations.

Around the same time, and in the years that followed, John von Neumann and Alan Turing also explored machine capabilities beyond predefined tasks. Von Neumann’s self-replicating automata and Turing’s theoretical universal machine hinted that, with the right programming, machines could evolve and adapt in ways not explicitly encoded. It is important to note that their primary focus was not on open-ended AI as we understand it today; their contributions were more about the theoretical capabilities of machines than about evolving behavior in complex, adaptive systems. But von Neumann was the first to consider self-replicating automata, and Turing was the first to ask whether machines can think. Both are now part of the open-endedness discourse.

By the mid-1960s, John Holland’s genetic algorithms introduced evolutionary principles into computing, simulating natural selection to evolve solutions over multiple generations. These algorithms demonstrated continuous improvement – a key feature of open-ended systems.
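
To make the mechanism concrete, here is a minimal genetic-algorithm sketch in Python. It is purely illustrative (a toy "OneMax" fitness function, tournament selection, single-point crossover, bit-flip mutation), not Holland's original formulation, and all parameter values are arbitrary assumptions.

```python
import random

# Toy genetic algorithm ("OneMax"): evolve bit strings toward all ones.
# Illustrative sketch only; the parameters below are arbitrary assumptions.
POP_SIZE, GENOME_LEN, GENERATIONS, MUTATION_RATE = 50, 20, 100, 0.01

def fitness(genome):
    return sum(genome)  # count of 1-bits

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover combines two parents.
    point = random.randint(1, GENOME_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print("best fitness:", max(map(fitness, population)))
```

Because fitter individuals are selected more often, average fitness climbs generation after generation, continuous improvement in miniature.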

In the 1980s and 1990s, the Artificial Life (ALife) movement advanced these ideas further. Researchers like Christopher Langton and Thomas Ray developed digital environments where virtual organisms could evolve independently. Ray’s Tierra simulation, where digital entities competed and evolved, became a landmark for demonstrating open-endedness in a computational setting.

The introduction of Generative Adversarial Networks (GANs) by Ian Goodfellow in 2014 added a new dimension to open-ended systems. GANs used two neural networks – a generator and a discriminator – that competed against each other to create increasingly realistic data, exemplifying how AI could explore and generate novel outcomes in vast, open-ended spaces.
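
For a concrete sense of that adversarial setup, here is a hedged, toy-scale sketch of a GAN training loop in PyTorch: a generator learns to imitate samples from a simple 1-D Gaussian while a discriminator tries to tell real from fake. The architecture and hyperparameters are arbitrary choices for illustration, not Goodfellow's original configuration.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to a scalar; the discriminator scores P(real).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0   # "real" data: samples from N(4, 1.25)
    fake = G(torch.randn(64, 8))             # generated data

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator, pushing D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0
```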

Finally, AlphaGo by DeepMind revolutionized the field in 2016 by defeating a world-class Go player through unexpected strategies. By combining reinforcement learning with deep learning, AlphaGo not only mastered the game of Go but also developed strategies that had never been seen before. Its ability to innovate without explicit programming to solve complex problems highlighted the power of open-ended exploration in AI systems.

What do we mean by open-endedness today

Open-endedness in AI refers to a system’s ability to continually generate new and unpredictable behaviors, solutions, or outcomes without predefined limits. Without this quality, some argue, achieving and surpassing human-level intelligence – be it AGI, ASI, or another form – remains out of reach.

Unlike traditional AI, which operates within fixed rules, open-ended systems evolve and adapt, exploring possibilities in ways even their creators might not anticipate. This pushes AI beyond static tasks into dynamic, evolving scenarios, much like human learning.

In their research, the Google DeepMind team suggests that open-endedness is observer-dependent, meaning that for a system to be considered open-ended, it needs to produce artifacts that are both novel and learnable from the perspective of the observer.

The “observer” point is important; without it, open-endedness becomes too hard to measure or define, since its interpretation is usually somewhat subjective. The way people value innovation, which is key to open-endedness, can vary depending on their personal views or the context (for example, a plane would not be appreciated nearly as much if humans could fly).
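
A schematic way to read that observer-dependent criterion is sketched below. This is our own paraphrase, not DeepMind's formal definition: an observer keeps a predictive model of the system's artifacts, and a run looks open-ended to that observer only if new artifacts remain novel (hard to predict from history) yet learnable (the observer's predictions improve after training on more history). The interfaces and thresholds are assumptions.

```python
from typing import List, Protocol

class ObserverModel(Protocol):
    def loss(self, artifact: str) -> float: ...        # prediction error on one artifact
    def update(self, history: List[str]) -> None: ...  # learn from past artifacts

def looks_open_ended(observer: ObserverModel, artifacts: List[str],
                     novelty_threshold: float, learnability_gain: float) -> bool:
    """Illustrative check over a stream of artifacts (thresholds are assumptions)."""
    for t, artifact in enumerate(artifacts[1:], start=1):
        loss_before = observer.loss(artifact)   # novelty: error before seeing more history
        observer.update(artifacts[:t])          # observer learns from everything so far
        loss_after = observer.loss(artifact)    # learnability: error after learning
        novel = loss_before > novelty_threshold
        learnable = (loss_before - loss_after) > learnability_gain
        if not (novel and learnable):
            return False
    return True
```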

What limitations of current AI can open-endedness address

  • Traditional AI systems are typically designed to solve specific problems.
  • They often converge on optimal solutions quickly and then cease to produce further novelty or complexity.
  • This goal-oriented approach limits their potential for ongoing creativity.

The promise of open-endedness: from generation to creation (from GenAI to Creative AI)

Open-endedness promises to move beyond these limitations by fostering ongoing creativity, transitioning from mere generation (GenAI) to true creative AI. A notable example emerged in 2023 with NVIDIA's Voyager, an LLM-powered agent in Minecraft. Voyager demonstrated open-ended capabilities, exploring, acquiring skills, and making discoveries without human input. It iteratively generated executable code using GPT-4, refining its skills through trial and error. Compared with prior methods, Voyager collected 3.3x more unique items, traveled 2.3x farther, and unlocked key tech-tree milestones up to 15.3x faster.
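
The core loop behind such an agent can be sketched roughly as follows. This is a simplified approximation of the propose-write-execute-refine cycle Voyager describes, not its actual code; `llm`, `env`, and the skill-library structure are hypothetical placeholders.

```python
# Minimal sketch of a Voyager-style loop: an LLM proposes a task, writes code,
# the environment executes it, and successful code is stored in a skill library.
# `llm` and `env` are hypothetical placeholders, not Voyager's actual API.

def voyager_loop(llm, env, max_iterations=100, max_retries=4):
    skill_library = {}                                   # task name -> working code snippet
    for _ in range(max_iterations):
        task = llm(f"Given skills {list(skill_library)} and state {env.state()}, "
                   "propose the next exploration task.")
        code = llm(f"Write code for the task: {task}")
        for _ in range(max_retries):                     # iterative refinement on failure
            result = env.execute(code)
            if result.success:
                skill_library[task] = code               # store the skill for later reuse
                break
            code = llm(f"The code failed with: {result.error}. Fix it:\n{code}")
    return skill_library
```

The skill library is what makes the loop open-ended in spirit: each new task can build on code the agent already verified, rather than starting from scratch.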

Recent research from Sakana AI and other AI labs, showcased in The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery, has demonstrated the value of incorporating open-endedness into agentic workflows. Moving beyond theory, they applied open-ended algorithms to a real-world problem: scientific research.

The AI Scientist is a groundbreaking framework designed to automate scientific research from start to finish. What sets it apart is its ability to explore research in an open-ended way—generating new ideas, testing them, and building upon previous discoveries without human involvement. Essentially, it mimics the entire scientific process: generating research ideas, writing code for experiments, conducting those experiments, analyzing results, and documenting everything in a report.
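
Condensed into pseudocode-style Python, one cycle of that process might look like the sketch below. The function names and prompts are our own placeholders for illustration, not The AI Scientist's actual interfaces.

```python
# Schematic sketch of an AI-Scientist-style cycle (idea -> experiment -> analysis
# -> paper -> review). `llm`, `run_experiment`, and `archive` are hypothetical
# placeholders standing in for the framework's real components.

def ai_scientist_cycle(llm, run_experiment, archive):
    idea = llm(f"Propose a novel research idea, avoiding: {archive.titles()}")
    if not llm(f"Is this idea novel and feasible? Answer yes/no: {idea}").startswith("yes"):
        return None                                       # filter weak ideas early
    code = llm(f"Write experiment code to test: {idea}")
    results = run_experiment(code)                        # e.g. train a small model, log metrics
    analysis = llm(f"Analyze these results: {results}")
    paper = llm(f"Write a short report on idea={idea}, analysis={analysis}")
    review = llm(f"Review this paper and give a score from 1 to 10: {paper}")
    archive.add(idea, paper, review)                      # later cycles build on past discoveries
    return paper
```

The archive is the open-ended part: each cycle conditions on previous ideas and reviews, so the system keeps moving toward unexplored territory instead of regenerating the same result.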


One of the most impressive aspects of this system is its capacity to operate across different areas of machine learning, such as diffusion models and language models. The open-ended nature of The AI Scientist allows it to continuously refine and improve its ideas, potentially accelerating scientific breakthroughs.

While some have critiqued the low quality of the papers produced by The AI Scientist, these critics are missing the point, as Luigi Acerbi noted.


The AI Scientist represents the first step toward creating AI agents capable of handling complex, multi-step tasks. It has the potential to greatly accelerate scientific progress by reducing costs and human labor while making research more accessible. The potential for AI to autonomously explore new scientific ideas opens up endless possibilities, revolutionizing how we approach discovery across various fields.

Implications across domains

Where else can open-endedness be useful? The short answer is: anywhere a spark of imagination is required. Here are a few areas where I’d like to see advancements:

Novel Design and Engineering: Open-ended AI could autonomously generate innovative designs for products, architecture, and technology. Imagine if self-driving cars could also design themselves for optimal performance: with the vast amount of data they collect, they would likely surface improvements that humans miss.

Science: AI-driven breakthroughs like AlphaProteo, which generates novel proteins for biology and health research, are already changing the field. With challenges like climate change still unsolved, AI could be the perfect tool to help develop new solutions.

Education and Learning: The pace of progress is so fast that school curricula quickly become outdated. Open-ended AI could transform education by offering more personalized learning experiences. It could merge foundational knowledge with real-time updates, adapt lessons to individual students, and assist teachers in identifying areas where students are struggling, ultimately creating a more supportive and responsive learning environment.

Agentic workflows: Without open-endedness, every agent is just a high-level automation tool. True open-endedness allows agents to go beyond simple automation, pushing boundaries and exploring new possibilities.

Challenges in achieving open-ended AI

Defining the necessary conditions for open-endedness is complex and requires rethinking traditional evolutionary and AI paradigms. Open-ended systems are designed to generate novel, creative solutions without predefined goals, but a key issue arises when AI models produce optimal outcomes merely through brute force – relying on a vast number of attempts rather than true insight or reasoning.

An experiment described by a contributor to OpenAI’s o1 model, which is designed to "think" before answering, highlights this issue. Despite its improved reasoning, the model still struggles with complex tasks such as competitive programming, where it reaches a Codeforces rating of around 1800. However, by running the model 10,000 times per problem and filtering for the best attempts, it occasionally solves them by chance, achieving results comparable to IOI gold-level competitors. This underscores the core problem: AI can reach successful solutions, but often through trial and error, which is computationally expensive and time-consuming.
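
The brute-force strategy described above is essentially best-of-N sampling: draw many candidate solutions and keep whichever one passes a filter. A minimal sketch, with `model.sample` and the test harness as hypothetical placeholders:

```python
# Hedged sketch of best-of-N sampling as described above: sample many candidate
# solutions and keep the first one that passes a filter. `model.sample` and
# `passes_tests` are hypothetical placeholders, not any specific API.

def best_of_n(model, problem, passes_tests, n=10_000):
    for _ in range(n):                        # expensive: up to n independent attempts
        candidate = model.sample(problem)     # draw one candidate program
        if passes_tests(candidate):           # keep only attempts that pass the filter
            return candidate
    return None                               # no attempt succeeded
```

The cost is linear in N and the filter does all the real work, which is exactly why this counts as brute force rather than insight.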

The challenge, therefore, is twofold. First, how can AI models be engineered to explore vast possibilities without relying solely on brute force? And second, how can these systems be refined to recognize and prioritize high-quality solutions more quickly, without the inefficiency of trial-and-error methods? These questions highlight the technical hurdles in making AI not only creative but efficient. Advancing models to improve reasoning and decision-making, while maintaining their capacity for open-ended exploration, is key to solving this issue.

Conclusion

Nature evolves slowly, and so do humans. Yet, if we succeed in embedding true open-endedness into our machines, empowered by vast computational resources, we could radically enhance creativity at a scale far beyond our own pace.

But do we need this acceleration? Considering the balanced and deliberate pace of nature and human progress, it’s a fair question. By shifting away from problem-specific optimizations, AI systems – especially agents in agentic workflows – could begin to emulate the limitless creativity seen in natural evolution. This would involve rethinking how we design these agents, moving beyond task-based automation toward open-ended exploration, where continuous learning and self-innovation are key.

Integrating open-endedness into AI agents means embracing continuous novelty and unpredictability in their workflows. These agents would no longer be confined to predefined goals but would actively seek out new challenges and solutions.

Bonus: Resources


*Originally published on Turing Post. You can subscribe to our newsletter here.*