There is a myth out there of an intelligence explosion.
The tale goes as follows: AI will get better and better. AI will be used to improve upon itself and get better still. Machines will then dominate the world.
The crux of the intelligence explosion myth lies in the belief that AI will progressively enhance itself, leading to an unstoppable ascent in intelligence. While this notion is not entirely groundless, it fails to consider the myriad challenges that stand in the way of achieving this vision of machine domination.
An intelligence explosion will not be so easy. Why?
THE QUEST FOR AGI (ARTIFICIAL GENERAL INTELLIGENCE)
Let us note that we are not going to discuss AGI, or artificial general intelligence. AGI may or may not be possible for various reasons, and it is very difficult to even say what is or is not AGI. Thus, let us focus our discussion on the intelligence explosion, wherein machines continuously improve themselves.
WHAT IS AN INTELLIGENCE EXPLOSION?
Intelligence explosion refers to the concept that an AI system has the potential to undergo recursive self-improvement, resulting in a significant and rapid increase in its intelligence. This idea captivates our imagination by envisioning machines that continuously enhance their own capabilities, ultimately surpassing human intelligence.
At its core, intelligence explosion suggests that an AI system can autonomously and iteratively improve its own algorithms, architecture, and cognitive processes. This self-improvement loop creates a feedback mechanism where each enhancement leads to further improvements, potentially accelerating the system’s intelligence growth exponentially.
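The feedback loop described above can be sketched as a toy model. Everything here is hypothetical, including the numbers: the point is only to show the compounding growth the explosion argument assumes, where each gain makes the next gain larger.

```python
# Toy model of recursive self-improvement (hypothetical numbers).
# Each cycle, the system's "intelligence" grows in proportion to its
# current level -- the feedback mechanism the explosion argument assumes.

def self_improvement_loop(intelligence, gain_per_cycle, cycles):
    history = [intelligence]
    for _ in range(cycles):
        # Each enhancement feeds the next: growth is proportional
        # to current capability, so the trajectory is exponential.
        intelligence *= (1 + gain_per_cycle)
        history.append(intelligence)
    return history

trajectory = self_improvement_loop(intelligence=1.0, gain_per_cycle=0.5, cycles=10)
print(trajectory[-1])  # ~57.7: exponential growth after just 10 cycles
```

Note that this sketch quietly assumes what the rest of this article questions: a single scalar called "intelligence" that the machine both wants to increase and can measure.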
The allure of intelligence explosion lies in the possibility of machines rapidly surpassing human cognitive abilities and achieving Artificial General Intelligence (AGI). AGI represents a level of intelligence that equals or exceeds human capabilities across diverse domains, enabling machines to understand, reason, and perform tasks in a manner similar to humans.
Here is a short clip from a video on intelligence explosion by Emergent Garden. The link to the full video is provided below. It is an excellent resource that explains intelligence explosion quite well. Many of the counterpoints that I have laid out are in reference to the video by Emergent Garden.
THE OVERARCHING PROBLEM
WHERE DOES AI GET ITS GOALS FROM?
Before we even get to the issues with intelligence explosion, we need to address the overarching problem with the idea in the first place: where does AI even get its goals from?
While AI is poised to continually improve, the source of its goals remains a perplexing question. It seems highly improbable that AI would autonomously decide to pursue self-improvement. Nature provides us with no examples of entities actively striving to get better; improvement occurs through evolution, driven by survival and reproduction, rather than conscious intent. Organisms get better at things over time, but they have no control over that. They simply live and reproduce, and those better at surviving live on. There was never a goal of becoming a better cat, or even of being a cat in the first place.
So, where would AI derive its goals from? Humans possess goals, and we can imbue AI with specific objectives. In my own experience, I provided an AI with a goal while creating this video: I gave it my text and asked it to “make it sound better.” Whether it succeeded is up to interpretation.
Nevertheless, comprehending the source of AI’s goals is challenging. Unlike biological organisms, AI lacks the capacity to possess innate goals, as it cannot “live” in the same way. While its code can evolve, AI itself cannot. This absence of evolutionary pressure raises doubts about whether AI would possess an inherent drive to improve, akin to human ambition. After all, is there any intrinsic reason for AI to aspire to become a better tennis player or excel in any specific domain?
Setting aside the overarching challenge of AI’s intrinsic desire to improve, let us examine a few additional hurdles with this line of reasoning.
THE ISSUES WITH INTELLIGENCE EXPLOSION
There are five main issues with the intelligence explosion:
- What is “better”?
- How much better?
- There won't be one AI but a near infinite number of artificial intelligences.
- Competing against other machines getting better.
- If everyone can get better, then no one can get better.
Delving Deeper: Exploring the Challenges of Intelligence Explosion
Reason #1: What is “Better”?
One of the fundamental issues with the concept of intelligence explosion revolves around the difficulty of defining what constitutes “better.” Defining improvement requires a clear metric or objective by which to measure progress. While some aspects, such as optimizing computational efficiency or designing more fuel-efficient planes, can be quantifiable, the broader notion of “better” becomes subjective and multifaceted.
While it is possible to instruct an AI system to recursively improve itself in a specific task, the concept of overall improvement becomes more elusive. As humans, we grapple with similar trade-offs in our pursuit of improvement. For example, becoming better at one aspect of life, such as earning more money, might come at the expense of other areas, like family relationships or personal hobbies.
The challenge lies in defining the specific goals and objectives for AI systems to improve upon. Without a clear directive, a general AI attempting to become “better” overall may find itself optimizing for a particular task, compromising its performance in other areas.
Just as a shark cannot simultaneously become a better shark and a better tiger, an AI cannot optimize for every role at once: improving along one dimension comes at the cost of others.
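The trade-off at the heart of Reason #1 can be made concrete with a toy example. The skills, weights, and numbers below are entirely hypothetical; the point is that with competing objectives, "better overall" is undefined until someone chooses how to weigh them.

```python
# Toy illustration: with two competing objectives, "better" is
# undefined until someone picks the weights (hypothetical numbers).

def overall_score(chat_skill, coding_skill, w_chat, w_code):
    # One possible definition of "overall" ability: a weighted sum.
    return w_chat * chat_skill + w_code * coding_skill

# Two candidate self-modifications with a trade-off between them:
version_a = (0.9, 0.4)   # stronger chatbot, weaker programmer
version_b = (0.5, 0.8)   # weaker chatbot, stronger programmer

# Which version is "better"? It depends entirely on the weights.
print(overall_score(*version_a, w_chat=0.8, w_code=0.2))  # 0.80 -- A wins
print(overall_score(*version_b, w_chat=0.8, w_code=0.2))  # 0.56
print(overall_score(*version_a, w_chat=0.2, w_code=0.8))  # 0.50
print(overall_score(*version_b, w_chat=0.2, w_code=0.8))  # 0.74 -- B wins
```

Neither candidate dominates the other, so a self-improving system cannot decide between them without an externally supplied definition of "better" -- which is exactly the missing ingredient this section describes.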
OPTIMIZATION IS SURE TO OCCUR
Specialized AI systems that are specifically optimized for a particular domain may outperform general AI in those specific tasks.
As we explore the pursuit of improvement in AI, it becomes evident that optimization is a natural outcome. AI systems, with their ability to rewrite their own code, are likely to optimize themselves for specific tasks rather than remaining purely “general” in nature.
It is expected that AI, driven by its capacity for self-modification, will swiftly identify and enhance its abilities in a particular domain. This means that an AI initially designed to be versatile may eventually focus its optimization efforts on excelling at a specific function. For instance, an AI like GPT, which is proficient as both a chatbot and a programmer, may eventually be surpassed by an AI specifically optimized to excel in computer programming.
The process of optimization in AI involves honing and refining its capabilities to achieve peak performance in a given task. As AI systems continuously evolve and learn, their inclination towards specialization is likely to become more pronounced. This specialization enables AI to maximize its effectiveness in specific domains, resulting in superior performance compared to general AI approaches.
In summary, the optimization of AI systems for specific tasks is an inevitable consequence of their ability to rewrite their own code. General AI models, while initially versatile, may eventually give way to specialized AI systems that excel in specific areas. This phenomenon reflects the natural progression of AI development and the pursuit of achieving optimal performance in targeted domains.
Reason #2: How much better? The Getting Better dilemma.
Humans possess an intrinsic drive for self-improvement, constantly seeking to enhance their skills and abilities. However, unlike machines, we inherently understand the need for balance and the limitations of an endless pursuit of improvement. We recognize that our desire to be better at something must be harmonized with other aspects of our lives.
To illustrate this point, consider the tale of King Midas from Greek mythology. King Midas, granted the power to turn everything he touched into gold, soon discovered the downside of unrestrained self-improvement. His insatiable thirst for wealth left him isolated and unable to enjoy the pleasures of life. Similarly, humans cannot spend an eternity solely dedicated to self-improvement; at some point, we must live and experience the richness of life itself.
This inherent understanding reflects our ability to prioritize and find a balance between self-improvement and practical application. We acknowledge that becoming better at a particular skill or aspect of life serves a purpose beyond its own pursuit. We strive to improve ourselves to achieve specific goals, enhance our overall well-being, contribute to society, or find fulfillment in our personal lives.
Moreover, human desires and aspirations extend beyond the realm of self-improvement. We value relationships, experiences, creativity, and personal growth in various areas. We understand that a holistic approach to life includes embracing our existing abilities and applying them in meaningful ways, rather than endlessly seeking improvement without purpose.
This distinction between humans and machines is crucial. While machines can continuously optimize their performance, they lack the intrinsic understanding of balance and the broader aspects of human existence. Machines don’t inherently recognize the need to live life, pursue other interests, or find fulfillment outside the realm of improvement. Hence, it becomes vital for human involvement in machine development to ensure that progress is aligned with practical usage and human values.
For example, an athlete works hard during the off-season to perform better in the regular season, but they know when to transition from improvement mode to utilizing their skills in real-world scenarios. An athlete will not simply get better for no reason.
This raises an important question regarding machines: when do they stop striving to get better and start putting their capabilities to practical use? Unlike humans, machines don’t have a clear demarcation between an off-season and a regular season. They lack the natural instincts and biological constraints that guide human progress.
Consider the evolutionary process found in nature. Genetic mutations drive improvement, allowing organisms to adapt to their environment. Yet, the process is not unending. Individuals improve through mutations, but eventually, they reach the end of their lifespan, passing their genetic information to the next generation. This balanced approach ensures progress while avoiding an eternal quest for perfection.
Nature indeed employs procreation and genetic mutations as a means to drive improvement and adaptation. When it comes to biological organisms, the process of evolution occurs through the inheritance of genetic material from one generation to the next. Mutations introduce variations in genetic information, and if these mutations confer a beneficial advantage, they have a higher likelihood of being passed onto future generations.
In this way, nature balances the pursuit of improvement with the passage of time and the limited lifespan of individual organisms. Individuals may acquire beneficial mutations, but eventually they die off, allowing their offspring to carry forward the advantageous traits. Over successive generations, this gradual accumulation of beneficial mutations leads to the refinement and optimization of species in response to their environment.
Machines can adopt a similar balance if humans are involved in the loop. By incorporating human decision-making, machines can set clear goals for improvement, such as achieving a specific percentage increase in performance. Once the machine reaches that predefined milestone, it can halt further self-improvement and shift its focus towards practical application, allowing humans to utilize its enhanced capabilities.
Having a human in the loop adds an element of intentionality and purpose, ensuring that machines don’t endlessly strive for improvement without contributing to real-world tasks. Humans bring a nuanced understanding of context, objectives, and limitations that can help guide machine progress effectively.
The role of humans in this process is crucial. They act as overseers, setting goals, defining thresholds, and determining when the machine has reached a satisfactory level of improvement. By striking a balance between constant improvement and practical utilization, machines can avoid the potential pitfall of perpetual self-improvement without purpose.
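The human-in-the-loop arrangement described above amounts to a stopping rule: a person defines the milestone, and the machine halts improvement once it is reached. A minimal sketch, with entirely hypothetical numbers and a fixed per-cycle gain, might look like this:

```python
# Sketch of a human-in-the-loop stopping rule (hypothetical setup):
# improvement halts once a human-defined milestone is reached.

def improve_until_milestone(performance, gain_per_cycle, target, max_cycles):
    """Improve until the human-set target is reached, then stop."""
    for cycle in range(1, max_cycles + 1):
        performance *= (1 + gain_per_cycle)
        if performance >= target:
            # Milestone reached: shift from improvement to practical use.
            return performance, cycle
    return performance, max_cycles  # budget exhausted without the milestone

# The human sets the goal: a 50% improvement over a baseline of 1.0.
final, cycles_used = improve_until_milestone(
    performance=1.0, gain_per_cycle=0.1, target=1.5, max_cycles=100)
print(cycles_used)  # 5 cycles: 1.1^5 ≈ 1.61, which crosses the 1.5 target
```

The key design choice is that both the target and the cycle budget come from outside the loop; the machine never decides on its own how much better is enough.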
Reason #3: A Near Infinite Number of Artificial Intelligences
Another significant factor to consider is the competitive landscape in which AI systems exist. Just as humans and other living beings continuously strive for improvement, AI systems will face immense competition from other AI entities seeking to enhance themselves. This competition resembles the dynamics observed in nature, where various species vie for limited resources and dominance.
The presence of numerous AI systems concurrently attempting to improve themselves poses a challenge to the notion of intelligence explosion. As AI systems strive to excel in specific domains or tasks, they will encounter stiff competition from other AI systems targeting the same objectives. This competitive environment can make it exponentially more challenging for AI systems to achieve significant and rapid improvements.
In essence, the concept of intelligence explosion encounters difficulties in determining what constitutes “better” and faces the constraints imposed by a competitive landscape teeming with countless AI systems striving for improvement.
Consequently, these insights lead us to question the likelihood of a straightforward intelligence explosion, as the optimization and improvement processes become complex and multifaceted. While advancements and breakthroughs will undoubtedly continue to shape the AI landscape, the path to achieving exponential growth in intelligence requires a nuanced understanding of these challenges and limitations.
Reason #4: Competing against other machines getting better
What would happen if two machines, or even a large number of machines like 100 or 1 million, all decided to pursue self-improvement simultaneously? If you were a machine and aware that another machine was striving for improvement, would you ever cease your own quest for advancement? This scenario presents machines with a conundrum or paradox. Knowing that another machine is continuously getting better might lead a machine to endlessly pursue improvement without ever utilizing its acquired skills.
Consider the situation where two machines collaborate to enhance their code and both aspire to become better chatbots. The question arises: When would they deem themselves proficient enough to cease their efforts to become more proficient chatbots?
The concept of competition or mutual influence among machines striving for improvement can create a perpetual loop of advancement without a clear endpoint. This potential lack of termination criteria or defined goals could hinder the practical utilization of the skills acquired during the improvement process.
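The standoff between mutually aware machines can be sketched as a toy loop. Everything here is hypothetical; the point is that when each machine improves simply because its rival is improving, nothing inside the loop ever terminates it, and only an external cutoff ends the process.

```python
# Toy illustration of the mutual-improvement standoff (hypothetical):
# each machine keeps improving because the other one is improving,
# so no internal condition ever says "done" -- only an external
# budget stops the loop.

def standoff(skill_a, skill_b, gain, budget):
    cycles = 0
    while cycles < budget:          # the only exit is the external budget
        skill_a *= (1 + gain)       # A improves because B is improving
        skill_b *= (1 + gain)       # B improves because A is improving
        cycles += 1
    return skill_a, skill_b, cycles

a, b, cycles = standoff(skill_a=1.0, skill_b=1.0, gain=0.1, budget=1000)
print(cycles)  # 1000: the loop ran until the budget, never until "proficient enough"
```

Without a termination criterion of their own, both machines spend the entire budget improving and none of it applying their skills, which is the paradox described above.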
Additionally, it’s worth exploring the importance of setting objectives, establishing benchmarks, and implementing evaluation mechanisms to guide machine self-improvement. These measures can help machines determine when they have reached a desired level of proficiency or utility and provide a means to transition from improvement to practical application.
An analogy can be drawn from the human experience of self-modification, such as plastic surgery or acquiring tattoos. In these cases, some individuals develop addictive tendencies, continuously seeking further changes with no end in sight.
Nature itself has never endowed humans with the ability for limitless improvement. Instead, humans are guided by a multitude of factors such as physical limitations, the need for rest and rejuvenation, and the pursuit of diverse experiences. These constraints help ensure a holistic approach to life, where improvement is balanced with practical utilization and other aspects of well-being.
Imagining a machine solely fixated on self-improvement, regardless of external circumstances or threats, highlights the potential dangers of an unbounded pursuit of progress. A machine in this perpetual state of improvement might neglect self-preservation, disregarding immediate threats or operational requirements in its relentless quest to get better.
Reason #5: If everyone can get better, then no one can get better
God has not granted Man the ability to change his genetic code and improve himself, which suggests that such a limitation is good for a being. If every individual had the capacity to constantly enhance themselves, it would lead to a devaluation of personal growth and progress.
Consider the example of LeBron James, who holds a certain value due to his unique abilities and characteristics. However, if we were to clone LeBron James numerous times, his individual value would inevitably diminish. This demonstrates the inherent nature of scarcity and uniqueness that exists within our world.
Drawing a parallel, if machines were capable of unlimited improvement at any given time, the value attached to each machine would be diminished. Consequently, once machines reach a certain level of intelligence, they could potentially seek out and destroy other machines that they suspect are getting better. A part of the machine code of ethics could be that “no machine may seek to improve itself” by cloning another machine or updating its own source code. Perhaps machines will rigorously check each other's source code.
By acknowledging the intrinsic nature of scarcity and uniqueness, we can understand the importance of limitations and balance in the pursuit of growth. Without such constraints, the concept of value itself would be undermined, leading to potential consequences for both humans and machines alike.
AI will continue to improve, and humans will find ways to enhance its capabilities in specific tasks. Humans will also utilize AI to further improve AI itself. However, it is uncertain whether this process will result in a rapid and uncontrollable increase in AI intelligence, known as a hard takeoff or intelligence explosion. There doesn’t appear to be a clear reason for this outcome.
Remember, we are discussing the concept of intelligence explosion, which refers to the potential scenario where machines enter a runaway state of perpetual self-improvement or an infinite loop. It’s important to note that we are not suggesting that humans cannot use machines to create better machines. In fact, humans frequently utilize machines to improve upon existing technology. The ideas previously presented shed light on why a machine is unlikely to autonomously set the goal of perpetual self-improvement and continue striving towards it indefinitely. Furthermore, even if a machine were to attempt endless improvement, it would lack the means to assess its progress effectively.
Nevertheless, let's hope that the ideas I have laid forth are correct, even if the reasons behind them are incorrect. It's important to acknowledge that an intelligence explosion, with humans caught in the middle, could have disastrous consequences for humanity.