Artificial General Intelligence

Artificial intelligence (AI) is all the rage today. It permeates our lives in ways obvious to us and in ways not so obvious. Some obvious ways are in our search engines, game playing, Siri, Alexa, self-driving cars, ad selection, and speech recognition. Some not-so-obvious ways are finding new patterns in big-data research, solving complex mathematical equations, creating and defeating encryption methodologies, and designing next-generation weapons.

Yet AI remains artificial, not human. No AI computer has yet passed the Turing Test or the Blois Test. (See discussion blog of November 2, 2020.) AI far exceeds human intelligence in some cognitive tasks like calculation and game playing; it even exceeds humans in cognitive tasks requiring extensive human training, like interpreting certain X-rays and pathology slides. Generally, its achievements, while amazing, are still somewhat narrow. They are getting broader, particularly in hitherto exclusively human capabilities like face recognition. But we have not yet achieved what is called artificial general intelligence, or AGI.

AGI is defined as the point at which a computer’s intelligence is equal to and indistinguishable from human intelligence. It marks a point toward which AI is supposedly heading. There is considerable debate about how long it will take to reach AGI, and even more debate about whether that will be a good thing or an existential threat to humans. (See discussion blog of October 23, 2020.)

Here are my conclusions:

1. AGI will never be achieved.

2. The existential threat still exists.

AGI will never be achieved for two reasons. First, we will never agree on a working definition of AGI that can be measured unambiguously. Second, we don’t really want to achieve it and therefore won’t really try.

We cannot define AGI because we cannot define human intelligence. More precisely, our definitions will leave too much room for ambiguity in measurement. Intelligence is generally defined as the ability to reason, understand, and learn. AI computers already do this, depending on how one defines those terms. As discussed in these books, more precise definitions attempt to identify the unique characteristics of human intelligence, including the ability to create and communicate memes, reflective consciousness, fictive thinking and communication, common sense, and shared intentionality. Even if we could define all of these characteristics, it seems inconceivable that we will agree on a method of measuring their combined capabilities in any unambiguous manner. It is even more inconceivable that we will ever achieve all of those characteristics in a computer.

More importantly, we won’t try. Human intelligence includes many functions that don’t seem necessary to achieve the future goals of AI. The human brain has evolved over millions of years, and functions that seem unnecessary, even unwanted, in future AI systems are tightly integrated into our cognitive behaviors. Emotions, dreams, sleep, control of breathing and heart rate, monitoring and control of hormone levels, and many other physiological functions are inextricably built into all brain activities. Do we need an angry computer? Why would we waste time trying to include those functions in future AIs? Emulating human intelligence is not the correct goal. Human intelligence makes a lot of mistakes because of human biases. Our goal is to improve on human intelligence, not emulate it.

The more likely path to future AI is NOT to fully emulate the human brain, but rather to model the brain where that is helpful (as in the parallel processing of deep neural networks and in self-learning) while creating non-human, computer-based approaches to problem solving, learning, pattern recognition, and other useful functions that will assist humans. The end result will not be an AI that is indistinguishable from human intelligence by any test. Yet it will still be “smarter” in many obvious and measurable ways. The Turing Test and the Blois Test are irrelevant.

If that is true, why would AI still be an existential threat? The concern of people like Elon Musk, Stephen Hawking, Nick Bostrom, and many other eminent scientists is that there will come a time when self-learning, self-programming AI systems reach a “cross-over” point, rapidly exceed human intelligence, and become what is called artificial superintelligence, or ASI. The fear is that we will then lose control of an ASI in unpredictable ways. One possibility is that an ASI will treat humans much as we treat other species and eliminate us, intentionally or unintentionally, just as we eliminate thousands and even millions of other species today.

There is no reason that a future ASI must pass through an AGI stage to pose this threat. It could be uncontrollable by us and unfriendly to us without ever having passed the Turing Test or any other measure of human intelligence.
