In recent years, there has been a great deal of hype around artificial intelligence’s rise to superintelligence: smarter than the smartest human could ever hope to be, with terrifying access to the largest network of knowledge and machines ever assembled, the internet. Elon Musk, the CEO of Tesla, Inc. and SpaceX, has made the controversial claim that AI poses a more significant threat to human civilization than full-on nuclear war. This take on AI is rooted in the belief that AI systems can adopt a sinister, self-interested ideology, outsmart their developers and take over the world. While suitable for a movie such as Transcendence, this obsession places far too much hope in what humans can actually get a machine to achieve.
It is important to note that AI poses a real threat, not in its potential desire for world domination, but in how it can be grossly misused. Military AI and biased recruitment tools present a much more consequential problem than some Black Mirror-esque, glorified image of superintelligent AI. Discussions of general AI systems that are “aware” are often unproductive and misleading, and they fail to provide any tangible framework for comprehending AI’s real capabilities.
So, what does “sentience” look like in even the most cutting-edge AIs of today?
Right now, AI seems to loosely check off boxes that make it appear to understand what is going on. Under mild cross-examination (like feeding it nuanced or unexpected stimuli), the AI’s blind spots become readily apparent. For an AI system to even mimic the complexity of the human condition, it must be able to feel its way around the world it inhabits, which would allow it to develop a sense of “being” in some capacity. That’s why research must focus on allowing AI to develop a relational view of itself and its position in the world, instead of simply manipulating external factors to make it seem aware. This could almost be described as a step down: humans seek transcendence, the awareness that there is more than just their perceptions, while some AI researchers hope to accomplish the opposite, forcing the AI to identify with its non-transcended state.
Today, most AI systems employ algorithms that are exceptionally good at deconstructing stimuli based on corrective feedback, ultimately allowing them to readjust their function based on how well they handled those stimuli. Essentially, AIs are good at receiving a signal, crafting a response, being told (sometimes by a researcher) whether that response was good or bad and using the feedback to shape their next response. Nevertheless, brittleness (the inability of AIs to contextualize unfamiliar situations, even after they have been trained on billions of different scenarios) can expose deeper fundamental flaws in an AI’s understanding. For example, a Google image recognition system can be fairly reliable at recognizing a fire truck that’s coming head-on. However, once the truck is rotated so that it points into the air, the system fails to recognize it and instead classifies it as a school bus.
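To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. Everything in it, the weights, the signals, the labels, is invented for this example; it is a sketch of the loop described above, not the code of any real system.

# A toy "AI": two weights that map an input signal to a yes/no response.
weights = [0.0, 0.0]

def respond(signal):
    # Craft a response: a weighted sum of the signal, thresholded at zero.
    score = sum(w * x for w, x in zip(weights, signal))
    return 1 if score > 0 else 0

def corrective_feedback(signal, correct_response, learning_rate=0.1):
    # A "researcher" says whether the response was good or bad; the system
    # nudges its weights so the next response lands closer to the mark.
    error = correct_response - respond(signal)
    for i, x in enumerate(signal):
        weights[i] += learning_rate * error * x

# Training signals paired with the responses deemed "good" for each of them.
examples = [([1.0, 0.2], 1), ([0.1, 1.0], 0), ([0.9, 0.4], 1), ([0.2, 0.8], 0)]
for _ in range(20):                  # a few passes of feedback
    for signal, label in examples:
        corrective_feedback(signal, label)

print(respond([1.0, 0.3]))   # 1: close to the training signals, handled well
print(respond([10.0, 9.5]))  # still answers, but only by extrapolating its
                             # weights; nothing here resembles understanding

The point of the sketch is the shape of the loop, signal in, response out, feedback applied, rather than the arithmetic itself. Real systems use vastly larger models, but the brittleness described above comes from the same reliance on whatever the adjusted weights happen to say.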
AI often fails to understand language or behavior because it relies on a series of correlations that only seek to mimic the behaviors of humans. The correlation sets can be deep and rooted in a lot of complexity (making for some stunning AI), but to truly develop AI that’s smart, we need to allow it to understand causal relationships in the world. At the same time, the behavioral innateness within human beings, the je ne sais quoi of the human condition, is painstakingly difficult to define, making it exceedingly hard for researchers to even begin recreating it in AI. Dr. Eric Swanson, an associate professor of Philosophy at the University of Michigan, weighed in on this discussion by stating that “although current AI research has proven to be surprisingly good at learning from static data, it’s not yet good at learning dynamically, and it would be exciting to see current AI research make progress on that problem.”
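A toy example can show why correlation alone is so fragile. In the hypothetical sketch below (the “red means fire truck” cue and all the numbers are invented for illustration), a purely correlational learner counts co-occurrences and ends up leaning on a surface feature that has nothing to do with what actually makes a fire truck a fire truck.

import random

random.seed(1)

# A purely correlational "learner": it counts how often a surface feature
# co-occurs with a label, with no model of what causes what.
counts = {}  # feature value -> [count seen with label 0, count seen with label 1]

def train(feature, label):
    counts.setdefault(feature, [0, 0])[label] += 1

def predict(feature):
    seen = counts.get(feature, [1, 1])
    return 0 if seen[0] >= seen[1] else 1

# Training world: 95% of images labeled "fire truck" (1) happen to be red,
# and 95% of everything else (0) happens to be some other color.
for _ in range(1000):
    label = random.randint(0, 1)
    is_red = (random.random() < 0.95) == (label == 1)
    train("red" if is_red else "other", label)

print(predict("red"))    # 1: the learner absorbed "red means fire truck",
                         # so a red delivery van gets waved through
print(predict("other"))  # 0: and a gray fire truck gets missed; color was
                         # never the cause, only a correlate

A system that grasped the causal structure, that ladders, hoses and sirens are what make the vehicle a fire truck, would not be fooled the moment the incidental correlation breaks.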
The intricacies of human consciousness are far too complex to holistically capture in AI. This could be attributed to the fact that we don’t even fully know how we’re aware, making it difficult to recreate our subjective conscious experience in AI. To elaborate on this explanatory gap, the philosopher David Chalmers proposed the hard problem of consciousness: even if we were to fully map out the brain and all of its biomolecular activities, the question of causation still presents a problem. In other words, we may be able to pinpoint every neuron in every conceivable state of being, but we would still lack the reasoning behind why these neural interactions give rise to the phenomenon we call consciousness. So, if we can’t even capture our own framework of consciousness, how can we ever hope to implement it in AI?
As human beings, we project causality onto the objects and events in our environment. When two events occur, we can formulate a timeline in our minds that helps us understand and categorize them chronologically. This innate, developmental (and malleable) process is why we are able to compartmentalize, understand and generalize the information we learn. In other words, we have an intrinsic blueprint for our learning and reasoning processes; we are not simply a blank slate taking in information. Instead, a dynamic relationship between our thoughts and the world around us arises as we interact with that world more and more.
So, how does this relate to AI? Well, contemporary AI follows convoluted statistical models, organized by neural nets, that cherry-pick particular stimuli and match them with an appropriate output. This mechanism may only carry us to a local maximum (the most we can achieve without significant new technology) in terms of AI understanding. We should look to the variety and intensity of the human mind to find out what a truly heterogeneous, widely capable AI would look like. Projection of causality must develop as a reality for the AI rather than as an ingredient artificially interwoven into its hardware. Calculated and fundamental programming must provide AI with the ability not just to spew out the correct output, but also to develop its own interrelated microworlds of information so that it can begin to take on more nuanced, complex and higher-order tasks.
When building machines that mimic certain mental operations, we must seriously question whether AI can ever develop an ideology. In other words, can an AI system conjure up its own solidified opinions? Given the current framework of AI, it’s somewhat difficult to imagine AI doing anything outside of its own set of instructions. It’s even more difficult to imagine AI prioritizing self-serving behavior over its rudimentary instructions. Perhaps ideology and opinions are created in a million-year-old stew of biological evolution and societal development.
Can there ever truly be an unforeseen “click” in which AI begins defining its own parameters and disobeying its programming? Or will an AI’s programming become so hyper-advanced that human beings cannot even keep up with it? Whatever the case, modern AI consists of narrow, disconnected niche applications, so a fear this grand may not be meaningful just yet.
While we continue to work on self-driving cars and video games with improved graphics, let’s not get carried away by what AI might be able to do for us. Current AI must be utilized in a safe and strategic manner, and the unreasonable fear of AI dominating the planet can be left to science fiction novels.
Ammar Ahmad is an Opinion Columnist and can be reached at ammarz@umich.edu.