Why Artificial Intelligence is (Still) Human Intelligence

At its core, Artificial Intelligence and its partner discipline Machine Learning (abbreviated AI/ML) are math. Complex math, but math nonetheless. Specifically, they are probability: the application of weighted probabilistic networks at a computational scale we have never been able to achieve before, a scale that allows the computed probabilities to become self-training.

It’s that characteristic more than any other that makes AI seem like wizardry. The little cylinder on the kitchen counter that suddenly lights up when you call it by name feels like something out of science fiction, but that entire process is the end product of the re-ingestion of new data to help fine-tune a highly complex probabilistic graph.

The voice assistant recognizes its “name” not because it is self-aware but because it has been programmed to match an incoming audio waveform against a database of known waveforms with certain characteristics. It “wakes up” because the incoming audio most closely matches the stored pattern for its wake word, and that pattern is tied to a programmed action: “if the input waveform most closely matches known pattern x, then perform action y.” This is a microcosm of the computational network of probabilities that forms the heart and soul of AI/ML.
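To make that concrete, here is a minimal, hypothetical sketch of the idea. The feature vectors, labels, and the 0.95 similarity threshold are all invented for illustration; a real assistant would use far richer acoustic features and models.

```python
import math

# Hypothetical, hand-picked feature vectors standing in for stored waveform patterns.
KNOWN_PATTERNS = {
    "wake_word": [0.9, 0.1, 0.4],
    "doorbell":  [0.2, 0.8, 0.1],
    "silence":   [0.0, 0.0, 0.0],
}

def cosine_similarity(a, b):
    """Score how closely two feature vectors match (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def handle_audio(incoming_features):
    # Find the known pattern the incoming audio most closely matches...
    best_label, best_score = max(
        ((label, cosine_similarity(incoming_features, pattern))
         for label, pattern in KNOWN_PATTERNS.items()),
        key=lambda pair: pair[1],
    )
    # ...and wake only on a confident match with the wake-word pattern.
    if best_label == "wake_word" and best_score > 0.95:
        return "wake up and start listening"
    return "stay asleep"

print(handle_audio([0.88, 0.12, 0.41]))  # close to the wake-word pattern
```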

The math is not new. What is new is our ability to compute it at unprecedented scale. Computational costs have come down enough that we can apply these calculations effectively and usefully, in seconds rather than days. That is not merely an improvement to existing computational processes; it is a total shift in the way we create software.

A New Way to Program

Historically, instructing a computer to perform functions for a user relied on a purely deterministic framework. Essentially, that meant computers interpreted their instructions based on a series of “if-then” statements entered by hand by a programmer: if a user performs action x, the computer must respond with pre-programmed action y. In the beginning, the tasks being asked of computers were simple enough that this was the most suitable framework for coding. Two decades into the 21st century, however, the complexity of software has grown tremendously. As software’s role in our lives continues to expand, programmers need to anticipate ever more “if-then” instructions for software, accounting for a larger and larger web of possibilities from which the computer might derive instruction.
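As a caricature of that older paradigm, here is a small deterministic handler. Every command string and reply is a placeholder invented for illustration; the point is that anything the programmer did not anticipate simply fails.

```python
# A purely deterministic handler: every recognized input has to be anticipated
# and mapped to a response by hand.
def handle_command(command):
    if command == "lights on":
        return "turning lights on"
    elif command == "lights off":
        return "turning lights off"
    elif command == "play music":
        return "starting playlist"
    else:
        # Anything the programmer did not foresee falls through to an error.
        return "sorry, I don't understand"

print(handle_command("lights on"))      # matches a hand-written rule
print(handle_command("lights on pls"))  # unanticipated variation -> fails
```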

This complexity is the foundation of entropy in modern software. These interconnected chains of simple conditionals create such an intricate graph of state transitions that it becomes nearly impossible for human minds, and human oversight, to fully evaluate and confirm every possible decision and the system state it produces. That is why running software is always subject to unanticipated system states, better known as bugs. Bugs are inevitable because it is impossible to fully anticipate and account for every possible input this way.

So why is AI/ML such a profound transformation? With it, we can now program behavior into our computing technology that can be trained on real-world, unstructured input without needing to account for every possible variation. A software system that can keep operating despite imprecise inputs means computer behavior can now perform within uncertainty. In other words, the AI/ML computer no longer requires purely ‘black-and-white’ inputs to manage internal system state transitions. It can receive the much more variable ‘greys’ of reality and still operate successfully.
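A rough sketch of that difference, with invented scores and an arbitrary 0.8 confidence threshold: instead of demanding an exact match, the system turns raw model scores into probabilities over candidate actions and acts on the most likely interpretation, or asks for clarification when it is unsure.

```python
import math

def softmax(scores):
    """Turn raw match scores into a probability distribution over actions."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a trained model might assign to each candidate
# action for one imprecise, real-world input (a mumbled "lights on pls", say).
ACTIONS = ["lights_on", "lights_off", "play_music"]
raw_scores = [2.4, 0.3, -1.1]

probabilities = softmax(raw_scores)
best_index = max(range(len(ACTIONS)), key=lambda i: probabilities[i])

# Instead of requiring a black-and-white match, the system acts on the most
# probable interpretation once it is confident enough.
if probabilities[best_index] > 0.8:
    print(f"do {ACTIONS[best_index]} (p={probabilities[best_index]:.2f})")
else:
    print("ask the user to clarify")
```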

The AI/ML coding paradigm means the coder no longer has to find the statistically significant points of similarity between 1000 minimally varied inputs. The computer can now recognize them itself, so that when the 1001st input comes along, it has a frame of reference for where that input might fit based on the previous 1000. This is similar to how a dog might eventually learn that a cookie held in the air, the words “sit” or “down,” or perhaps even a particular whistle tone all mean “sit.”
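The sketch below illustrates that idea with a toy nearest-neighbor classifier: 1000 synthetic labeled inputs are generated, and the 1001st input is classified by where it falls relative to them. The two clusters, the labels, and the choice of k are all hypothetical.

```python
import random

# Hypothetical training data: 1000 slightly varied two-dimensional inputs,
# each labeled with the behavior it should trigger.
random.seed(0)
examples = (
    [((random.gauss(1.0, 0.2), random.gauss(1.0, 0.2)), "sit") for _ in range(500)] +
    [((random.gauss(-1.0, 0.2), random.gauss(-1.0, 0.2)), "stay") for _ in range(500)]
)

def classify(new_input, k=5):
    """Place the 1001st input relative to the previous 1000 instead of
    relying on a hand-written rule for every variation."""
    by_distance = sorted(
        examples,
        key=lambda ex: (ex[0][0] - new_input[0]) ** 2 + (ex[0][1] - new_input[1]) ** 2,
    )
    nearest_labels = [label for _, label in by_distance[:k]]
    return max(set(nearest_labels), key=nearest_labels.count)

print(classify((0.9, 1.1)))    # lands near the "sit" cluster
print(classify((-1.2, -0.8)))  # lands near the "stay" cluster
```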

A New Generation of Programmers

If the tools of programming, of instructing our machines how we want them to behave, are changing, so too is the role of the coder. Rather than learning how to write explicit instructions for a computer to follow, coders will need to understand the fundamentals of the mathematical probabilities governing this new type of computational behavior. As the paradigm shifts further in the direction of this new normal, the role of procedural coders might not vanish, but it quite likely will change. The job of finding and fixing the “if-then” statements of procedural code may transition to a trained AI model watching every relevant parameter and ‘learning’ how to find the inputs it needs, rather than requiring programmers to define every possible variant input.

Note, however, that it is still human beings instructing the models and determining the resulting system states. When we hear phrases like “AI/ML is coming,” it can conjure images of a reality on par with Terminator or 2001: A Space Odyssey. But all the math that drives AI/ML processing is defined by human beings, as are the behaviors and the definitions of the inputs that drive transitions in system state. The new coding paradigm is knowing how to move these new technologies efficiently through that arc of definition, training, and refinement, not replacing human judgment.

“Artificial” intelligence, then, is a misnomer. It is man-made intelligence, trained by people. Its external behaviors may be unpredictable because of the enormous complexity of its inputs, but it is very difficult to imagine a behavior coming out of an AI/ML model that isn’t constrained by human expectations.

We should be excited by this new technology’s tremendous potential to revolutionize the world of work and the people who do it.
