
Finally, after many long winters, spring has come to the field of Artificial Intelligence (AI). Experts are calling it the “new electricity”. AI influencer and guru Andrew Ng has said, “this time the hype around artificial intelligence is real.” Its ubiquity was evident at this year’s Consumer Electronics Show (CES 2017), where “its impact was all-pervasive and its presence could be felt throughout the show”.

Artificial Intelligence is the driving force behind remarkable advances in speech recognition, computer vision, language translation, search engines, recommendation systems and many other applications. In certain specialized tasks and complex games, AI has beaten human experts. A widely cited example is AlphaGo, a deep learning based system from Google’s DeepMind, which defeated world champion Lee Sedol at the 2,500-year-old Chinese board game Go. AI-based autonomous driving technology is making rapid progress, and within a decade we may see large-scale deployment of fully autonomous vehicles. AI enthusiasts now have valid reasons to believe that this time, spring will be eternal.

The resurrection of AI in recent years can be attributed to significant developments in machine learning, especially in one of its sub-fields: deep learning. Machine learning gives computers “the ability to learn without being explicitly programmed”. Deep learning is a class of machine learning algorithms that use deep artificial neural networks with multiple hidden layers.
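
To give a concrete sense of what “multiple hidden layers” means, here is a bare-bones sketch in Python (NumPy only, random untrained weights; all names are ours, purely illustrative):

```python
# A purely illustrative sketch of the structure of a deep neural network:
# the input passes through multiple hidden layers before producing an
# output. Weights are random and untrained; no learning happens here.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with a ReLU non-linearity."""
    w = rng.normal(size=(x.shape[0], n_out))  # random weights, no training
    return np.maximum(0.0, x @ w)             # ReLU activation

x = rng.normal(size=4)    # input features
h1 = layer(x, 8)          # first hidden layer
h2 = layer(h1, 8)         # second hidden layer ("deep" = more than one)
output = layer(h2, 1)     # output layer
print(output)
```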

While advances in machine learning drive the current AI boom, the hype has created misconceptions about the capabilities of these systems. Some of these misconceptions have risen to the level of myths. In this article, we discuss three such common myths about machine learning.

Myth #1: Machines can learn autonomously

Reality: Machine learning is orchestrated by programmers, who design the machine’s learning architecture and feed it the necessary training data.

Most machine learning algorithms require large amounts of structured data. Programmers decide the learning approach (e.g., supervised, unsupervised, or reinforcement learning), the learning architecture (e.g., the number of layers in an artificial neural network and the number of neurons per layer), the learning parameters, and the appropriate training data as per the system’s design. In many applications of machine learning, the human effort is enormous.
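
To make these human decisions concrete, here is a minimal sketch using scikit-learn; the choice of MLPClassifier, the two hidden layers of 64 and 32 neurons, and the parameter values are arbitrary picks for illustration, not recommendations:

```python
# A minimal illustrative sketch: every "learning" decision below is made
# by the programmer, not the machine.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The programmer supplies the labeled training data ...
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ... and fixes the learning approach (supervised classification), the
# architecture (two hidden layers: 64 and 32 neurons), and the learning
# parameters (learning rate, iteration budget).
model = MLPClassifier(
    hidden_layer_sizes=(64, 32),
    learning_rate_init=0.01,
    max_iter=300,
    random_state=0,
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```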

For example, consider the case of autonomous cars. An article published in the Financial Times highlights how self-driving cars are proving to be “labor-intensive for humans”. It describes how humans work painstakingly behind the scenes to label and tag different objects in the captured images for training purposes. Sameep Tandon, the CEO of Drive.ai, is quoted in the article as saying, “The annotation process is typically a very hidden cost that people don’t really talk about. It is super painful and cumbersome.”

Myth #2: Machines can learn like humans

Reality: Machines are not even close to learning the way chimpanzees do, let alone humans.

Hype, however, is taking precedence over reality, which is why some articles make tall claims and create the impression that AI algorithms “can learn like a human”!

If we compare the learning process of a machine with that of a child, it becomes evident that machine learning is still in its infancy. A baby, for example, doesn’t need to watch millions of other humans before she learns how to walk. She sets her own goal of walking, observes the humans around her, intuitively creates her own learning strategy, and refines it through trial and error until she succeeds. Without any outside intervention or guidance, a baby displays the curiosity to learn and successfully walks, talks and understands others. Machines, on the other hand, require guidance and support at every step of learning.

Moreover, a child easily combines inputs received through multiple sense organs, making the process of learning holistic and efficient. In one article, Dave Gershgorn notes that “AI research has typically treated the ability to recognize images, identify noises, and understand text as three different problems, and built algorithms suited to each individual task.” Researchers from MIT and Google have published papers describing the first steps toward machines that can synthesize and integrate inputs from multiple channels (sound, sight and text) to understand the world better.

Myth #3: Machine learning can be applied to any task

Reality: Currently, machine learning can only be applied to tasks where large input data sets exist or can potentially be captured.

Andrew Ng, in one of his HBR articles, points out that “despite AI’s breadth of impact, the types of it being deployed are still extremely limited. Almost all of AI’s recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B).” Also, most of the successes in AI have come in applications where companies like Google and Facebook have access to enormous data sets (texts, voices or images) coming from a variety of sources.
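
To illustrate this A-to-B pattern, here is a toy sketch of a spam filter framed as an input-to-response mapping; the miniature data set is invented purely for this example:

```python
# A toy sketch of Ng's "input A -> simple response B" pattern: the entire
# system reduces to learning a mapping from an email text (A) to a label (B).
# The tiny data set below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [                                # A: input data
    "win a free prize now",
    "claim your free reward",
    "meeting moved to 3pm",
    "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]   # B: simple responses

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(emails, labels)                 # learn the A -> B mapping

print(model.predict(["free prize inside"]))  # likely: ['spam']
```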

Hence machine learning cannot easily be applied to tasks that do not fit the pattern above or where sufficient data sets are not available. A write-up published in The Verge reiterates this point, observing that “the problem is even bigger when you look at areas where data is difficult to get your hands on. Take health care, for example, where AI is being used for machine vision tasks like recognizing tumors in X-ray scans, but where digitized data can be sparse.”

Some startups are trying to overcome the bottleneck of machine learning’s dependency on large data sets. For example, Geometric Intelligence, which was acquired by Uber last December, is attempting to develop systems that can learn tasks from little data.

Bottom line

Advances in machine learning and deep learning have brought AI out of its long hibernation. While remarkable innovations have taken place, many key breakthroughs in these fields are still awaited.

The hype around artificial intelligence and machine learning has also led to exaggerated expectations about the current capabilities of these systems. If not corrected, these misconceptions may harden into collective blind spots about the state of progress in these fields. In this article, we have discussed three common myths about machine learning and contrasted them with the corresponding realities.
