What is transfer learning?

Source: bdtechtalks.com

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding artificial intelligence.

Today, artificial intelligence programs can recognize faces and objects in photos and videos, transcribe audio in real-time, detect cancer in x-ray scans years in advance, and compete with humans in some of the most complicated games.

Until a few years ago, all these challenges were thought insurmountable, decades away, or were being solved with sub-optimal results. But advances in neural networks and deep learning, a branch of AI that has become very popular in the past few years, have helped computers solve these and many other complicated problems.

Unfortunately, when created from scratch, deep learning models require access to vast amounts of data and compute resources. This is a luxury that many can’t afford. Moreover, it takes a long time to train deep learning models to perform tasks, which is not suitable for use cases that have a short time budget.

Fortunately, transfer learning, the practice of transferring the knowledge gained by one trained AI model to another, can help solve these problems.
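The core idea can be sketched in a few lines of plain Python. In this toy illustration (all names, data, and numbers are illustrative assumptions, not from the article), a "pretrained" feature extractor is kept frozen, and only a small task-specific head is trained on the new task — which is exactly why transfer learning needs far less data and compute than training from scratch:

```python
# Toy sketch of transfer learning: reuse a frozen "pretrained" feature
# extractor and train only a small head on the new task.
# Everything here is an illustrative assumption, not a real pretrained model.

def pretrained_features(x):
    """Frozen 'pretrained' layer: maps a raw input to a feature vector.
    In a real system this would be the early layers of a network trained
    on a large dataset such as ImageNet; its weights are NOT updated."""
    return [x, x * x]

def predict(head_weights, x):
    """Task-specific head: a linear layer over the frozen features."""
    return sum(w * f for w, f in zip(head_weights, pretrained_features(x)))

def fine_tune(data, lr=0.1, epochs=300):
    """Train only the head weights with simple SGD; the extractor stays frozen."""
    weights = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = pretrained_features(x)
            err = predict(weights, x) - y
            # Gradient step updates the head only, never the frozen features.
            weights = [w - lr * err * f for w, f in zip(weights, feats)]
    return weights

# Toy "new task": targets follow y = 3*x + 2*x^2, which the frozen
# features can represent, so only the two head weights must be learned.
data = [(k / 10, 3 * (k / 10) + 2 * (k / 10) ** 2) for k in range(-10, 11)]
head = fine_tune(data)
```

Because the frozen extractor already produces useful features, the head converges quickly on very little data — the same economics that make transfer learning attractive for real image and text models.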

The cost of training deep learning models

Deep learning is a subset of machine learning, the science of developing AI through training examples. The concepts and science behind deep learning and neural networks are as old as the term "artificial intelligence" itself. But until recent years, they had been largely dismissed by the AI community for being inefficient.

The availability of vast amounts of data and compute resources in the past few years has pushed neural networks into the limelight and made it possible to develop deep learning algorithms that can solve real-world problems.

To train a deep learning model, you must feed a neural network many annotated examples. These examples can be things such as labeled images of objects or mammogram scans of patients with their eventual outcomes. The neural network analyzes and compares the examples and develops a mathematical model that captures the recurring patterns within each category.
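The learning-from-labeled-examples loop described above can be sketched with a single artificial neuron. This is a minimal, hypothetical illustration (the data points and names are made up for the example); real deep learning models stack millions of such units, but the principle — adjust weights whenever a labeled example is misclassified — is the same:

```python
# Minimal sketch of supervised training: one neuron (a perceptron) learns
# a decision rule from labeled examples. Data and names are illustrative.

def train_neuron(examples, lr=0.1, epochs=20):
    """Perceptron rule: nudge the weights whenever an example is misclassified."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:      # label is +1 or -1
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:                 # learn only from mistakes
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Toy annotated dataset: class +1 lies above the line x2 = x1, class -1 below.
examples = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((2.0, 3.0), 1),
            ((1.0, 0.0), -1), ((2.0, 1.0), -1), ((3.0, 2.0), -1)]
w, b = train_neuron(examples)

def classify(x1, x2):
    """Apply the learned decision rule to a new point."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
```

Six labeled points suffice here because the task is trivial; the article's point is that realistic tasks need vastly more examples and compute, which is the gap transfer learning helps close.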

There already exist several large open-source datasets, such as ImageNet, a database of more than 14 million images labeled across 22,000 categories, and MNIST, a dataset of 60,000 handwritten digits. AI engineers can use these sources to train their deep learning models.

However, training deep learning models also requires access to substantial computing resources. Developers usually use clusters of CPUs, GPUs, or specialized hardware such as Google's Tensor Processing Units (TPUs) to train neural networks in a time-efficient way. The costs of purchasing or renting such resources can be beyond the budget of individual developers or small organizations. Also, for many problems, there aren't enough examples to train robust AI models.
