Source – jaxenter.com
Computer science has a pretty high barrier to entry. Learning to code is the first and biggest obstacle; many people run straight into that hurdle. So, when Google’s A.I. Experiments team announced earlier this week that it had created a new experiment to help people understand machine learning without writing a single line of code, I was dubious. How could this even be possible?
Teach a robot to teach itself
First, a bit of background.
One of the big problems stopping us from having artificial assistants à la Jarvis is the Polanyi paradox. Named for the philosopher Michael Polanyi, it more or less states that we “know more than we can tell”. It’s actually the biggest problem in automation. Automation requires that tasks be broken down into concrete steps. But how can a task be broken down if we don’t even know how we’re doing it?
We can teach a sixteen-year-old to (more or less) safely drive a car in less than six months. Any adult can do it. And yet, it’s taken some of the best minds in engineering and computer programming over a decade to teach a robot to do the same. Why?
There are a lot of steps in between point A and point B. Most of them are unconscious or so minimal that we hardly even think of them. But a computer needs all of those little rules and things to keep in mind spelled out. There’s a famous classroom exercise – “programming a peanut butter and jelly sandwich” – and it’s a lot more difficult than you’d think.
Computers think literally; it’s kind of their thing. This has been the big issue, since programmers can’t cover everything. (Also, programmers in the olden days had to worry about computer processing limitations as well, the poor sods.) And so, we come to machine learning.
Machine learning is the art of teaching computers to teach themselves. Why bother coding every possible interaction that a program might encounter, when you can feed a program a crazy amount of data and have it pick up general rules along the way? The programmers add a bit of correction and guidance to make sure it doesn’t go too far afield.
It’s not that crazy of an approach. After all, that’s how we humans learn to speak.
These days, machine learning uses datasets in the thousands and hundreds of thousands, offering the program a chance to figure out a pattern on its own from the examples given to it. Sometimes, it works out quite well. Other times, not so much.
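To make the idea concrete, here is a deliberately tiny sketch of learning from examples rather than rules – a one-nearest-neighbor classifier on invented pet measurements (the numbers and features are made up for illustration; real systems learn from thousands of images, not four tuples). Nobody writes a “cats weigh less than dogs” rule; the rule falls out of the labeled data:

```python
import math

# Toy labeled dataset: (weight_kg, tail_length_cm) -> label.
# All numbers are invented purely for illustration.
training_data = [
    ((4.0, 25.0), "cat"),
    ((5.0, 28.0), "cat"),
    ((20.0, 35.0), "dog"),
    ((30.0, 40.0), "dog"),
]

def classify(sample):
    """1-nearest-neighbor: give a sample the label of its closest example."""
    nearest = min(training_data, key=lambda item: math.dist(sample, item[0]))
    return nearest[1]

print(classify((4.5, 26.0)))   # small animal, nearest examples are cats -> "cat"
print(classify((25.0, 38.0)))  # large animal, nearest examples are dogs -> "dog"
```

Feed it more (and messier) examples and the decision boundary shifts on its own – which is exactly the “sometimes it works, sometimes not so much” behavior described above.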
And so, from these datasets come general rules that help a program distinguish a cat from a dog or sort pickles for Japanese farmers. Which is all very well and good. But how do they manage to do it without a single line of code?!
Well, technically, there is code. Just not on the end user’s side.
The Teachable Machine works with your computer’s camera to explore the details of machine learning with some fun examples.
The tutorial explains things quite handily. You’re basically creating a small dataset of images to teach the program to respond in one of three specific ways: with a GIF, a sound, or speech.
Here’s how it works: You give the program at least thirty photos of you doing a specific, recognizable thing, like sitting quietly at your desk, making a funny face, or drinking a cup of coffee. Each photoset is associated with a specific response from the program.
I make a funny face? The program responds with a cat gif.
I sip at my coffee? The program plays a trumpet.
I sit quietly at my desk? The program says “awesome”.
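The train-then-respond loop above can be sketched in a few lines. This is a hypothetical mock-up, not Teachable Machine’s real code – the actual tool runs a classifier on webcam frames in the browser, while here simple two-number feature vectors stand in for photos, and the class names and responses are just placeholders:

```python
import math

# Each trained class maps to one of the three response types.
RESPONSES = {
    "funny_face": "show cat gif",
    "sip_coffee": "play trumpet",
    "sit_quietly": 'say "awesome"',
}

# Stand-in "photosets": a few feature vectors per class instead of
# thirty webcam photos. Values are invented for illustration.
examples = {
    "funny_face": [(0.9, 0.1), (0.8, 0.2)],
    "sip_coffee": [(0.1, 0.9), (0.2, 0.8)],
    "sit_quietly": [(0.5, 0.5), (0.4, 0.6)],
}

def respond(frame):
    """Find the class whose nearest training example matches the frame,
    then trigger that class's response."""
    _, label = min(
        (math.dist(frame, ex), label)
        for label, class_examples in examples.items()
        for ex in class_examples
    )
    return RESPONSES[label]

print(respond((0.85, 0.15)))  # closest to the "funny_face" examples -> "show cat gif"
```

The nearest-example logic is also why similar poses confuse the program: when two photosets sit close together in feature space, a small movement flips which example is nearest.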
The program differentiates easily enough when the three poses are distinct. But things get more interesting if you have three similar datasets. The program definitely has a harder time when the only difference between two photosets is a wink. It shifts between responses readily as you move, trying to make sense of what you’re doing.
[Full disclosure: if you’re weirded out by the idea of Google having access to your photos as you play around with this tech, all of the images gathered stay on your local machine and aren’t uploaded past the initial page.]
It’s definitely fun to play around and see how good the program can get at differentiating between different photo datasets.
Obviously, the limitations are pretty clear. Teachable Machine isn’t going to help solve the robotic driving issue any time soon. But it’s an excellent introduction to machine learning for those of us who might not be the greatest at coding.
And frankly, we need all the machine learning specialists we can get.
A sneak peek at Gartner’s top 10 technology trends for 2018 shows that artificial intelligence and machine learning are at the tip-top of the list. Machine learning specialists are among the best paid in the business.
But there’s a big gap between the open jobs and the skills programmers have. So, making it easier for people to dip their toes into the machine learning pool can only be a good thing. And that is what Teachable Machine is doing: bringing machine learning to the masses… with cat gifs.