Machine Learning: Solving Problems Big, Small, and Prickly

November 18, 2019 | By Stanley Isaacs


As a kid, I was really inspired by the explorers. I grew up in Seattle, and Lewis and Clark were kind of heroes locally, and I wanted to be an explorer when I grew up. As an electrical engineer, I would always look for new things that we can do that just weren’t ever possible. And machine learning and research is an exploration; it feels like an intellectual exploration. We’ve definitely seen a big uptick in the last five years in what machines are able to do, compared to, say, the previous decade or two. With the advent of a lot more data and a lot more computing power, we really can think bigger and sort of change the game about what kind of models we can envision.

The real world is actually very messy; hard, logical rules are not the way to solve real-world problems. So machine learning is all about learning from examples. Rather than writing 500,000 lines of code, we instead have the machine learn from observations about the world. The machine-learning algorithm looks through a bunch of these examples, maybe millions, maybe billions, even trillions, to identify the patterns and generalize from there.
In the task of image recognition, we’ve been able to train models that take the pixels of an image and, from those pixels, learn high-level features. A model starts to learn that if you see a cake and you see a kid, it’s maybe a birthday party; if you see a cake and lots of kids, it’s very likely a birthday party. That’s essentially teaching the machines to do the perception that we humans are so natural at and so good at. You realize just how amazing humans are, just how amazing your four-year-old is, who can recognize faces.
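To make the “pixels in, high-level features out” idea concrete, here is a minimal sketch of such an image classifier in TensorFlow/Keras. The layer sizes, input resolution, and the two-class “birthday party” labeling are illustrative assumptions, not the production models the speakers describe.

```python
# A minimal sketch (not Google's actual model): a network that maps raw pixels
# to progressively higher-level features and finally to a label.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),           # raw pixels in
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # low-level features: edges, colors
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # mid-level features: textures, shapes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),       # high-level features: cake? kids?
    tf.keras.layers.Dense(2, activation="softmax"),     # "birthday party" vs. "not"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=10)   # learn from labeled example photos
```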
Machine learning has really been the beginning of a bigger revolution in the field of speech recognition. To teach speech recognition to handle a noisy room, we take real-world sounds and mix them into the examples that we already have: “Is it cold outside? Is it cold outside? Is it cold outside?” Now, no matter what the noise in the environment, our speech-recognition systems can understand what you’re saying; they can separate out one speaker from another.
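The noise-mixing trick described here can be sketched in a few lines of NumPy: overlay recorded background noise onto a clean training utterance at a chosen signal-to-noise ratio. The function name, the SNR range, and the stand-in arrays are assumptions for illustration.

```python
# A sketch of noise-mixing data augmentation: the same clean utterance can be
# reused many times, each time overlaid with different real-world noise.
import numpy as np

def mix_in_noise(clean, noise, snr_db):
    """Overlay `noise` onto `clean` speech (1-D float arrays) at `snr_db` dB SNR."""
    noise = np.resize(noise, clean.shape)        # loop or trim the noise to match length
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-10    # avoid division by zero
    # Scale the noise so clean_power / (scaled noise power) hits the target SNR.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)               # stand-in for 1 s of "Is it cold outside?"
noise = rng.standard_normal(48000)               # stand-in for recorded street noise
noisy_example = mix_in_noise(clean, noise, snr_db=rng.uniform(5, 20))
```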
With machine learning, we now have an algorithm that learns how to simulate a human linguist. A lot of the language that we see today is very informal… “Blah blah blah blah blah, and they say OK” … interspersed with emojis and stickers. Now, with Google, we’re getting to the point where you can have a much more natural conversation: “Any good Mexican places around here?” “Here you go.”
The Assistant product that we’re building at Google uses the best of our machine-learning techniques: speech recognition, image understanding, natural-language understanding. That’s a promising direction for developing systems that can really navigate the mess of the real world.

We wanted to make this an open-source project so that everyone outside of Google could use the same system we’re using inside Google. There are lots of people who have made very, very creative uses of it without knowing a single bit of machine learning. So if they have the ideas, they don’t need to do the heavy lifting that we’ve already done.

I saw a cool example where somebody had a cat coming around their house all the time, so they trained a model to identify whenever the cat was there, and they would turn the sprinklers on to scare the cat away.
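A hobby project like that might look something like the following sketch, which reuses an off-the-shelf pretrained classifier rather than training anything from scratch. `read_camera_frame` and `turn_on_sprinkler` are hypothetical stand-ins for the camera and sprinkler hardware; the transcript gives no details of the actual build.

```python
# A hedged sketch of the cat-vs-sprinkler idea: classify each camera frame
# with a pretrained ImageNet model and trigger the sprinkler on a cat.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def looks_like_cat(frame):
    """Return True if the top ImageNet predictions for `frame` include a cat breed."""
    x = tf.image.resize(frame, (224, 224))[tf.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x, verbose=0)
    top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=5)[0]
    return any("cat" in label for _, label, _ in top)

# while True:                                  # hypothetical main loop
#     frame = read_camera_frame()              # grab an image from the yard camera
#     if looks_like_cat(frame):
#         turn_on_sprinkler()                  # scare the cat away
```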
There’s an elderly couple in Japan who ran a cucumber farm, and one of the big tasks is to sort the cucumbers into prickly ones, less prickly ones, straight ones, curved ones; it’s actually a complicated task. The wife would spend many hours a day sorting cucumbers. So the son picked up a computer-vision model and was able to build a system to categorize the cucumbers and sort them automatically. All the time wasted sorting cucumbers is going to be used in much better ways.
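The cucumber sorter is a classic transfer-learning story: start from a model pretrained on generic images, freeze it, and train only a small classification head on the farm’s own photos. The four category names and the `cucumber_photos` directory below are assumptions based on the description above, not details from the actual project.

```python
# A transfer-learning sketch: reuse pretrained image features and train only
# a small head to tell cucumber categories apart.
import tensorflow as tf

classes = ["prickly", "less_prickly", "straight", "curved"]   # assumed categories

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False                           # reuse features learned on ImageNet

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # pixels -> [-1, 1] for MobileNetV2
    base,
    tf.keras.layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train = tf.keras.utils.image_dataset_from_directory(   # hypothetical photo folder
#     "cucumber_photos", image_size=(160, 160))
# model.fit(train, epochs=5)
```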
387 million people with diabetes are at risk for diabetic retinopathy, which causes blindness. The way that you can find signs of diabetic retinopathy is by taking pictures of the back of the eye. But there simply aren’t enough doctors, and an interpretation takes hours. So we trained an algorithm that can read the images right then and there. The algorithm can help the doctors get more people screened for the disease.
The more you see machine learning and the kinds of things it can do, the more you see opportunity for it to improve people’s lives. You can use machine learning to save power at significant scales, even track the spread of diseases and epidemics. We could use a computer-vision model for everyone who is visually impaired. We could make a speech recognizer for everyone on the planet and drastically improve the experience of millions and billions of people. I don’t see any area of science, or even human endeavor, that learning systems can’t help with.

If you’d asked me a few years ago whether a computer would be able to do this anytime soon, I would have said, “I don’t really think so.” It’s very, very empowering to imagine what’s going to come. We’re thinking thoughts and doing things that, you know, no man has ever done, and sort of setting forth and setting foot in really new intellectual territory here. The promise of AI and machine learning is that we can actually produce solutions to previously unsolved problems that will really help people.