The Mind-Blowing AI Breakthrough From Google That You Probably Missed

Amidst the holiday rush at the end of 2016, Google published an article that largely went unnoticed by the mainstream press and public. But for those paying attention, it revealed a monumental breakthrough in artificial intelligence that may have huge implications for the future of communication technology and machine learning.

The article, with the unassuming title "Zero-Shot Translation with Google's Multilingual Neural Machine Translation System", described how Google Translate, the company's popular language translation service, had been revamped with a powerful new engine – the Google Neural Machine Translation system (GNMT).

On the surface, this may sound like a routine software update. But digging into the details reveals that the GNMT is no ordinary translation tool.

How Machine Translation Got a Lot Smarter

To understand why this is so significant, we need to look at how language translation software has traditionally worked. Until recently, most systems, including the old Google Translate, used what's called phrase-based translation.

Essentially, these systems work like a souped-up foreign-language dictionary or phrasebook. The software breaks sentences down into words and short phrases, looks up the closest matches in its dictionary for the target language, and pieces them back together to form a translated sentence.
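In simplified form, that lookup process can be sketched in a few lines. Here's a toy illustration of the idea – a hypothetical phrase table and greedy longest-match segmentation, not Google's actual implementation (real phrase-based systems also score reorderings and use language models):

```python
# Toy sketch of phrase-based translation: greedy longest-match lookup
# in a phrase table. The table below is a made-up example.
PHRASE_TABLE = {  # hypothetical English -> French entries
    "good morning": "bonjour",
    "how are you": "comment allez-vous",
    "thank you": "merci",
    "you": "vous",
    "good": "bon",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out = []
    i = 0
    while i < len(words):
        # Try the longest phrase starting at position i first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            # Unknown word: no dictionary entry, so it passes through
            # untranslated -- the weakness described above.
            out.append(words[i])
            i += 1
    return " ".join(out)

print(translate("good morning how are you"))  # -> "bonjour comment allez-vous"
print(translate("thank you friend"))          # -> "merci friend"
```

Note how the unknown word "friend" simply falls through: with no entry in the table, a phrase-based system has no way to guess.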

This method can work decently well for getting the gist across, especially between closely related languages. But it tends to fall short in terms of grammar, nuance, and naturalness because it translates small snippets without understanding broader context and linguistic structure. It's also completely dependent on having a large pre-defined vocabulary dictionary – there's no capacity to make educated guesses about unknown words or phrases.

The GNMT throws that model out the window. Instead of a static dictionary, it uses a neural network, a complex mathematical system modeled after the human brain that can learn, adapt, and create on its own.

Fed with a large corpus of texts professionally translated by humans, the GNMT system gradually "learned" patterns and built its own complex statistical models representing the mechanics of translation. With millions of parameters and the ability to see entire sentences at once, the system developed a much more sophisticated "understanding" of linguistics and meaning.
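That "seeing entire sentences at once" is the key difference from phrase lookup. A drastically simplified sketch of the encoder side makes the point – here with random toy weights and no training at all (the real GNMT uses deep LSTM layers with attention), just to show that the whole sentence, in order, shapes one internal representation:

```python
import numpy as np

# Toy recurrent encoder (illustration only, not GNMT): it compresses an
# entire sentence into one fixed-size vector, word by word, so the
# result depends on every word AND their order -- unlike phrase lookup.
rng = np.random.default_rng(0)
DIM = 8
VOCAB = ["the", "cat", "sat", "on", "mat"]
EMBED = {w: rng.normal(size=DIM) for w in VOCAB}  # learned in a real system
W = rng.normal(size=(DIM, DIM)) * 0.1             # recurrent weights
U = rng.normal(size=(DIM, DIM)) * 0.1             # input weights

def encode(sentence: str) -> np.ndarray:
    h = np.zeros(DIM)
    for word in sentence.split():
        # The hidden state is updated by each word in turn, so the
        # final state reflects the full sentence and its word order.
        h = np.tanh(W @ h + U @ EMBED[word])
    return h

a = encode("the cat sat on the mat")
b = encode("the mat sat on the cat")
# Same words, different order -> different sentence vectors.
print(np.allclose(a, b))  # False
```

A phrase-based system would treat those two sentences as the same bag of lookups; the encoder distinguishes them, which is what lets a trained decoder produce grammatical, context-aware output.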

A Surprise Discovery

The Google team set out to build a better translation tool. But in the process, they stumbled upon something far more profound.

The GNMT's neural network architecture enabled a fascinating capability the researchers call "zero-shot translation." This means the system could translate between two languages it had never seen directly paired together in its training data.

For example, if the system was taught how to translate between English and Korean and between English and Japanese, it was then able to translate between Korean and Japanese – without ever being explicitly taught phrases for that language pair. Somehow, it "figured out" how to do those translations on its own by using English as a bridge.
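Under the hood, the multilingual system makes this possible with a single shared model plus an artificial token prepended to the source sentence that names the desired target language (the paper uses tokens of the form "<2es>"). A sketch of that input convention – just the data formatting, not a runnable model:

```python
# The multilingual trick: one shared model, with the desired target
# language marked by an artificial token prepended to the input.
def tag(source_sentence: str, target_lang: str) -> str:
    """Prepend the target-language token, e.g. '<2ja>' for Japanese."""
    return f"<2{target_lang}> {source_sentence}"

# Training pairs the model actually saw (all involve English):
train = [
    (tag("Hello", "ko"), "안녕하세요"),  # English -> Korean
    (tag("Hello", "ja"), "こんにちは"),  # English -> Japanese
    (tag("안녕하세요", "en"), "Hello"),  # Korean -> English
]

# Zero-shot request: Korean -> Japanese, a pairing never seen in
# training. The same tagging convention is all that changes at
# inference time -- the shared model does the rest.
zero_shot_input = tag("안녕하세요", "ja")
print(zero_shot_input)  # -> "<2ja> 안녕하세요"
```

Because every language pair flows through the same network, the model can honor a "<2ja>" request on Korean input even though no Korean–Japanese examples were ever in the training set.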

But here's where things get really mind-blowing. To make these zero-shot translations possible, Google's team discovered that the GNMT had silently invented its own internal language to aid in the task.

They call this an "interlingua" – a representation of semantics and meaning that's not specific to any human language but able to encode what sentences in different languages have in common conceptually. By translating a sentence from the source language first into this interlingua and then into the target language, the system found a more efficient and effective path.

What's remarkable is that no one programmed the system to develop this interlingua. It wasn't given a pre-defined set of semantic tags or meanings to use. The GNMT created this intermediary language on its own as a byproduct of its learning process – an original innovation to solve the task at hand.

Quietly Crossing a Threshold

This discovery represents an important milestone in the development of artificial intelligence. While still narrow in scope, Google's neural network demonstrated a glimmer of human-like ingenuity – the ability to problem-solve in novel ways and generate something new in the process.

Of course, this doesn't mean Google Translate is now "conscious" or smarter than humans in any general sense. Its intelligence is limited to a very specific domain. We can't have open-ended conversations with it or ask it to compose poetry. At its core, it's still a statistical algorithm crunching numbers, not a thinking mind.

But therein lies the significance. What Google's researchers documented was an emerging form of machine intelligence markedly different from the pre-defined, brute-force computation we're used to seeing from AI. The GNMT couldn't solve the task of translation by just following a set of predetermined rules – it had to discover patterns, connect insights, and manipulate symbols in ways it wasn't explicitly instructed to.

It hints at the kind of AI that may one day be capable of true reasoning and abstraction, though we still have a long way to go to match the depth and flexibility of human cognition. But we're taking baby steps in that direction.

A Glimpse of the Future

So what doors could this technology open? In the near-term, services like Google Translate are likely to get a lot better for a lot more language pairs. As neural machine translation matures, we may see web browsers and smartphones that can near-instantly translate foreign text and speech into something much closer to natural, contextual communication.

It's also not a huge leap to imagine this type of AI being applied to other types of data analysis and optimization challenges. By scouring huge datasets and inventing its own representations, future neural networks may discover novel solutions in fields like drug discovery, financial modeling, and logistics that humans haven't considered.

More broadly, the fact that today's AI can create its own symbolic language is a signpost of how much these systems may surprise us as they develop. While the doomsday scenarios of sci-fi movies are probably far-fetched, it's not inconceivable that sufficiently advanced AI could one day devise solutions so foreign to the human mind, so alien in appearance, that we struggle to comprehend them.

For now though, the research team at Google has given us a small but tantalizing preview of the future. They've shown that machines are beginning to do more than just execute instructions – they're learning how to learn in ways that may soon make our jaws drop.

As we head into 2017, it's an exciting time to follow the world of AI. The translation tools and digital assistants we use every day are due for some potentially remarkable transformations thanks to the diligent efforts of computer scientists around the world pushing this technology forward.

Their breakthroughs may not always make front page news, but when you read the proverbial fine print, you realize just how mind-blowing the achievements of modern AI really are.

So here's to an amazing and eye-opening 2017 in the world of artificial intelligence. As the GNMT has shown us, there are certainly more surprises in store – ones that will likely arrive sooner than we think and in ways we can barely imagine.