For a long time, engineers and scientists have sought to make artificial intelligence (AI) perform more like the human brain. A major step toward that goal came with the creation of Google Brain, an AI research team founded in 2011. So what is Google Brain, and what are its advancements and breakthroughs in AI?

How Google Brain Began

The human brain may be the most complex structure we know of: an intricate biological machine with many areas performing different tasks simultaneously. AI developers aim to build systems that can perform similarly complex operations and solve problems the way humans do.

In 2011, Andrew Ng, a Stanford professor, Jeff Dean, a Google fellow, and Greg Corrado, a Google researcher, established Google Brain as a research team for exploring AI.

Initially, the team didn't have an official name; after Ng joined Google X, he began collaborating with Dean and Corrado to integrate deep learning processes into Google's existing infrastructure. Eventually, the team became a part of Google Research and was called "Google Brain."

The founding Brain team members sought to create intelligence that could independently learn from large amounts of data. They also aimed to tackle challenges that existing AI systems struggled with, including language understanding, speech recognition, and image recognition.

In 2012, Google Brain achieved a breakthrough. The researchers fed millions of still images taken from YouTube videos into a neural network and let it search for patterns without any labels or prior information. After the experiment, the network had learned to recognize cats with a high degree of accuracy. This breakthrough paved the way for a wide range of applications.
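The underlying idea, learning structure from unlabeled images, can be illustrated with a tiny autoencoder. The sketch below is a minimal, hypothetical example: the image size, layer widths, and random stand-in data are illustrative assumptions, not the network or data used in the actual experiment.

```python
# A minimal sketch of unsupervised feature learning: an autoencoder learns to
# reconstruct unlabeled images, and in doing so picks up recurring visual
# patterns without any labels. Sizes and data here are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Stand-in for a large collection of unlabeled image frames (e.g. video stills),
# flattened to vectors; real data would replace this random array.
unlabeled_images = np.random.rand(1000, 32 * 32).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32 * 32,)),  # encoder
    tf.keras.layers.Dense(32, activation="relu"),       # compact "feature" layer
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32 * 32, activation="sigmoid"),                   # decoder
])

# The network is trained to reproduce its own input, so no labels are needed.
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(unlabeled_images, unlabeled_images, epochs=3, batch_size=64)
```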

The Evolution of Google Brain and AI Development

Google Brain revolutionized how software engineers thought about AI, contributing significantly to its development. The Brain team has achieved tremendous results in many machine learning tasks, and its successes laid the groundwork for modern speech recognition, image recognition, and natural language processing.

Natural Language Processing

One of the Brain team's most important contributions is its work on deep learning and the advancement of Natural Language Processing (NLP).

NLP involves teaching computers to understand and respond to human language, with results improving as the systems are exposed to more data. For instance, Google Assistant uses NLP to understand your queries and respond appropriately.
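At its simplest, part of that job is mapping a user's words to an intent. The sketch below is a minimal, hypothetical illustration using a bag-of-words classifier; the example queries and intent names are assumptions, and real assistants rely on far larger deep learning models.

```python
# A minimal sketch of intent classification: map a typed or spoken query to an
# intent. The tiny training set and intent names are illustrative assumptions,
# not how Google Assistant is actually built.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "set an alarm for 7 am", "wake me up at six",
    "what's the weather today", "will it rain tomorrow",
    "call mom", "phone the office",
]
intents = ["alarm", "alarm", "weather", "weather", "call", "call"]

# Bag-of-words features plus a simple classifier stand in for a deep NLP model.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(queries, intents)

print(model.predict(["remind me to wake up at 8"]))  # likely -> ['alarm']
```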

Computer Vision


The Brain team has also contributed to computer vision: identifying objects and scenes in visual data. In 2012, Google Brain introduced a neural network that could classify images into 1,000 categories. Today, computer vision has found its way into a surprising range of everyday applications.
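To get a feel for what 1,000-category image classification looks like in practice, the sketch below loads an off-the-shelf network pre-trained on ImageNet. The file name photo.jpg is a placeholder, and this modern model is a stand-in rather than Google Brain's original 2012 network.

```python
# A minimal sketch of image classification with a network pre-trained on the
# 1,000-category ImageNet dataset. "photo.jpg" is a placeholder file name.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions,
)

model = MobileNetV2(weights="imagenet")  # downloads pre-trained weights

# Load and prepare one image to the 224x224 input size the model expects.
image = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))
array = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(image), 0))

# Top predictions as (class id, human-readable label, confidence) tuples.
for _, label, score in decode_predictions(model.predict(array), top=3)[0]:
    print(f"{label}: {score:.2f}")
```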

Neural Machine Translation

Google Brain also developed Neural Machine Translation (NMT). Before NMT was introduced, most translation systems relied on statistical methods; Google's Neural Machine Translation was a significant upgrade.

The system translates whole sentences at once, resulting in more accurate translations that sound natural. Google Brain has also developed network models that can accurately transcribe speech.

3 Applications That Utilize Google Brain

The Brain team's work has powered a host of Google applications since its inception in 2011, including the following.

1. Google Assistant


The Google Assistant, found in many smartphones today, provides personalized information, helps you set reminders and alarms, makes calls to various contacts, and even controls smart devices around the home.

This assistant relies on the machine learning algorithms provided by Google Brain to interpret speech and give an accurate response. With these algorithms, Google Assistant makes your life easier by learning your preferences, and with prolonged usage it understands you even better.

2. Google Translate


The Google Translate system uses Neural Machine Translation, which employs deep learning algorithms from Google Brain. This allows Google Translate to identify, understand, and accurately translate text into the desired language.

NMT also uses a "sequence-to-sequence" modeling approach. This means phrases and whole sentences are translated in one go rather than word by word. Over time, as you interact with Google Translate, it gathers information, which allows it to provide more natural-sounding translations in the future.
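To illustrate the whole-sentence idea, the sketch below runs an openly available sequence-to-sequence model as a stand-in; the specific model name and example sentence are assumptions, and this is not Google Translate's own system.

```python
# A minimal sketch of sequence-to-sequence translation: the whole sentence is
# encoded and decoded in one pass rather than word by word. An open pre-trained
# model from the Hugging Face hub stands in for Google Translate's system.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")

sentence = "The weather is lovely today, so let's walk to the park."
result = translator(sentence)

# The model returns the full translated sentence, not a word-for-word mapping.
print(result[0]["translation_text"])
```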

If you need more insight, check out how to translate audio with Google Translate on your Android phone.

3. Google Photos


While Google Photos is primarily a cloud-based photo and video storage application, it uses Google Brain's algorithms to organize and categorize media automatically. This makes it easier for you to manage your stored pictures: when you take a picture, Google Photos recognizes you, your friends, objects, and even landmarks and events present in the photo.

The application also adds tags to help you group pictures for future reference, which is particularly useful for finding and sharing memories with friends later.
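The grouping itself boils down to building an index from tags to photos. The sketch below is a hypothetical illustration: classify_photo is a placeholder for a real image-recognition model, and the file names are made up.

```python
# A minimal sketch of tagging: predicted labels are turned into an index from
# tag -> photos so pictures can be grouped and searched later.
from collections import defaultdict

def classify_photo(path: str) -> list[str]:
    # Placeholder: a real system would run an image-recognition model here.
    return ["beach", "friends"] if "trip" in path else ["cat"]

photos = ["trip_2023_01.jpg", "trip_2023_02.jpg", "kitchen_cat.jpg"]

tag_index = defaultdict(list)
for photo in photos:
    for tag in classify_photo(photo):
        tag_index[tag].append(photo)

print(tag_index["friends"])  # -> ['trip_2023_01.jpg', 'trip_2023_02.jpg']
```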

Pushing Boundaries With Deep Learning

Since its inception, Google Brain has dramatically expanded what AI can do using cutting-edge neural network algorithms. The Brain team has contributed to breakthroughs in speech and image recognition, machine learning frameworks, and natural language processing.