Artificial intelligence is the frontier of computer science. The science has advanced enough that AI is beating us at our own game — or should we say, games. Some people may fear the rise of Skynet with each AI evolution, but we’re a bit more optimistic.
AlphaGo is the latest AI to beat a human in a board game, but it comes from a long pedigree. Though these five machines started as purpose-built programs, some have found second lives that go beyond their original callings.
In this article, we’ll go through each time a brilliant human lost to a computer and examine what gave each of those computers its decisive edge.
1. Deep Blue, the Chess Master
IBM's Deep Blue and Garry Kasparov had one of the first high-profile battles between man and machine. Kasparov lost, of course, but the two had a bit of a complicated history.
After Kasparov first beat Deep Blue's little brother, Deep Thought, in 1989, IBM returned with its new and improved Deep Blue in 1996. Kasparov lost the opening game, won the second, drew the next two, and then won the final two games to take the match 4-2.
It wasn't until the 1997 rematch that Deep Blue bested Kasparov, winning the six-game match 3.5-2.5.
Kasparov said he saw intelligence in Deep Blue's game and accused IBM of intervening. The "intelligence" was actually a bug that caused Deep Blue to act out of character. In truth, the AI was rather primitive: it brute-forced its way through possible moves and outcomes, and if it could not find an optimal choice, it chose at random.
For each of its moves, Deep Blue modeled the possible continuations and Kasparov's likely responses. It could search up to twenty moves ahead, evaluating millions of possible positions per second. That modeling required hardware capable of powerful parallel processing.
Parallel processing means breaking a large job into smaller computing tasks and completing those tasks at the same time. The resulting data is then compiled back together for the final result.
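Here's a toy sketch of the idea in Python, purely for illustration and nothing like Deep Blue's actual code: a small look-ahead search scores every legal first move, and the top-level moves are farmed out to parallel workers. The game itself, a simple take-1-to-3-stones contest, is invented for the demo, and threads stand in for Deep Blue's dedicated chess processors.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy game: a pile of stones; players alternately take 1-3 stones,
# and whoever takes the last stone wins.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= stones:
            scores.append(minimax(stones - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    moves = [t for t in (1, 2, 3) if t <= stones]
    # Evaluate each candidate first move in parallel, then pick the best:
    # the same divide-search-recombine pattern Deep Blue used at scale.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda t: minimax(stones - t, False), moves))
    return moves[scores.index(max(scores))]
```

Deep Blue's version of this searched chess rather than a stone pile, with purpose-built processors doing the leaf evaluations, but the shape of the computation is the same.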
Between the two matches, Deep Blue was given a significant hardware upgrade. The winning hardware was a 30-node system built on IBM's PowerPC platform, with each node carrying secondary processors dedicated to chess calculations.
All combined, Deep Blue had 256 processors working in parallel.
There are descendants of this hardware working in datacenters, but Deep Blue’s true legacy is Watson, the Jeopardy champion. Eventually, IBM put Deep Blue to work on financial modeling, data mining, and drug discovery, all areas that need large-scale simulations.
2. Polaris, the Poker Champion
The University of Alberta created Polaris, the first AI to beat poker professionals in a tournament. The researchers chose heads-up limit Texas Hold 'Em for their AI, as it relies the least on luck.
Polaris faced off against poker players twice. The first match was in 2007, against two players. The hands were pre-dealt: Polaris played one set of cards against one player and the mirrored hands against the other, to control for luck.
Polaris was later retooled for a 2008 tournament against six players, again with pre-dealt games. Polaris drew the first game and lost the second, but came from behind with two straight wins to take the tournament.
Unlike chess, poker can’t be brute forced through modeling because the AI has a limited picture of the game — it has no idea about its opponents’ hands.
The number of possible deals is enormous, making modeling even less effective. The same cards can make a strong or a worthless hand depending on the other cards dealt, and bluffing presents yet another problem, since betting alone isn't a good indicator of hand strength.
Polaris was a combination of several programs, called agents. Each agent had its own strategy, and a supervising agent chose which one was best suited to any given hand.
The strategies used to break down the game of poker are varied and draw on game theory. The basic idea is to figure out each player's best strategy based on all available data, and Polaris accomplished this via a technique called bucketing.
Bucketing classifies card hands by strength. It allowed Polaris to reduce the number of data points it needed to track the game: instead of reasoning about every exact hand, it reasoned about a handful of hand classes, estimating the probability of each opponent bucket from the visible cards.
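Polaris's real abstraction was computed from large-scale simulations, but the core idea of bucketing can be sketched in a few lines. The scoring rule below is a made-up heuristic, not the Alberta team's actual method: each two-card starting hand gets a rough strength score, and that score is mapped into a small, fixed number of buckets.

```python
# Toy hand-strength heuristic for two-card Texas Hold 'Em starting hands.
# Ranks run from 2 (deuce) to 14 (ace). This scoring rule is invented
# for illustration; Polaris derived its buckets from game simulations.
def hand_strength(rank1, rank2, suited):
    score = rank1 + rank2           # high cards are worth more
    if rank1 == rank2:
        score += 20                 # pairs get a large bonus
    if suited:
        score += 2                  # suited hands gain flush potential
    return score

def bucket(rank1, rank2, suited, n_buckets=5):
    # Map the raw score onto a fixed number of buckets, so the AI only
    # tracks a handful of hand classes instead of every possible deal.
    lo, hi = 5, 48                  # weakest (3-2 offsuit) .. strongest (A-A)
    score = hand_strength(rank1, rank2, suited)
    idx = (score - lo) * n_buckets // (hi - lo + 1)
    return min(idx, n_buckets - 1)
```

With five buckets, pocket aces land in the top bucket and 3-2 offsuit in the bottom one, which is exactly the kind of coarse grouping that keeps the game tree tractable.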
Polaris had a unique hardware setup: a cluster of 8 computers, each with 4 CPUs and 8 GB of RAM. These machines ran the simulations needed to create the buckets and strategies for each agent.
Since then, Polaris has evolved into another program called Cepheus, which became so advanced that researchers declared heads-up limit Texas Hold 'Em "weakly solved".
A game is "solved" when an algorithm can guarantee the best achievable outcome. It is strongly solved when that guarantee holds from any position, and only "weakly solved" when it holds just from the starting position. You can try your luck against Cepheus online.
3. Watson, the Jeopardy Genius
Until this point, AI victories had come in relatively low-key games, which is why Watson's win was such a milestone for mainstream folks: Watson brought the battle of AI right into America's living rooms.
Jeopardy is a beloved game show known for its challenging trivia, and it has a unique quirk: the clues are the answers, and contestants have to come up with the questions. It was a true test for Watson, which took on well-known Jeopardy champions Brad Rutter and Ken Jennings.
Rutter was the all-time money leader and Jennings held the longest winning streak. A third party chose a random assortment of clues from older episodes to ensure none had been written to aid or exploit Watson.
Watson won three straight games — one practice and two televised — but there were some odd quirks to some of Watson’s answers. For example, right after Jennings answered a question wrong, Watson responded with the same wrong answer.
However, what made Watson unique was its ability to use natural language. IBM called the technology DeepQA, for "question answering". The key achievement was that Watson could evaluate candidate answers in context, not just by keyword relevance.
The software is a combination of distributed systems: Hadoop and Apache UIMA work together to index the data and let Watson's many nodes cooperate.
Like Deep Blue, Watson was built on IBM's POWER platform: a cluster of 90 Power 750 servers with 2,880 processor cores and 16 TB of RAM. For the Jeopardy games, all of the relevant data was loaded into RAM.
What relevant data? Well, Watson had access to the full text of Wikipedia. It had an array of dictionaries, thesauruses, encyclopedias, and other reference materials. Watson did not have access to the Internet during the game, but all the local data was about 4 TB.
More recently, Watson has been used to analyze and suggest treatment options for cancer patients. Watson's latest venture is helping to create personalized learning apps for kids. There are even attempts to teach Watson how to cook!
4. DeepMind, the Self-Taught
Google's DeepMind may finally give nerds something to worry about because it's beating humans at classic Atari games, or certain games at least. Humanity still keeps its edge in games like Asteroids and Gravitar.
DeepMind's player is a neural network AI. Neural networks are AIs designed to mimic the way the human mind works, using computer memory to create layers of virtual "neurons".
DeepMind's AI analyzed each pixel of the display, decided the best action to take given the win conditions, then responded with controller input.
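The "virtual neuron" idea is simpler than it sounds. The sketch below is the generic textbook picture, not DeepMind's actual architecture: each neuron computes a weighted sum of its inputs and squashes the result through an activation function, and a layer is just many neurons reading the same inputs.

```python
import math

def neuron(inputs, weights, bias):
    # A virtual "neuron": a weighted sum of the inputs plus a bias,
    # squashed by a sigmoid activation into the range (0, 1).
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    # A layer is many neurons reading the same inputs; stacking layers
    # is what lets a network turn raw pixels into higher-level features.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

Training is the process of nudging those weights and biases so the network's outputs get closer to the desired ones; in DeepMind's case, the inputs were game pixels and the outputs were estimates of how good each controller action would be.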
The AI learned games using a variant of Q-learning known as deep Q-learning (its network is called a Deep Q-Network, or DQN). Q-learning is a method in which the AI learns an estimate of how valuable each action is in a given situation, then favors the actions that have led to the best outcomes.
DeepMind's variant is unique, however, because it stores past experiences in a replay memory and keeps learning from samples of them.
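A minimal sketch of the idea, using a tiny invented "corridor" game rather than Atari pixels: a table holds a value estimate for every situation-action pair, each experience nudges those estimates toward the observed rewards, and a small replay memory lets the agent relearn from old experiences. DeepMind's DQN replaces the table with a deep network.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..3, actions "left" and
# "right"; reaching state 3 pays a reward of 1. This is the textbook
# version of the update; DeepMind's DQN swaps the table for a network.
ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in range(4) for a in ("left", "right")}
replay = []  # replay memory: store transitions, learn from random samples

def step(state, action):
    nxt = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == 3 else 0.0)

def learn(transition):
    s, a, r, nxt = transition
    # Nudge the value estimate toward reward plus discounted future value.
    best_next = max(Q[(nxt, b)] for b in ("left", "right"))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

random.seed(0)
for _ in range(500):
    s = random.randrange(4)
    a = random.choice(("left", "right"))
    nxt, r = step(s, a)
    replay.append((s, a, r, nxt))
    for t in random.sample(replay, min(8, len(replay))):
        learn(t)

# The learned policy: the highest-valued action in each state.
policy = {s: max(("left", "right"), key=lambda a: Q[(s, a)]) for s in range(3)}
```

After training, the policy in every state is "right", toward the reward, and that preference was learned purely from trial, error, and replayed experience, with no rules of the game ever spelled out.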
This system of retained information allowed Deepmind to master the patterns of some Atari games, and even drove it to find the optimal strategy of Breakout all on its own.
Why did DeepMind perform poorly in certain games? Because of the way it judged situations: it could only analyze four frames at a time, which limited its ability to navigate mazes or react quickly.
Also, DeepMind had to learn each game from scratch and couldn't transfer skills from one game to another.
5. AlphaGo, the Incredible
AlphaGo is another DeepMind project and it’s remarkable because it managed to beat two professional Go champions — Fan Hui and Lee Sedol — by winning its matches 5-0 and 4-1, respectively.
Players and match commentators alike said the AI played conservatively, which is unsurprising: it was programmed to favor safe moves that would ensure victory over risky moves that would merely score more points.
Go was once thought to be out of reach for AI, but AlphaGo is now the first AI to earn a professional ranking in the game.
The game has a simple setup: two players try to conquer the board using white and black stones. The board is a 19 x 19 grid with 361 intersections, and the placement of stones determines each player's territory. The goal is to end with more territory than your opponent.
The number of potential moves and game states is massive, to say the least: there are more legal Go positions than atoms in the observable universe, far beyond chess, if you were wondering.
AlphaGo builds on the deep learning approach described earlier, but pairs it with search. Its neural networks, trained on a huge library of human games and on games AlphaGo played against itself, suggest promising moves and judge board positions, while a Monte Carlo tree search explores the lines of play the networks rate most highly.
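The full network-guided tree search is involved, but the Monte Carlo core can be sketched on a toy game, here an invented take-1-to-3-stones contest where whoever takes the last stone wins: play many random games from each candidate move and pick the move with the best simulated results.

```python
import random

# Flat Monte Carlo move selection on a toy game. AlphaGo's real search
# is a Monte Carlo *tree* search steered by neural networks; this shows
# only the core idea of judging moves by simulated outcomes.
def playout(stones, my_turn):
    # Finish the game with random moves; return True if "we" win.
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn          # whoever just moved took the last stone
        my_turn = not my_turn

def monte_carlo_move(stones, n_playouts=2000):
    moves = [t for t in (1, 2, 3) if t <= stones]
    def win_rate(t):
        if stones - t == 0:
            return 1.0              # taking the last stone wins outright
        wins = sum(playout(stones - t, False) for _ in range(n_playouts))
        return wins / n_playouts
    # Prefer the move whose random playouts win most often.
    return max(moves, key=win_rate)
```

AlphaGo's improvement on this was to replace the blind random playouts with networks that both propose sensible moves and evaluate positions directly, so the search spends its effort on lines a strong human might actually consider.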
AlphaGo needs a lot of computing power to run its computation-heavy algorithm. The version that played the matches ran on a distributed set of servers with a total of 1,920 CPUs and 280 GPUs, enough to run 64 simultaneous search threads during play.
Like Watson, DeepMind is heading to medical school: it announced a partnership with the UK's NHS to analyze health records. The resulting project, Streams, will help identify patients at risk of kidney damage.
Artificial Intelligence Is Getting Serious
There’s a lot of research going into AI right now.
Google is hoping that AI can assist its search business: a project called RankBrain uses AI to improve the relevance of search rankings. Microsoft and Facebook have both released chatbots. Tesla is leading the bleeding edge with its Autopilot driving mode, and Google is right behind with its self-driving cars.
It might be hard to see the connection between these projects and the training of an AI to win games, but each of these AIs has shaped machine learning in some way.
As the field has evolved, it has allowed AIs to work with ever more complex datasets. The nearly infinite number of moves in Go translates to the nearly infinite number of variables on the open road. So really, these games are just the beginning: a practice phase, if you will.
The really interesting stuff is just around the corner, and it’s very possible that we’ll be able to experience it all first hand.
What excites you about AI? Is there a game you think that AI can’t eventually conquer? Let us know in the comments.