Now AI Wins at Computer Games


March 1, 2015

Artificial intelligence can now count winning video games among its successes.

Google has announced that a team of its scientists has created an AI system that can learn to improve its performance in a variety of circumstances, as demonstrated by the system's teaching itself to play Atari 2600 video games.

The scientists gave the system, named "Deep Q-Network" (DQN), minimal information about each game and its various scenarios. Instead, they built the system around a machine-learning algorithm that improved its ability to "learn" from its own experiences.
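The kind of trial-and-error learning described above can be illustrated with a toy example. The sketch below is a simplified, tabular stand-in for the idea (the real DQN combined Q-learning with a deep neural network; the corridor environment, reward values, and parameter settings here are invented for illustration, not taken from DeepMind's work):

```python
import random

# Toy environment: a 5-cell corridor. The agent starts in cell 0
# and earns a reward of 1 only upon reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]                # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Move within the corridor; reward 1 for reaching the last cell."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the agent prefers moving right in every cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

No one tells the agent that "right" is the correct direction; it discovers this purely from the rewards it experiences, which is the same basic principle DQN used on Atari games.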

Among the games at which the AI system "won" were such classics as Pong and Space Invaders, as well as newer games such as Boxing, 3D car-racing games, and Seaquest, a submarine game.

DQN scored at least 75 percent of what a professional human player scored on half of the 49 games it played. In some instances, DQN came up with solutions to scenarios that human players had not.

Google representatives said that future versions of such a system could be used in a variety of ways, including in the manufacture of driverless cars, a project on which Google has been working for some time.

DQN is in the vein of Deep Blue, an IBM computer that won at chess against grandmaster Garry Kasparov in 1997, and Watson, an IBM computer that defeated its human opponents on the TV quiz show Jeopardy! in 2011.

Progress in computing is such that DQN runs on an ordinary desktop computer.

DQN is the work of DeepMind, a London-based AI company that Google acquired in 2014.

The report announcing the system's progress appeared in the Feb. 25, 2015 issue of the journal Nature.


Social Studies for Kids
copyright 2002–2015
David White