When it comes to directing a video game's characters ("If x happens, do y"), there is only so much current artificial intelligence (A.I.) can do.
And while A.I. programmers' skills are specialized and prized, they can devote years to a single game. They have to consider all the events that might occur and map out characters' possible reactions. In a first-person shooter, for example, the A.I.-controlled character needs to know how to collect ammunition, seek its target, get within range to fire, and then escape.
What if making certain kinds of A.I. didn't have to be that laborious? What would happen if an algorithm, extrapolating from a few decisions made by players, could figure things out for itself--and even reuse those lessons from one game to the next?
And what if all this could be done by someone with no A.I. training at all?
Those are the questions our game prototype "Robotany" aims to answer.
Set in a garden, the game features small robot-like creatures that sustain the lives of plants. The player manipulates graphs of the robots' three sensory inputs--three overlapping A.I.'s--and these manipulations teach the A.I.'s how to direct characters in new situations.
"The scheme behind Robotany requires that we ask the user to describe what the A.I. should do in just a few example situations, and our algorithm deduces the rest," said the game's product owner, Andrew Grant. "In essence, when faced with something the user hasn't described, the algorithm finds a similar situation that the user did specify, and goes with that."
The game was developed as part of GAMBIT's eight-week summer program, which brings together young artists, programmers, and project managers from U.S. and Singaporean institutes.
Robotany's team of eleven pushed game research in a unique direction by taking advantage of the human brain's ability to identify patterns.
"If we ask the user about a bunch of random example situations and draw conclusions from that," said Grant, "it turns out that you still need a really long list of example situations. With our approach, we can drastically reduce the number of examples we need to make an interesting A.I., well before you'd traditionally get anything good."
Added game director Jason Begy, "The player can effectively give the characters some instructions and then walk away indefinitely while the game runs."
Other A.I. developers have been enthusiastic about this new approach.
"Robotany represents a great new direction for game A.I.," said Damian Isla, who was the artificial intelligence lead at Bungie Studios, makers of the Halo franchise. "It's one in which the A.I.'s brains are grown organically (with help from the player), rather than painstakingly rebuilt from scratch each time by an expert programmer."
MIT Media Lab researcher and GAMBIT summer program alum Jeff Orkin suggested solving this kind of challenge would be "one of the holy grails of A.I. research." He said the video game industry spends an incredible amount of time and money micromanaging the decisions that characters make. "It would be a boon to the game industry, as long as the system still provided designers with an acceptable degree of control."
Rethinking the visual interface to the A.I. was key to addressing these technological questions. Begy described more effective visual design as "absolutely necessary if the project is to succeed," because A.I.'s can handle countless variables while the human players training them cannot. For similar projects, he would recommend having a skilled user interface staffer on hand and testing interfaces with players as much as possible. "In the final design, we went for many robots, each of which was only paying attention to two variables," said Begy. "This is reflected in the training system in Robotany, which is made up of simple two-dimensional graphs."
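Carrying the same lookup idea over to that final design, the hypothetical sketch below shows what a robot limited to two variables might look like, with the player's two-dimensional graph represented as a set of labeled points. The variable names, actions, and Robot class are assumptions for illustration; in the game itself, training happens through the on-screen graphs rather than code.

    # Illustrative sketch, assuming each robot watches just two variables
    # and is trained through a simple 2-D graph of player-placed points.
    import math

    class Robot:
        def __init__(self, variables):
            self.variables = variables   # names of the two inputs it watches
            self.graph = {}              # (x, y) point -> action label

        def teach(self, point, action):
            """Player drops a labeled point onto the robot's 2-D graph."""
            self.graph[point] = action

        def act(self, x, y):
            """Use the nearest taught point to decide what to do."""
            if not self.graph:
                return "idle"
            nearest = min(self.graph, key=lambda p: math.dist(p, (x, y)))
            return self.graph[nearest]

    # Hypothetical setup: several simple robots, each with its own graph.
    waterer = Robot(("soil_dryness", "tank_level"))
    waterer.teach((0.9, 0.8), "water_plant")
    waterer.teach((0.9, 0.1), "fetch_water")
    print(waterer.act(0.7, 0.5))  # -> "water_plant"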
The Robotany team, honored as a finalist in the student competition at the upcoming Independent Games Festival China, also included producer Shawn Conrad (MIT); artists Hannah Lawler (Rhode Island School of Design), Benjamin Khan (Nanyang Technological University), and Hing Chui (Rhode Island School of Design); quality assurance lead Michelle Teo (Ngee Ann Polytechnic); designer Patrick Rodriguez (MIT); programmers Biju Joseph Jacob (Nanyang Technological University) and Daniel Ho (National University of Singapore); and audio designer Ashwin Ashley Menon (Republic Polytechnic).
Additional Information
Singapore-MIT GAMBIT Game Lab (gambit.mit.edu)
The Singapore-MIT GAMBIT Game Lab is a research collaboration between the Massachusetts Institute of Technology and the Interactive Digital Media R&D Programme Office hosted by the Media Development Authority of Singapore. The lab experiments with the theory, aesthetics, culture, craft, legacy, technology and play of games, developing, sharing, and deploying prototypes, findings and best practices to challenge and shape global game research and industry. GAMBIT builds collaborations between Singapore institutions of higher learning and MIT departments to identify and solve research problems using a multi-disciplinary approach that can be applied by Singapore's digital game industry.
Media Development Authority (www.mda.gov.sg)
The Media Development Authority of Singapore promotes the growth of globally competitive film, television, radio, publishing, music, games, animation and interactive digital media industries. It also regulates the media sector to safeguard the interests of consumers, and promotes a connected society.
Resources
Video trailer
http://www.youtube.com/watch?v=K8zSY5kMscI
Poster (PDF)
http://gambit.mit.edu/images/a4_robotany.pdf
Poster thumbnail
http://gambit.mit.edu/images/a4_robotany_tmb.jpg
Gameplay image
http://gambit.mit.edu/images/robotany5.jpg