Singapore-MIT GAMBIT Game Lab
Updates
GumBeat debuts!

GumBeat, the latest of the games created for the 2008 GAMBIT Summer program, is now available for download. Team PanopXis really brought it home with this one (of course, I'm a bit biased: I was one of the product owners for this team, alongside Matthew Weise). So now, please...

  • Take on the role of a young woman exploring her personal boundaries... no, that's not it.
  • Challenge a corrupt government bent on security at all costs... nah, still not quite right.
  • Blow big bubbles to persuade your peers that fun and joy needn't be opposed to civic order... yes!

Master our mastication-engine and gleefully guide a cohort of cavorting citizens past the police in order to persuade city hall to relax its War on Snacks in GumBeat!

I am so proud of team PanopXis. I think their focus on user testing, quick iteration, and willingness to cut features that were in but broken is what really let them shine. Though all of them showed a lot of talent and determination and excelled in their development work, I'm only going to be able to discuss how they handled testing in this post. Even in that one arena, though, they made a couple of decisions that really improved the quality of the game, and I'd like to draw attention to them in the hopes that they'll be useful to someone else working with teams of a similar size and scope.

Team PanopXis

During the prototyping stage, they zeroed in on some basic concepts and hooks that would persist throughout the entire project: gum, police, impressing crowds, popping bubbles, and tagging NPCs with gum. Once they'd established that basic context, Nick Ristuccia designed several different mockups. Perhaps the most impressive was the live-action version of GumBeat they ran: students sneaking through the lab, lifting sodas, and smuggling them back to their lab. That physical, tangible, active test session gave them access to a lot of mechanics that survived the development process and persist in the game you can download today. It was followed by numerous board and paper versions of the game, which let them experiment with different mechanics and verbs.

Rather than zooming out to the large-scale strategic view right away, they homed in on what was spatially and physically intuitive. A lot of different options emerged, though how much work each would take to implement wasn't always obvious. It also meant that when we needed something new, we had a fairly large set of ideas we'd already looked at and could draw on later, a kind of persistent brainstorming session we could refer back to. Moving from physical prototypes to digital editions can create challenges with the interface and the flow of feedback to the player: a designer standing by and saying "and now this happens" clearly communicates differently than the flashing bells and lights of a graphical interface.

Early on, the team decided to bring many of their design concepts to the entire team, so that everyone would understand the goals and decisions; this can be difficult for large teams, but on a small team, especially one dealing with a fluid design, it proved a great choice. The programmers and artists knew what the designer was looking for, the designer knew what QA would be testing, and QA knew what everyone else needed feedback on, eliminating a lot of the second-guessing that can hamstring an otherwise agile process. The Scrummaster, Sharon Lynn Chu, set very high expectations for keeping lines of communication open within the team, and by the end of the cycle it paid off. Keeping the trans-disciplinary work of building a game transparent and open to feedback risks having too many cooks, but Sharon kept the team focused on what their particular expertise could offer and how they would be affected by changes, quickly surfacing the problems and solutions that crop up in any implementation of a game design.

Another particularly helpful decision came when the QA lead on the project, Jun-Cheng Kim, implemented survey questionnaires for the playtesters. I gulped at first: these things can be unwieldy and ineffective even when used by professionals. However, his were short, and the questions were individually focused while covering a large cross-section of the game: how players felt about specific mechanics, what they perceived as the overall goal, and what they identified with in the main character. The surveys didn't just inform the team through their results; they also gave the team direction on what to observe and ask players during playtests. After a playtest session, JC would review all of the completed surveys and present the results to the team, who would come up with an itemized set of changes and responses. Efficient changes that solved multiple feedback issues or addressed large inconsistencies were highly valued, while Matt and the Dev Staff gave excellent advice about how to handle particularly thorny complaints. Wherever we weren't sure how to proceed, or whether we'd nailed something strongly enough, we found answers by dropping the game in front of a non-developer and seeing if it actually worked.

One question I still have: we decided it made more sense to go for a combination of multiple-choice and fill-in-the-blank responses, since yes/no questions felt very leading. Asking too many leading questions, I felt, would miss the point of the survey, which was to create a record of the player's thought processes and experience. Focused questions, like ranking features or asking whether players reached specific points in the game, gave the team the concentrated feedback they were looking for, while open-ended questions let them discover blind spots. Someone with a stronger background in ethnographic surveys or HCI might have a lot of knowledge that would strengthen this kind of data gathering.
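One nice side effect of mixing closed and open questions is that the review step JC ran after each session is cheap to automate: the multiple-choice answers tally mechanically, while the free-text answers get collected for the team to read and discuss. As a rough sketch (the question names and sample answers here are invented for illustration, not the team's actual questionnaire):

```python
from collections import Counter

# Hypothetical playtest survey responses; field names and values are
# illustrative only, not taken from the actual GumBeat surveys.
responses = [
    {"favorite_mechanic": "bubble popping", "reached_city_hall": "yes",
     "open_feedback": "Police pathing felt unfair near the plaza."},
    {"favorite_mechanic": "tagging NPCs", "reached_city_hall": "no",
     "open_feedback": "I didn't realize bubbles attracted citizens."},
    {"favorite_mechanic": "bubble popping", "reached_city_hall": "yes",
     "open_feedback": ""},
]

def summarize(responses, choice_fields):
    """Tally multiple-choice fields; collect non-empty open-ended answers."""
    tallies = {field: Counter(r[field] for r in responses)
               for field in choice_fields}
    open_answers = [r["open_feedback"] for r in responses if r["open_feedback"]]
    return tallies, open_answers

tallies, open_answers = summarize(
    responses, ["favorite_mechanic", "reached_city_hall"])
print(tallies["favorite_mechanic"].most_common(1))  # → [('bubble popping', 2)]
```

The closed questions produce numbers the team can track across sessions; the open answers stay raw, on the theory (as above) that summarizing them too early would erase exactly the blind spots they exist to catch.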

Lastly, I'd like to point to one result of all this testing, a result of which I'm particularly proud. One of the successes of the game was the establishment of several different goals for players. The narrative has a single driving goal and a short epilogue, but the way multiple gameplay systems combine enables a few open-ended challenges. Feel free to collect not just 10 but 20 or even all 30 citizens; try to max out the city's happiness (no, I don't know exactly what that means, philosophically, but roll with it); or trick the police officers into assaulting each other as supposed chiclet activists. As The Only Haven You Can Trust notes, these kinds of varied goal structures can be a powerful hook in a game's design, and I believe this sort of gameplay is a direct result of the player-focused testing strategy that PanopXis maintained as a core principle of development.
