9:00am: Flash Forward
Most of the speakers were given 45 seconds to pitch their talk; it was well run, and people did a good job. I learned a little more about why the schedule seems more meh than normal to me this year: it turns out to be full of talks about Uncharted and Saints Row, neither of which I particularly care about!
Which raises the question of what I do care about this year. Ideally, I’d be listening to my soul, but I think I’m not quite there yet. Still: panels bad, vague translated talks bad, geeky technical talks good but I’m not involved in those details enough for them to work for me this year. (Geeky team organization talks, though…)
And Flash Forward did change my plans for two or three time slots this year. So that’s something. And last year’s Nintendo keynote was quite disappointing, so I’m happy with this as an experiment.
11:00am: Margaret Robertson, The Gamification of Death
My favorite talk of the day; I’ve broken it out to a separate blog entry.
12:05pm: DB Cooper, Take That 2.0: Techniques and Skills for Exertion Sounds for Video Games
I went to this one on a lark after hearing about it in the Flash Forward session, and I’m glad I did. It was great to see a live demonstration of voice acting, and I imagine (and Sarah Elmaleh, who would know, agreed) that the advice given was very practical.
When hitting somebody: your jaw is shut, since you’re in control. There are some exceptions involving yelling (e.g. a kiai in a martial arts attack). You put consonants at the front and vowels at the end; there are various choices for the starting consonants, and you use violent vowel sounds at the end. (I didn’t copy them down, but she gave specific examples.)
When you’re being hit: your jaw falls open, you’re surprised, it hurts. Your tongue is relaxed. You make different sounds depending on the hit: with body hits the air comes out, whereas hits on extremities lead to sounds of pain. This time, the consonant goes at the end. It’s hard to imagine being hit, so she had the voice actor hit himself with a 2-pound weight. Then she hit him on the arm: extremity strikes are an insult to that part of your body, and it stings. A-sounds.
To get a big sound, you need a big lungful of air. So shout with your arms open! But your arms get tired when you do that. Solution: use a stretchy thing, so you get resistance to help you make a loud sound without getting your arms as tired. Sounds come from lower down instead of just up in your throat; also, you can feel the stress in your body.
Grappling sounds. This time, she grabbed the voice actor from behind and asked him to make sounds like he was wrestling. You don’t generally have a partner when recording; again the stretchy thing comes in handy, this time something he could step on and pull up on.
Agony: make a “bloodcurdling barf”.
She closed with a dying in a fire demonstration: wow, I’m amazed nobody came running in.
2:00pm: Caleb Howard, Asking the Impossible on SSX: Creating 300 Tracks on a Ten-Track Budget
Another talk I wouldn’t have gone to if I hadn’t heard about it in the Flash Forward session; this time, though, I wasn’t so happy with that choice, and in fact almost left in the first five minutes, because it was way too slow to get started.
It was about generating the mountains and tracks for the new SSX game. Which they wanted to do a lot of! The traditional method is to lock the pipeline down, which prevents fast iteration.
In the early 90s, he did procedural generation for films. Now: games. On SSX, Todd Batty asked for the impossible, namely 300 tracks. Previous iterations started with ribbons and built a mountain around them; he started with one of those old ribbons this time and generated a mountain around it procedurally.
Generating the mountain took around 2 seconds. So lots of iteration possible: could fiddle with sliders and run it again over and over. People bought into the idea almost immediately, despite the significant change from what previous versions of the game had done.
The procedural approach:
Start with the simplest possible modular loop. Path → terrain → instances → mesh → lights → audio → effects → iterate again. Start with very simple tools in each bucket, get them in the hands of the artists ASAP, improve as you discover which limitations matter. (Went through 4.5 major revisions of the tools.)
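(To make that concrete, here’s the sort of thing I imagine that loop looking like; this is my own sketch, not code from the talk, and all the names and numbers are made up.)

```python
def generate_path(params):
    # Stub: a straight "ribbon" of waypoints heading down the mountain.
    return [(i * params["step"], -i * params["drop"]) for i in range(params["length"])]

def generate_terrain(path, params):
    # Stub: widen the ribbon into a corridor of (x, y, half_width) samples.
    return [(x, y, params["width"] / 2) for (x, y) in path]

def place_instances(terrain, params):
    # Stub: drop a "rock" at every Nth terrain sample.
    return [("rock", t[0], t[1]) for t in terrain[::params["rock_spacing"]]]

def generate_track(params):
    # The loop from the talk: path -> terrain -> instances -> (mesh, lights,
    # audio, effects would follow the same pattern) -> iterate again.
    path = generate_path(params)
    terrain = generate_terrain(path, params)
    instances = place_instances(terrain, params)
    return {"path": path, "terrain": terrain, "instances": instances}

track = generate_track({"step": 10.0, "drop": 3.0, "length": 50,
                        "width": 20.0, "rock_spacing": 5})
print(len(track["path"]), "waypoints,", len(track["instances"]), "rocks")
```

The point is that every stage is a dumb, replaceable function, so each one can be improved independently as its limitations start to matter.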
First workflow: Start with 2D manifolds. Specify the kind of track you’d like (turns, steepness), feed into a search engine driven by NASA data. Next, find paths. Find gully lines down mountains. Analyze for curvature and slope. Wanted to be able to tweak, e.g. add tunnels and bridges; hard to fit onto the topological data.
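(I didn’t write down how exactly they found those gully lines, but here’s my guess at the flavor of it, with random noise standing in for the 30m NASA elevation data: follow steepest descent down a heightmap and measure the slope along the way.)

```python
import numpy as np

# My guess at the flavor of "find gully lines, analyze for slope": follow
# steepest descent over a heightmap (random data here, standing in for the
# 30m NASA elevation data) and measure steepness along the resulting path.

rng = np.random.default_rng(0)
heights = rng.random((64, 64)).cumsum(axis=0)   # fake mountainside: higher rows are higher

def trace_gully(heights, start, max_steps=500):
    path, (r, c) = [start], start
    for _ in range(max_steps):
        # Of the 8 neighbours, step to the lowest one; stop at a local minimum.
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr or dc)
                      and 0 <= r + dr < heights.shape[0]
                      and 0 <= c + dc < heights.shape[1]]
        nr, nc = min(neighbours, key=lambda p: heights[p])
        if heights[nr, nc] >= heights[r, c]:
            break
        path.append((nr, nc))
        r, c = nr, nc
    return path

gully = trace_gully(heights, start=(63, 32))
dy, dx = np.gradient(heights)
slope = np.hypot(dx, dy)                        # per-cell steepness
rows, cols = np.array(gully).T
print(len(gully), "cells; mean slope along the gully:", slope[rows, cols].mean())
```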
Tunnels and bridges were a problem, but they decided to press forward. The 2D manifolds were causing problems anyway, so they switched to 3D voxel sets. Very memory-intensive, but easy to work with. Added editing tools: trace out curved rods (e.g. subtract one to dig into the mesh); 3D model → voxels; add in noise (to fill in details beyond the 30m data that NASA provides).
This was rich enough to let them make progress, and for the art department to use and give feedback to the tool makers.
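(Again, a toy sketch of my own to pin the idea down: a dense boolean voxel grid, a curved rod swept through it and subtracted, and some noise slapped on the surface. The real grids were obviously vastly bigger, which is where the memory problems below come from.)

```python
import numpy as np

# My toy sketch of the voxel-editing tools: a dense occupancy grid, a curved
# rod swept through it and subtracted to dig into the mesh, and noise added
# on top to fake detail finer than the source elevation data.

N = 64
x, y, z = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing="ij")

solid = z < N // 2                         # a flat slab of terrain to start from

# Sweep a small sphere along a sine-shaped path to build a curved rod...
rod = np.zeros_like(solid)
for t in np.linspace(0.0, 1.0, 200):
    cx = t * (N - 1)
    cy = N / 2 + 10 * np.sin(2 * np.pi * t)
    cz = N // 2 - 5
    rod |= (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= 4 ** 2

carved = solid & ~rod                      # ...and subtract it to dig out material

# Crude "noise" pass: a bumpy extra layer of voxels just above the flat surface.
rng = np.random.default_rng(1)
carved[:, :, N // 2] = rng.random((N, N)) > 0.5

print("occupied voxels:", int(carved.sum()), "of", N ** 3)
```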
Next problems: a uniform voxel size means high memory usage (or low detail) everywhere, and the noise led to floating islands. 130GB of memory usage for a single track, and designers wanted to make bigger tracks!
Third version of workflow: hierarchical volumes, sometimes with high-resolution voxels and sometimes with low-resolution. This let designers make full tracks, but they wanted to make still bigger ones.
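(I don’t know what data structure they actually used, but an octree-style subdivision gives the flavor; here’s a little sketch of mine where only cells near the track get refined, and everything else stays coarse.)

```python
from dataclasses import dataclass, field

# My sketch of the hierarchical-volume idea (octree-style; I don't know what
# the SSX team actually used): subdivide cells only where detail is needed,
# so memory goes to the area around the track rather than the whole mountain.

@dataclass
class Cell:
    center: tuple               # (x, y, z)
    size: float
    children: list = field(default_factory=list)

def subdivide(cell, needs_detail, min_size):
    """Recursively refine a cell wherever needs_detail(center, size) says so."""
    if cell.size <= min_size or not needs_detail(cell.center, cell.size):
        return
    half = cell.size / 2
    for dx in (-1, 1):
        for dy in (-1, 1):
            for dz in (-1, 1):
                child = Cell((cell.center[0] + dx * half / 2,
                              cell.center[1] + dy * half / 2,
                              cell.center[2] + dz * half / 2), half)
                cell.children.append(child)
                subdivide(child, needs_detail, min_size)

def count_cells(cell):
    return 1 + sum(count_cells(c) for c in cell.children)

def near_track(center, size):
    # Pretend the track runs along the x axis at y = z = 0: refine near it.
    return (center[1] ** 2 + center[2] ** 2) ** 0.5 < 10 + size

root = Cell((0.0, 0.0, 0.0), 256.0)
subdivide(root, near_track, min_size=2.0)
print(count_cells(root), "cells, versus", (256 // 2) ** 3, "uniform fine voxels")
```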
A new surfacer is crucial now; fortunately, he found a paper that explained how to write one meeting their constraints. One person could build an entire game-worthy track providing 3 minutes of gameplay within a single day.
Then lots of newcomers showed up to help finish things. But the newcomers were used to the old, non-procedural ways: they couldn’t build tracks as fast, and, more importantly, they were used to building handcrafted tracks. Eventually, they used procedural tracks to get 80% of the way and handcrafted the rest; good results, but it took significantly longer.
Fourth workflow: still too memory-intensive. Went from hierarchical voxels to point clouds. Different samples are given different significance, so they could still start in low res and then add fine touches as necessary.
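(Here’s my guess at what “samples with different significance” might look like in the simplest possible form; the numbers and the significance scale are made up.)

```python
import numpy as np

# My guess at the point-cloud idea, in the simplest possible form: every
# sample carries a significance value, so coarse work can ignore the
# low-significance points, and fine detail only costs memory where it exists.

rng = np.random.default_rng(2)

# Coarse mountain samples: low significance, spread over the whole area.
coarse = np.column_stack([rng.uniform(0, 1000, 5000),   # x
                          rng.uniform(0, 1000, 5000),   # y
                          rng.uniform(0, 200, 5000),    # z
                          np.full(5000, 0.1)])          # significance

# Hand-added detail near the track: high significance, concentrated area.
detail = np.column_stack([rng.uniform(450, 550, 2000),
                          rng.uniform(0, 1000, 2000),
                          rng.uniform(80, 120, 2000),
                          np.full(2000, 0.9)])

cloud = np.vstack([coarse, detail])

def points_at(cloud, min_significance):
    """Only the points that matter at the requested level of detail."""
    return cloud[cloud[:, 3] >= min_significance]

print("full cloud:", len(cloud), "points | coarse pass:", len(points_at(cloud, 0.5)))
```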
Final workflow:
- Path generation driven by a difficulty curve + branching probability (a rough sketch of this step follows the list). Add attributes, some derived and some specified.
- Then sweep out the curve. The tool for this allows for a varying profile to be used as you sweep down the mountain.
- Skirt generation: flesh out the world around the path, build a mountain around it.
- Surfacing from that plus hints as to where more gameplay detail will be needed.
- Place instances: rocks, trees, lights, … Procedural plus manual tweaks.
- Surface alteration, e.g. snowdrifts against rocks.
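(Here’s my toy version of the path-generation step, with a width attribute standing in for the swept profile; it’s not the actual SSX tool, and the difficulty curve and numbers are invented.)

```python
import random

# My toy version of "path generation driven by difficulty curve + branching
# probability": walk down the mountain segment by segment, let a designer-
# authored difficulty curve control how twisty and narrow each segment is,
# and occasionally split off a branch. The curve and numbers are invented.

def difficulty(t):
    # Easy start, hard middle, easier finish, over normalized position t in [0, 1].
    return 0.2 + 0.8 * (1 - abs(2 * t - 1))

def generate_path(segments=40, branch_probability=0.15, seed=3):
    rng = random.Random(seed)
    path, branch_points = [], []
    heading = 0.0
    for i in range(segments):
        t = i / (segments - 1)
        d = difficulty(t)
        heading += rng.uniform(-d, d)            # harder sections turn more sharply
        path.append({"t": t, "heading": heading, "difficulty": d,
                     "width": 30 - 20 * d})      # the swept profile narrows when hard
        if rng.random() < branch_probability:
            branch_points.append(i)              # a side route splits off here
    return path, branch_points

path, branches = generate_path()
print(f"{len(path)} segments, branches at {branches}, "
      f"narrowest width {min(p['width'] for p in path):.1f}")
```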
3:30pm: Eric Zimmerman, Let the Games Be Games: Aesthetics, Instrumentalization & Game Design
I suspect this was a rather interesting talk, but it turned into my nap of the day. Pity that didn’t happen during the previous talk instead. The few notes I took:
Motivated by a problem that’s bugging him: instrumentalization. Solution is thinking of games as an aesthetic form.
He reread Art in Theory, a book filled with manifestos and writings on the meaning of art. Art was never just about itself. Random samples from the book were always relevant.
Why games? The ludic century: games will be the dominant cultural form of the next century. Industrial age → information age → ludic age. Its themes are systems, play, and design; he focused on the first of those.
Pickup artist culture: models important parts of human nature as systems, people are reduced to instruments. Instrumentalization.
Guitar Hero gets justified as a vector for learning guitar: the learning is treated as inherently good, the game is not. That’s games being instrumentalized. Games are good in themselves because they’re an aesthetic form; if we think of games as an aesthetic form, that will be an antidote to the instrumentalization of games.
“We don’t have to be apologists for games. We should be snobs, connoisseurs.”
5:00pm: Randy Smith, Landing On Mars: Our Rocky Path to Inventing New Gameplay
This was a postmortem for Tiger Style’s new game, Waking Mars. Exploring Mars in a jetpack. Primarily growing rather than shooting. Throwing stuff, e.g. seeds. (Lots of plants.)
He started off by showing three prototypes that they experimented with after finishing their previous project; the one that provided the seed for Waking Mars was an idea called Descent, where you go deeper and deeper into a cave. They built a prototype; their conclusions: Tomb Raider already exists, and caves are boring (too few goals, rewards, etc.).
They thought an SF theme would help with the latter, though they also feared it might alienate the potential audience. Next version: Mars Descent. Hard SF; the book Our Universe was an inspiration. Wanted an environmental theme (alien plants), added a jetpack.
Started in a broken outpost, need to explore nearby cavern. Oxygen collecting mechanic to survive; headlamp mechanic kept from earlier version. Throws rocks. Eventually find a jetpack; fuel limited, but refills after landing. Lunar Lander style flight.
Plants: light plants and oxygen plants, the latter affecting the environment. Weeds eat oxygen plants but are repelled by light. Water plants. Progression gated by growing plants.
This wasn’t much fun. People liked the concept; not so much the mechanics. Time pressure and oxygen collection made people reluctant to explore. The headlamp was touch-native, but it took a small screen and filled it with black. Lack of inventory meant that missing a throw really hurt, requiring backtracking.
Doing your chores. Natural enough in a garden game, but: information density = meaningful choices divided by time (or by number of actions). And growing a plant was one meaningful choice that required lots of actions: low information density. Not inherently bad, but they thought not great for iOS: screen too small, audience too ADHD.
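(To put made-up numbers on that formula, just to illustrate the point:)

$$\text{information density} = \frac{\text{meaningful choices}}{\text{time (or number of actions)}}, \qquad \text{e.g. } \frac{1 \text{ choice}}{12 \text{ actions}} \approx 0.08,$$

where the one choice is picking a seed and a spot, and the dozen actions are all the fetching, throwing, and waiting needed to grow the plant; compare that to something where nearly every input is a distinct decision.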
Needed to pin down core gameplay. They decided to start by looking at the mid-level design: how to work with the terrain? Gate areas by growing plants? Different areas only grow certain plants? Interrelationship between plant types? (The latter made them wonder: do they need the player avatar at all, or is it better as a god game?) Environmental conditions: different plants only grow in different oxygen / nitrogen / carbon dioxide mixes?
Second playable. Still the headlamp, still the Lunar Lander jetpack, no fuel limit. Seeds show up quickly, can only grow in fertile terrain, and growth happens much more quickly. (Oxygen plants grow automatically, water plants need the headlamp.) Now there are airlock plants and an inventory. Light plants: they help you see, but fire projectiles. The idea is that you’re designing your own level out of components that help and hurt you. The headlamp repels certain types of creatures. Escorting seeds to the top of a level while dodging a bat.
This mid-level focus didn’t solve their problems, so they shifted to the low level. Ideas: a cup-and-ball plant; a cave fisher; pests; …
Why combat works: high-stakes drama, clear feedback on wins / losses / intermediate progress, and nuanced input that is meaningful (leading to depth and mastery). Meaningful = contributes toward a result a player cares about. Combat, racing, and platforming often have these qualities; he saw them in Thief, too.
With that in mind: Lunar Lander gameplay and missing throws don’t help here, because failure just means a bit of repetition, so there’s ultimately a predictable outcome with no choice. Whereas seed type/location choice gameplay involves choices between different meaningful results. (Maybe that’s the wrong example; he also talked about actions involving physics, e.g. seeds/stalactites falling.) In general, simulations are interesting, and our brains are wired to work with them.
Decided to focus on meaningful collisions. Why it works: the player has nuanced input, the physics is unpredictable but acceptable, and the meaning is clear. Collisions lead to further interactions (seeds drop, they don’t explode), leading to chains of events and emergent gameplay possibilities.
The third playable worked well; focusing on the low level was the right call. They ended up keeping a similar mid level, simplified by getting rid of the oxygen/nitrogen/carbon dioxide mix and instead just gating levels via biomass.
One focus: not caving, not gardening, yes ecosystem. Made it easier to be innovative.
(Side note: I like Randy Smith’s talks, but it sounds like Richard Lemarchand’s talk at the same time was something rather special; I guess I’ll have to put him on my “must attend” list in future years.)