Rachel’s Lab Notes

Game Development as seen through the Blum Filter

Archive for October, 2009

“Frames per second” is not relevant

with 8 comments

Insomniac’s Mike Acton created quite a stir when he claimed yesterday that keeping your game at 60fps doesn’t matter. Unfortunately, his article bases this claim only on a correlation between frame rate and final score. That correlation is of great importance to the business end of making games, but it offers little technical reasoning for why 60fps and 30fps make so little difference.

(Based on what else I read from Mike, I’m sure he’s reasoned about it alright – it’s just not part of this particular article.)

Frame rate matters at all because the update rate of the entire world is tied to the screen’s refresh rate – the game progresses “one frame” at a time. That, in turn, is done because the final draw that a world update triggers needs to be synchronized with the screen refresh to avoid tearing.
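
To make that concrete, here’s a minimal sketch of the classic loop – the function names are placeholders, not any particular engine’s API:

  // Placeholder declarations - the point is the lockstep structure.
  void ReadInput();
  void UpdateWorld(double dt);
  void RenderAndSwap();  // blocks on the vertical refresh - no tearing

  void MainLoop() {
      const double dt = 1.0 / 60.0;  // one world step per screen refresh
      for (;;) {
          ReadInput();      // input is sampled exactly once per frame
          UpdateWorld(dt);  // the entire world advances "one frame"
          RenderAndSwap();
      }
  }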

But “the world” is not a monolithic block – games do many things, and the order in which you do them matters very much. Mick West has an excellent article on what he calls “response lag” – the time it takes for a button press to finally cause a result on screen. And effectively, you measure this lag by counting how many times “the world” (or the main game loop) has to run before the input causes a visible effect.

That, in turn, means that with the same internal delay, a 30fps game takes twice as long to react to a button press as a 60fps game. Now, if your game is well engineered, that internal delay is “only” 3–4 frames. At 30fps, with 33ms per frame, that’s 100–133ms – just below the typical human reaction time of roughly 200ms. That’s why most gamers don’t complain about 30fps if it’s a well-engineered game engine.

Things start to get different once the gamer is not reacting, but planning ahead and trying to hit a button at an exact time – music games, for example, or FPS players leading a shot. Here, what matters is not so much the lag per se – we can learn to anticipate that. What matters is that the lag is constant, so there are no timing surprises.

So what matters is a predictable lag that does not exceed the human reaction time – which is just barely achievable at 30fps with a monolithic step function.

And that’s the core assumption everybody was making for the “60fps or die” argument – your step function is monolithic. It was, for a long time, with a single core synced to the screen refresh. That argument simply isn’t true any more.

We have multiple cores available that can run completely decoupled from each other [1]. That means, for example, that we can sample the controller at a much higher rate than we refresh the screen, or run the physics engine at a time step significantly shorter than the duration of a single frame. In other words, we can run components of the world at different speeds and have only the drawing synced to the screen.
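
Here’s a minimal single-threaded sketch of just the physics part – the familiar fixed-timestep pattern, with placeholder names for the platform layer:

  // Placeholders - monotonic clock, input, physics, vsync'd drawing.
  double Now();  // seconds
  void   SampleController();
  void   StepPhysics(double dt);
  void   Render();

  void MainLoop() {
      const double kPhysicsStep = 0.005;  // 5ms - several steps fit in one 16ms frame
      double accumulator = 0.0;
      double previous = Now();
      for (;;) {
          double current = Now();
          accumulator += current - previous;
          previous = current;

          SampleController();  // could run even faster on a thread of its own

          while (accumulator >= kPhysicsStep) {
              StepPhysics(kPhysicsStep);  // fixed timestep, independent of frame rate
              accumulator -= kPhysicsStep;
          }

          Render();  // only this is tied to the screen refresh
      }
  }

A threaded version decouples the components further, but the principle is the same: each subsystem picks the sampling rate it needs, and only the draw waits for the refresh.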

The tricky issue here is that we are sampling – aliasing effects can be introduced along the way. Which is really all that “lag” is – it’s the result of discrete sampling of a continuous world.

And that is the part that surprises me most – while we have a ton of really smart guys working on keeping our lag small, nobody seems to treat it as an aliasing problem, bringing signal theory to bear on it. Am I missing something?


  [1] Multi-core is not a panacea. Any fixed overhead you incur per frame pushes you toward lower framerates: a fixed absolute cost for distributing tasks takes a larger relative chunk of a 16ms frame than of a 33ms frame. A fixed 2ms of scheduling overhead, for example, eats 12.5% of a 16ms frame but only 6% of a 33ms frame.

Written by labmistress

October 30th, 2009 at 8:43 am

Posted in Uncategorized

Structure Padding Analysis Tools

with 3 comments

One issue we always face is the fact that we don’t have enough memory. Ever. And even if we did, we like our data structures small and crunchy for performance reasons.

Sure, we pay attention to it all the time – and yet, occasionally, something slips through. So it’s nice that somebody wrote a tool – Cruncher# – to just load your PDB and examine all your structures for unnecessary padding.

And if you desire to look at ELF files (i.e. you’re under Linux, or working on a certain console), there’s pahole. Not only does it point out padding (or ‘holes in your structure’), it’s also friendly enough to tell you how many cache lines your structure will consume.
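
If you’ve never watched one of these tools complain, here’s a hypothetical example of the kind of waste they catch:

  // sizeof(Bad) is typically 24 on a 64-bit target: the compiler inserts
  // 7 bytes of padding after each char to keep the 8-byte double aligned.
  struct Bad {
      char   flag;      // 1 byte + 7 bytes padding
      double position;  // 8 bytes
      char   tag;       // 1 byte + 7 bytes tail padding
  };

  // Same members, sorted largest-first: sizeof(Good) is typically 16.
  struct Good {
      double position;  // 8 bytes
      char   flag;      // 1 byte
      char   tag;       // 1 byte + 6 bytes tail padding
  };

Eight bytes saved per instance – multiply that by a few hundred thousand instances, and it starts to matter.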

Let’s hope the two authors inspire each other!

Written by groby

October 22nd, 2009 at 8:19 am

Posted in Tool Time

Bits and Nibbles, First Edition

without comments

I’d like to be able to regularly write a full article here, but unfortunately time is a scarce resource. (Just participating in the HN discussion on the previous article took up a large part of my free time.)

So, instead, here’s “bits and nibbles” – a collection of links that are worth reading.

  • Packing Data into fp Render Targets – a quick recap if you need it

  • Crytek Presentations – lots of technical details on what the Crysis guys are doing. It’s an older link, but it’s worth adding to your library if you don’t have those presentations yet.

  • Chocolux – a GPU raytracer in WebGL. If that was gibberish to you: Run a realtime raytracer in your browser without any plug-ins.

  • Twitter Dynamics for Game Developers – If you’re new to Twitter (or haven’t even used it yet!), this is an excellent introduction to the service, the tools you might want to use, and a short list of interesting people to follow. (Me? I’m @groby.)

  • Zynga makes $500K a day – Zynga? You know, the guys who make Mafia Wars on Facebook? Not too shabby. My guess is that it’s about half the revenue Blizzard makes, with much better iteration times, and less capital outlay.

Written by groby

October 21st, 2009 at 7:09 am

Posted in Bits And Nibbles

The hidden cost of C++

with 68 comments

As a game developer, I’m concerned with performance. Yes, we’re living in next-gen land, and there’s a lot of performance to go around – but then somebody comes along and squeezes every last drop out of a platform you develop for, and you’d better do that too.

As such, I’m occasionally involved in discussions about the merits of using C++. One topic that comes up in every single discussion is the ‘hidden cost’ of using C++. And the reply is inevitably “There’s no such thing! It’s all in the source code!”

Well, yes, it is. But let me explain what I mean by it.

In C, if I have a line of code like this:

  a = func(b,c);

I can take a rough guess at the cost of the function from its name. The only other thing I need for a mental model of the performance is the call overhead – and in C, the cost of a function call is pretty much fixed. The only ‘surprise’ that can happen is that the function is inlined and thus faster than you expected.

Not so in C++. Is it a function call, a member function call, or is it an anonymous constructor? Are b and c implicitly invoking copy constructors for other classes as part of type coercion? Is that a normal assignment, or an assignment operator? Is there a cast operator involved?

And once I have answered those questions, I have to look at all the classes involved. If they have a non-empty destructor, cost is added. Should those destructors be virtual, more cost is added. Virtual cast operators? Add some more.
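
Here’s a contrived example – the class and its costs are invented purely for illustration – of how much can hide behind that one line:

  // Hypothetical class, invented for illustration.
  struct Vec {
      Vec(float v) : x(v) {}                                   // converting constructor
      Vec(const Vec& o) : x(o.x) {}                            // copy constructor
      Vec& operator=(const Vec& o) { x = o.x; return *this; }  // assignment operator
      virtual ~Vec() {}                                        // virtual destructor
      float x;
  };

  Vec func(Vec b, Vec c) { return Vec(b.x + c.x); }  // parameters taken by value

  void caller(float fb, float fc) {
      Vec a(0.0f);
      a = func(fb, fc);  // the same "simple" call as before
      // Behind that one line: two converting constructors to build the
      // by-value parameters, a temporary for the return value, one
      // assignment operator into 'a', and a destructor call for every
      // temporary on the way out - none of it visible at the call site.
  }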

As the end result, your overhead can grow dramatically. The virtual calls in particular are quite costly: the total runtime of a loop can easily vary by 10x or more based on these factors.

Of course, these costs are not really hidden – if I look at the source code, I can easily see them. The real hidden cost is that now, instead of looking at one piece of source – the function itself – I need to look at up to four different classes, plus their ancestors to find out whether a call is virtual.

That is the hidden cost. The mental model for a simple function call became incredibly large and complex, and every function call is potentially as complex. Which makes reasoning about performance a rather hard thing to do.

Worse, it makes profiling harder than necessary. All the type coercions that happen at the API level will show up as separate functions, attributed not to the callee but to the caller.

All that translates ultimately into either worse performance or longer development time. Neither one is something you like to hear about.

Written by groby

October 20th, 2009 at 7:04 am

Posted in Language

Hello World

with 3 comments

… ’cause every good project starts with “Hello World”.

Welcome to my new blog!

What’s the point of this blog? It is, in a way, my lab notebook for everything concerning game development. That’s my profession and my craft, and there’s a lot of knowledge I don’t want to lose – so I’m keeping lab notes, like every other scientist.

You’ll hear a lot about scaling our game development process, cutting back development effort without impacting the quality, and the occasional other things that cross my mind. There’ll probably be some mumbling about graphics and AI, too, because I just like those two fields.

Anyway – welcome, and I hope you enjoy being here!

Written by labmistress

October 4th, 2009 at 1:44 pm

Posted in Uncategorized