Rachel's Lab Notes

Game Development as seen through the Blum Filter

“Frames per second” is not relevant


Insomniac's Mike Acton created quite a stir when he claimed yesterday that keeping your game at 60fps doesn't matter. Unfortunately, his article bases this claim only on the correlation between frame rate and final review score. While that's of great importance to the business end of making games, there's little reasoning on the technical side why 60fps and 30fps make so little difference.

(Based on what else I read from Mike, I’m sure he’s reasoned about it alright – it’s just not part of this particular article.)

The concept of frame rate matters at all because the update rate of the entire world is tied to the refresh rate – the game progresses "one frame" at a time. That, in turn, is done because the final draw that a world update triggers needs to be synchronized with the screen refresh to avoid screen tearing.
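
That coupling can be sketched as the classic monolithic game loop. This is a toy sketch, not a real engine – `World`, `update`, and `draw` are hypothetical placeholders standing in for everything a frame does:

```python
# Sketch of a monolithic game loop: the whole world advances exactly
# one step per screen refresh, so update rate == frame rate.
# World, update, and draw are hypothetical placeholders.

REFRESH_HZ = 30          # the screen's refresh rate
DT = 1.0 / REFRESH_HZ    # the world advances one refresh per step

class World:
    def __init__(self):
        self.time = 0.0
        self.updates = 0
        self.draws = 0

    def update(self, dt):
        # Input, physics, AI -- everything -- steps here, together.
        self.time += dt
        self.updates += 1

    def draw(self):
        self.draws += 1

def run(world, frames):
    for _ in range(frames):
        world.update(DT)   # one world step...
        world.draw()       # ...and one draw, locked to the refresh

world = World()
run(world, 30)
# After one simulated second: 30 updates and 30 draws, in lockstep.
```

Because update and draw are welded together, lowering the draw rate automatically lowers the rate at which the world reacts – which is the assumption the rest of this post picks apart.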

But "the world" is not a monolithic block – games do many things, and the order in which you do them matters very much. Mick West has an excellent article on what he calls "response lag" – the time it takes for a button press to finally cause a result on screen. Effectively, you measure this lag by counting how many times you need to run "the world" (the main game loop) before the input causes a visible effect.

That, in turn, means that with the same internal delay measured in frames, a 30fps game takes twice as long to react to a button press as a 60fps game. Now, if your game is well engineered, that internal delay is "only" 3-4 frames. At 30fps, that's 100-133ms – just below typical human reaction time – which is why most gamers don't complain about 30fps if it's a well-engineered game engine.
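
The arithmetic behind that claim, as a quick sanity check (the 3-4 frame delay and the ~200ms reaction-time figure are taken from the text above, not measured):

```python
def lag_ms(frames_of_delay, fps):
    """Response lag: internal delay in frames times frame duration."""
    return frames_of_delay * 1000.0 / fps

# The same 3-4 frame internal delay, at two frame rates:
lag_30 = [lag_ms(n, 30) for n in (3, 4)]   # [100.0, ~133.3] ms
lag_60 = [lag_ms(n, 60) for n in (3, 4)]   # [50.0, ~66.7] ms

# The 30fps game takes exactly twice as long to respond...
assert all(a == 2 * b for a, b in zip(lag_30, lag_60))

# ...but even at 30fps, the lag stays under a ~200ms human reaction time.
assert max(lag_30) < 200.0
```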

Things start to look different once the gamer is not reacting, but planning ahead and trying to hit a button at an exact time. Music games, for example. Or FPS players who lead a shot. Here what matters is not so much the lag per se – we can learn to anticipate that. What matters is that the lag is constant, so there are no timing surprises.

So what matters is predictable lag that does not exceed human reaction time – which is just about achievable with 30fps and a monolithic step function.

And that's the core assumption everybody was making for the "60fps or die" argument – your step function is monolithic. It was, for a long time, with a single core synced to the screen refresh. That assumption simply isn't true any more.

We have multiple cores available that can run completely decoupled from each other [1]. That means, for example, that we can sample the controller at a much higher rate than we refresh the screen. We can run the physics engine at a time step significantly shorter than the duration of a single frame. In other words, we can run components of the world at different speeds and only have the drawing synced to the screen.
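
One common way to decouple a component from the refresh rate is a fixed-timestep accumulator. This is a sketch under assumed, illustrative rates (120Hz physics against 30fps drawing), not a prescription:

```python
# Sketch: physics steps at a fixed 120Hz while drawing happens at 30fps.
# An accumulator carries leftover time across frames, so the physics
# rate is decoupled from the screen refresh. Rates are illustrative.

PHYSICS_DT = 1.0 / 120   # fixed physics timestep
FRAME_DT = 1.0 / 30      # render frame duration

def simulate(seconds):
    physics_steps = 0
    draws = 0
    accumulator = 0.0
    for _ in range(int(seconds / FRAME_DT)):
        accumulator += FRAME_DT
        # Consume the elapsed time in fixed, physics-sized steps.
        while accumulator >= PHYSICS_DT:
            accumulator -= PHYSICS_DT
            physics_steps += 1
        draws += 1   # only the draw is tied to the refresh
    return physics_steps, draws

steps, draws = simulate(1.0)
# One simulated second: roughly 120 physics steps against 30 draws.
```

The draw still happens once per refresh, but the world underneath it no longer has to.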

The tricky issue here is that we are sampling – aliasing effects can be introduced along the way. Which is really all that “lag” is – it’s the result of discrete sampling of a continuous world.
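
That sampling view is easy to make concrete: an input event lands at some arbitrary point within a frame, but is only seen at the next sample boundary, so the added lag jitters between zero and one full frame period. A small sketch, treating the frame boundary as the sampling instant:

```python
import math

FRAME = 1.0 / 30   # sampling period: one 30fps frame, ~33.3ms

def sampling_lag(event_time):
    """Time from an input event until the next frame boundary sees it."""
    next_boundary = math.ceil(event_time / FRAME) * FRAME
    return next_boundary - event_time

# Events spread evenly across one frame period:
lags = [sampling_lag(k * FRAME / 10) for k in range(1, 10)]

# The added lag varies anywhere between 0 and a full frame...
assert all(0.0 <= lag <= FRAME for lag in lags)

# ...averaging about half a frame (~16.7ms at 30fps). That's sampling
# behaviour, not something a faster CPU alone makes go away.
avg = sum(lags) / len(lags)
assert abs(avg - FRAME / 2) < FRAME / 10
```

Sampling the input at a higher rate than the draw shrinks exactly this jitter – which is the payoff of the decoupling described above.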

And that is the part that surprises me most – while we have a ton of really smart guys working on keeping our lag small, nobody seems to treat it as an aliasing problem, bringing signal theory to bear on it. Am I missing something?

  1. Multi-core is not a panacea. Any fixed per-frame overhead penalizes higher frame rates: if distributing tasks across cores has a fixed absolute cost, that cost takes a larger relative chunk of a 16ms frame than of a 33ms frame.
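
The footnote's point in numbers (the 2ms task-distribution cost is an assumed, illustrative figure, not a measurement):

```python
# A fixed absolute cost per frame eats a larger share of the frame
# budget at higher frame rates. The 2ms overhead is an assumption
# made up for illustration.

def overhead_fraction(overhead_ms, fps):
    frame_budget_ms = 1000.0 / fps
    return overhead_ms / frame_budget_ms

share_60 = overhead_fraction(2.0, 60)   # 2ms of a ~16.7ms frame: 12%
share_30 = overhead_fraction(2.0, 30)   # 2ms of a ~33.3ms frame: 6%

# The same absolute overhead costs twice the relative budget at 60fps.
assert abs(share_60 - 2 * share_30) < 1e-12
```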

Written by labmistress

October 30th, 2009 at 8:43 am

Posted in Uncategorized