Archive for the ‘Uncategorized’ Category
Yes, I’ve been absent from this blog since forever. I can’t help it. There was lots of work to be done at work – we released a tiny little game last week. There was preparation for SIGGRAPH. No, I didn’t give a talk yet, but at least it was the first one where I got a shout-out (p. 35). Hey, it’s a start! ;)
And then there’s the biggest issue – I tried to go for a long-form blog, and it’s not working for me. Sometimes, long is good. And sometimes, I just need to throw out a short link. Like today.
If you’ve ever done cross-platform multithreading, you hate it. There’s no unified API, not all operations are supported on all platforms… in short, it sucks. Thankfully, @bjoernknafla to the rescue. He has written amp, a cross-platform threading library, and made the amp source available on github under a BSD license. And, what makes me even happier, he stuck to a C-style API.
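To illustrate what a C-style threading API buys you – an opaque handle, plain functions, a function pointer plus a context – here is a sketch in that spirit. To be clear, this is not amp’s actual interface; the names are mine, and `std::thread` stands in for the per-platform backend (pthreads, Win32 threads, …) a real library would wrap.

```cpp
#include <cassert>
#include <thread>

// Hypothetical C-style threading API: opaque handle, create/join
// functions, work expressed as a function pointer plus a context.
typedef void (*thread_func)(void* context);

struct thread_t { std::thread impl; };

// Create and launch a thread running func(context).
thread_t* thread_create(thread_func func, void* context) {
    return new thread_t{ std::thread(func, context) };
}

// Wait for the thread to finish and release its handle.
void thread_join(thread_t* t) {
    t->impl.join();
    delete t;
}
```

The point of the style: no templates or functors leak into the interface, so the same header works from C and from every C++ compiler you have to support.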
So my threading issues are solved. Both in my source code and in terms of resource starvation of this blog. Expect higher throughput!
While that’s an interesting idea – maybe for another day – I’m interested in looking at larger-scale problems than that. And since I’ve lately been itching to write a renderer again (the last one was in 1998), that’s where I started.
The goal of this project is to explore the “pressure points” in writing games in C++1, to see if in some of these areas, the pain and delay can be alleviated. To make it as painful as possible, it’s of course going to be cross-platform. That means OS X and Windows, unless somebody wants to gift me a devkit for Xenon or PS3…
And even after that first bit of code, just hoisting out the most basic platform abstractions, a few lessons are to be had.
I hope to keep the preprocessor mostly banned, except in a few very limited areas where the libraries abstract actual hardware. It causes a lot of confusion in real-world projects, so let’s try to keep it out. It’s interesting to wonder whether you could achieve the same effect without a preprocessor, with language constructs alone – and what those would be.
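For illustration, here are a few common preprocessor idioms and the plain language constructs that can stand in for them. The names are mine, not from any particular codebase:

```cpp
#include <cassert>

// #define MAX_ENTITIES 256  ->  a typed constant the debugger can see
const int kMaxEntities = 256;

// #define SQUARE(x) ((x) * (x))  ->  an inline function; arguments are
// evaluated exactly once, so SQUARE(i++)-style bugs disappear
template <typename T>
inline T Square(T x) { return x * x; }

// #define FLAG_VISIBLE 0x1  ->  enum values scoped inside a struct
struct RenderFlag { enum Value { kVisible = 0x1, kShadowCaster = 0x2 }; };
```

What these constructs can’t replace is conditional compilation per platform – which is exactly the “libraries abstract actual hardware” exception above.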
Too Useful to Be Banned
Some C++ features are too useful to be banned. Writing a render engine in pure C, as Vince suggested, is possible, but tedious. So I guess I have to loosen my stance and allow some features to creep back in.
Namespaces, for one. Sure, you can prefix every single function instead, but that’s a messy business. Namespaces carry no performance cost, so they’re in.
Member functions, too – mostly to get the convenient syntax of member function invocations. On non-virtual members, this carries a predictable cost (i.e., I can predict what code the compiler will generate without having to look up the class API). Casts and copy constructors are still out, since they can implicitly generate code that I won’t be aware of when examining code.
So, basically, C-style structs that have member functions and access protection are what’s allowed.
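A sketch of that allowed subset, with hypothetical names – member functions and access protection on top of a plain struct, but no virtuals, no casts, no copy-constructor magic:

```cpp
#include <cassert>

// Hypothetical example of the allowed subset: underneath it is a plain
// C struct; the member functions are non-virtual, so every call site
// compiles to a direct, predictable call.
class FrameCounter {
public:
    void Reset()       { count_ = 0; }  // explicit init, no constructor surprises
    void Tick()        { ++count_; }    // non-virtual: a direct call
    int  Count() const { return count_; }
private:
    int count_;                         // still just plain old data
};
```

The access protection buys encapsulation at compile time only; the generated code is identical to free functions taking a struct pointer.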
With both of these, I’m curious about their impact on compilation time.
Some features are missing from C++ that would be extremely useful:
Yes, the new standard has them – but at a high readability price.
Header files are a completely pointless waste of time, a remnant from the late ’70s. As a side effect, dropping them would allow mapping platform-specific enums to abstracted enums without having to expose the platform-specific header that contains them.
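Even with header files, the enum-mapping idea can be sketched: the public header exposes only the abstracted enum, and the implementation file privately includes the platform header and translates. The names here are mine, and the numeric values are placeholders for whatever tokens a real platform defines:

```cpp
#include <cassert>

// Public header: only the abstracted enum is visible to clients.
enum BlendMode { kBlendOpaque, kBlendAlpha, kBlendAdditive };

// Implementation file: this is where the platform header would be
// included privately, and the abstracted values translated.
int ToPlatformBlend(BlendMode mode) {
    switch (mode) {
        case kBlendOpaque:   return 0;  // e.g. the platform's "no blending" token
        case kBlendAlpha:    return 1;  // e.g. its source-alpha token
        case kBlendAdditive: return 2;  // e.g. its additive token
    }
    return 0;  // unreachable for valid input
}
```

The annoyance the post complains about is that the translation function itself must still be declared in a header – the thing a module system would make unnecessary.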
Braces/semicolons do add a lot of noise. Can I get a whitespace-scoped language, like Python?
There are many issues I don’t care about right now. Consequently, I’ll go “off the shelf” with them.
I really don’t want to write the 22nd implementation of a vector class, thank you. I’ve written enough of that. Since they’re all owned by my employers, I’m opting for an open-source one. So far, CML is the chosen one. It is, rather poignantly, making the point that this is a facility that’s sorely missing from C++ for game development.
Window System Code
While I really do want to examine the costs cross-platform code carries, some things are too gross to be touched by humans. I started out using GLUT, but its shortcomings soon pushed me toward native window handling – and that’s not what concerns me here, so I’ll be basing things on SDL for now, at least for handling the windowing system.
Memory management I haven’t needed to touch yet, but if it comes to that, there’s dlmalloc, or Fluid Studios’ memory manager (no link, since their atrocious website doesn’t have links – it’s all Flash, but you can find it by starting here), and probably many others. Unless I absolutely have to touch memory management, I don’t want to go there.
I’m not sure if that’s even relevant for much longer. I see the number of teams writing entire engines definitely shrinking, since it’s very expensive. Often, off-the-shelf engines (or something another team in your company wrote) will be good enough. Since I’m a systems-level gal, I hope to stay on a team that works on an engine, though. Hence the focus on it. ↩
To get the formalities out of the way: I work at EA, but this is my private blog. The views and opinions expressed here are mine, and mine alone. (Unless it’s something really stupid, then my cat typed it.)
Also, nothing in here reflects knowledge internal to EA. Not only am I a lowly peon in a large machine who doesn’t get told anything of importance, I certainly wouldn’t share if I was told internal info either. Professional pride and all that.
While those are valid concerns, they miss a major fact that’s looming for video games: AAA games are getting expensive enough that they just might outgrow the size of their market, core video gamers. And even if they don’t, it is a rather crowded market, and revenue only arrives in bursts every 2-4 years for each title.
What EA is doing here is branching out into a market with a far larger customer base, lower costs, and monthly revenue. And, more importantly, server-based games can’t be pirated. EA just cut out a large threat that is in the process of killing the PC market and even has significant impact on the console market. (After all, Microsoft just killed almost a million accounts on Xbox Live for hacking.)
Furthermore – assuming Playfish indeed delivers on its revenue & profit valuations – EA has traded cash-on-hand against a profitable revenue stream. And while it’s nice to sit on $2.2B cash, a positive cash-flow is a better choice in the long term.
And then there are the layoffs. While I appreciate the sentiment of many online commenters that “EA shafted their employees to buy Playfish”(paraphrased), that’s almost certainly not true. As I said above, EA has plenty of cash to go buy something. There was no need to cut jobs for that acquisition, and I don’t think there’s any relation.
Riccitello simply realizes that the console games market as is has matured and is saturated. There’s only room for so many titles, so it makes sense to focus on those with the best chances to make money.
Yes, gamers are now going to scream about the “evil EA” that just produces sequels – but I believe the sales numbers of all the excellent new IP EA has produced last year clearly tell the story. Vocal gamers profess a love for new IP, but what’s mostly getting bought is the sequel to the established title.
Leaving out Nintendo consoles – because they thrive mostly on first-party titles – let’s look at the top titles for the year so far:
- Halo 3
- Resident Evil 5
- Killzone 2
- Call of Duty: World at War
- FIFA Soccer 10
- Call of Duty: Modern Warfare 2
Notice a trend there? Every single one is a sequel. Gamers buy what they perceive as “established quality”. And I can’t really blame them – $60 is a lot of money to invest for entertainment.
To be clear: I have NO insight what titles actually have been cancelled. EA remains tight-lipped about that. For all I know, we might have cancelled only sequels and focused on new IP, although I’d consider that unlikely.
And for analysts like Pachter, who now complain that EA “didn’t spot” the underperforming titles last year: It is incredibly hard to judge what a game will be until you’re at least a year or so into execution. Making games is not formulaic. It takes a great idea, an awesome team, and a spark of magic to make a good game happen. You can control the first two, but the last one is out of your control. You’ll see it manifest at some point, but it takes some work before you know it’s there or not.
So, even though I don’t like it on a personal level, I think EA has at least the right ideas. You can certainly debate the merit of the concrete steps taken, but I believe AAA console games will shrink in importance while online gaming grows – so this seems a decent decision. Lots of things could have been done better; we certainly have internal talent capable of building social games. But we missed the boat, and that meant pressure to buy market share out of the box.
What I do have an issue with is the handling of this, though. If you have to lay off people, let them know. Then people at least know where they stand. There is still no official word on which studios and titles will be affected by those 1,500 layoffs – which means everybody is worried to some extent. It is incredibly hard to do good work if you don’t know whether you’ll still have a job the next day – and that endangers the spark of magic I mentioned, the one ingredient of a successful game that’s hardest to come by in the first place.
Insomniac’s Mike Acton created quite a stir when he claimed yesterday that keeping your game at 60fps doesn’t matter. Unfortunately, his article bases this only on the correlation between frame rate and final review score. While that’s of great importance to the business end of making games, there’s little reasoning on the technical side about why 60fps and 30fps make so little difference.
(Based on what else I read from Mike, I’m sure he’s reasoned about it alright – it’s just not part of this particular article.)
Frame rate matters at all because the update rate of the entire world is tied to the refresh rate – the game progresses “one frame” at a time. That, in turn, is done because the final draw that a world update triggers needs to be tied to the screen refresh to avoid screen tearing.
But “the world” is not a monolithic block – games do many things, and the order in which you do them matters very much. Mick West has an excellent article on what he calls “response lag” – the time it takes for a button press to finally cause a result on screen. Effectively, you measure this lag by counting how many times you need to run “the world” (the main game loop) before the input causes a visible effect.
That, in turn, means that with the same internal delay, a 30fps game takes twice as long to react to a button press as a 60fps game. Now, if your game is well engineered, that internal delay is “only” 3-4 frames. At 30fps, that’s just below the human reaction time – that’s why most gamers don’t complain about 30fps if it’s a well-engineered game engine.
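The arithmetic behind that claim, as a sketch (the function name is mine):

```cpp
#include <cassert>

// Response lag = internal pipeline depth (in frames) * frame duration.
// The pipeline depth is how many game-loop iterations an input needs
// before it becomes visible on screen.
double ResponseLagMs(int pipelineFrames, double fps) {
    return pipelineFrames * (1000.0 / fps);
}
```

With a 4-frame pipeline, that gives roughly 133 ms at 30fps and roughly 67 ms at 60fps – both under a typical ~200 ms human reaction time, which is why a well-engineered 30fps engine gets away with it.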
Things start to get different once the gamer is not reacting, but planning ahead and trying to hit a button at an exact time. Music games, for example. Or FPS players who lead a shot. Here what matters is not so much the lag per se – we can learn to anticipate that. What matters is that that lag is constant, so there are no timing surprises.
So what matters is a predictable lag that does not exceed the human reaction time – which is just about achievable at 30fps, even with a monolithic step function.
And that’s the core assumption everybody was making for the “60fps or die” argument – your step function is monolithic. It was, for a long time, with a single core synced to the screen refresh. That argument simply isn’t true any more.
We have multiple cores available that can run completely decoupled from each other1. That means, for example, that I can sample the controller at a much higher rate than I refresh the screen. We might run the physics engine at a time step significantly shorter than the duration of a single frame. In other words, we can run components of the world at different speeds and only have the drawing synced to the screen.
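One common way to decouple an update rate from the render rate is a fixed-timestep accumulator. A sketch, with the 120 Hz rate and all names being my own choices rather than anything from a specific engine:

```cpp
#include <cassert>

// Physics advances in fixed 1/120 s steps no matter how long a rendered
// frame takes; leftover time carries over in the accumulator.
const double kPhysicsStep = 1.0 / 120.0;

// Returns how many fixed physics steps to run for a frame that took
// frameDt seconds, carrying the remainder in *accumulator.
int StepsThisFrame(double frameDt, double* accumulator) {
    *accumulator += frameDt;
    int steps = 0;
    while (*accumulator >= kPhysicsStep) {
        *accumulator -= kPhysicsStep;  // consume one fixed-size step
        ++steps;
    }
    return steps;
}
```

Whether the renderer delivers a 16 ms or a 33 ms frame, physics ticks at the same fixed rate; only the number of steps taken per rendered frame changes.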
The tricky issue here is that we are sampling – aliasing effects can be introduced along the way. Which is really all that “lag” is – it’s the result of discrete sampling of a continuous world.
And that is the part that surprises me most – while we have a ton of really smart guys working on keeping our lag small, nobody seems to treat it as an aliasing problem, bringing signal theory to bear on it. Am I missing something?
Multi-core is not a panacea. Any fixed per-frame overhead weighs against higher frame rates: a fixed absolute cost for distributing tasks takes a larger relative chunk out of a 16 ms frame than out of a 33 ms frame. ↩
… ’cause every good project starts with “Hello World”.
Welcome to my new blog!
What’s the point of this blog? It is, in a way, my lab notebook for everything concerning games development. That’s my profession and my craft, and I have a lot of things that I don’t want to get lost, so I’m keeping lab notes – like every other scientist.
You’ll hear a lot about scaling our game development process, cutting back development effort without impacting the quality, and the occasional other things that cross my mind. There’ll probably be some mumbling about graphics and AI, too, because I just like those two fields.
Anyway – welcome, and I hope you enjoy being here!