Temporal sampling frequency (aka 'framerate')

Originally posted to Shawn Hargreaves Blog on MSDN, Friday, November 18, 2011

Game player:  "d00d, teh framerate totally sux0rz!"

Game developer:  "we are experiencing aliasing due to low sampling frequency along the temporal axis..."

One of the many decisions that goes into making a game is choosing a framerate.  Is it better to run at a high framerate, drawing many frames per second and thus having little time to spend on each, or to choose a lower framerate, get more time per frame, and be able to draw larger numbers of higher quality objects?  The perfect balance is a matter of heated debate, varying with the game, genre, and personal preference, but there are some widely accepted truths:

- Framerates below 30 fps feel choppy and are rarely a good idea.
- 30 fps is generally considered acceptable, and leaves twice as much time to draw each frame as 60 fps does.
- 60 fps looks and feels noticeably smoother than 30.
- Going above 60 fps brings rapidly diminishing returns, so few game developers bother.
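To put rough numbers on the time-per-frame side of that trade-off, here is a tiny back-of-the-envelope sketch; the per-object cost is an invented figure for illustration, not a benchmark:

```python
# A back-of-the-envelope look at the frame budget trade-off.  The per-object
# cost below is a made-up number purely for illustration, not a measurement.

def frame_budget_ms(fps):
    """Milliseconds of CPU/GPU time available per frame at a given framerate."""
    return 1000.0 / fps

hypothetical_cost_per_object_ms = 0.1  # invented figure for the example

for fps in (60, 30):
    budget = frame_budget_ms(fps)
    objects = int(budget / hypothetical_cost_per_object_ms)
    print(f"{fps} fps -> {budget:.1f} ms per frame -> room for ~{objects} objects")
```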

Yet movies are animated at a mere 24 fps!  Why has Hollywood chosen a framerate so much lower than most game developers?

Perhaps this is just a historical legacy, preserved for backward compatibility with decisions made in the 1920s?  But when IMAX was designed in the late 1960s, they specified new cameras, film, projectors, and screens, while keeping the 24 fps sampling frequency.  And in the early 21st century, Blu-ray and HD DVD fought an entire format war during the transition to high definition digital video, but oh look, still 24 fps.  It sure looks like the movie world just doesn't see any reason to go higher, similar to how few game developers care to go above 60 fps.

Ok, next theory: perhaps the difference is because games are interactive, while movies are just prerecorded entertainment?  The lower the framerate, the more latency there will be between providing an input and seeing the resulting change on screen.  The pause button on my DVR remote has ~0.5 sec latency, which is irrelevant when watching my favorite romantic comedy but would be a showstopper when trying to nail a Halo headshot.
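To see why framerate and latency are tied together, here is a deliberately simplified model of input-to-screen delay; real pipelines add driver buffering and display lag on top, so treat these figures as optimistic illustrations rather than measurements:

```python
# Simplified latency model: input is sampled at the start of a frame, the
# frame takes one full frame period to simulate and render, and the result is
# shown at the next refresh.  Real pipelines add more stages on top of this.

def input_latency_ms(fps):
    frame_ms = 1000.0 / fps
    best = frame_ms        # input arrives just before it is sampled
    worst = 2 * frame_ms   # input just misses the sample, waits a whole frame
    return best, worst

for fps in (60, 30):
    best, worst = input_latency_ms(fps)
    print(f"{fps} fps: roughly {best:.0f}-{worst:.0f} ms from input to screen")
```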

And a final theory: perhaps the difference is due to aliasing?  Realtime graphics are usually point sampled along the time axis, as we render individual frames based on the state of the game at a single moment in time, with no consideration of what came before or what will happen next.  Movies, on the other hand, are beautifully antialiased, as the physical nature of a camera accumulates all light that reaches the sensor while the shutter is open.  We've all seen the resulting blurry photos when we try to snap something that is moving too quickly, or fail to hold the camera properly still.  Motion blur is usually considered a flaw when it shows up uninvited in our vacation snapshots, but when capturing video it provides wonderfully high quality temporal antialiasing.
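As a toy illustration of the difference, here is a sketch that compares one instantaneous sample per frame with an average of several samples spread across a simulated shutter interval.  The position(t) function is a stand-in for "render the scene as it looks at time t"; a real renderer would blend whole images (or approximate the blur in a shader), but the averaging idea is the same.

```python
# Point sampling vs. integrating over a camera shutter, in one dimension.

def position(t):
    """Object moving at a constant 10 units per second."""
    return 10.0 * t

def point_sample(frame, fps):
    # Realtime graphics: one instantaneous sample per frame.
    return position(frame / fps)

def shutter_average(frame, fps, subsamples=8, shutter=0.5):
    # Camera-style: average samples taken while the shutter is open
    # (shutter=0.5 keeps it open for half the frame period, a common setting).
    start = frame / fps
    open_for = shutter / fps
    samples = [position(start + open_for * i / (subsamples - 1))
               for i in range(subsamples)]
    return sum(samples) / len(samples)

fps = 30
for frame in range(3):
    print(f"frame {frame}: point sample = {point_sample(frame, fps):5.2f}, "
          f"shutter average = {shutter_average(frame, fps):5.2f}")
```

Cranking up the number of subsamples approaches the box filter a real shutter applies over time; actual games use much cheaper approximations than brute-force temporal supersampling.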

So which theory is correct?  Do games care more than movies about framerate because of latency, or because of aliasing?

We can find out with a straightforward experiment.  Write a game that runs at 60 fps.  Make another version of the same game that runs at 30 fps.  Make a third version at 30 fps with super high quality temporal antialiasing (aka motion blur).  Get some people to play all three versions.  Get more people to watch the first people playing.  Compare their reactions.
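One hypothetical way to build those three versions is to step the simulation at a high internal rate and derive each presentation from it.  The sketch below averages positions rather than blending rendered images, so it only shows the frame-selection logic, not a real renderer:

```python
# Deriving the three test versions from one simulation stepped at 120 Hz:
# every 2nd state gives 60 fps, every 4th gives 30 fps, and the average of
# each group of 4 gives "30 fps with motion blur".

SIM_HZ = 120
states = [10.0 * (step / SIM_HZ) for step in range(SIM_HZ)]  # one second of motion

version_60fps = states[::2]                     # 60 point-sampled frames
version_30fps = states[::4]                     # 30 point-sampled frames
version_30fps_blur = [sum(states[i:i + 4]) / 4  # 30 shutter-averaged frames
                      for i in range(0, SIM_HZ, 4)]

print(len(version_60fps), len(version_30fps), len(version_30fps_blur))
```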

If you try this experiment, you will find the results depend on which game you choose to test with.  Many observers do indeed think motion-blurred 30 fps looks the same as 60 fps, so temporal aliasing is clearly important.  Players, too, find the two equivalent in some games, while reporting a big difference in others.  So the significance of latency depends on the game in question.

Sensitivity to latency is directly proportional to how hands-on the input mechanism is.  When you move a mouse, even the slightest lag in cursor motion feels terrible (which is why the OS draws the pointer with a dedicated hardware cursor, allowing it to update at a higher rate than the rest of whatever app is using it).  Likewise for looking around in an FPS, or pinch-zooming on a touch screen.  You are directly manipulating something, so you expect it to respond straight away and for the motion to feel pinned to your finger.  Less direct control schemes, such as pressing a fire button, moving around in an FPS, driving a vehicle, or clicking on a unit in an RTS, can tolerate higher latencies.  The more indirect things become, the less latency matters, which is why third-person games can often tolerate lower framerates than would be acceptable in an FPS.

What can we learn from all this rambling?

- Temporal aliasing matters: with good motion blur, 30 fps can look much the same as 60 fps to someone who is just watching.
- Latency matters too, but how much depends on the game: the more hands-on the input, the more a low framerate hurts.
- So there is no single "right" framerate; it depends on what the player's hands are doing as much as on what is happening on screen.

Yeah, yeah, so I should talk about how to actually implement motion blur.  Next time...
