There is no plateau
Posted at 22:18 on 17th August 2006

At some point we’ve all read (on a magazine letters page, forum or news site) a version of the following argument:

“The move from 2D to 3D graphics was a big step. The only advance that’s possible from 3D graphics is prettier 3D graphics. Ergo, graphics technology has plateaued and I was right to buy those 600 shellacked alligators stuffed with sawdust* instead of a ‘next generation’ console/graphics card.”

In fact, let’s take a specific example:

“It’s finally happened, something that some people have been predicting for a time: we’ve reached the plateau of how impressive graphics can be. In this generation, there’s not much more to be done. Realism has been achieved, for better or worse (…) And none of it, especially the characters who are supposed to be human, look (sic) …human.”

– Jane Pinckard

But then making fun of Pinckard’s inane wittering is like shooting fish in a barrel of dead fish. (“Gamers, for the most part, don’t care to read about how a game makes you feel.”) More importantly, the ‘plateau’ argument has also been made by more perceptive critics and even a few developers (although not usually those whose products rely on cutting edge graphics as a major selling point).

Whenever a new console generation appears on the horizon you can count on some commentators to point-blank refuse to acknowledge that, in time, developers are going to figure out novel (and genuinely progressive) uses for all that additional processing power. Even the cynical practice of making “the same, but prettier” games as the last generation almost inevitably leads developers to discover and experiment with new possibilities. Advances in visual representation will continue to play a large part in this.

The belief that graphics technology (at least, of the kind that could have a beneficial effect on computer games) has plateaued is a result of subscribing to one or more of the following red herrings:

  1. It is impossible to conceive of better graphics than we can currently generate in realtime.
  2. There has not been any significant advancement in graphics in the last several years.
  3. Some new factor has now come into play that negates Moore’s Law.

As good as it gets?

The first fallacy shouldn’t really require much effort to debunk, but we should remember that it’s easy to fall into the trap of holding computer graphics to a different standard of scrutiny simply because they have historically been easy to distinguish from photographic or hand-made images and video.

The plateauist (to nearly coin a phrase) believes that we are experiencing diminishing returns from the concept of making 3D graphics out of polygons. (We’ll return to this in discussing fallacy #2.) Increasing the number of triangles in a scene is the only metric that they consider valuable, as its meteoric rise has been the yardstick by which 3D graphics innovation has been measured since the technology was invented. Any more sophisticated use of GPU fill rate is dismissed as frippery.

The most likely reason for this lack of foresight is a lack of exposure to any computer graphics beyond those found in games available at retail for the current generation of consoles. Before the arrival of the Xbox 360, this situation automatically turned the clock back several years in terms of hardware power. (A similar effect is experienced in PC games, where the commercial need to support two or three generations of legacy GPUs greatly restricts the use of visual effects which cannot easily be removed or downgraded on older hardware.) This is where the confusion arises – being stuck at a fixed level of hardware, one only gets to see very gradual improvement towards the end of that hardware’s life. Given this limited experience, it’s difficult to imagine increased hardware power offering anything more than what we’re already familiar with, plus minor enhancements.

A more accurate way to view the progression of graphics technology over the last decade or so would be that we were iteratively building machines fast enough to actually compose units of raw pixel-pushing mathematics (I think the term I’m looking for is ‘fragment programs’, although I’ll freely admit I only have a tenuous grasp of the inner workings of modern GPUs) into a meaningful language. We have had to make individual triangles small and plentiful and subtly shaded enough to be imperceptible before we can tackle the really big challenges in computer graphics.
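
Since I’ve invoked fragment programs, the basic idea (as I understand it) can be sketched in a few lines. The toy Python below — every name and number in it is my own invention, purely for illustration — evaluates a small function independently for each pixel, composing raw per-pixel maths (here, simple Lambertian shading of a fake sphere) into an image. A real GPU runs vast numbers of these tiny programs in parallel, not sequentially like this.

```python
# A toy illustration of the 'fragment program' idea: a small function of
# pixel position, run once per pixel, composing per-pixel maths into an
# image. (Illustrative sketch only -- not how GPU internals actually work.)

def fragment(x, y, width, height):
    """Shade one pixel: treat the screen as a sphere and light it."""
    u = 2.0 * x / (width - 1) - 1.0      # map pixel to [-1, 1]
    v = 2.0 * y / (height - 1) - 1.0
    r2 = u * u + v * v
    if r2 > 1.0:
        return 0.0                       # outside the sphere: background
    nz = (1.0 - r2) ** 0.5               # z of the sphere's surface normal
    light = (0.577, 0.577, 0.577)        # a roughly normalised light direction
    # Lambertian shading: brightness = max(0, N . L)
    return max(0.0, u * light[0] + v * light[1] + nz * light[2])

def render(width=8, height=8):
    """Run the 'fragment program' over every pixel of a small grid."""
    return [[fragment(x, y, width, height) for x in range(width)]
            for y in range(height)]
```

The point is only that once triangles are small and numerous enough, the interesting work shifts into what each of these per-pixel functions computes.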

These challenges are innumerable. Convincing human characters could be considered the Holy Grail. We can neither sculpt nor animate even a reasonable approximation of a person (or animal) using state of the art offline rendering, let alone in realtime. Then there’s water, cloth, vegetation. Reflections, which subtly affect almost every surface we see around us, are limited in games to dramatically placed mirrors and ice hockey rinks. We are taking our first baby steps into physics in general.

While gamers may wax lyrical about the landscapes depicted in Oblivion or S.T.A.L.K.E.R., even the best of these is little more than a passable imitation of a particularly featureless kind of landscape photograph. Pretty much any photograph taken outdoors, unless its view is conveniently occluded by a wall, a bush, or a particularly boxy building, could not be recreated in a game. The computational demands of simulation are as much a limiting factor as sheer rendering brute force. And in case it sounds like I am treating photorealism (itself merely a stepping stone rather than a final destination) as the only goal of computer graphics, analogous problems exist for non-photorealistic rendering too.

Even the current parameters of our display hardware are lacking – while a 1080p screen might represent the acme of home cinema, it’s not sharp enough to allow a player of a first-person game to read a newspaper held in their avatar’s hands. Even the best monitors have huge holes in their colour reproduction. Nor is 60 frames per second adequate to allow fast movement without smothering everything in calculated motion blur. (And no, your eyes are not ‘limited’ to 60 frames per second.)
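
To put rough numbers on the newspaper example — every figure below is a back-of-the-envelope assumption of mine, not a measurement, and the angle-to-pixel mapping is a crude linear approximation of a perspective projection:

```python
# Why a 1080p display can't render readable in-game newsprint (assumed numbers).

H_RES = 1920        # horizontal pixels on a 1080p display
FOV_DEG = 90        # assumed horizontal field of view of the game camera
PAPER_DEG = 30      # assumed angular width of a newspaper held at arm's length
PAPER_MM = 350      # rough width of a newspaper page, in mm
GLYPH_MM = 2        # rough height of newsprint body text, in mm

pixels_across_paper = H_RES * PAPER_DEG / FOV_DEG    # ~640 px for the whole page
pixels_per_glyph = pixels_across_paper / PAPER_MM * GLYPH_MM
print(round(pixels_per_glyph, 1))                    # ~3.7 px per letter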

I think we can safely conclude that there are still lofty targets for graphics technology to aim for. But we have not yet addressed the parts of the plateauists’ argument that concede this point while claiming to present evidence that this progression will not inevitably occur.

Stuck in a rut?

The second fallacy (the assertion that there has not been any significant advancement in graphics in the last several years) attempts to skewer our hopes of ever tackling the challenges described above by demonstrating specific (quite arbitrary) ways in which advancement in graphics technology has slowed over the past few years.

The lazy (and least convincing) method used to ‘prove’ this is to present carefully selected screenshots from two games of the same genre released several years apart and, dispensing with such trivialities as context and discussion of the technology or gameplay, simply state that they’re virtually identical. You can see a textbook example of this here (search for ‘Goldeneye’; warning: pop-ups). Sadly, I can’t criticise this article because the author has ingeniously shielded his half-baked punditry from criticism beneath a layer of KERAZY self-deprecating humour.

Such parlour tricks usually segue into the central pillar of the ‘diminishing returns’ argument – the observation used in the made-up quote at the top of this article. The conceit that the ‘paradigm shift’ from 2D to 3D representation has overshadowed all subsequent advances in 3D (or 2D) graphics technology has for years gone more or less unchallenged. The opinion that nothing that we’ve subsequently done with 3D representation is as important (or impressive) as the impact of 3D graphics broaching the mainstream in the 1990s is admittedly a subjective one, but it’s based on oversimplification.

There wasn’t a single point in time when the industry decided en masse to switch from 2D to 3D representation, and what did take place wasn’t so much a transition from one method to the other as much as an expansion of the toolset available to developers. Even now, many games gain nothing from 3D representation beyond minor aesthetic improvements (and sometimes not even that), whereas others are only practically possible in 3D.

Polygonal 3D games existed long before the release of the PS1, N64 and the first generation of 3D accelerators. It’s no surprise that the iterative process of improving graphics tech hasn’t given us any single holy shit moment to rival trading up from 16- to 32-bit consoles. And those iterative advances, while so easily dismissed individually, add up surprisingly quickly. There are games scheduled for release by Christmas that would have been inconceivable five years ago. (This holds true for pretty much any given five-year period out of the last 20-or-so years, at least.) You could pick one of the later games from the current (departing) console generation (GTA:SA, God of War, SotC, RE4, Halo 2) and try to argue that these games would lose nothing if backported to the PS1. You could do that, and you could sit down and watch the Star Wars trilogy reconstructed in ASCII art and still get a basic idea of the plot.

The determined plateauist will at this stage play the elitism card: claiming that even if graphics technology has forged ahead, these advances have had no effect on the underlying gameplay. This argument opens a tremendous can of worms, and this article is too long as it is. How much does the HDR lighting in Oblivion add to the game? Or the eye-tracking in Half-Life 2? Or the subtly captured, reactive animation of the Ganados in Resident Evil 4? Ironically, many recent improvements in graphics are designed not to draw attention to themselves, but to be better at convincing us that nothing untoward is happening. Better technology allows new situations to be simulated, but it also gives developers access to an ever-broadening palette of consciously and unconsciously observed phenomena which can heighten immersion by bypassing the filters in our brains that remind us we’re manipulating dots on a screen.

Transmuting raw processing power into summer blockbuster spectacle is not the only valid goal graphics technology can or should be aiming for, and appreciation of this fact should be enough to convince even the most ardent plateauist that graphics aren’t stagnating.

The sky is falling?

Finally we come to the third argument, the last line of defence: Even if graphics can get better, and have been getting better, who’s to say that it won’t become too expensive to keep making them better?

This conjecture cannot be disproved conclusively, of course. Given a long enough time frame and the right market conditions, it would probably be possible for the industry to paint itself into a corner, creating a gap between the cost of creating detailed visuals and the demand necessary to meet this cost. But a systemic failure like this would be a tall order. Back in the early 1990s (and we seem to be going back there an awful lot in this article), PC gaming, although hardly starved for investment and developer attention, was thrashing around increasingly hopelessly for several years before a tiny shareware team brought out Doom. These days, the problems posed by ballooning hardware power and development budgets are constantly being chipped away at by thousands of developers.

Artists are capable of creating visuals of a quality far beyond anything we have the capacity to display in realtime. You may recall that when the Dreamcast arrived, Namco’s Soul Edge team, faced with the prospect of having to comprehend more than 256 colours, and the surely crippling expense incurred by no longer having to make a game in an environment so restricted it was like building a ship in a bottle, didn’t commit ritual suicide. Instead, they made the incredible home version of Soul Calibur.

It’s heart-warming stories like this that make me sceptical of developers who claim that (all) development costs (everywhere) are spiralling out of control and that going back to bedroom coding is the only solution. Of course nobody wants excess for the sake of it, but I can’t shake the feeling that most of these rants against big-budget games are really misdirected (generalised) rants against (specific instances of) bad management. Tools and middleware are constantly getting better (again, this is an area where I have minimal first-hand experience, but nosing around ZBrush’s website and seeing testimonials from Weta Workshop, Rick Baker and Ken ‘Protofiend’ Scott suggests there’s little to worry about on that front).

Closing comments

It seems that many people adopt this argument not based on any sober analysis of the facts, but purely to affect pretensions of ideological purity. There will be time enough for that kind of renunciation once technology has actually progressed far enough to make a decent stab at recreating a high percentage of the visual challenges that our environment and imagination can spawn.

People ran out of the theatre (well, tent, usually) in terror at the first moving pictures. Just because subsequent films haven’t relied on absolute ignorance of the technology to provoke any kind of audience reaction doesn’t mean that advances in film-making techniques have been futile.

Once the technology to ‘do’ convincing people and water (and the Taj Mahal and raspberry jam and polyester and the horsehead nebula) is freely available to everyone who wants to make a game, then we’ll have reached a ‘de facto’ plateau, at least for most things we want to make.

To draw another parallel with the film world, the purist Dogme 95 manifesto doesn’t limit practitioners to the technology used by the Lumière brothers. Let’s actually figure out how to have our gratuitous excesses before we start swearing them off.

So in summary, provided we’re willing to put a little trust in the idea that developers aren’t all idiots with no imaginations, I don’t think we’re in any danger of reaching a plateau. If you give in to temptation and buy that shiny new piece of pixel-pushing hardware, you will eventually see it put to good use (which doesn’t excuse the silliness of being an early adopter, but is perhaps some consolation). You might not get your socks blown off by some single quantum leap in graphics, but you will be able to go to places that would remain off limits if you stuck with your old hardware forever.

*with apologies to Steve Purcell
