3D graphics performance is all about control

[Image: a desert tortoise with the caption "I used to race Achilles, then I took an arrow to the knee"]

The Elder Scrolls Online, ZeniMax’s upcoming MMOG take on the Elder Scrolls universe, is going to have nowhere near the sort of graphical fidelity that you’re used to from, say, Skyrim. And there are good reasons for that – reasons that apply to pretty much every graphical title you’re likely to encounter, from IMVU and Second Life to … well, Skyrim and The Elder Scrolls Online.

Leaving aside the individual hardware for just a moment, perhaps the single most important factor in graphical performance is control.

The customer has 3D graphics hardware, and your software contains a tuned and optimised 3D graphics rendering engine and pipeline. That’s great.

However, the best 3D graphics engine on the best 3D graphics hardware can still run about as well as a wounded tortoise if you don’t exercise proper control.

Titles that perform well exercise considerable content control. They know their limits and they work to them. They control what you see, when you see it and how much of it you can see, all the while allowing for the fact that the user herself is a bit of a random factor. Most of the time, that high level of control is relatively invisible. Things look good, it all runs at nice frame-rates and anything that doesn’t look good or perform well has been carefully trimmed away or hidden.

That’s under ideal circumstances. Start using the user’s network hardware and graphical performance immediately drops by a noticeable percentage. If they’re on wireless, it drops even more – independently of the speed of the network.

This gets stickier when you add in large numbers of users, who can be spread far and wide, or all trying to crowd into the same location at once.

Worst of all for 3D graphics performance is user-created content.

When the content is first-party, the developer has that control that we spoke of earlier. Every 3D engine has its strengths and weaknesses. You create content for its strengths, and you avoid showing its weaknesses. That often means teams of creators, working together, negotiating the whens, hows, wheres and whats of content display, deciding which bits can get more detail, and who has to cut back for that detail to exist. Even for teams of professionals with careful measuring and testing, it doesn’t always work out as well as it could.

You see it a lot in each generation of games consoles, as games start looking better and smoother even though the hardware isn’t changing. Everyone’s learning how to hide what doesn’t work well, and show off what does. The games in the last year of a console’s lifetime are almost always far superior in performance to those from the first year – and they exercise the most control.

For The Elder Scrolls Online, this trade-off means cutting back on detail, cutting back on textures, and cutting back on polygons, relying instead on a more stylised kind of representation, like you might find in any mass-market MMOG. It’s a strategy that hasn’t exactly resonated with fans of the recent single-player titles in the series, who don’t understand why it makes such a difference. I’ve seen it described variously as looking dated and flat.

Relying on the user’s network hardware, and letting hundreds or thousands of people wander around relatively freely, is a recipe for disastrous frame-rates unless you exert tighter control somewhere else; in this case, over the level of detail.

General purpose virtual environments, like Second Life, are at the other end of the content-development scale. There’s no content control as far as 3D graphics performance is concerned. Content can appear at any time. It can be transformed, moved, replicated or removed at any moment. There’s no careful planning of interactions between disparate pieces of content. Some content plays well with other content, and some of it just doesn’t.

Each individual user – the random factor that I mentioned before – is largely responsible for what is seen and what is juxtaposed with what else. Control over 3D graphics performance simply doesn’t exist. Worse, most of us just end up cranking up our graphics settings until performance drops to a point we find just usable enough.

I’m commonly asked why Second Life doesn’t just switch to one of the commercial 3D engines. You know, the sort that deliver nice frame-rates and beautiful content for single-player games that don’t tickle the network card much.

And you know, it could, but at the end of the day, it’s the network and the content that would still end up making it run like a wounded tortoise. The control needed to make them run well doesn’t exist – and quite possibly can’t exist. At least not until our systems are so far in advance of the kind of content we’re creating that the content looks… well, dated, stylised and flat.

15 thoughts on “3D graphics performance is all about control”

  1. Do you think we’re gonna have software consistently beating the Turing test for random topic natural language conversation before we get machines optimizing 3d environments better than human experts?

      1. But it can’t be distilled into algorithms that a program can follow without further human supervision?

        It seems so much could be done automatically: baking multilayered static textures, creating lower-resolution versions of textures to display at greater distances and smaller sizes, merging neighboring coplanar polygons, unioning CSG for overlapping objects to remove hidden vertices, baking static lights onto static geometry, merging vertices that are too close to each other, and so on. Perhaps even “horizon impostors”: rendering things past a certain distance into a lower-resolution, lower-framerate spherical (or cylindrical or cubical or whatever) panorama (’cause after enough distance the parallax is so small it’s almost indistinguishable from a flat surface below a certain speed). And if, even using parametric content, automatically optimized meshes and the like, the amount of data is still a tad too big, in some cases it might even be possible to compress it by doing away with redundant information: analyze similar content, send only one complete copy from each group, and for all the other items in the group send only the delta from that reference.

        Which of the things humans do when optimizing realtime 3D content couldn’t be done automatically? (To make one of them concrete, there’s a rough sketch of the vertex-merging case below.)
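        A minimal sketch in Python of the “merging vertices that are too close” item, assuming made-up names and an illustrative default tolerance; a real asset pipeline would also have to respect normals, UV seams and material boundaries before welding anything:

        ```python
        from itertools import product

        def weld_vertices(vertices, epsilon=1e-4):
            """Merge vertices that are closer together than `epsilon`.

            Returns (unique, remap): `unique` is the merged vertex list, and
            `remap[i]` is the index in `unique` that replaces original vertex i,
            so triangle index lists can be rewritten afterwards.
            """
            cell = epsilon            # grid cell size; close points land in adjacent cells
            grid = {}                 # (ix, iy, iz) -> indices into `unique`
            unique, remap = [], []

            for x, y, z in vertices:
                key = (int(x // cell), int(y // cell), int(z // cell))
                found = None
                # A vertex within epsilon can only be in this cell or one of its 26 neighbours.
                for dx, dy, dz in product((-1, 0, 1), repeat=3):
                    for j in grid.get((key[0] + dx, key[1] + dy, key[2] + dz), ()):
                        ux, uy, uz = unique[j]
                        if (ux - x) ** 2 + (uy - y) ** 2 + (uz - z) ** 2 <= epsilon ** 2:
                            found = j
                            break
                    if found is not None:
                        break
                if found is None:
                    found = len(unique)
                    unique.append((x, y, z))
                    grid.setdefault(key, []).append(found)
                remap.append(found)

            return unique, remap
        ```

        The spatial hash keeps the merge roughly linear in the vertex count instead of comparing every pair, which is what makes it practical to run automatically over arbitrary content.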

        1. Many of those things are already done by 3D engines, including Second Life’s. That’s pretty much what we’ve been doing for the last decade or so (where they work – they don’t for all cases).

          Going beyond these incremental efficiencies in any significant way… well, that’s hard. Very hard. Each year or two someone works out a new little trick that makes things work a little better, it gets incorporated into most every graphics engine where it makes sense, and we wait for the next little spark. It’s a slow business, short on big leaps.

          1. In the end you didn’t answer my main question: of the things that humans working on a game without player-editable content do to make it perform faster than games where players get a bigger say in what goes on, which couldn’t be turned into an algorithm a program can apply automatically?

          2. It’s all case-by-case. There is no specific technique, only knowledge and experience. It’s different for every model and every texture. You craft each thing to look right in its proper place, at the proper time, at the necessary view-angles. If you have to, you redesign the scene, or force the camera.

            Oh, and you test and test and test, clocking frame-rates and rendering performance so that when the user comes into the scene, it doesn’t suddenly bog them down. That’s why these engines achieve snappy performance: the content is made and tuned specifically. If a given model or piece of geometry bogs down a given scene, you throw it out and use something else. If the scene doesn’t perform well enough, you discard models and textures, redesign, and create new content until it does.
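            The “clocking frame-rates” part, in miniature, looks something like the hypothetical sketch below, where render_frame is just a stand-in for whatever the engine actually exposes and 16.6 ms is the usual 60 fps budget:

            ```python
            import time

            def profile_scene(render_frame, frames=600, budget_ms=16.6):
                """Time repeated calls to render_frame() for the scene under test.

                Returns (average ms, 99th-percentile ms, frames over budget).
                """
                samples = []
                for _ in range(frames):
                    start = time.perf_counter()
                    render_frame()                     # draw one frame of the scene
                    samples.append((time.perf_counter() - start) * 1000.0)

                samples.sort()
                average = sum(samples) / len(samples)
                p99 = samples[int(len(samples) * 0.99)]   # occasional hitches matter more than the mean
                over_budget = sum(1 for s in samples if s > budget_ms)
                return average, p99, over_budget
            ```

            If the 99th-percentile frame blows the budget, that’s the cue to throw something out or redesign the scene until it fits.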

  2. Blue Mars is a perfect case study for this.

    They got around a lot of the network issues with streaming user-generated content by not requiring the majority of it to be streamed, via City downloads. But they were still obviously boxed in on how to extend ‘control’ to people who wanted to run with lower settings, and conceded by making a ‘Lite’ client – which was neutered to the point of mostly depending on Google Maps.

  3. The multitude of different graphics boards in PCs has been key to the rise of consoles; with a console, the designer at least knows what he is designing for. A core set of standards, accepted by manufacturers and game designers and updated regularly, would help enormously.

  4. It is hard to compare Blue Mars and Second Life. BM kept control of the content coming into the game. SL provides a high degree of design freedom.

    The biggest difference is that BM had a few gigabytes of content. SL currently has over 192 terabytes of content.
