I don’t know why I am so interested in these off-the-path rendering techniques. Perhaps it has something to do with the fact that they are so different from the classic “push more polygons” pipeline. Or maybe it’s that they serve the singular purpose of trying to make the experience better through a non-traditional use of traditional content like triangles and textures. Still, now that the dust has settled and everyone has chosen their side in the fight, I wanted to go back and analyze Megatextures more. I wanted to ask whether it was the right choice for id Software and what can be done with it in the future.
I’ve been following this technique for far too long. A likely side effect of my addictions is that I find myself combing through the whitepapers and lectures of companies like Crytek, id Software, DICE, Valve, Epic, Ubisoft, and many more. I guess it’s not the worst thing for an engineer to do with his spare time, but, as I’ve mentioned in previous posts, wanting to follow in the steps of giants often leaves me standing in the dark of the large shadows they cast. And fully knowing my flaws, I still venture into this technology to ask myself if it is something worth investigating for my own indie-scaled games.
In The Beginning
When Megatextures were first introduced in a modified version of idTech4, John Carmack used a much more traditional terrain-based scheme where the geometry was assumed to be a relatively 2D, top-down sprawl of land. This approach had been used in the simulation industry for at least a decade before it reached video games. Terrain topology rendering was critical in military applications for displaying high-resolution satellite images for tactical planning. It seemed like a reasonable approach at the time, but games like Enemy Territory: Quake Wars showed the strengths and weaknesses of this approach. Vertical cliffs, walls, and ceilings were an impossibility with a technique designed to drape itself over the landscape like a tablecloth.
Not satisfied with the rudimentary version that was developed in idTech4, the latest incarnation of Megatextures was a complete rewrite that evolved out of an “ah ha” moment for the developer. Though Megatextures were likely not the first of their kind (there were similar whitepapers with slightly varied approaches), they certainly were the first to be proven on the gaming battlegrounds at 60 frames per second.
Megatextures now appeared to be rendered (from the nuggets I’ve gathered) as follows:
- Render the geometry to the frame buffer, writing out attributes such as texture coordinates (uv).
- Read the frame buffer out to system memory.
- Process each pixel in the buffer (handling cache misses, loading/unloading pages, transforming global UV coordinates into cache-relative coordinates, JPEG decoding and DXT re-encoding at run-time, and so on).
- Cache misses involve updating all relevant texture attributes (color, normal, specular).
- Upload the new processed buffer with its cache-relative UVs.
- Upload the changes made to the cache textures (if any).
- Oh yeah… NOW actually render the scene using deferred shading.
- Overlay translucent surfaces (particles, windows?) using traditional forward rendering and shaders.
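The steps above are easiest for me to picture with a toy sketch. Assuming a hypothetical layout of my own (this is not id’s actual code), where the read-back buffer reduces to a list of (page_x, page_y, mip) requests per pixel, the cache-miss triage in step 3 might look something like:

```python
from collections import Counter

# Toy sketch of the feedback-analysis step. Each read-back entry names the
# virtual-texture page a pixel wanted to sample: (page_x, page_y, mip_level).
# The page-id encoding and the buffer shape here are my own assumptions.

def analyze_feedback(feedback, resident):
    """Count page requests and return the missing pages, most-wanted first."""
    requests = Counter(feedback)
    return [page for page, _ in requests.most_common() if page not in resident]

# Pretend read-back of a tiny frame-buffer region (one page id per pixel).
feedback = [(0, 0, 0), (0, 0, 0), (1, 0, 0),
            (1, 0, 0), (1, 0, 0), (2, 1, 1),
            (0, 0, 0), (1, 0, 0), (2, 1, 1)]

resident = {(0, 0, 0)}                  # pages already in the physical cache
to_load = analyze_feedback(feedback, resident)
print(to_load)  # most-requested missing pages first: [(1, 0, 0), (2, 1, 1)]
```

In the real engine this is where the background streaming would kick off — decoding the compressed source pages and re-encoding them to DXT — while the least-wanted resident pages get evicted to make room.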
To be honest, I am kind of shocked that Megatextures work at all, let alone at 60 frames per second. Though most of these steps do not seem overly offensive, step 3 (and by extension step 4) is where the secret sauce gets made. It is this step where Megatextures succeed and fail at the same time…
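The global-to-cache UV transform inside that step is the part I find hardest to visualize, so here is a similarly hypothetical sketch: a page table maps each resident virtual page to a slot in a small physical cache texture, and a global UV gets remapped into that slot. The sizes and numbers are made up for illustration.

```python
# Toy sketch of transforming a global UV into a cache-relative UV. The
# virtual texture is VIRT_PAGES x VIRT_PAGES pages; the physical cache
# texture holds PHYS_PAGES x PHYS_PAGES page slots. All made-up sizes.

VIRT_PAGES = 8   # virtual texture is 8x8 pages
PHYS_PAGES = 2   # physical cache is only 2x2 pages

# page_table[(vx, vy)] -> (px, py): where a virtual page lives in the cache.
page_table = {(3, 5): (0, 0), (4, 5): (1, 0)}

def to_cache_uv(u, v):
    """Map a global uv in [0,1) to a uv inside the physical cache texture."""
    vx, vy = int(u * VIRT_PAGES), int(v * VIRT_PAGES)
    px, py = page_table[(vx, vy)]           # KeyError here = a cache miss
    # Fractional position within the virtual page...
    fu, fv = u * VIRT_PAGES - vx, v * VIRT_PAGES - vy
    # ...re-expressed inside the physical page's slot.
    return (px + fu) / PHYS_PAGES, (py + fv) / PHYS_PAGES

print(to_cache_uv(0.4375, 0.6875))  # center of virtual page (3, 5) -> (0.25, 0.25)
```

The CPU pass writes these remapped UVs back out so the final render pass can sample one modest cache texture instead of an impossibly large virtual one.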
Promise vs Execution
When we first saw the demo of Rage that showed a merchant hanging out in his little hut, the promise was grand. We listened to Carmack describe how an artist masterfully painted a 4096×4096 texture for his face alone. He went on to say that every facet of the world could experience that level of uniqueness without the frame-rate degradation we would all expect. As impressed as I was at the time, a few things didn’t seem right to me. I couldn’t wrap my brain around this idea of unique pixels everywhere. I kept asking myself, “How big is this game going to be?” At first I justified it by thinking that they would only use it to enhance the terrain and create cooler cliffs and canyons, but it was later clarified that any opaque surface, including characters, was going to follow this pipeline, and it made me very nervous. I was going to need a bigger hard drive.
After going dark for a bit, Rage returned, but it wasn’t the same. There were rumors that Rage was having trouble with Megatextures on some platforms, and it was clear that there was a real moment for the company to reflect on whether it was right to move forward with it. Carmack even mentioned in his keynote the disappointment shared by the art team when they first saw the end product of their hard work processed and compressed down to game format.
The platform issues were eventually resolved, but Rage looked a bit more… decimated. Images were kind of blurry, and the texel density looked fairly horrific in some areas, notably very low-light areas as well as surfaces that were deemed “unseen” by their automated importance algorithms. It was later discovered that Rage was undergoing some changes, and aggressive compression was a bullet point on that list. The end result was an image quality that still holds up beautifully, but only under a set of near-perfect conditions:
- Well-lit spaces, such as outdoors or in direct light from a static light source.
- Surfaces that are considered to be visible by the optimizer.
- Smaller confined areas like the sewers appear to have more quality retained in the compression process.
In wandering around the Rage environments, I find some rooms to have nearly poster-sized blocks of solid color, an artifact of a shadowed wall that is then lit by a dynamic light source. I’m fairly certain that many of these issues resulted from two major factors.
- Fear that an entire tens-of-millions-of-dollars game would be developed and then run at single-digit frame rates, after such a strong marching order had been given to maintain 60 frames per second.
- Speculation that, without standardizing the texture density to something reasonable, the game would end up shipping on 4 Blu-rays or about 50 DVDs, or each boxed copy would include a coupon for $100 off a new hard drive.
Now that Rage has shipped and some of these fears have perhaps been answered for the company, I am hopeful to see an improvement in the texture density of future id Software games that use idTech5.
The Future of Megatexture
I feel like this technique has some serious challenges ahead of it. While many games are fighting to stay smaller in an age of digital distribution, it seems like Megatextures are running the other way. While idTech4 focused on all-dynamic lighting and visibility across uniform surfaces, Rage took a step back to the Quake 3 era of baked lighting and long build times for developers. Rage is roughly a 25GB game, and the development build is said to be roughly 1TB of data. If the less aggressively compressed version of Rage is even half that size (500GB), I question what it would mean for gamers. I question what it would mean for people who still pay by the gigabyte for bandwidth, or who maybe only have a 256GB hard drive but still want to play the next id Software game.
As much as I really love the idea of this technology, I think that id Software is going to have to invest big money in smarter and less invasive compression techniques than HD Photo. They’ll need to improve how they decimate their formats, or stop baking the shadows into their color maps and allow the compression to work with the full color band. It does make me wonder why their dark maps were combined into the color maps, but I’m sure it had to do with the storage and performance cost of including yet another texture in the cache pipeline.
For a company like id Software that has a fully working version of this technology, I don’t see them backing down, but I would hope to see some minor changes. Do we really need every pixel to be unique?! I would love to see this technique used less as a ubiquitous blanket approach and more as a way to intelligently stream in super-resolution images for individual objects. I’d like to see a modified version of idTech5 that goes back to the promise of that 4096 texture for a character’s face, or a similarly sized texture used on a wall, but allows that texture to be reused, essentially treating each image as its own Megatexture. This would allow large organic terrains to continue using their massive painted and decal-covered textures while embracing basic tiling and reuse for more sterile and rigid surfaces, or instanced objects such as characters and environment props. Something like this, I would imagine, could get a lot of use on a Mars space station maybe… just dropping that out there.
Closure :’( Maybe… I Don’t Know.
I know that it just isn’t worth it for me to chase Megatextures for my own project. The development process would be long, and finding a place for all of that source content would make the pipeline a nightmare. Knowing that I’ll likely be the one to develop most of the content, I doubt that my own skills as an amateur crayon artist would come close to maximizing that technology. If I am searching for some cool technical achievement as inspiration for my next creation, sadly I don’t think I’ll find it here. I’ll have to keep telling myself that as I stare at beautiful Megatextured vistas on my screen.
I know that this post sounds more like a eulogy than a critique, but I just don’t know where Megatexture 2.0 belongs. I do hope that I am wrong, and maybe 3.0 has a few tricks up its sleeves (assuming there is a 3.0). I am certain that id Software is going to continue to push this technology because of the massive investment made. I just hope that it turns profitable for them at some point.
I really feel like Rage was the testbed for this technique, but future idTech5 pillars like Doom 4 are going to be the proving ground that vindicates or damns Megatextures. The success or failure of Megatextures may also be the deciding factor in whether to continue pursuing Megageometry, or the now-famed Sparse Voxel Octree approach to representing a world. If storage is an issue now for data that can be lossy (textures and sound), then what will come of data that can’t? Would gamers buy a 256GB voxelized Wolfenstein? It feels like owning a space shuttle; it sounds awesome, but where are you going to park it?