This old question popped into my head as I watched my 64-bit quad-core behemoth cry like a baby while it processed some video clips. I know, it sounds like a stupid question, given that HD TVs have been selling at an all-time high over the last couple of years and most gaming consoles are sitting pretty in the HD era. Personally, I don't see any problem from the users' perspective. After all, they just have to plug in their TVs and make sure they have an HD signal from their cable provider. The weight of HD rests solely on the shoulders of the content providers.
Game developers are pushed to create movie-quality content because of the sub-pixel accuracy of rendering hardware and the high resolutions modern displays are capable of. A task that used to take a modeler / texture artist 40 hours in the age of the 640×224 PlayStation 2 is suddenly stretching past 100 hours to reach comparable quality on an HD display. Beyond the visuals, even the programmers are being affected by this. A much closer relationship between code and art is required, meaning more bottlenecks than ever before. It's not as easy to build a game by just getting a box to move around in a world; not when actions and events are tied to frames within animation sequences, and physics is tied to movement stored in those animations. All of this data-driven design is meant to give the power to the artists, allowing them to create a game that is visually stunning at HD resolutions, but at the cost of some serialization early in the game-creation pipeline.
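To make that code-and-art coupling concrete, here is a minimal sketch of what "events tied to frames" tends to look like in a data-driven engine. Every name here (FrameEvent, AnimationClip, the hitbox tags) is a hypothetical illustration of the pattern, not any particular engine's API:

```python
from dataclasses import dataclass, field

@dataclass
class FrameEvent:
    """An event an artist tags onto a specific frame of an animation clip."""
    frame: int
    name: str  # e.g. "enable_hitbox", "footstep", "spawn_fx"

@dataclass
class AnimationClip:
    """Artist-authored data; gameplay code only reacts to what the data declares."""
    name: str
    frame_count: int
    events: list[FrameEvent] = field(default_factory=list)

    def events_between(self, prev_frame: int, cur_frame: int):
        """Yield events whose frame was crossed since the last update tick."""
        for ev in self.events:
            if prev_frame < ev.frame <= cur_frame:
                yield ev

# Gameplay no longer drives the character directly; it waits on the animation data.
attack = AnimationClip("sword_swing", frame_count=45,
                       events=[FrameEvent(12, "enable_hitbox"),
                               FrameEvent(20, "disable_hitbox"),
                               FrameEvent(44, "attack_finished")])

prev, cur = 10, 21  # frames crossed during this game tick
for ev in attack.events_between(prev, cur):
    print(ev.frame, ev.name)  # -> 12 enable_hitbox, then 20 disable_hitbox
```

The bottleneck the paragraph describes falls out of this structure: the code cannot be finished until the artist has authored and tagged the clip, and the clip cannot be tuned until the code reacts to its tags.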
Video editing has become a grind, with some compositions taking several minutes per frame to encode for output at 30 to 60 frames per second. This means that a 30-second HD trailer could take hours to encode on some of the most powerful user workstations. Dealing with raw footage is no better: most hard drives are beaten into submission by the sheer force of the bandwidth required, and simply transferring captured video from digital storage like SDHC cards can take hours on its own. Even RAM comes into question when we attempt to composite multiple video streams at once. The whole process of creating HD footage is exhausting and slow without the assistance of costly dedicated hardware.
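The back-of-the-envelope math makes the grind obvious. A quick sketch, where the per-frame encode cost is an assumed round number rather than a measured benchmark:

```python
# Rough encode-time estimate for a short HD clip.
clip_seconds = 30       # length of the trailer
playback_fps = 30       # output frame rate
minutes_per_frame = 1   # assumed encode cost; "several minutes" only makes it worse

total_frames = clip_seconds * playback_fps            # 900 frames
encode_hours = total_frames * minutes_per_frame / 60  # 15 hours

print(f"{total_frames} frames -> ~{encode_hours:.0f} hours to encode")
```

Even at a generous one minute per frame, that 30-second clip is an overnight job; at the multi-minute costs some compositions hit, it spills past a full day.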
Frankly, I just don't think that most of us are ready to create quality HD content on a shoestring budget. The positive twist is that it will get easier. Computers will become more powerful, software will embrace multi-core more aggressively, and content creators will grow comfortable producing HD content at a faster pace. Until then, times are looking pretty grim for the content creators of the HD era. I am more than happy to be a consumer in the age of HD, but what a pain in the butt it is to be a content provider.
I know that PCs have been HD since VGA displays could handle 480p and a whopping 256 colors, but it didn't mature into the mainstream until it reached the living room. Maybe my failing eyes are lying to me, but I have serious trouble distinguishing resolutions above 1080p, so I am desperately hoping that it ends there, or I may have to get out of graphics programming. I am genuinely curious whether game developers would still be held to today's standards if we were still developing for 480p displays. 480p vs. 1080p seems like such an insignificant difference to change an entire industry, even though it is roughly six times the pixels to fill (1920×1080 against 720×480), but when I look at the Wii I can't help but think that things may have turned out differently without those displays on the mainstream market.