Video game and CGI characters are about to look a hell of a lot better, thanks to a new technique developed by the Institute for Creative Technologies at the University of Southern California and Imperial College London. At this year's SIGGRAPH, the team demonstrated Simulated Dynamic Skin Microgeometry, a method that both captures very fine skin features, down to a few microns, and renders those features near-realistically while they are in motion.
It works by using a special camera to capture the fine texture of the skin, down to the pores, and applying it as a displacement map on a 3D model. That would be enough for a static image, but since skin tends to move quite a lot, more finesse is needed. While the model itself can change shape, textures cannot, and if you just stretch them, the result looks about as good as it would if you tried the same thing in Paint. Capturing a still of the skin for every possible expression and switching textures between them would also be prohibitively time-consuming.
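To make the displacement-map idea concrete, here is a minimal sketch (not the team's actual pipeline) of the basic operation: each mesh vertex is pushed along its surface normal by a height sampled from the captured skin map.

```python
import numpy as np

def displace(vertices, normals, heights, scale=1.0):
    """Toy displacement mapping: offset each vertex along its normal.

    vertices, normals: (N, 3) arrays; heights: (N,) per-vertex values
    sampled from a captured skin displacement map (hypothetical data).
    """
    return vertices + normals * (scale * heights)[:, None]

# A flat patch with upward-facing normals: displacement simply lifts
# each vertex by its sampled height (micron-scale pore relief).
verts = np.zeros((4, 3))
norms = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
h = np.array([0.0, 0.001, 0.002, 0.003])
out = displace(verts, norms, h)
```

In a real renderer this sampling happens per-pixel on the GPU, but the principle is the same: the geometry stays coarse while the map supplies the fine relief.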
Instead, wrinkles, creases, and basically every form of contraction and stretching of the skin are modeled through the displacement map, which tells the texture how it should behave, combined with sharpening and blurring effects applied through anisotropic filtering. It may not sound like much, but this tricks the brain into seeing CGI skin stretch and contract, even though it doesn't actually do that.
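The intuition behind the blur/sharpen step can be sketched in a few lines. This is a hedged simplification of the idea, not the paper's exact filter: where the skin stretches, micro-relief flattens out, so the displacement map is blurred; where it compresses, relief deepens, so the map is sharpened via unsharp masking.

```python
import numpy as np

def box_blur(d, k=3):
    # Simple 1-D box blur with edge padding, standing in for the
    # paper's (anisotropic) filtering.
    pad = k // 2
    padded = np.pad(d, pad, mode="edge")
    return np.convolve(padded, np.ones(k) / k, mode="valid")

def filter_displacement(d, stretch):
    """Blend toward a blurred map when stretched (stretch > 1),
    amplify high frequencies when compressed (stretch < 1)."""
    blurred = box_blur(d)
    if stretch > 1.0:
        t = min(stretch - 1.0, 1.0)
        return (1 - t) * d + t * blurred      # flatten the relief
    t = min(1.0 - stretch, 1.0)
    return d + t * (d - blurred)              # deepen the relief

# A toy 1-D displacement profile of alternating pores and ridges.
d = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
smoother = filter_displacement(d, stretch=1.5)  # skin pulled taut
sharper = filter_displacement(d, stretch=0.7)   # skin bunched up
```

The real technique works in 2-D and filters directionally along the axis of strain, which is what makes the filtering anisotropic, but the stretch-blurs / compress-sharpens relationship is the core of it.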
The result can be a human face that looks about as real as CGI and real-time graphics can get, and Simulated Dynamic Skin Microgeometry is usable for both. So yes, that means it can be used in video games. In the future, you won't even need to capture the details of every pore on a subject's skin to render onto a character: further development of this technology should let developers generate accurate, realistically behaving skin from a set of characteristics such as age, sex, race, and so forth.
We may still be a while away from seeing the first implementation of this in games, but when it does happen, I just hope it won't end up in the graveyard of exclusively licensed technology, like Digital Molecular Matter. You remember DMM, right? It was the most amazing part of The Force Unleashed; LucasArts licensed it exclusively for a while and did not do much with it; now it's part of the Gamebryo Engine, yet people are still using proprietary standardized stuff like PhysX.