The SIGGRAPH Technical Papers Program is at the forefront of innovation, bringing teams from around the world together at one conference to share and expand upon the ideas and technologies being built. SIGGRAPH 2013 will be held July 21-25 in California this year, with contributors “disseminating new scholarly work in computer graphics and interactive techniques.” In plain terms, they will collaborate to bring the industry technical advancements that accelerate us into the future. The video above is merely a short demonstration of some of the work that will be featured at the conference this year.
During the program, the featured technical papers will cover innovations and improvements in areas such as shape analysis, fluid simulation (including snow and water particles), light capture, structural layout and geometry, and image-based reconstruction.
Basically, all of these advances work in unison to bring the next steps in graphics to our PCs and consoles. Take the last item on that list, image-based reconstruction: games will be able to recreate real people and environments from pictures and video alone. Using cameras attached to your gaming system, you could accurately recreate not only your room but your own facial features in real time. The Princeton team behind the technology calls it Real-Time 3D Model Acquisition.
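To make that a little more concrete, here is a rough sketch (my own Python, not the team’s actual code) of the first step any camera-based reconstruction takes: converting one depth image into a cloud of 3D points using the standard pinhole camera model. The intrinsics fx, fy, cx, cy below are made-up example values, not tied to any real device.

```python
# Minimal sketch: turning one depth frame from a console camera into a
# 3D point cloud via the pinhole camera model.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image (meters) into Nx3 camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole model: u = fx * x / z + cx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Fake 480x640 depth frame with everything about 2 m away.
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

Stitch clouds like this together frame after frame as the camera moves and you have the raw material for rebuilding a room.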
This brings me to another point from the video. The trend among programmers and developers is to use cameras to bring many of these innovations to life. Scalable Real-time Volumetric Surface Reconstruction means real-time 3D environment modeling, which could translate into using a portable camera (say, Google Glass) to build 3D models of entire cities. In fact, many teams have accomplished similar tasks using a depth camera (e.g., Kinect), lasers (LIDAR), or sonar (although sonar is impractical because of stray sound reflections). The big caveats at the moment: running the reconstruction in real time limits the model’s resolution, and realistic levels of fidelity demand a massive amount of computational power. For now, the technology is inefficient, and it is still far more cost-effective to build models by hand. The Stanford team gave their notable entry on this front with 3D City Modeling from Street Level Data for use in augmented reality.
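For the curious, here is a toy version (again my own Python sketch, not the paper’s implementation) of the volumetric fusion idea this line of work builds on: a grid of voxels, each storing a truncated signed distance (TSDF) to the nearest surface, averaged over incoming depth frames so the model sharpens as the camera keeps looking. The grid size, truncation band, and intrinsics are all illustrative values.

```python
# Toy KinectFusion-style volumetric fusion: every voxel keeps a running
# average of its truncated signed distance to the observed surface.
import numpy as np

N, VOXEL, TRUNC = 64, 0.05, 0.15          # 64^3 grid, 5 cm voxels, 15 cm band
tsdf = np.ones((N, N, N), dtype=np.float32)
weight = np.zeros((N, N, N), dtype=np.float32)

def integrate(depth, fx, fy, cx, cy):
    """Fuse one (H, W) depth frame (camera at the origin) into the grid."""
    h, w = depth.shape
    i, j, k = np.indices((N, N, N))
    # Voxel centers in camera space; grid centered in x/y, starting 0.5 m out in z.
    x = (i - N / 2) * VOXEL
    y = (j - N / 2) * VOXEL
    z = k * VOXEL + 0.5
    u = np.round(fx * x / z + cx).astype(int)     # project each voxel into the image
    v = np.round(fy * y / z + cy).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(ok, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    sdf = d - z                                   # signed distance along the view ray
    valid = ok & (d > 0) & (sdf > -TRUNC)
    t = np.clip(sdf / TRUNC, -1.0, 1.0)           # truncate to [-1, 1]
    # Running weighted average, so sensor noise cancels out across frames.
    tsdf[valid] = (tsdf[valid] * weight[valid] + t[valid]) / (weight[valid] + 1)
    weight[valid] += 1

# Fuse one synthetic frame: a flat wall 2 m in front of the camera.
integrate(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

The real systems run this on the GPU every frame and keep the grid sparse or hierarchical rather than one dense block, which is roughly what puts the “scalable” in the title.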
One of the more exciting highlights from the video is Folding and Crumpling Adaptive Sheets. This work allows for realistic deformation and destruction of thin materials like a piece of paper, an article of clothing, or a sheet of metal.
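The trick, as the title hints, is adaptivity: keep the simulation mesh coarse where the sheet is flat and spend detail only where it folds. Here is a deliberately tiny Python illustration of that idea on a 2D polyline standing in for a sheet’s cross-section; the angle threshold is an arbitrary example, not a value from the paper.

```python
# Toy adaptive refinement: subdivide only where the "sheet" bends sharply.
import math

def refine(points, max_angle=math.radians(20)):
    """Insert midpoints on segments leading into sharp bends."""
    out = [points[0]]
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        angle = math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
        if angle > max_angle:  # sharp crease: add detail around b
            out.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
        out.append(b)
    out.append(points[-1])
    return out

flat_then_fold = [(0, 0), (1, 0), (2, 0), (2.5, 1), (3, 0)]
print(refine(flat_then_fold))  # new vertices appear only around the fold
```

Run that repeatedly as creases form and relax, and you get the spirit of the technique: computational effort concentrates exactly where the crumpling happens.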
At this point, many readers are probably getting lost in the sea of technical-sounding phrases and words. Open any of the linked PDFs and you will drown in complex mathematical formulas and programming jargon.
What does all of this mean? All you need to know is that some of the smartest researchers and programmers around the world are working together to create technologies that not only expand what we are capable of creating and doing in games, but also raise the standard for how those games are made.
These technologies might not show up in games today, but as the PS4, Xbox One, and especially PC gaming continue to grow, there will be more time and opportunity to apply these new and evolving features in the games of tomorrow.
Hyper-realism in video games is a goal that is becoming more and more attainable. Look back even six years and you can see the leaps and bounds we have made, not only in graphics but in technology as a whole.