Generating content for stereoscopic displays often requires special methods of rendering a virtual environment or character, and many applications do not support this natively. This project was a proof of concept to see whether we could intercept all of the polygon and texture data being sent from an application to the graphics card, both to create a "3D snapshot" of the scene and to use the intercepted information to automatically view existing applications in stereo 3D.
The experiment succeeded through the use of several freely available tools and detailed knowledge of various graphics applications. Using DeepExplorer, we captured the information being fed to the graphics card by way of a dummy DLL that sits between the application and the DLL provided by the graphics driver. From this intercepted stream, the mesh and texture data can be extracted and stored to disk. The model can then be imported into a 3D package such as 3D Studio Max or Maya for cleanup and rendering. We took this one step further and printed the model using our rapid prototyping services.
This project is no longer actively pursued, both because NVIDIA has released stable stereoscopic drivers for their product line that work in much the same way, and because of copyright considerations.