Defense: Neural Reconstruction for Real-time Rendering

Manu Mathew Thomas
Computational Media PhD Student
Location: Virtual Event
Advisor: Angus Forbes

Join us on Zoom: https://ucsc.zoom.us/j/91661528917?pwd=UGNZVSsxdFVxYk1aZVY2YVV4UTI0Zz09 / Passcode: 346607

Description: Recent advances in ray tracing hardware have shifted video game graphics toward more realistic effects such as soft shadows, reflections, and global illumination. These effects are achieved by tracing many rays through the scene and accumulating visibility and illumination components. However, due to the real-time constraints inherent in games, the number of rays (samples) per frame must be kept low, causing visual artifacts including aliasing and noise. A number of existing techniques exploit frame-to-frame coherence and reconstruct an image from a few samples spread over multiple frames, but they rely on handcrafted heuristics for accumulating those samples, which leads to ghosting artifacts, loss of detail, and temporal instability.
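For intuition, the sketch below is a minimal illustration (our own, not taken from the dissertation; names such as `accumulate_frame`, `history`, and `alpha` are hypothetical) of the kind of handcrafted accumulation heuristic mentioned above: the current noisy frame is blended into a reprojected history buffer with an exponential moving average, trading noise reduction against ghosting.

```python
import numpy as np

def accumulate_frame(current: np.ndarray,
                     history: np.ndarray,
                     alpha: float = 0.1) -> np.ndarray:
    """Blend the current noisy frame into a running history buffer.

    current -- the new, sparsely sampled frame (H x W x 3).
    history -- the accumulated result from previous frames, assumed to be
               already reprojected into the current camera's view.
    alpha   -- blend weight; small values favor the history and reduce
               noise, but increase ghosting when the scene changes.
    """
    return alpha * current + (1.0 - alpha) * history


# Toy usage: accumulate a sequence of noisy renders of a constant image.
rng = np.random.default_rng(0)
target = np.full((4, 4, 3), 0.5)            # "ground truth" radiance
history = np.zeros_like(target)
for _ in range(60):
    noisy = target + rng.normal(0.0, 0.2, target.shape)
    history = accumulate_frame(noisy, history)
print(np.abs(history - target).mean())       # residual noise after accumulation
```

The fixed blend weight is exactly the sort of heuristic that must be tuned by hand and that breaks down under disocclusions or fast motion, which is what motivates learned reconstruction.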

While machine learning-based approaches have shown promise in image reconstruction for offline rendering, they are too expensive for games and other interactive media. Quantizing the neural networks with reduced-precision arithmetic can drastically reduce both their computation and storage requirements; however, using quantized networks for HDR reconstruction can cause significant quality degradation.
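As a rough illustration of reduced-precision arithmetic (our own sketch, not the quantization scheme used in the dissertation), the snippet below maps floating-point weights to signed 4-bit integer codes plus a per-tensor scale; the computation and storage savings come from operating on the small integer codes rather than full-precision floats.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, bits: int = 4):
    """Symmetric uniform quantization of a weight tensor to signed integers.

    Returns the integer codes and the scale needed to reconstruct
    approximate floating-point values (w ~= codes * scale).
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax if w.size else 1.0
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale


# Toy usage: quantize a random weight matrix and measure the error.
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.05, (64, 64)).astype(np.float32)
codes, scale = quantize_uniform(w, bits=4)
w_hat = codes.astype(np.float32) * scale
print(np.abs(w - w_hat).max())                      # small quantization error
```

With only 16 representable levels per weight, quantization error of this kind is what can become visible as quality degradation in HDR reconstruction, where pixel values span a very wide dynamic range.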

Our work introduces QW-Net, a neural network for HDR image reconstruction in which 95% of the computations can be performed with 4-bit integer operations. We then demonstrate this network's capability for supersampling, super-resolution, and denoising tasks suitable for real-time rendering.