T. Davidovic, T. Engelhardt, I. Georgiev, P. Slusallek, and C. Dachsbacher
Presented at the 38th Graphics Interface conference (GI 2012)
[bib] [paper] [presentation]
Abstract: Ray tracing and rasterization have long been considered two fundamentally different approaches to rendering images of 3D scenes, although they compute the same results for primary rays. Rasterization projects every triangle onto the image plane and enumerates all covered pixels in 2D, while ray tracing operates in 3D by generating rays through every pixel and then finding the closest intersection with a triangle. In this paper we introduce a new view on the two approaches: based on the Pluecker ray-triangle intersection test, we define 3D triangle edge functions, resembling (homogeneous) 2D edge functions. Both approaches then become identical with respect to coverage computation for image samples (or primary rays). This generalized “3D rasterization” perspective enables us to exchange concepts between both approaches: we can avoid applying any model or view transformation by instead transforming the sample generator, and we can also eliminate the need for perspective division and render directly to non-planar viewports. While ray tracing typically uses floating point with its intrinsic numerical issues, we show that it can be implemented with the same consistency rules as 2D rasterization. With 3D rasterization the only remaining differences between the two approaches are the scene traversal and the enumeration of potentially covered samples on the image plane (binning). 3D rasterization allows us to explore the design space between traditional rasterization and ray casting in a formalized manner. We discuss performance/cost trade-offs, evaluate different implementations, and compare 3D rasterization to traditional ray tracing and 2D rasterization.
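The coverage test the abstract describes can be sketched in a few lines. This is an illustrative reconstruction based on the standard Pluecker ray-triangle test, not the paper's actual implementation; all function names are hypothetical. Each of the three values below is a "3D edge function": the permuted inner product of the ray's Pluecker coordinates with those of one directed triangle edge, and, as with 2D edge functions, the sample is covered when all three signs agree.

```python
# Illustrative sketch (not the paper's code): Pluecker-based 3D edge
# functions deciding whether a sample ray covers a triangle.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def edge3d(orig, d, a, b):
    # Permuted inner product of the ray's Pluecker coordinates (d, orig x d)
    # with the directed edge's Pluecker coordinates (b - a, a x b).
    e = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    return dot(d, cross(a, b)) + dot(cross(orig, d), e)

def covers(orig, d, v0, v1, v2):
    # Analogous to 2D edge functions: the sample is covered when all
    # three signed values agree in sign (both windings accepted here).
    e0 = edge3d(orig, d, v0, v1)
    e1 = edge3d(orig, d, v1, v2)
    e2 = edge3d(orig, d, v2, v0)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or \
           (e0 <= 0 and e1 <= 0 and e2 <= 0)

# A ray through the triangle's interior is covered; a parallel ray
# displaced far to the side is not.
print(covers((0, 0, 0), (0, 0, 1), (-1, -1, 2), (1, -1, 2), (0, 1, 2)))
print(covers((10, 0, 0), (0, 0, 1), (-1, -1, 2), (1, -1, 2), (0, 1, 2)))
```

Note that the test uses only the ray and the untransformed 3D vertices: no projection or perspective division is needed, which is what lets the paper transform the sample generator instead of the scene and render to non-planar viewports.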