For full details of our approach, see our paper on the topic; here is a brief explanation of how things work. As with most ray tracers, we trace a ray of light through each pixel of the screen into the scene and see what it intersects. Unlike other ray tracers, each ray has a coherence matrix associated with it, which represents the polarization state of the light for that ray (as well as its intensity), along with an additional vector perpendicular to the ray that fixes the coordinate system of the coherence matrix.
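A minimal sketch of such a ray might look like the following. The class and field names here are illustrative assumptions, not our actual implementation; the substance is the 2x2 complex Hermitian coherence matrix (whose trace is the intensity) and the perpendicular reference vector that anchors its coordinate frame.

```python
import numpy as np

class PolarizedRay:
    """Hypothetical sketch: a ray carrying a coherence matrix.

    The 2x2 complex Hermitian matrix J = <E E^H> encodes both the
    polarization state and the intensity; ref_axis is a unit vector
    perpendicular to the ray direction that defines the frame in
    which J is expressed.
    """

    def __init__(self, origin, direction, coherence, ref_axis):
        self.origin = np.asarray(origin, dtype=float)
        self.direction = np.asarray(direction, dtype=float)
        self.direction /= np.linalg.norm(self.direction)
        self.coherence = np.asarray(coherence, dtype=complex)
        self.ref_axis = np.asarray(ref_axis, dtype=float)
        self.ref_axis /= np.linalg.norm(self.ref_axis)

    def intensity(self):
        # Total intensity is the trace of the coherence matrix.
        return float(np.real(np.trace(self.coherence)))

    def degree_of_polarization(self):
        # For a 2x2 coherence matrix, DOP = sqrt(1 - 4 det(J)/tr(J)^2),
        # i.e. (l1 - l2)/(l1 + l2) in terms of J's eigenvalues.
        tr = np.real(np.trace(self.coherence))
        det = np.real(np.linalg.det(self.coherence))
        return float(np.sqrt(max(0.0, 1.0 - 4.0 * det / tr**2)))
```

For example, unpolarized light of unit intensity is `0.5 * np.eye(2)` (DOP 0), while fully linearly polarized light along the reference axis is `np.diag([1.0, 0.0])` (DOP 1).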
For each light source visible at the intersection point, we consider the specular contribution of that light source to the ray back to the eye. We use the Torrance-Sparrow lighting model, a physics-based model commonly used in computer graphics that treats the surface of an object as a distribution of microfacets, to determine the contribution from each light source; however, we incorporate polarization parameters into the Torrance-Sparrow model as described in detail in the paper. The light from each source has a coherence matrix associated with it, representing its polarization state, and we use the formulas in the paper to calculate its contribution to the ray from the intersection point back to the eye. Each contribution is itself a coherence matrix, which must be transformed into the proper coordinate system before being summed.
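The rotate-then-sum step can be sketched as follows. This is not the polarized Torrance-Sparrow evaluation itself (that is the paper's contribution), only the standard frame change for a coherence matrix, J' = R(theta) J R(theta)^H, applied before contributions are accumulated; the function names are assumptions.

```python
import numpy as np

def rotate_coherence(J, theta):
    """Express coherence matrix J in a frame rotated by theta radians.

    A rotation of the reference frame transforms the matrix as
    J' = R J R^H; it preserves the trace, so intensity is unchanged.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]], dtype=complex)
    return R @ J @ R.conj().T

def sum_contributions(contribs):
    """Sum light-source contributions in one common frame.

    contribs: iterable of (J, theta) pairs, where theta is the angle
    from each contribution's own frame to the receiving ray's frame.
    """
    total = np.zeros((2, 2), dtype=complex)
    for J, theta in contribs:
        total += rotate_coherence(J, theta)
    return total
```

Rotating `np.diag([1, 0])` (polarization along the reference axis) by 90 degrees yields `np.diag([0, 1])`, with the intensity (trace) unchanged, which is why contributions must be aligned to a common frame before they can meaningfully be added.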
As is typically the case with ray tracers, we recurse backwards to capture mirror reflections in the scene. These reflections produce coherence matrices of their own, which are added into the coherence matrix of the parent ray.
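The shape of that recursion can be sketched with a toy example. Here the scene and shading are stood in for by a precomputed list of per-bounce values (direct-lighting coherence matrix, a scalar reflectance, and a frame-alignment angle); in a real tracer these would come from intersection tests and the reflection model. All names and the scalar-reflectance simplification are assumptions for illustration.

```python
import numpy as np

def rotate(J, theta):
    # Frame change for a coherence matrix: J' = R J R^H.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]], dtype=complex)
    return R @ J @ R.conj().T

def trace(bounces, depth=0, max_depth=4):
    """Toy recursion: each bounce returns a coherence matrix.

    The parent attenuates the child's matrix by the surface
    reflectance, rotates it into its own frame, and adds it to the
    direct-lighting contribution at that surface.
    """
    if depth >= min(len(bounces), max_depth):
        return np.zeros((2, 2), dtype=complex)
    direct_J, reflectance, theta = bounces[depth]
    child_J = trace(bounces, depth + 1, max_depth)
    return direct_J + reflectance * rotate(child_J, theta)

# Two bounces: the second surface's light reaches the first rotated
# 90 degrees and attenuated to half strength.
bounces = [
    (np.diag([1.0, 0.0]).astype(complex), 0.5, np.pi / 2),
    (np.diag([1.0, 0.0]).astype(complex), 0.0, 0.0),
]
J = trace(bounces)  # direct term plus half of the rotated child term
```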
One important point is that a coherence matrix is only valid for a single wavelength of light, so we simulate the color spectrum by performing these calculations at multiple wavelengths.
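In outline, that means tracing the scene once per wavelength sample and reading an intensity off each resulting coherence matrix, roughly as below. The specific sample wavelengths and the `trace_at_wavelength` callable are assumptions; converting the per-wavelength intensities to a displayable RGB color is a separate step not shown here.

```python
import numpy as np

# Illustrative spectral samples (nm); a real renderer might use more.
WAVELENGTHS_NM = [450.0, 550.0, 650.0]

def render_pixel(trace_at_wavelength):
    """Run the whole polarized trace once per wavelength sample.

    trace_at_wavelength(nm) -> 2x2 coherence matrix for that
    wavelength; the intensity at each sample is its trace.
    """
    intensities = []
    for nm in WAVELENGTHS_NM:
        J = trace_at_wavelength(nm)
        intensities.append(float(np.real(np.trace(J))))
    return intensities
```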