5.4. Image and Spectrum Rendering
5.4.1. Image Rendering
Image rendering consists of two main stages:

1. Hit detection with the detector
2. Image calculation from ray positions and weights
Determining hits for a detector is more complex than for a standard surface, as there are no specific constraints on the detector’s position. Consequently, the detector can potentially be located within other surfaces.
Rather than calculating a single surface hit, intersections are computed for all sections of a ray that lie within the detector's z-range. Once the coordinates of a potential hit are determined, it must be verified that they fall within the bounds of that ray section, thereby confirming a valid hit. In some cases, only the virtual extension of the ray section intersects the detector surface, while the ray itself has already changed direction due to refraction at an adjacent surface.
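The segment-wise hit test above can be sketched as follows. This is a minimal illustration for a planar detector only; `detector_hits`, its array layout, and the section ordering are assumptions for this sketch, not the library's actual API.

```python
import numpy as np

def detector_hits(p0, p1, z_det):
    """Hit test of ray sections [p0, p1] against a planar detector at z_det.

    p0, p1: (N, 3) arrays of section start and end points, assumed ordered
    with non-decreasing z. Hypothetical helper illustrating the procedure;
    the real implementation also handles non-planar detectors.
    """
    # keep only sections whose z-range brackets the detector plane, so rays
    # absorbed earlier or starting behind the detector are filtered out
    dz = p1[:, 2] - p0[:, 2]
    valid = (p0[:, 2] <= z_det) & (z_det <= p1[:, 2]) & (dz != 0)

    # interpolation parameter of the hit inside each valid section; a value
    # outside [0, 1] would mean only the virtual extension intersects
    t = np.zeros(len(p0))
    t[valid] = (z_det - p0[valid, 2]) / dz[valid]

    hits = p0 + t[:, None] * (p1 - p0)
    return hits[valid], valid
```

In the full procedure this test runs in parallel threads, each handling a subset of rays.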
To enhance efficiency, these calculations are executed in parallel threads, with each thread processing a subset of rays. Rays that do not reach the detector, whether because they begin ahead of it or are absorbed prior to reaching it, are filtered out as early as possible in the process.
Upon completing the procedure, it becomes clear whether each ray impacts the detector and the specific location of this impact. If the user specifies an extent option, only rays falling within this extent are selected. Otherwise, an automatic rectangular extent is calculated based on the outermost ray intersections.
If the detector is non-planar (e.g., a section of a sphere), the coordinates are initially mapped using a projection method as described in Section 5.4.2. For all intersecting rays, a two-dimensional histogram is generated based on a predefined pixel size. The pixel count exceeds the requested amount because each image is rendered at a higher resolution, allowing for resolution adjustments post-rendering, as detailed in Section 4.8.3.
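The binning step can be sketched with NumPy's weighted 2D histogram. The function name, the fixed oversampled pixel count, and the extent layout are illustrative assumptions, not the library's exact interface:

```python
import numpy as np

def render_histogram(x, y, weights, extent, n_px=189):
    """Bin ray hit coordinates into a power-weighted 2D histogram.

    x, y: hit coordinates; weights: ray powers; extent: (x0, x1, y0, y1).
    n_px is a hypothetical oversampled resolution, rescaled after rendering.
    """
    img, _, _ = np.histogram2d(
        y, x, bins=n_px,
        range=[[extent[2], extent[3]], [extent[0], extent[1]]],
        weights=weights)
    return img
```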
Image rendering is performed using parallel threads. The generated RenderImage object comprises images for the three tristimulus values X, Y, and Z, which can represent the full spectrum of human-visible colors. An illuminance image is directly derivable from the Y component and the pixel size, negating the need for explicit rendering of this image. The fourth image is an irradiance image, calculated from the ray powers and pixel sizes. Each thread is assigned one of these four images to process.
Following image rendering, the final image may be optionally filtered with an Airy-disk resolution filter (see Section 4.8.7) and then rescaled to the desired resolution.
Fig. 5.18 Detector intersection and image rendering flowchart.

5.4.2. Sphere Projections
The use of sphere projections and example images are illustrated in Section 4.8.6. The relative distance to the center \(r\) and the z-position of the opposite end of the sphere \(z_m\) are calculated as:
Equidistant
The following equation is an adaptation from [1]:
The projected coordinates are given by:
Orthographic
The hit coordinates \(x\) and \(y\) remain unchanged. For further reference, see [2].
Stereographic
The following formulation is adapted from [3]:
The projected coordinates are given by:
Equal-Area
This equation, adapted from [4], is as follows:
The projected coordinates are given by:
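Since the projection equations themselves did not survive extraction above, the following is a hedged summary of the standard azimuthal projection forms, written in terms of the polar angle \(\theta\) from the detector axis and the azimuth \(\varphi\) of the hit point. These are textbook conventions; the implementation's scaling, and its use of \(r\) and \(z_m\), may differ.

```latex
% Standard azimuthal projections (radius scaling omitted):
\begin{align}
  \text{Equidistant:}   \quad r' &= \theta \\
  \text{Orthographic:}  \quad r' &= \sin\theta \\
  \text{Stereographic:} \quad r' &= 2\tan(\theta/2) \\
  \text{Equal-Area:}    \quad r' &= 2\sin(\theta/2) \\
  x' &= r'\cos\varphi, \qquad y' = r'\sin\varphi
\end{align}
```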
5.4.3. Spectrum Rendering
Spectrum rendering operates in a similar way to image rendering. Ray intersections are computed, and only rays that successfully intersect are selected for rendering into a histogram. Unlike a conventional image, this process generates a spectral histogram within a specified wavelength range, derived from the wavelengths and powers of the rays.
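The spectral binning can be sketched with a weighted 1D histogram. The function name, wavelength range, and default bin count are illustrative assumptions:

```python
import numpy as np

def render_spectrum(wl, power, wl0=380.0, wl1=780.0, n_bins=51):
    """Render a spectral histogram from ray wavelengths and powers.

    wl: ray wavelengths in nm; power: ray powers. The visible-range
    defaults are placeholders, not the library's exact values.
    """
    hist, edges = np.histogram(wl, bins=n_bins, range=(wl0, wl1),
                               weights=power)
    return hist, edges
```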
In place of a RenderImage, a LightSpectrum object is instantiated, with its spectral type set to "Histogram".
The number of bins for the histogram is determined by the equation:
This formula ensures that \(N_\text{b}\) is odd, thereby providing a well-defined center. Regardless of the number of rays \(N\), the minimum number of bins is fixed at 51, with the count scaling according to the square root of \(N\) beyond a certain threshold. This scaling is necessary because the Signal-to-Noise Ratio (SNR) of the mean increases proportionally with \(\sqrt{N}\) in the presence of normally distributed noise. Consequently, the number of bins is adjusted to maintain a consistent SNR while enhancing the spectrum’s resolution.
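One plausible realization of this rule (an assumption for illustration, not the library's exact formula) guarantees an odd bin count with a minimum of 51 and \(\sqrt{N}\) scaling:

```python
import math

def n_bins(N):
    """Number of spectral histogram bins for N rays: always odd,
    at least 51, growing like sqrt(N) for large ray counts.
    Hypothetical form matching the stated properties."""
    return max(51, 2 * math.floor(math.sqrt(N) / 2) + 1)
```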
5.4.4. Spectrum Color
Analogous to Section 5.8.1, the tristimulus values for the light spectrum \(S(\lambda)\) can be calculated using the following integrals:
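The integrals did not survive extraction here; presuming the standard CIE 1931 color matching functions \(\bar{x}, \bar{y}, \bar{z}\), they take the form:

```latex
\begin{align}
  X &= \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} S(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda \\
  Y &= \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} S(\lambda)\,\bar{y}(\lambda)\,\mathrm{d}\lambda \\
  Z &= \int_{380\,\mathrm{nm}}^{780\,\mathrm{nm}} S(\lambda)\,\bar{z}(\lambda)\,\mathrm{d}\lambda
\end{align}
```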
Subsequent to this calculation, typical color model conversions can be carried out.
References