So, in the meanwhile, Autodesk has also added an Adaptive Sampler to the Arnold renderer. It is very different from the one we first demo’ed here last year. The now built-in Arnold adaptive sampler is image-based, while ours is a multi-dimensional adaptive sampler.
What’s the main difference?
A very simple and yet very effective solution is to target samples adaptively in image space. Adaptive sampling starts by taking a fixed number of samples per pixel to get a baseline estimate of variance. When we say ‘variance’ here, we mean the variance of the mean of the samples, not the variance of the samples themselves. That variance is then turned into an error measure by passing it through tone mapping, so that we don’t over-sample highlights or under-sample darker regions.
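To make that concrete, here is a minimal Python sketch of an image-space adaptive loop of this kind. It is not Arnold’s implementation: the render_sample stand-in, the Reinhard-style tone_map curve and the specific error metric (the uncertainty of the mean pushed through the tone curve) are all assumptions made just for illustration.

```python
import numpy as np

def render_sample(x, y, rng):
    """Placeholder for tracing one camera ray through pixel (x, y);
    a real renderer would return the radiance of one light path."""
    return rng.random()  # stand-in radiance value

def tone_map(L):
    """Reinhard-style curve, used here only so the error is measured in a
    roughly perceptual space instead of linear radiance."""
    return L / (1.0 + L)

def adaptive_image_pass(width, height, min_spp, max_spp, threshold, rng):
    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            # baseline: a fixed number of samples to estimate variance
            samples = [render_sample(x, y, rng) for _ in range(min_spp)]
            while len(samples) < max_spp:
                mean = float(np.mean(samples))
                # variance of the *mean* of the samples, not of the samples
                std_of_mean = np.sqrt(np.var(samples, ddof=1) / len(samples))
                # push that uncertainty through the tone curve: the flat part
                # (highlights) contributes little error, the steep part
                # (darker values) contributes more
                error = tone_map(mean + std_of_mean) - tone_map(mean)
                if error < threshold:
                    break
                samples.append(render_sample(x, y, rng))
            image[y, x] = np.mean(samples)
    return image

img = adaptive_image_pass(8, 8, min_spp=16, max_spp=64,
                          threshold=0.05, rng=np.random.default_rng(1))
```

The key point is that the stopping test looks only at the per-pixel variance of the mean in tone-mapped space; it knows nothing about what happens inside the higher dimensional domain, which is exactly the limitation discussed below.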
Image formation for raytracing evaluates the radiance L(x, y) arriving at a set of discrete sample points on the image plane. In the absence of any distribution effects, each image-space coordinate (x, y) uniquely determines a light path, allowing L to be precisely evaluated by tracing a single ray. Traditional adaptive sampling techniques attempt to carefully choose the samples by detecting areas of interest in the function L (e.g., high local contrast or brightness)
based on local image-space behavior. Assuming a reasonable initial coarse sampling, this process works well because L is easy to evaluate exactly for any given image location.
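For example, one classic image-space refinement criterion (an illustrative textbook choice, not necessarily what any particular renderer uses) is to compare nearby evaluations of L and refine wherever the local contrast exceeds a threshold:

```latex
% Local contrast of nearby evaluations of L around (x, y); \tau is a user
% threshold. An assumed classic criterion, shown only to illustrate
% 'detecting areas of interest'.
C(x, y) \;=\; \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}} \;>\; \tau
\quad\Longrightarrow\quad \text{add samples around } (x, y)
```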
Physically-based light transport, however, includes effects such as motion blur, depth-of-field, and soft shadows. When these effects are considered, there are many light paths that can contribute incoming radiance to a given image location. These paths can be interpreted as points in a higher dimensional domain.
Monte Carlo ray tracing samples these light paths and numerically integrates them to approximate the desired image-space function L(x, y). This approximation is very noisy, which hampers image-space adaptive sampling techniques, because features and noise are indistinguishable during adaptive sample refinement. Furthermore, image-space metrics inherently fail to maintain enough information to properly capture the discontinuities of the true multidimensional function.
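Written out (with the extra dimensions collected into a single vector ω, a notation chosen here just for illustration), the pixel value becomes an integral that Monte Carlo ray tracing estimates from N random path samples drawn with density p, and the noise of that estimate only falls off as 1/√N:

```latex
L(x, y) \;=\; \int_{\Omega} L(x, y, \omega)\, \mathrm{d}\omega
\;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{L(x, y, \omega_k)}{p(\omega_k)},
\qquad \omega = (u, v, t, \dots)
```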
Our approach (based on Hachisuka and Jarosz’s MDAS) performs adaptive sampling by placing samples near areas of high frequency in the full multidimensional domain, allowing it to more robustly detect and sample such features. However, to do that we need to track each sample and add it to a kD-tree structure for fast neighbor queries. Once the initial samples are computed, we estimate the error in each leaf node in order to find the locations in the sampling domain where the multidimensional function would benefit from additional samples. To use an analogy: the min-samples set is a kind of seed farm with a progressive multi-jitter layout, out of which adaptive trees grow to fill the multidimensional space.
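To give an idea of the mechanics, here is a heavily simplified Python sketch of that refine-the-worst-leaf loop. It is not MDAS itself (the real algorithm uses anisotropic, gradient-aware splits and nearest-neighbor reconstruction, and our initial samples use a progressive multi-jitter layout rather than the uniform random points below); integrand, leaf_capacity and the per-leaf error estimate are assumptions made only for the sketch.

```python
import numpy as np

def integrand(p):
    """Stand-in for evaluating one light path at point p of the
    multidimensional domain (e.g. pixel x/y, lens u/v, time)."""
    return float(np.sin(8.0 * p[0]) * np.cos(5.0 * p[1]) > 0.0)

class Leaf:
    """A kD-tree leaf: an axis-aligned cell holding the samples inside it."""
    def __init__(self, lo, hi):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.points, self.values = [], []

    def volume(self):
        return float(np.prod(self.hi - self.lo))

    def error(self):
        # crude per-leaf error estimate: sample variance scaled by cell
        # volume, so large uncertain cells get refined first
        if len(self.values) < 2:
            return float("inf")
        return float(np.var(self.values)) * self.volume()

    def add_sample(self, rng):
        p = rng.uniform(self.lo, self.hi)
        self.points.append(p)
        self.values.append(integrand(p))

    def split(self):
        # split the cell at the midpoint of its longest axis
        axis = int(np.argmax(self.hi - self.lo))
        mid = 0.5 * (self.lo[axis] + self.hi[axis])
        hi_left, lo_right = self.hi.copy(), self.lo.copy()
        hi_left[axis], lo_right[axis] = mid, mid
        left, right = Leaf(self.lo, hi_left), Leaf(lo_right, self.hi)
        for p, v in zip(self.points, self.values):
            child = left if p[axis] < mid else right
            child.points.append(p)
            child.values.append(v)
        return left, right

def adaptive_refine(dim=4, min_samples=64, max_samples=1024,
                    leaf_capacity=8, seed=0):
    rng = np.random.default_rng(seed)
    root = Leaf(np.zeros(dim), np.ones(dim))
    for _ in range(min_samples):            # the "seed farm" of initial samples
        root.add_sample(rng)
    leaves, total = [root], min_samples
    while total < max_samples:
        worst = max(leaves, key=lambda l: l.error())   # highest estimated error
        worst.add_sample(rng)                          # refine where it helps most
        total += 1
        if len(worst.points) > leaf_capacity:          # grow the tree locally
            leaves.remove(worst)
            leaves.extend(worst.split())
    # volume-weighted estimate over the leaves (empty leaves are skipped)
    return sum(np.mean(l.values) * l.volume() for l in leaves if l.values)

print(adaptive_refine())
```

After the loop the leaves end up small exactly where the function varies a lot, which is what lets the approach pick up discontinuities that an image-space metric would average away.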
Here are some direct comparisons.
(Note: the Arnold settings are 6×6 = 36 min and 8×8 = 64 max samples, not 12 and 16.)
And a video.
Here we’ve been able to integrate our adaptive sampler (AS) with Arnold’s MIS (multiple importance sampling).