The entries in this project log are ordered from most recent to oldest.

Metaball Field Volumes

Intrigued by implicit surfaces, I looked into metaballs: blobby solids defined by the field strength that many point charges induce in space. Ideally, for every point in space, we would sum the contributions of every point charge. If that sum exceeded a certain threshold, the point would be inside the volume; otherwise it would be empty space.

Because I cannot test every point, I implemented ray marching, in which each ray cast into the scene advances in small steps, testing the field strength at each stop. Once the field surpasses the threshold, the surface color and normal are interpolated as a weighted average of each point charge's relative position, color, and field contribution.
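The march loop can be sketched as follows. This is a minimal illustration, not my actual renderer code: the charge representation and the plain 1/r^2 falloff are placeholders (the cheaper polynomial falloff is discussed in the next paragraph).

```python
def dist2(a, b):
    """Squared distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def field_strength(p, charges):
    """Sum every charge's contribution at point p (1/r^2 falloff for clarity)."""
    return sum(c["strength"] / max(dist2(p, c["pos"]), 1e-12) for c in charges)

def ray_march(origin, direction, charges, threshold, step=0.01, max_dist=10.0):
    """Step along the ray in small increments until the field crosses the threshold."""
    t = step
    while t < max_dist:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if field_strength(p, charges) > threshold:
            return p  # first point inside the blob; shade from here
        t += step
    return None  # ray escaped without entering the volume
```

A single charge of strength 1 at distance 5 with threshold 4, for example, is first "entered" where 1/r^2 > 4, i.e. within half a unit of the charge.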

Because ray marching is computationally demanding, I applied a few optimizations to speed up rendering. For one, I avoided the typical equation for a point charge's contribution at distance r, 1 / r^2, which features an expensive divide. Instead I used a falloff based on Perlin's polynomial 6r^5 - 15r^4 + 10r^3, which is cheaper to evaluate but boasts similar properties. Additionally, I did not evaluate the contribution from charges too far away to have a meaningful impact, cutting the number of computations per march step.
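One way to realize both optimizations together (the cutoff radius and function names here are my own illustration, not necessarily the renderer's exact formulation): evaluate Perlin's quintic on the normalized distance so the contribution ramps smoothly to exactly zero at the cutoff, which makes skipping distant charges free of visible artifacts.

```python
def smootherstep(t):
    """Perlin's quintic 6t^5 - 15t^4 + 10t^3, clamped to the unit interval."""
    t = max(0.0, min(1.0, t))
    return t * t * t * (t * (6.0 * t - 15.0) + 10.0)

def contribution(r, cutoff):
    """Charge contribution at distance r: full strength at r=0, exactly zero at the cutoff."""
    if r >= cutoff:
        return 0.0  # distant charges are skipped entirely
    return smootherstep(1.0 - r / cutoff)
```

No division appears in the inner polynomial itself; the single divide by the cutoff can be precomputed as a reciprocal per charge.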

Physically Accurate Depth of Field

For increased realism in close-up scenes, I implemented depth of field. This imitates the behavior of an aperture with nonzero area and a lens with a specific focal length. Rather than cast rays solely from the camera eye through a pixel, I select points on the aperture and cast multiple rays through the pixel's focal point. For geometry near the focal plane this changes little, but for geometry much closer or farther away, the aperture displacement creates a blurred average of colors.
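A thin-lens sketch of generating one such ray (the names are illustrative, and for simplicity the aperture disk is assumed to lie in the xy plane, i.e. the camera looks roughly along z):

```python
import math
import random

def dof_ray(eye, pixel_dir, focal_length, aperture_radius, rng=random):
    """One depth-of-field ray: jitter the origin on the aperture disk,
    then aim it at the point the original ray would hit on the focal plane."""
    focal_point = tuple(e + focal_length * d for e, d in zip(eye, pixel_dir))
    # Uniform sample on the aperture disk (sqrt keeps the area density uniform)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = aperture_radius * math.sqrt(rng.uniform(0.0, 1.0))
    origin = (eye[0] + r * math.cos(theta), eye[1] + r * math.sin(theta), eye[2])
    # New direction: from the jittered origin through the shared focal point
    d = tuple(f - o for f, o in zip(focal_point, origin))
    norm = math.sqrt(sum(x * x for x in d))
    return origin, tuple(x / norm for x in d)
```

Because every jittered ray passes through the same focal point, geometry at the focal plane is sampled consistently, while nearer or farther geometry is hit at spread-out locations and averages into blur.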

The animated GIF above is a series of renders of the same scene with different focal lengths. We observe different shapes coming in and out of focus as the focal length approaches their distance from the camera eye.

Path Tracing II - Multiple Importance Sampling and Microfacet Materials

In order to model various surfaces, I have implemented weighted materials. Any given object can combine three different bidirectional reflectance distribution functions (BRDFs): Lambertian diffuse, perfect specular, and a Cook-Torrance microfacet model with Blinn's distribution function. These can be given specific weights to create materials of various qualities.

Now that I have non-Lambertian reflections, however, light-importance sampling has become inefficient. With a narrow range of contributing incident directions in the specular and Cook-Torrance shaders, most sampled rays contribute nothing. So we add another sampling method, BRDF-importance sampling, to ensure that we choose rays which will not be immediately nullified by the BRDF. Finally, we implement Veach's multiple importance sampling to weight the light-importance and BRDF-importance samples by their relative likelihoods of being useful.
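The weighting step can be sketched as follows. This is a minimal illustration of Veach's heuristics, not the renderer's exact code; the beta parameter is my addition to show both the balance heuristic (beta = 1) and the power heuristic (beta = 2) in one function.

```python
def mis_weight(pdf_this, pdf_other, beta=1.0):
    """Weight for a sample drawn from one strategy when the other strategy
    could also have produced the same direction."""
    a = pdf_this ** beta
    b = pdf_other ** beta
    return a / (a + b) if a + b > 0.0 else 0.0
```

For any direction, the weights a light-sample and a BRDF-sample would receive sum to one, so combining both estimators stays unbiased while each contributes most where its pdf is highest.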

Path Tracing I - Direct Lighting

As efficient as raytracing may be, it simply isn't physically accurate. In order to create truly photorealistic images, I'll have to shift paradigms to a fully-fledged pathtracer. This is a tall order, and will be broken up into stages.

First, I familiarize myself with the basic elements. Every object holds a material made up of any number of bidirectional reflectance distribution functions (BRDFs). When hit by a ray, an object contributes its color, then weighs its BRDF, the direction of some light in the scene, and a random element to determine the direction of the next recursive ray. This randomness allows for indirect lighting and caustic light patches in a way that ray casting never could, but it takes hundreds of samples per pixel to converge to a realistic image.

So far I have implemented a sampler that biases all secondary rays toward the lights in the scene. This already produces light gradients and soft shadows.
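A toy version of that light-biased sampler (the axis-aligned rectangular light is my own illustrative representation): pick a uniform point on an area light and aim the secondary ray at it.

```python
import math
import random

def sample_light_direction(hit_point, light_min, light_max, rng=random):
    """Bias a secondary ray toward an area light: choose a uniform point on
    the light's rectangle and return the unit direction toward it.
    Different samples land on different parts of the light, which is exactly
    what produces soft shadow penumbras."""
    target = tuple(lo + rng.random() * (hi - lo)
                   for lo, hi in zip(light_min, light_max))
    d = tuple(t - p for t, p in zip(target, hit_point))
    norm = math.sqrt(sum(x * x for x in d))
    return tuple(x / norm for x in d)
```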

Bounding Volume Hierarchy Acceleration

While supersampling helps to produce great images, high sample counts and resolutions can cause render times to skyrocket. In order to cut the cost of intersection testing, I implemented a bounding volume hierarchy (BVH) acceleration structure.

The structure groups geometry primitives and mesh polygons into encompassing boxes, which are in turn grouped into parent boxes until the entire scene is consolidated into one box. This allows any ray to test for intersection only with those shapes enclosed within already-intersected boxes. Because the depth of the tree grows roughly logarithmically with scene size, a ray in a scene with one million polygons or primitives could perform fewer than 50 intersection tests.
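The traversal can be sketched with a standard slab test and a recursive descent (the dict-based node layout here is purely illustrative; a real BVH would use a flattened array):

```python
def make_inv(direction):
    """Precompute reciprocal direction components for the slab test."""
    return tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)

def hit_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray enter this axis-aligned box at some t >= 0?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmax < tmin:
            return False  # slabs don't overlap: the ray misses the box
    return True

def candidates(node, origin, inv_dir):
    """Collect primitives only from leaves whose enclosing boxes the ray hits;
    whole subtrees are culled by a single box test."""
    if not hit_aabb(origin, inv_dir, node["min"], node["max"]):
        return []
    if "prims" in node:  # leaf: these shapes still need exact intersection tests
        return list(node["prims"])
    return (candidates(node["left"], origin, inv_dir)
            + candidates(node["right"], origin, inv_dir))
```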

Supersample Anti-Aliasing

To avoid the jagged aliasing associated with pixel-based images (as seen in the image below), I implemented supersample antialiasing. In this model the color of each pixel is averaged over a number of raycasts distributed within the pixel's area.

I created uniform, random, and stratified samplers to gauge the effectiveness of each. Above are two images with the same number of pixels, one with no supersampling and one with 5x5 stratified supersampling.
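The stratified sampler can be sketched in a few lines (function name illustrative): divide the unit pixel into an n-by-n grid and jitter one sample within each cell, which avoids both the banding of a uniform grid and the clumping of purely random samples.

```python
import random

def stratified_samples(n, rng=random):
    """n*n jittered sample positions in the unit pixel, one per grid cell."""
    cell = 1.0 / n
    return [((i + rng.random()) * cell, (j + rng.random()) * cell)
            for j in range(n) for i in range(n)]
```

For the 5x5 case above, this yields 25 sample positions per pixel whose colors are averaged.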

Material Properties, Recursive Raytracing, and UVs

Now that I could find the objects and their normals, I could begin to calculate their perceived lightness using Lambert- and Phong-style shading. After this I added reflective and refractive behaviors by recursing with my existing raycasting method. This allowed me to render objects of varying smoothness, shininess, transparency, and density.
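The two shading terms combine as sketched below, for a single white light; the diffuse/specular coefficients and shininess exponent are illustrative constants, not values from the renderer.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, to_light, to_eye, kd=0.8, ks=0.2, shininess=32):
    """Lambert diffuse (cosine of the light angle) plus Phong specular
    (cosine of the angle between the mirrored light and the eye, raised
    to a shininess power)."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = kd * max(0.0, dot(n, l))
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))  # reflect l about n
    specular = ks * max(0.0, dot(r, v)) ** shininess
    return diffuse + specular
```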

I also mapped the unit plane, sphere, and cube to standard UV coordinates, allowing my renderer to sample user-selected textures.
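For the sphere, one standard mapping (a y-up convention is assumed here) converts the surface point to spherical coordinates and rescales them into the unit square:

```python
import math

def sphere_uv(p):
    """Map a point on the unit sphere to (u, v) in [0, 1]^2:
    u from the longitude around the y axis, v from the latitude."""
    x, y, z = p
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    v = 0.5 + math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```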

Raycasting and Intersection

The first step in developing my renderer was simple raycasting. I cast one ray into the scene through the center of each pixel. Each ray performed intersection tests with every object until the closest hit was determined. The pixel for that ray was then colored according to the object's normal at the point of intersection.
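The sphere case of that intersection test can be sketched as the usual quadratic (assuming a unit-length ray direction, so the quadratic's leading coefficient is 1; names are illustrative):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive t where origin + t*direction hits the sphere, or None.
    Solves t^2 + b*t + c = 0 for a unit-length direction."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:  # small epsilon avoids self-intersection at the origin
            return t
    return None
```

Given the hit distance t, the shading normal is simply (hit_point - center) / radius.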