Website Link: https://saltyminty.github.io/cs184-sp23-websites/proj3-2/index.html
Irene Geng | i-geng
Mingyang Wang | saltyminty
In this project, we expanded the functionality of our raytracer to render additional materials, including mirror (reflective), glass (refractive), and microfacet surfaces. We implemented environment lighting, which allows us to simulate incident radiance from every direction on the sphere. We also implemented a thin lens model for the camera, which allows us to simulate different depth of field effects by adjusting the lens radius and focal distance.
Part 1: Mirror and Glass Materials
Here is a sequence of 7 images of CBspheres.dae rendered with max_ray_depth set to 0, 1, 2, 3, 4, 5, and 100. We used 256 samples per pixel and 4 samples per light.
m = 0 | m = 1 |
m = 2 | m = 3 |
m = 4 | m = 5 |
m = 100 |
When the max ray depth is 0, we see only zero-bounce lighting: the area light on the ceiling is directly visible, but nothing else in the scene is illuminated.
For m = 1, we only see direct lighting, which includes zero- and one-bounce lighting; the shadows are black, as in Project 3-1. Additionally, the spheres are black, as it takes at least two light bounces for their mirror and glass effects to render properly (discussed more below).
At all higher max ray depths (m > 1), the image is rendered with global illumination. We note that the left sphere is reflective and the right sphere is refractive.
For m = 2, the left sphere begins to show a reflection of the scene’s direct lighting from the m = 1 step (note that the ceiling and the other sphere appear black in the reflection, as in the m = 1 case). We see a faint reflection on the right sphere, but no refraction yet: refraction requires at least three bounces, since light must both enter and exit the sphere, whereas a mirror reflection needs only a single bounce.
For m = 3, the left sphere shows a reflection of the m = 2 case (thus containing global illumination, but the other sphere is still black), and the right sphere’s glass material is finally beginning to show refraction properly; this is the first image in the sequence that shows the caustic effect.
For m = 4, the left sphere’s reflection finally shows the refractive effects of the right sphere (and thus looks like glass). We see a spot of light that has been refracted through the right sphere to land on the right wall of the box.
The images rendered with m = 5 and m = 100 look quite similar, with no major additional multibounce effects; the m = 100 render mainly appears less noisy, especially around the spot of light on the right wall.
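For reference, the mirror and glass materials reduce to two small helpers. Here is a minimal sketch, assuming the local shading frame used in the starter code, where the surface normal is (0, 0, 1) and both wo and wi point away from the surface; the Vector3D struct and the exact signatures are illustrative, not the starter code's actual API.

```cpp
#include <cmath>

struct Vector3D { double x, y, z; };

// Perfect mirror reflection about the normal n = (0, 0, 1) in the
// local shading frame: only the z component keeps its sign.
void reflect(const Vector3D& wo, Vector3D* wi) {
  *wi = Vector3D{-wo.x, -wo.y, wo.z};
}

// Refraction via Snell's law in the same frame. `ior` is the
// material's index of refraction; the sign of wo.z tells us whether
// the ray is entering or exiting. Returns false on total internal
// reflection, in which case only reflection occurs.
bool refract(const Vector3D& wo, Vector3D* wi, double ior) {
  bool entering = wo.z > 0;
  double eta = entering ? 1.0 / ior : ior;  // ratio n_incident / n_transmitted
  double cos2_t = 1.0 - eta * eta * (1.0 - wo.z * wo.z);
  if (cos2_t < 0) return false;             // total internal reflection
  double z_sign = entering ? -1.0 : 1.0;    // transmitted ray crosses the surface
  *wi = Vector3D{-eta * wo.x, -eta * wo.y, z_sign * std::sqrt(cos2_t)};
  return true;
}
```

For glass, the energy split between reflection and refraction uses a Fresnel term; a common shortcut is Schlick's approximation R ≈ R₀ + (1 − R₀)(1 − cos θ)⁵ with R₀ = ((1 − ior)/(1 + ior))², flipping a coin with probability R to reflect rather than refract.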
Part 2: Microfacet Materials
Here is a sequence of 4 images of CB_dragon_microfacet_au.dae rendered with 𝜶 set to 0.005, 0.05, 0.25, and 0.5. We used 256 samples per pixel, 4 samples per light, and a max ray depth of 7.
𝜶 = 0.005 | 𝜶 = 0.05 |
𝜶 = 0.25 | 𝜶 = 0.5 |
We observe that as we increase 𝜶, the dragon looks less and less glossy and more matte. We also note that a higher 𝜶 results in a slightly noisier image, likely due to the increased spread of the reflections.
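For context, 𝜶 is the roughness of the Beckmann normal distribution function D(h) in the microfacet BRDF f(ωo, ωi) = F(ωi) G(ωo, ωi) D(h) / (4 (n · ωo)(n · ωi)), where h is the half vector between ωo and ωi. A minimal sketch of D, reusing the Vector3D above with n = (0, 0, 1):

```cpp
// Beckmann NDF: how concentrated the microfacet normals are around
// the macro normal. D = exp(-tan^2(theta_h) / alpha^2) /
// (pi * alpha^2 * cos^4(theta_h)). Small alpha -> tight glossy lobe;
// large alpha -> wide matte lobe, matching the dragons above.
double beckmann_D(const Vector3D& h, double alpha) {
  double cos2 = h.z * h.z;            // cos^2(theta_h), since n = (0, 0, 1)
  double tan2 = (1.0 - cos2) / cos2;  // tan^2(theta_h)
  double a2 = alpha * alpha;
  return std::exp(-tan2 / a2) / (M_PI * a2 * cos2 * cos2);
}
```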
Here are two images of CBbunny_microfacet_cu.dae rendered using cosine hemisphere sampling and our importance sampling. We used 64 samples per pixel, 1 sample per light, and a max ray depth of 5.
Cosine Hemisphere Sampling | Importance Sampling |
We see that cosine hemisphere sampling does not converge as well as importance sampling: the bunny is much darker and less well lit, and both the bunny and the surrounding environment are much noisier. With importance sampling, the bunny is far better lit, with better color and a more metallic look. However, there is still a noticeable amount of noise on the surrounding walls, which could be reduced with more samples.
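Our importance sampling draws the half vector h from a pdf proportional to D(h) cos θh, so samples land where the BRDF is large instead of uniformly over the hemisphere. A sketch of the inverse-CDF sampling, assuming r1 and r2 are independent uniform samples in [0, 1):

```cpp
// Sample h proportionally to D(h) * cos(theta_h) by inverting the
// Beckmann CDF, then mirror wo about h to get the incoming direction.
Vector3D sample_beckmann_wi(const Vector3D& wo, double alpha,
                            double r1, double r2) {
  double theta = std::atan(std::sqrt(-alpha * alpha * std::log(1.0 - r1)));
  double phi = 2.0 * M_PI * r2;  // azimuth is uniform by symmetry
  Vector3D h{std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta)};
  // wi = 2 (wo . h) h - wo; the pdf of wi is p(h) / (4 (wi . h)).
  double d = wo.x * h.x + wo.y * h.y + wo.z * h.z;
  return Vector3D{2.0 * d * h.x - wo.x,
                  2.0 * d * h.y - wo.y,
                  2.0 * d * h.z - wo.z};
}
```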
Here are images of CB_lucy_hg.dae with modified eta and k values. We used parameters corresponding to titanium and cesium metal.
Titanium | Cesium |
The titanium Lucy looks silvery, while the cesium Lucy is yellow and gold-like. Cesium melts just above room temperature and is highly reactive, but coding those effects was out of scope for this project and is left as an exercise for the reader.
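The eta and k values enter through the per-channel Fresnel reflectance for conductors, evaluated once for each of R, G, and B with that channel's values (looked up at roughly the 614 nm, 549 nm, and 466 nm wavelengths the project spec suggests). A sketch of the standard air-to-conductor approximation:

```cpp
// Approximate Fresnel reflectance for an air-conductor interface at
// one wavelength, given that wavelength's eta and k and the cosine of
// the incident angle. The result averages the s- and p-polarizations.
double fresnel_conductor(double cos_i, double eta, double k) {
  double cos2 = cos_i * cos_i;
  double ek = eta * eta + k * k;
  double rs = (ek - 2.0 * eta * cos_i + cos2) /
              (ek + 2.0 * eta * cos_i + cos2);
  double rp = (ek * cos2 - 2.0 * eta * cos_i + 1.0) /
              (ek * cos2 + 2.0 * eta * cos_i + 1.0);
  return 0.5 * (rs + rp);
}
```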
Part 3: Environment Light
We used grace.exr for the images in this part of the write-up. Here is the environment map converted to a .jpg.
grace.jpg
Environment lighting simulates incident radiance from every direction on the sphere, from a light source that is infinitely far away. We can use environment lighting to model real-world light sources and environments. For example, the environment lighting of grace.exr simulates the lighting inside of a large cathedral, with direct light sources from multiple lamps and stained glass windows and indirect bounce lighting from every surface.
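Under the hood, looking up incident radiance along a ray that escapes the scene means converting its direction to spherical coordinates and then to pixel coordinates in the latitude-longitude .exr. A sketch, assuming a convention where θ is measured from the up axis and φ wraps around it (the exact axes depend on the codebase):

```cpp
#include <cmath>

// Map a unit direction (x, y, z) to (u, v) in [0, 1]^2 for a
// latitude-longitude environment map: phi spans the image width,
// theta spans the height. Here y is assumed to be the up axis.
void dir_to_uv(double x, double y, double z, double* u, double* v) {
  double theta = std::acos(std::fmax(-1.0, std::fmin(1.0, y)));
  double phi = std::atan2(-z, x) + M_PI;  // shift atan2's range to [0, 2*pi]
  *u = phi / (2.0 * M_PI);
  *v = theta / M_PI;
}
```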
Here is the probability_debug.png file for grace.exr, generated using the save_probability_debug() function.
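The debug image visualizes the pdf we importance sample from: each pixel's probability is proportional to its radiance weighted by the solid angle it subtends, p(x, y) ∝ L(x, y) sin θ(y), and sampling factors into a marginal distribution over rows and a conditional distribution within each row. A sketch of building the two CDF tables, assuming weights already holds the normalized L · sin θ values (names illustrative):

```cpp
#include <vector>

// Build the marginal CDF over rows y and the conditional CDFs over x
// within each row. weights[y * w + x] is assumed to be
// L(x, y) * sin(theta(y)), normalized so the whole array sums to 1.
void build_env_cdfs(const std::vector<double>& weights, int w, int h,
                    std::vector<double>* marginal_cdf,     // size h
                    std::vector<double>* conditional_cdf)  // size w * h
{
  marginal_cdf->assign(h, 0.0);
  conditional_cdf->assign(w * h, 0.0);
  double total = 0.0;
  for (int y = 0; y < h; ++y) {
    double row_sum = 0.0;
    for (int x = 0; x < w; ++x) row_sum += weights[y * w + x];
    double acc = 0.0;  // conditional CDF p(x | y)
    for (int x = 0; x < w; ++x) {
      acc += weights[y * w + x];
      (*conditional_cdf)[y * w + x] = (row_sum > 0) ? acc / row_sum : 0.0;
    }
    total += row_sum;  // marginal CDF p(y)
    (*marginal_cdf)[y] = total;
  }
}
```

To draw a sample, invert the marginal CDF with one uniform number (e.g. via std::upper_bound) to pick a row, then invert that row's conditional CDF with a second uniform number to pick a column; the chosen pixel maps back to a direction through the inverse of the (θ, φ) conversion above.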
Here are bunny_unlit.dae and grace.exr rendered with uniform sampling and importance sampling. We used 4 samples per pixel, 64 samples per light, and a max ray depth of 7.
Uniform Sampling | Importance Sampling |
The image rendered with importance sampling is much less noisy than the uniform sampling image. The uniform sampling image is almost entirely dark, with specks of color scattered across the model surface, and it is somewhat difficult to even make out the bunny’s shape.
Here are bunny_microfacet_cu_unlit.dae and grace.exr rendered with uniform sampling and importance sampling. We used 4 samples per pixel, 64 samples per light, and a max ray depth of 7.
Uniform Sampling | Importance Sampling |
The image rendered with importance sampling is significantly less noisy and brighter than the uniform sampling image. Moreover, the bunny has a distinct metallic look in the importance sampling image, while the effect is barely visible in the uniform sampling image. The uniform sampling image is much noisier: the copper material of the bunny and the pillow it sits on are covered in small bright specks, of which there are far fewer in the importance sampling image.
Part 4: Depth of Field
In a pinhole camera model, we assume an ideal lens with an aperture of size 0. As a result, every point in the scene appears in focus: all rays pass through a single “pinhole” point, so each point on the image plane receives radiance along exactly one ray.
In the thin-lens camera model, the camera has a lens of nonzero aperture that refracts incoming light. Only points that lie on the plane of focus (determined by the camera’s focal distance) appear exactly in focus, while points in front of or behind that plane are blurred. A point on the image plane can receive radiance from anywhere on the lens, so we uniformly sample ray origins over the thin-lens disk.
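A sketch of generating one such ray in camera space, reusing the Vector3D struct from the Part 1 sketch and assuming the camera looks down −z, d is the unit direction the pinhole model would use for this film sample, and r1, r2 are uniform in [0, 1) (names illustrative):

```cpp
// Thin-lens ray generation: sample a point on the lens disk, find
// where the pinhole ray meets the plane of focus, and shoot the new
// ray from the lens sample through that point.
void thin_lens_ray(const Vector3D& d, double lens_radius,
                   double focal_distance, double r1, double r2,
                   Vector3D* origin, Vector3D* dir) {
  // Uniformly sample the lens disk at z = 0 (sqrt keeps density uniform).
  double r = lens_radius * std::sqrt(r1);
  double a = 2.0 * M_PI * r2;
  Vector3D p_lens{r * std::cos(a), r * std::sin(a), 0.0};
  // The pinhole ray through the lens center hits the plane of focus
  // z = -focal_distance at p_focus; every lens sample must hit it too.
  double t = focal_distance / -d.z;
  Vector3D p_focus{d.x * t, d.y * t, -focal_distance};
  Vector3D v{p_focus.x - p_lens.x, p_focus.y - p_lens.y, p_focus.z - p_lens.z};
  double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  *origin = p_lens;
  *dir = Vector3D{v.x / len, v.y / len, v.z / len};
}
```

Because all sampled rays reconverge at p_focus, points on the plane of focus stay sharp while everything in front of or behind it spreads over the film, producing the blur in the renders below.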
Here is CBdragon_microfacet_au.dae focused at 4 visibly different depths. We used 512 samples per pixel, 4 samples per light, and an aperture of 0.23.
d = 4.5 | d = 4.8 |
d = 5 | d = 6 |
As the focal distance increases, the plane of focus moves farther back. The above sequence of images shows the plane of focus at the head of the dragon, the body of the dragon, the tail of the dragon, and the back wall of the Cornell Box.
Here is CBbunny_microfacet_cu.dae rendered with 4 different aperture sizes, all with a focal distance of 4.6 (focused on the head of the bunny). We used 512 samples per pixel and 4 samples per light.
b = 0.05 | b = 0.3 |
b = 0.6 | b = 1 |
As the aperture size increases, the depth of field decreases. When b = 0.05, nearly everything is in focus. When b = 0.3 and b = 0.6, the walls of the Cornell Box, the edges of the ceiling light, and the back ear of the bunny begin to appear out of focus. When b = 1, the edges of the back wall are almost indistinguishable, and the ears of the bunny are almost entirely blurred out.