Path Tracer (Part 1)

Overview

In this project, I implemented the first part of a path tracer. First, we need to generate rays, which involves sampling, representation, and transformation, and to check for intersections with basic geometry such as triangles and spheres; this is elaborated in Part 1. Then we accelerate the intersection tests by building a BVH (Bounding Volume Hierarchy) in Part 2, using bounding boxes to reduce the number of per-triangle checks in the scene. In Part 3 and Part 4, I implemented direct and indirect illumination respectively. For indirect illumination, we use Russian roulette to decide the termination condition. In Part 5, we optimize the program with adaptive sampling, which adapts the per-pixel sample count based on the illuminance of the traced samples.

Part 1: Ray Generation and Intersection

1: Methods

In Part 1, we need to generate rays and intersect them with objects so that we can see the objects in the scene. For each pixel, we first generate multiple samples, because each pixel covers an area; by averaging the spectrum of each ray, we get a result closer to the real scene. To generate a ray, we need its origin and direction. The origin is the camera's position, and the direction depends on the camera's field of view. As we learned in lecture (https://cs184.org/lecture/rt/slide_006), given a pixel position and a field of view, we map the pixel position onto the viewing window to obtain its new position; the vector from the camera to that new position is the direction we want. Since the coordinates are normalized (minimum 0, maximum 1), this mapping transforms the default viewing window, with bottom-left corner (0, 0, -1) and top-right corner (1, 1, -1), to the corners (-tan(radians(hFov)*.5), -tan(radians(vFov)*.5), -1) and (tan(radians(hFov)*.5), tan(radians(vFov)*.5), -1) respectively. Applying this transformation to a given point (between (0,0) and (1,1)) yields the position where the ray intersects the viewing window. With the origin and this intersection, we can easily get the direction of the viewing ray.
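A minimal sketch of this mapping, assuming the skeleton's Camera members (pos, c2w, and hFov/vFov in degrees) and its radians() helper; the exact signatures are assumptions, not my final code:

Ray Camera::generate_ray(double x, double y) const {
  // Half-extents of the viewing window at z = -1 in camera space.
  double tan_h = tan(radians(hFov) * 0.5);
  double tan_v = tan(radians(vFov) * 0.5);
  // Map (x, y) in [0,1]^2 so that (0,0) lands on the bottom-left corner
  // (-tan_h, -tan_v, -1) and (1,1) on the top-right corner (tan_h, tan_v, -1).
  Vector3D dir_camera((2.0 * x - 1.0) * tan_h,
                      (2.0 * y - 1.0) * tan_v,
                      -1.0);
  // Rotate into world space; the ray starts at the camera position.
  return Ray(pos, (c2w * dir_camera).unit());
}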

To calculate an intersection, the idea is simply to write the formula of the ray and the formula of the triangle or sphere, set them equal, and solve. For the triangle, the trick we use is the Möller-Trumbore algorithm, which we derived and proved during the discussion section. For the sphere, all we need is the quadratic formula.
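The Möller-Trumbore step written out as a standalone helper (not the exact project signature; a Vector3D type with dot() and cross() is assumed):

// Returns true and fills t/b1/b2 if ray o + t*d hits triangle (p0, p1, p2).
// In the project, t would additionally be tested against min_t and max_t.
bool moller_trumbore(const Vector3D& o, const Vector3D& d,
                     const Vector3D& p0, const Vector3D& p1, const Vector3D& p2,
                     double& t, double& b1, double& b2) {
  Vector3D e1 = p1 - p0, e2 = p2 - p0, s = o - p0;
  Vector3D s1 = cross(d, e2), s2 = cross(s, e1);
  double denom = dot(s1, e1);
  if (denom == 0.0) return false;   // ray is parallel to the triangle plane
  double inv = 1.0 / denom;
  t  = dot(s2, e2) * inv;           // ray parameter at the hit point
  b1 = dot(s1, s)  * inv;           // barycentric coordinate for p1
  b2 = dot(s2, d)  * inv;           // barycentric coordinate for p2
  // Inside the triangle iff all barycentric coordinates are in [0, 1].
  return t >= 0 && b1 >= 0 && b2 >= 0 && b1 + b2 <= 1;
}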

2: Implementation Problems

When implementing these parts, one problem I met was that the rendered scene was always several straight lines. I thought there must be some issue with the viewing-angle transformation, so I checked my camera.cpp and found that I had put (x - 0.5) and (y - 0.5) outside the tan(). After fixing this, the problem remained, so I checked nearly everywhere, and it seemed the color was correct, just with no variation along the y axis. Finally I noticed that in pathtracer.cpp, I had passed origin.x as both the x and y arguments when calling generate_ray.

The other problem was about normalization. After finishing task 4, I could see the two spheres, but they were not as colorful as on the website. I double-checked and found that normalizing a vector fixed the color issue.

The problem of vertical lines. The color problem of the spheres.

3: Results

The result for part 1.

Part 2: Bounding Volume Hierarchy

1: Methods

About BVH construction: The general idea is to split the bounding box of the current node into left and right smaller bounding boxes with fewer primitives, until the primitives of the current node satisfy the leaf condition. For the leaf condition, we can simply set a maximum number of primitives per node. If a node contains few enough primitives, we can directly test intersection against everything in that bounding box; since there are usually only a few, the computational load within a leaf is light.

Another important issue for BVH construction is how to split a node. The suggested method is problematic because it does not handle the case where a child node contains no primitives. Instead, I used the centroid box, which is the bounding box of the centers of the elements in the node. Some center points always lie on the boundary of the centroid box, so if we keep dividing along one of its edges, we always get two children that both contain primitives. This solved the segmentation fault. To pick the split point, we 1) find the longest axis of the centroid box, and 2) take the midpoint of that axis. Then I separate the primitives into two groups and recursively construct the tree on the two child nodes.
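A sketch of this construction, loosely following the skeleton's BVHAccel::construct_bvh (BBox, BVHNode, get_bbox(), and centroid() are the assignment's names; the exact signatures are assumptions):

BVHNode* BVHAccel::construct_bvh(const std::vector<Primitive*>& prims,
                                 size_t max_leaf_size) {
  BBox bbox, centroid_box;
  for (Primitive* p : prims) {
    BBox bb = p->get_bbox();
    bbox.expand(bb);
    centroid_box.expand(bb.centroid());   // box of the primitive centers
  }
  BVHNode* node = new BVHNode(bbox);      // node keeps the full bbox
  if (prims.size() <= max_leaf_size) {    // leaf: store the primitives
    node->prims = new std::vector<Primitive*>(prims);
    return node;
  }
  // Split along the longest axis of the centroid box at its midpoint.
  Vector3D extent = centroid_box.extent;
  int axis = (extent.x > extent.y) ? (extent.x > extent.z ? 0 : 2)
                                   : (extent.y > extent.z ? 1 : 2);
  double mid = centroid_box.centroid()[axis];
  std::vector<Primitive*> left, right;
  for (Primitive* p : prims)
    (p->get_bbox().centroid()[axis] < mid ? left : right).push_back(p);
  // Degenerate case (all centroids coincide): fall back to an even split.
  if (left.empty() || right.empty()) {
    left.assign(prims.begin(), prims.begin() + prims.size() / 2);
    right.assign(prims.begin() + prims.size() / 2, prims.end());
  }
  node->l = construct_bvh(left, max_leaf_size);
  node->r = construct_bvh(right, max_leaf_size);
  return node;
}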

About the BVH intersection algorithm: We have a box intersection function that checks whether a ray intersects a bounding box, and given the whole scene, we just walk down the tree to find the leaf boxes the ray intersects, then test intersection against the primitives in those leaves. As for how to walk down the tree: if we only want to know whether anything intersects at all, we can return true as soon as we find any intersection. If we want the closest intersection, we need to visit every leaf the ray hits and keep updating the intersection record.
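A sketch of the closest-hit traversal (the box test bb.intersect used here is explained in the next paragraphs); the skeleton's Intersection record and the overload of p->intersect that updates it are assumptions:

bool BVHAccel::intersect(const Ray& ray, Intersection* i, BVHNode* node) const {
  double t0 = ray.min_t, t1 = ray.max_t;
  if (!node->bb.intersect(ray, t0, t1)) return false;  // ray misses this box
  if (node->isLeaf()) {
    bool hit = false;
    for (Primitive* p : *(node->prims))
      hit = p->intersect(ray, i) || hit;  // updates i (and ray.max_t) on a hit
    return hit;
  }
  // Internal node: either child may contain the closest hit, so test both.
  bool hit_l = intersect(ray, i, node->l);
  bool hit_r = intersect(ray, i, node->r);
  return hit_l || hit_r;
}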

To test intersection against a bounding box, we generalize the 1-D method to 3-D. In 1-D, given a ray o_x + t*d_x and a point p_x, we use (p_x - o_x) / d_x to get the t value at the intersection. In 3-D, we apply this to each of the x, y, and z axes. For each axis, we get an interval of t by plugging in the minimum and maximum values of the bounding box on that axis. If the three intervals intersect each other, then the ray intersects the box.

In the BVH intersection function, we also need to check that the intersection interval for t overlaps with the min_t and max_t of the ray.
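A sketch of the slab test with the min_t/max_t handling, assuming the skeleton's BBox members min and max and an indexable Vector3D (t0/t1 come in as ray.min_t/max_t):

bool BBox::intersect(const Ray& r, double& t0, double& t1) const {
  for (int axis = 0; axis < 3; ++axis) {
    // 1-D intersection for both slab planes: t = (plane - origin) / direction.
    // A zero direction component yields +/-inf, which IEEE handles correctly.
    double tn = (min[axis] - r.o[axis]) / r.d[axis];
    double tf = (max[axis] - r.o[axis]) / r.d[axis];
    if (tn > tf) std::swap(tn, tf);  // direction may be negative
    t0 = (tn > t0) ? tn : t0;        // intersect the three t-intervals,
    t1 = (tf < t1) ? tf : t1;        // starting from [min_t, max_t]
    if (t0 > t1) return false;       // intervals do not overlap: miss
  }
  return true;
}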

2: Implementation Problems

One problem I met was a segmentation fault, even after I fixed the centroid splitting method. It occurred because I used if ((node->prims).size() != 0) to check whether a node is a leaf, which breaks when the prims variable is not initialized. Instead, I found I should use the provided function isLeaf() here.

The other problem was the result shown below: when I zoomed in, I found that only the bounding box was filled with color. This was because I used the centroid box when creating a new node. Instead, I should use the full bounding box, because every region of the box (which contains the whole triangle) must be checked for intersection.

The centroid bounding box. The color just for the centroid bounding box.

3: Results

The BVH for cow. The result for cow.
The result for CBlucy. The result for maxplanck.

Part 3: Direct Illumination

1: Methods

In this part we first define a material using the diffuse BSDF function. Then we trace a ray to calculate the irradiance at a hit point, which gives the direct illumination we want.

First, to implement the BSDF function, we should know that for a diffuse or Lambertian material, the reflected light is distributed equally in all directions. Although the function takes wi and wo as input arguments (which are useful for other functions under the same parent class BSDF), we don't need these two directions to evaluate the DiffuseBSDF. So we simply take the reflectance, divide it by the integral of cos(theta) over the hemisphere, which is PI, and return that.
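In code, the whole function reduces to one line (a sketch; the Spectrum type and PI constant come from the skeleton):

Spectrum DiffuseBSDF::f(const Vector3D& wo, const Vector3D& wi) {
  // wo/wi are unused for a Lambertian surface. Dividing the albedo by PI
  // (the hemisphere integral of cos(theta)) keeps the BRDF energy-conserving.
  return reflectance * (1.0 / PI);
}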

Secondly, we need to implement the estimate_direct_lighting() function. The idea is as follows (a code sketch appears after the list):

1. Given a point in the scene, we can get the radiance arriving from a light source. We obtain it by sampling many rays toward that source and averaging their radiance, weighted by the sampling pdf. In the local frame the z axis is aligned with the surface normal, so if the sampled direction has a negative z component, the light source is behind the surface and we should ignore it, because direct light from that source cannot hit the point.

2. Besides the direction of the light source, we should also check whether another object in the scene lies between the point and the source. We test this by casting a shadow ray, setting its max_t so we do not check beyond the source, and calling intersect(). If the shadow ray intersects another object, the direct ray is blocked and never reaches the point.

3. After this, we know the light ray hits the point, but what we want is what the camera sees, which is not the incoming radiance itself. To convert it, we weight by the cosine of the angle between the incoming ray and the surface normal, and by the material type (the BSDF).

4. Doing this for every light source in the scene and summing the results gives the direct illumination we want.
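A minimal sketch of that loop, assuming the skeleton's helpers (make_coord_space, SceneLight::sample_L, the boolean bvh->intersect overload, EPS_D); the signatures are from memory and may differ:

Spectrum PathTracer::estimate_direct_lighting(const Ray& r,
                                              const Intersection& isect) {
  // Local frame at the hit point: z axis = surface normal.
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);

  Spectrum L_out;
  for (SceneLight* light : scene->lights) {
    // A delta light (e.g. a point light) needs only one sample.
    int ns = light->is_delta_light() ? 1 : ns_area_light;
    Spectrum L_light;
    for (int j = 0; j < ns; ++j) {
      Vector3D wi;                    // world-space direction: hit_p -> light
      float dist_to_light, pdf;
      Spectrum L = light->sample_L(hit_p, &wi, &dist_to_light, &pdf);
      Vector3D w_in = w2o * wi;       // same direction in the local frame
      if (w_in.z < 0) continue;       // step 1: light is behind the surface
      // Step 2: shadow ray; offset the origin, cap max_t at the light.
      Ray shadow(hit_p + EPS_D * wi, wi);
      shadow.max_t = dist_to_light - EPS_D;
      if (bvh->intersect(shadow)) continue;
      // Step 3: BSDF * incoming radiance * cos(theta) / sampling pdf.
      L_light += isect.bsdf->f(w_out, w_in) * L * (w_in.z / pdf);
    }
    L_out += L_light * (1.0 / ns);    // step 4: average, then sum over lights
  }
  return L_out;
}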

2: Problems

One problem was a segfault. After debugging for a while, I learned the problem was not in the estimate_direct_lighting() function but in bvh.cpp. A segfault usually means a null pointer is dereferenced, and that is most likely to happen in the intersection test. My bug was that in bool BVHAccel::intersect(const Ray& ray, BVHNode *node), I called p->intersect(ray, i) instead of p->intersect(ray), even though the intersection record should not be updated in this boolean-only version.

The second problem I met was that the scene was dark and the shadows looked wrong. I noticed something was off when casting the shadow ray: I was stuck on whether to use w_out, -r.d, -wi, or wi as the direction of the light ray. Although I knew the ray should go from the point to the source, I misunderstood the meaning of the variables: I thought wi was the incoming ray from the light source to the hit point, but it actually points from hit_p toward the light source.

3: Results

-t 8 -s 64 -l 32 -m 6 -t 8 -s 64 -l 32 -m 6

4: Results for noise comparison (varying l with s = 1)

Rendering with 1 light ray. Rendering with 4 light rays.
Rendering with 16 light rays. Rendering with 64 light rays.

As can be seen, the number of light rays used for sampling clearly affects the noise in the scene. With only 1 light ray, we just pick one sample at random, so the shadow is an all-or-nothing result (not soft at all). With more light rays, we average their spectra and get soft shadows that are closer to the real scene.

Part 4: Indirect Illumination

1: Methods

I implemented the indirect illumination as described at https://cs184.org/lecture/path-tracing/slide_057.

1. Instead of sampling from a light, I sample a direction from the BSDF, because here we want the reflected rays rather than the direct ray.

2. Then we need a termination probability for Russian roulette. We set it with a linear function of the sampled BSDF value's illuminance: the BSDF value shows how strong the reflected ray is, and if it is low enough we can terminate the ray. We clamp this tpdf between 0 and 1 and use a coin-flip function to generate the randomness.

3. If not terminating, we recursively estimate the incoming radiance and accumulate it. To do this, we generate a bounce ray from the hit point and trace it to estimate the radiance. To convert it to outgoing radiance, we multiply by the material's BSDF value and the cosine of the angle, then divide by the pdf of the sample and by the continuation probability, as sketched below.
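A minimal sketch of these steps; coin_flip, sample_f, and Spectrum::illum() follow the skeleton, while the depth bookkeeping on Ray and the trace_ray entry point are assumptions, and the factor 10 in the continuation probability is just one tuning choice:

Spectrum PathTracer::estimate_indirect_lighting(const Ray& r,
                                                const Intersection& isect) {
  // Local frame at the hit point: z axis = surface normal.
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);

  // Step 1: sample an incoming direction from the BSDF (local frame).
  Vector3D w_in;
  float pdf;
  Spectrum f = isect.bsdf->sample_f(w_out, &w_in, &pdf);

  // Step 2: Russian roulette. Continue with probability proportional to the
  // sample's illuminance, clamped to [0, 1].
  double tpdf = std::min(1.0, (double)f.illum() * 10.0);
  if (!coin_flip(tpdf)) return Spectrum();

  // Step 3: trace the bounce ray one level deeper (offset against self-hits),
  // then convert the returned radiance to outgoing radiance at hit_p.
  Vector3D dir = (o2w * w_in).unit();
  Ray bounce(hit_p + EPS_D * dir, dir);
  bounce.depth = r.depth - 1;          // assuming a depth counter on Ray
  Spectrum L_in = trace_ray(bounce);   // assumed recursive entry point
  return f * L_in * (std::abs(w_in.z) / (pdf * tpdf));
}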

2: Global Illumination result

Global illumination of sphere (-s 1024 -l 4 -m 4).

3: Direct and Indirect Illumination result

To generate an image with only direct illumination, comment out the lines for indirect illumination. To generate an image with only indirect illumination, we must modify the condition inside the direct-illumination code rather than simply commenting it out, because the direct light rays are still the source of the indirect bounces.

Direct illumination of sphere (-s 64 -l 16 -m 5). Indirect illumination of sphere (-s 64 -l 16 -m 5).

4: Different views with max_ray_depth

As can be seen, the image becomes slightly brighter as max_ray_depth grows. This is because max_ray_depth bounds the number of bounces of a ray, and the scene naturally gets brighter as we trace more bounces. But the improvement is small because our material is diffuse/Lambertian, so the bounced light is uniformly distributed and each extra bounce contributes little.

Bunny with max_ray_depth = 0 (-s 16 -l 4 -m 0). Bunny with max_ray_depth = 1 (-s 16 -l 4 -m 1).
Bunny with max_ray_depth = 2 (-s 16 -l 4 -m 2). Bunny with max_ray_depth = 3 (-s 16 -l 4 -m 3).
Bunny with max_ray_depth = 100 (-s 16 -l 4 -m 100).

5: Views with various sample-per-pixel rates (4 light rays)

As can be seen, fewer samples result in more noise and a less faithful image. The higher the sample rate, the closer the image is to the real scene.

sample-per-pixel rate = 1. sample-per-pixel rate = 2.
sample-per-pixel rate = 4. sample-per-pixel rate = 8.
sample-per-pixel rate = 16. sample-per-pixel rate = 64.
sample-per-pixel rate = 1024.

Part 5: Adaptive Sampling

1: Methods

The idea of adaptive sampling is to stop tracing more samples for a pixel once its estimate has converged, instead of always using the full sample budget. Following the assignment's convergence test, we track the mean mu and standard deviation sigma of the illuminance of the samples traced so far, compute the 95% confidence interval half-width I = 1.96 * sigma / sqrt(n), and stop sampling that pixel once I <= maxTolerance * mu.
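A hedged sketch of the per-pixel loop; samplesPerBatch, maxTolerance, and ns_aa follow the skeleton, while trace_one_sample is a hypothetical helper standing in for generating and tracing one camera-ray sample at pixel (x, y):

Spectrum total;
double s1 = 0.0, s2 = 0.0;   // running sums of illuminance and illuminance^2
int n = 0;
while (n < ns_aa) {
  // Check convergence once per batch, not after every single sample.
  if (n > 0 && n % samplesPerBatch == 0) {
    double mu = s1 / n;
    double var = (s2 - s1 * s1 / n) / (n - 1);  // unbiased sample variance
    double I = 1.96 * std::sqrt(var / n);       // 95% confidence half-width
    if (I <= maxTolerance * mu) break;          // pixel has converged: stop
  }
  Spectrum L = trace_one_sample(x, y);          // hypothetical helper
  double ill = L.illum();
  total += L;
  s1 += ill;
  s2 += ill * ill;
  ++n;
}
return total * (1.0 / n);   // average only over the samples actually traced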

2: Results

The bunny scene after adaptive sampling. The bunny sampling rate image after adaptive sampling.
