Mesh Editor

Overview

In this project, I implemented Bezier curve evaluation, mesh editing operations, and several shaders. In Section I, we dig into Bezier curves and surfaces: from 1D de Casteljau subdivision (see Part 1) to its 2D separable extension (see Part 2), we can generate curved lines and surfaces from a set of control points. In Section II, building on what we achieved in Project 1 (rasterization), we step back to see how those triangles are represented and how we can modify them to produce a more detailed and smoother surface. In Part 3, I implemented normal averaging to make shading look smooth; in Parts 4 and 5, I implemented the edge flip and edge split operations respectively; in Part 6, I used what I got from Parts 4 and 5 and walked through the process of upsampling (dividing each original triangle into four). In Section III, I implemented several types of shaders to make the 3D object look realistic, taking the viewing angle, material, and lighting into account.

Section I: Bezier Curves and Surfaces

Part 1: Bezier curves with 1D de Casteljau subdivision

1: Bezier curves with 1D de Casteljau subdivision

De Casteljau's algorithm is a method for evaluating Bezier curves. At each step it computes a linear interpolation (that is, a weighted average) of every pair of adjacent points, giving a new point on the segment between them. Since every pair of adjacent points produces one interpolated point, a set of n points generates n-1 points. Doing this recursively takes us from level n down to level 1, where only one point remains; that point lies on the Bezier curve. Since the weight t used for the interpolation can be any number between 0 and 1, sweeping t across [0, 1] and plotting the level-1 point for each t traces out the whole Bezier curve.

The implementation is essentially two loops. The outer loop (or recursion) goes from level n (the initial control points) down to level 1 (the final single point), and the inner loop computes the weighted average for each pair of adjacent points at the current level.
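A minimal sketch of this, assuming CGL's Vector2D type; the function names and signatures here are illustrative and may not match the starter code exactly:

```cpp
#include <vector>
#include "CGL/vector2D.h"   // assumed location of CGL::Vector2D

using namespace CGL;

// One de Casteljau step: lerp each pair of adjacent points with weight t,
// turning n points into n - 1 points.
std::vector<Vector2D> evaluateStep(const std::vector<Vector2D>& points, double t) {
  std::vector<Vector2D> next;
  for (size_t i = 0; i + 1 < points.size(); i++)
    next.push_back(points[i] * (1 - t) + points[i + 1] * t);
  return next;
}

// Repeat the step until a single point remains; that point lies on the curve at t.
Vector2D evaluate(std::vector<Vector2D> points, double t) {
  while (points.size() > 1)
    points = evaluateStep(points, t);
  return points[0];
}
```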

2: Results

Original points. After 1 evaluation step.
After 2 evaluation steps. After 3 evaluation steps.
After 4 evaluation steps. After 5 evaluation steps.
The final curve. Varying t, and the final curve.

Part 2: Bezier surfaces with separable 1D de Casteljau subdivision

1: Bezier surfaces with 1D de Casteljau subdivision

To implement Bezier surfaces, we convert the 2D problem into 1D. For the 4x4 grid of control points, we first run de Casteljau on each row using the row parameter u. This is the same 1D problem as in Part 1: each row needs 4 - 1 = 3 levels of linear interpolation, and at each level we apply the lerp function 3 -> 2 -> 1 times. The difference from Part 1 is that we do this once per row, so each row yields one point corresponding to parameter u.

Then, for the 4 resulting points (one per row), we apply the de Casteljau algorithm again along the column direction using parameter v. The result is the final surface point at (u, v).
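A sketch of this separable evaluation, assuming CGL's Vector3D; the helper names (evaluate1D, evaluateSurface) are my own and not necessarily the starter code's:

```cpp
#include <vector>
#include "CGL/vector3D.h"   // assumed location of CGL::Vector3D

using namespace CGL;

// Fully evaluate a 1D Bezier curve at parameter t by repeated de Casteljau steps.
Vector3D evaluate1D(std::vector<Vector3D> points, double t) {
  while (points.size() > 1) {
    std::vector<Vector3D> next;
    for (size_t i = 0; i + 1 < points.size(); i++)
      next.push_back(points[i] * (1 - t) + points[i + 1] * t);
    points = next;
  }
  return points[0];
}

// Separable evaluation: collapse each row at u, then collapse the column of
// row results at v to get the surface point at (u, v).
Vector3D evaluateSurface(const std::vector<std::vector<Vector3D>>& controlPoints,
                         double u, double v) {
  std::vector<Vector3D> rowResults;
  for (const auto& row : controlPoints)
    rowResults.push_back(evaluate1D(row, u));
  return evaluate1D(rowResults, v);
}
```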

2: Result

The tea pot.

Section II: Sampling

Part 3: Average normals for half-edge meshes

1: Implementation

The idea is to use a loop to accumulate the normals of all the faces around the vertex. Inside the loop, one thing we need to do is compute the normal of a face, which can be obtained by taking the cross product of two adjacent edge vectors. Since h is a halfedge iterator, we can use h->vertex() to find h's vertex, and h->vertex()->position to get that vertex's position as a Vector3D. Subtracting a neighboring vertex's position gives the edge vector; doing this for two edges and taking their cross product gives the face normal. The other thing in the loop is advancing to the next halfedge around the vertex, which was discussed several times in lecture and discussion (use twin()->next()).
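A minimal sketch of this traversal, assuming the framework's half-edge interface (HalfedgeCIter, twin(), next(), position) and that its classes live in namespace CGL; normalizing the accumulated sum at the end gives the area-weighted average normal:

```cpp
#include "halfEdgeMesh.h"   // assumed header for the half-edge framework

namespace CGL {

// Area-weighted average of the normals of the faces around this vertex.
Vector3D Vertex::normal() const {
  Vector3D n(0, 0, 0);
  HalfedgeCIter h = halfedge();
  do {
    // Two edge vectors of the current face, both starting at this vertex.
    Vector3D e1 = h->next()->vertex()->position - position;
    Vector3D e2 = h->next()->next()->vertex()->position - position;
    // The cross product's length is proportional to the face area,
    // so summing unnormalized normals gives the area weighting for free.
    n += cross(e1, e2);
    h = h->twin()->next();  // advance to the next face around the vertex
  } while (h != halfedge());
  return n.unit();
}

}  // namespace CGL
```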

2: Result

The tea pot without smoothing. The tea pot with smoothing.
The tea pot without smoothing. The tea pot with smoothing.

Part 4: Half-edge flip

1: Implementation

I implemented the flip function by first labeling all the halfedges, vertices, edges, and faces in the figure below, so that we can track how each element changes. After flipping, we reassign each element to its new neighbors. Working through this gives a clear picture of how the half-edge stitches the other elements together and how they relate to one another. Before all of this, we need to check whether the edge is a boundary edge, and skip the flip if it is.

One issue I was concerned about during implementation was whether the winding direction (clockwise or counterclockwise) matters, since the figures in class are drawn counterclockwise but the reference material uses clockwise. I think it shouldn't matter: the diagram is only an aid for us, and as long as the pointer assignments are consistent, the code loops the same way regardless of how the figure is drawn.
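A sketch of the reassignment, following the usual labeling of the six inner halfedges (h0 through h5) around the two triangles; the labels may not match the figure exactly, and setNeighbors(next, twin, vertex, edge, face) plus reference-returning accessors are assumed from the framework:

```cpp
#include "halfEdgeMesh.h"   // assumed header for the half-edge framework

namespace CGL {

EdgeIter HalfedgeMesh::flipEdge(EdgeIter e0) {
  // Boundary edges are left untouched.
  if (e0->isBoundary()) return e0;

  // Inner halfedges of the two triangles sharing e0.
  HalfedgeIter h0 = e0->halfedge();
  HalfedgeIter h1 = h0->next();
  HalfedgeIter h2 = h1->next();
  HalfedgeIter h3 = h0->twin();
  HalfedgeIter h4 = h3->next();
  HalfedgeIter h5 = h4->next();

  // e0 currently connects b and a; c and d are the opposite vertices.
  VertexIter b = h0->vertex();
  VertexIter a = h3->vertex();
  VertexIter c = h2->vertex();
  VertexIter d = h5->vertex();
  FaceIter f0 = h0->face();
  FaceIter f1 = h3->face();

  // Rewire so that e0 connects c and d: f0 becomes (c, d, a), f1 becomes (d, c, b).
  h0->setNeighbors(h5, h3, c, e0, f0);
  h5->setNeighbors(h1, h5->twin(), d, h5->edge(), f0);
  h1->setNeighbors(h0, h1->twin(), a, h1->edge(), f0);
  h3->setNeighbors(h2, h0, d, e0, f1);
  h2->setNeighbors(h4, h2->twin(), c, h2->edge(), f1);
  h4->setNeighbors(h3, h4->twin(), b, h4->edge(), f1);

  // Every element must still point to a halfedge that belongs to it.
  a->halfedge() = h1;  b->halfedge() = h4;
  c->halfedge() = h0;  d->halfedge() = h3;
  f0->halfedge() = h0; f1->halfedge() = h3;
  e0->halfedge() = h0;

  return e0;
}

}  // namespace CGL
```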

The structure before flip. The structure after flip.

2: Result

The teapot before flip. The teapot after flip.

Part 5: Half-edge split

1: Implementation

Different from the flip function, the split function not only needs to update the existing elements (halfedges, edges, vertices, and faces) but also needs to allocate new ones: 1 vertex, 2 faces, 3 edges, and 6 halfedges. Then we update the connectivity for all of them, as shown in the figures below.

The structure before split. The structure after split.

One thing that caused an error in my implementation was that I did not initialize the new vertex. Once the connectivity is defined, the remaining task is to give the new vertex a correct position; in my case the new position is set to the midpoint of v0 and v1.
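A sketch of one possible allocation and rewiring, assuming the framework's newVertex()/newEdge()/newFace()/newHalfedge() allocators and the setNeighbors() helper; the names here (e.g. h_ma for the new halfedge from the midpoint m to the opposite vertex a) are my own and do not match the labels in the figures:

```cpp
#include "halfEdgeMesh.h"   // assumed header for the half-edge framework

namespace CGL {

VertexIter HalfedgeMesh::splitEdge(EdgeIter e0) {
  if (e0->isBoundary()) return e0->halfedge()->vertex();

  // Inner halfedges of the two triangles sharing e0; e0 connects b and c,
  // with opposite vertices a and d.
  HalfedgeIter h0 = e0->halfedge();   // b -> c
  HalfedgeIter h1 = h0->next();       // c -> a
  HalfedgeIter h2 = h1->next();       // a -> b
  HalfedgeIter h3 = h0->twin();       // c -> b
  HalfedgeIter h4 = h3->next();       // b -> d
  HalfedgeIter h5 = h4->next();       // d -> c
  VertexIter b = h0->vertex(), c = h3->vertex();
  VertexIter a = h2->vertex(), d = h5->vertex();
  FaceIter f0 = h0->face(), f1 = h3->face();

  // Allocate 1 vertex, 3 edges, 2 faces, 6 halfedges.
  VertexIter m = newVertex();
  m->position = (b->position + c->position) / 2.0;   // midpoint of the old edge
  // (For Part 6 this is also where the new vertex's newPosition and the
  //  isNew flags would be set, as described in the next part.)
  EdgeIter e_mc = newEdge(), e_ma = newEdge(), e_md = newEdge();
  FaceIter f2 = newFace(), f3 = newFace();
  HalfedgeIter h_ma = newHalfedge(), h_am = newHalfedge();
  HalfedgeIter h_mc = newHalfedge(), h_cm = newHalfedge();
  HalfedgeIter h_md = newHalfedge(), h_dm = newHalfedge();

  // Four triangles: (b,m,a) in f0, (m,c,a) in f2, (c,m,d) in f3, (m,b,d) in f1.
  h0->setNeighbors(h_ma, h3, b, e0, f0);
  h_ma->setNeighbors(h2, h_am, m, e_ma, f0);
  h2->setNeighbors(h0, h2->twin(), a, h2->edge(), f0);

  h_mc->setNeighbors(h1, h_cm, m, e_mc, f2);
  h1->setNeighbors(h_am, h1->twin(), c, h1->edge(), f2);
  h_am->setNeighbors(h_mc, h_ma, a, e_ma, f2);

  h_cm->setNeighbors(h_md, h_mc, c, e_mc, f3);
  h_md->setNeighbors(h5, h_dm, m, e_md, f3);
  h5->setNeighbors(h_cm, h5->twin(), d, h5->edge(), f3);

  h3->setNeighbors(h4, h0, m, e0, f1);
  h4->setNeighbors(h_dm, h4->twin(), b, h4->edge(), f1);
  h_dm->setNeighbors(h3, h_md, d, e_md, f1);

  // Representative halfedges for every touched element.
  m->halfedge() = h_ma;
  b->halfedge() = h0;   c->halfedge() = h_cm;
  a->halfedge() = h2;   d->halfedge() = h5;
  e0->halfedge() = h0;  e_mc->halfedge() = h_mc;
  e_ma->halfedge() = h_ma; e_md->halfedge() = h_md;
  f0->halfedge() = h0;  f1->halfedge() = h3;
  f2->halfedge() = h_mc; f3->halfedge() = h_cm;

  return m;
}

}  // namespace CGL
```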

2: Results

The cube before split. The cube after split.
The bean before split. The bean after split.

Part 6: Loop subdivision for mesh upsampling

1: Implementation

On the whole, I followed the given “recipe” to implement the upsampling. Some differences are: 1) I combined step 1 and step 2 so that I could set isNew to false and compute newPosition in a single loop. 2) After computing each edge’s newPosition, which is the position of the new vertex that will sit “on” that edge (since we split every old edge, each old edge gets a new vertex on it), I copy that newPosition onto the new vertex inside the splitEdge function. In Part 5 I assigned the new vertex the midpoint of the edge; now I change it to the newPosition computed for that edge. 3) In the split and flip loops, I use a temporary iterator to do the ++ before the mesh is modified and then reassign it to the loop iterator, so that splitting and flipping each edge proceeds stably (a sketch of this pattern follows).
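A sketch of that iteration pattern, assuming the framework's EdgeIter list interface and the isNew fields from the recipe, and assuming splitEdge itself marks the new elements and copies newPosition onto the new vertex as described above:

```cpp
#include "halfEdgeMesh.h"   // assumed header for the half-edge framework

using namespace CGL;

// Split-then-flip passes using the temporary-iterator pattern: take the ++ on a
// copy before the mesh is modified, then reassign it to the loop iterator, so
// newly created edges never destabilize the traversal.
void splitAndFlip(HalfedgeMesh& mesh) {
  // Split pass: original edges are exactly those whose two endpoints are old vertices,
  // so edges created by earlier splits are skipped automatically.
  for (EdgeIter e = mesh.edgesBegin(); e != mesh.edgesEnd(); ) {
    EdgeIter nextEdge = e;
    nextEdge++;
    if (!e->halfedge()->vertex()->isNew && !e->halfedge()->twin()->vertex()->isNew)
      mesh.splitEdge(e);
    e = nextEdge;
  }
  // Flip pass: only new edges connecting one old and one new vertex are flipped
  // (checking e->isNew as well is the fix described in the debugging notes below).
  for (EdgeIter e = mesh.edgesBegin(); e != mesh.edgesEnd(); ) {
    EdgeIter nextEdge = e;
    nextEdge++;
    bool v0New = e->halfedge()->vertex()->isNew;
    bool v1New = e->halfedge()->twin()->vertex()->isNew;
    if (e->isNew && (v0New != v1New))
      mesh.flipEdge(e);
    e = nextEdge;
  }
}
```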

The problems I met after the first implementation were an infinite loop, positions not being updated, and the loss of many elements after upsampling. So I decided to debug each function in turn: split -> flip -> new positions.

First, I tested the split loop. After applying split to the cube, I noticed some edges were missing. Going back to my split function, I found that I didn't update the outside halfedges (h6, h7, h8, h9). After fixing this, my result was missing some faces. I noticed that when I assigned the outside halfedges, I used the inner faces instead of h6->face(), h7->face(), etc., which is definitely not correct. After fixing these problems, my split seemed to be correct.

After split loop.

Then I added the flip loop. The result was that many faces collapsed into the inside of the cube. It's hard to figure out why from the final result alone, so I used gdb to step through one loop iteration at a time, so that each time I could clearly see which edge was flipped. First I saw that an edge lying along an edge of the cube was flipped; because its neighboring faces lie in different planes, flipping it pushes it into the inside of the cube. But this edge was an old edge. I found out that I had only checked the two endpoints (one new and one old) before flipping, but never checked whether the edge itself was old or new. After fixing that, I got a good flipping result.

After flip loop.

Then I checked the update of the new positions on the vertices. I moved the update of each edge's newPosition into the splitEdge function. My results looked reasonable before updating the old vertices, but after updating them, many points collapsed. I rechecked the way I calculated newPosition for old vertices. I had planned to use v->computeCentroid() and then v->centroid * n to get the neighbor sum, but it did not compile because computeCentroid() is not defined, so I assumed it was defined elsewhere and already called, and used v->centroid directly. However, v->centroid is never assigned, which caused the problem. So I compute the sum myself, similar to Part 3, by traversing all the neighboring vertices and accumulating their positions. After this, I finally got what I expected. So excited!
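A minimal sketch of that neighbor accumulation, combined with the standard Loop weights for old vertices (u = 3/16 when the degree n is 3, otherwise 3/(8n)); the isNew/newPosition members are the ones the recipe already provides, and the function name is my own:

```cpp
#include "halfEdgeMesh.h"   // assumed header for the half-edge framework

using namespace CGL;

// Precompute the subdivided position of an old vertex:
// (1 - n*u) * originalPosition + u * (sum of neighboring vertex positions).
void computeOldVertexPosition(VertexIter v) {
  Vector3D neighborSum(0, 0, 0);
  int n = 0;
  HalfedgeIter h = v->halfedge();
  do {
    neighborSum += h->twin()->vertex()->position;  // vertex at the far end of h
    n++;
    h = h->twin()->next();                         // next outgoing halfedge around v
  } while (h != v->halfedge());

  double u = (n == 3) ? 3.0 / 16.0 : 3.0 / (8.0 * n);
  v->newPosition = (1.0 - n * u) * v->position + u * neighborSum;
}
```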

After updating the edge newPosition. After updating the vertices newPosition.

2: Results after repeated upsampling

The mesh behavior within a single upsampling iteration is shown above; after several upsampling iterations, the cube behaves as shown below. Several extraordinary points (vertices whose degree is not 6) appear.

After 2nd upsampling. After 3rd upsampling.
After 4th upsampling. After 5th upsampling.

Below is the torus example. After several iterations, it becomes symmetric and smooth.

No upsampling. After 1st upsampling.
After 2nd upsampling. After 3rd upsampling.

3: Cube’s symmetry problem

The cube becomes asymmetric because the edges meeting at each corner vertex are not distributed symmetrically in every direction (or plane). As a result, each time we subdivide, the subfaces we get are not evenly distributed across the faces of the cube. Therefore, extraordinary points appear, and, importantly, those extraordinary points have different degrees. Extraordinary points of different degrees end up distributed differently in space, which causes the asymmetric effect.

Based on the above analysis, my way to fix this effect is to preprocess the cube so that each corner vertex has a symmetric edge distribution among its neighboring faces, and so that the new vertices I introduce also have symmetric edge distributions around them. The following preprocessing pass uses splitting. To make this work, I had to move the per-edge newPosition computation out of the split function so that the preprocessing can simply split at midpoints.

Preprocessing: splitting each face's diagonal. After 1st upsampling.
After 2nd upsampling. After 3rd upsampling.

From this we can see that if every extraordinary point has the same degree, we get a symmetric geometry after upsampling. So the goal is to make every vertex that is going to become an extraordinary point have the same, symmetric distribution of neighboring edges. We can also use flips to give the vertices symmetric edges; the result looks like a tetrahedron.

Preprocessing: flipping some diagonals. After 1st upsampling.
After 2nd upsampling. After 3rd upsampling.

Section III: Shaders

Part 7: Fun with shaders

1: Phong shader

The Phong shader is a combination of ambient, diffuse, and specular terms. The ambient term is a constant value that adds some "natural" base illumination to the final image. The diffuse term depends on the material and the incoming light but not on the viewing direction, which is known as the Lambertian effect. The specular term adds the effect of the viewing direction, computed from how close the viewing direction is to the reflected light ray. The formula combining these three terms is:
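One standard form of this model, matching the description above (the notation is assumed here, and the exact constants in my shader may differ):

$$ L \;=\; k_a\,I_a \;+\; k_d\,\frac{I}{r^2}\,\max(0,\ \mathbf{n}\cdot\mathbf{l}) \;+\; k_s\,\frac{I}{r^2}\,\max(0,\ \mathbf{v}\cdot\mathbf{r}_\ell)^{p}, \qquad \mathbf{r}_\ell = 2(\mathbf{n}\cdot\mathbf{l})\,\mathbf{n}-\mathbf{l}, $$

where $k_a$, $k_d$, $k_s$ are the ambient, diffuse, and specular coefficients, $I_a$ is the ambient intensity, $I/r^2$ is the light intensity arriving from distance $r$, $\mathbf{n}$ is the surface normal, $\mathbf{l}$ the direction to the light, $\mathbf{v}$ the viewing direction, and $p$ the shininess exponent.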

I implemented it by reading and computing all the needed vectors and choosing some parameters. My teapot does not "like" red, so it reflects most of the red light, and we see it as red (or pink).

Teapot with phong shader.

2: Extra credit: environment reflection shader

The idea of an environment shader is to map a 3D direction at each point on our object to a location in the environment texture. Given an (x, y, z) direction, we convert it to 2D (u, v) coordinates and look up the RGB of the corresponding pixel in the texture image. For environment mapping, we want to wrap the surroundings onto a sphere so that we can see the environment in every direction (imagine standing inside a ball). So we can use spherical coordinates: compute (phi, theta) from (x, y, z) to form a spherical map, use (u, v) = (phi / (2 * PI), theta / PI) as the look-up coordinates, and then sample the texture for each pixel.
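Concretely, for a look-up direction $d = (x, y, z)$ (for this reflection shader, presumably the viewing direction reflected about the surface normal), the mapping is

$$ \phi = \operatorname{atan2}(y, x), \qquad \theta = \arccos\!\left(\frac{z}{\lVert d \rVert}\right), \qquad (u, v) = \left(\frac{\phi}{2\pi},\ \frac{\theta}{\pi}\right), $$

with $\phi$ wrapped into $[0, 2\pi)$ before dividing, so that $u$ stays in $[0, 1]$.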

One thing that bothered me for a long time was how to load the image. Eventually, noticing that vec3 shadeEnvmapReflection() is already declared in the frag file, I figured the texture-loading step might already be implemented. I found GLuint tex = makeTex("envmap/envmap.png") in main.cpp and glUniform1i(glGetUniformLocation(shaderProgID, "envmap"), 1) in meshEdit.cpp, which is what loads and binds the texture, so I just declared a sampler2D called envmap in the shader. With that, I could finally sample the texture that had already been added to the resources. In the shadeEnvmapReflection() function, another problem I met was that the texture image appeared upside down, so I had to reconsider the coordinate mapping; in the end I flipped the z-axis.

Environment image1: empty room. Teapot with environment shader.
Environment image2: me in the museum. Ball with environment shader.

3: Extra credit: bump mapping shader

Below is what I tried for bump mapping. To map coordinates from the 3D object to the 2D bump texture, I again used spherical coordinates, this time independent of the viewing angle. The idea of bump mapping is to compute a new normal based on the RGB value of the texture, where B is the in/out (normal) direction and R and G are the tangent and bitangent directions.

I looked at a lot of material on bump mapping and normal mapping, covering the TBN coordinate transformation, but still found it hard to implement well. The result below still looks a little weird.

Ball with bump mapping shader.

Section IV: Design a Robot Mesh

1: My Robot Design

I used Blender to create a robot and meshed it with the default settings. Beyond what the tutorial covers, I added two hands and two feet to my robot.

The robot model. Upsampling the robot.
Phong shading on the robot. Environment shading on the robot.

 
