## Overview

In this project, I implement rasterization functions. The simplest rasterizers draw a point, a line, or a triangle. For triangle rasterization, the basic method is to check whether a pixel lies inside the triangle and, if it does, fill the pixel with the corresponding color (see Part 1 for more). A major problem, however, is aliasing. To reduce it, we supersample (see Part 2 for more), but supersampling is costly. Instead, we can turn to other pixel sampling techniques that use barycentric coordinates to compute a weighted color (see Part 4 for more) or texture value (see Part 5 for more), and use bilinear interpolation to antialias. To improve the performance of pixel sampling with texture mapping, we use level sampling (see Part 6 for more). In addition, we implement transformation functions to translate, rotate, and scale the geometry in an image.

## Section I: Rasterization

### Part 1: Rasterizing single-color triangles

#### 1: Implementation

At first I used the functions from the slides, but only some of the triangles rendered correctly. For example, in test 6, I could only see partially colored flowers with obvious triangle edges. After some analysis, I realized that the winding order of the input mattered in my implementation, so I would have needed to handle clockwise and counter-clockwise triangles separately.

After learning about barycentric coordinates, especially in discussion (if alpha, beta, or gamma is less than 0, the point lies outside the triangle), I switched to the barycentric approach:

1. Find the bounding rectangle of the input triangle. Specifically, find the i_start, i_end, j_start, and j_end of the scanning process, where the “i”s and “j”s are pixel coordinates. The edges of the rectangle are the vertical and horizontal lines through the vertices of the triangle, expanded outward by one pixel. Concretely, taking the minimum and maximum of the vertex coordinates gives the start and end coordinates of the scan.

2. Scan every (i, j) in the rectangle, shifting the coordinates from the pixel corner to the pixel center (by adding 0.5).

3. Calculate the barycentric coordinates (alpha, beta, gamma).

4. If alpha, beta, and gamma are all non-negative, the pixel lies inside the triangle, so we fill it with the triangle's color.
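The four steps above can be sketched roughly as follows. This is a minimal standalone version, not the project's actual code: `rasterize` and `edge` are hypothetical names, and the output is a simple coverage buffer instead of a framebuffer. Note that because each barycentric coordinate is a ratio of signed areas, the same test works for both clockwise and counter-clockwise triangles.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Signed-area helper: twice the signed area of triangle (a, b, c).
static float edge(float ax, float ay, float bx, float by,
                  float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Rasterize triangle (x0,y0)-(x1,y1)-(x2,y2) into a width*height
// coverage buffer; returns the buffer (1 = inside, 0 = outside).
std::vector<int> rasterize(float x0, float y0, float x1, float y1,
                           float x2, float y2, int width, int height) {
    std::vector<int> buf(width * height, 0);

    // Step 1: bounding rectangle, computed once outside the loop.
    int i_start = std::max(0, (int)std::floor(std::min({x0, x1, x2})));
    int i_end   = std::min(width - 1, (int)std::ceil(std::max({x0, x1, x2})));
    int j_start = std::max(0, (int)std::floor(std::min({y0, y1, y2})));
    int j_end   = std::min(height - 1, (int)std::ceil(std::max({y0, y1, y2})));

    // Shared denominator, hoisted out of the loop; inside the loop we
    // multiply by its reciprocal instead of dividing per pixel.
    float denom = edge(x0, y0, x1, y1, x2, y2);
    if (denom == 0) return buf;            // degenerate triangle
    float inv_denom = 1.0f / denom;

    for (int j = j_start; j <= j_end; ++j) {
        for (int i = i_start; i <= i_end; ++i) {
            // Step 2: sample at the pixel center.
            float px = i + 0.5f, py = j + 0.5f;
            // Step 3: barycentric coordinates.
            float alpha = edge(x1, y1, x2, y2, px, py) * inv_denom;
            float beta  = edge(x2, y2, x0, y0, px, py) * inv_denom;
            float gamma = 1.0f - alpha - beta;
            // Step 4: inside iff all coordinates are non-negative.
            if (alpha >= 0 && beta >= 0 && gamma >= 0)
                buf[j * width + i] = 1;
        }
    }
    return buf;
}
```

The hoisted `inv_denom` is also the optimization discussed in the Extra Credit section below.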

#### 2: Extra Credit (Optimization)

1. Instead of recalculating the bounding rectangle in every loop iteration, I compute it once outside the loop.

2. Computing barycentric coordinates requires a division, and the denominator is the same for every pixel, so I moved that calculation outside the loop. In addition, a division costs more than a multiplication, so inside the loop I multiply by the precomputed reciprocal.

#### 3: Results

The result for test 8 may look odd because some lines appear to be missing. But when we zoom in, the lines show correctly. This is because my monitor displays the image at 960 * 640, which is a relatively low resolution.

Result for test 1. | Result for test 2. |

Result for test 3. | Result for test 4. |

Result for test 5. | Result for test 6. |

Result for test 7. | Result for test 8. |

### Part 2: Antialiasing triangles

#### 1: Implementation

In addition to the work from Part 1, Part 2 requires us to implement the get_pixel_color function. The goal of this function is to average the color values (RGBA) of the sample_rate subpixels and set the result as the color of the full pixel.

Because there is more arithmetic and more conversion among int, float, and unsigned char, small syntax mistakes can cause bad results. For example:

if I use “avg[0] = (unsigned char) sum[0] / num” to average the summed color value, an error occurs because the cast applies to sum[0] before the division, truncating it to an integer. Since sum[0] is a float less than 1 and num is greater than 1, the result is 0 and all we get is a black image. So we need to write: “avg[0] = (unsigned char) (sum[0] / num)”.
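A minimal standalone illustration of this cast-precedence pitfall (the function names and the * 255 byte scaling in the second version are my additions for clarity, not the project's exact code):

```cpp
// Averages a summed float color channel over `num` samples.
// Wrong: the cast binds tighter than '/', so `sum` is truncated to an
// integer first; for sums below 1, every channel comes out 0 (black).
unsigned char average_wrong(float sum, int num) {
    return (unsigned char) sum / num;
}

// Right: divide first, then cast the whole quotient.
// (The * 255 byte scaling here is illustrative only.)
unsigned char average_right(float sum, int num) {
    return (unsigned char) (sum / num * 255);
}
```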

With this helper function computing the averaged color of a full pixel, we can run the supersampling process. In drawrend.cpp, I first compute the subpixel center spacing outside the loop, so that inside the loop I can get the position of every subpixel's center. Compared with Part 1, I add two more loops that scan all subpixels within a pixel. As in Part 1, we simply do the inside-triangle check for every subpixel coordinate. The time complexity is therefore multiplied by the sample rate because of the extra loops.
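The subpixel-center layout can be sketched as below (a hypothetical helper, not the project's API): the spacing 1/s is computed once, then each of the s*s samples sits at the center of its subcell.

```cpp
#include <utility>
#include <vector>

// Centers of the s*s subpixel samples inside pixel (i, j).
// The subpixel spacing 1/s is hoisted out of the per-sample loop.
std::vector<std::pair<float, float>> subpixel_centers(int i, int j, int s) {
    std::vector<std::pair<float, float>> centers;
    float step = 1.0f / s;               // spacing, computed once
    for (int l = 0; l < s; ++l)
        for (int k = 0; k < s; ++k)
            centers.push_back({i + (k + 0.5f) * step,
                               j + (l + 0.5f) * step});
    return centers;
}
```

For sample rate 1 this degenerates to the single pixel center (i + 0.5, j + 0.5) used in Part 1.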

#### 2: Results

Test 4, sample rate = 1. | Test 4, sample rate = 4. |

Test 4, sample rate = 16. |

<!--

Result for test 3 in sample rate of 1.Result for test 3 in sample rate of 4.

Result for test 3 in sample rate of 9.Result for test 3 in sample rate of 16.Result for test 4 in sample rate of 1.Result for test 4 in sample rate of 4.Result for test 4 in sample rate of 9.Result for test 4 in sample rate of 16.Result for test 5 in sample rate of 1.Result for test 5 in sample rate of 4.Result for test 5 in sample rate of 9.Result for test 5 in sample rate of 16.Result for test 6 in sample rate of 1.Result for test 6 in sample rate of 4.Result for test 6 in sample rate of 9.Result for test 6 in sample rate of 16.

-->

### Part 3: Transforms

#### 1. Implementation

In this part, I simply fill in the matrices for translation, rotation, and scaling. One problem I met was that the angle is given in degrees, so we need to convert it to radians. For my robot, I gave it a smiling face and a waving posture.
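The degree-to-radian conversion looks like this in the rotation case (the plain 3x3 array and the function name are illustrative; the project uses its own matrix type):

```cpp
#include <cmath>

// Homogeneous 2D rotation matrix. The caller passes degrees, so we
// must convert to radians before calling the trig functions --
// forgetting this conversion was the bug described above.
void rotate_deg(float deg, float m[3][3]) {
    float rad = deg * 3.14159265358979f / 180.0f;
    float c = std::cos(rad), s = std::sin(rad);
    m[0][0] = c;  m[0][1] = -s; m[0][2] = 0;
    m[1][0] = s;  m[1][1] = c;  m[1][2] = 0;
    m[2][0] = 0;  m[2][1] = 0;  m[2][2] = 1;
}
```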

#### 2. Result

The default transformation of the robot. | My transformation of the robot. |

## Section II: Sampling

### Part 4: Barycentric coordinates

#### 1. Implementation

Barycentric coordinates convert a 2D coordinate vector into a 3D one. Instead of giving the position in the x-y coordinate system, they describe the point's relation to the three vertices of a triangle, so they are relative coordinates. Each coordinate measures the distance toward a vertex along the perpendicular from the opposite edge, normalized to the range 0 to 1. If any value is smaller than 0, the point lies outside the triangle. Any interior point can thus be seen as a weighted average of the three vertices: the closer a point is to a vertex, the more “similar” it is to that vertex. For example, if the vertex colors are red, green, and blue respectively, then the closer a point is to the red vertex, the redder it is. The centroid of the triangle is equidistant from the three vertices, so it is the average of red, green, and blue, and shows as color (255, 255, 255) / 3.

In this part, I need to get the color using the barycentric coordinates. Since I have already implemented barycentric coordinates, all I need to do is check whether the tri parameter is valid. If it is, I build a 3D vector holding the barycentric coordinates and pass it to the tri->color function, which computes the color as a weighted average based on the barycentric coordinates, as covered in the discussion section.
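The weighted average tri->color computes amounts to the following (a sketch with illustrative names, not the project's exact types):

```cpp
// A simple RGB color, channels in [0, 1].
struct Color { float r, g, b; };

// Weighted average of the three vertex colors by the barycentric
// coordinates (alpha, beta, gamma), where alpha + beta + gamma = 1.
Color bary_color(float alpha, float beta, float gamma,
                 const Color& c0, const Color& c1, const Color& c2) {
    return Color{alpha * c0.r + beta * c1.r + gamma * c2.r,
                 alpha * c0.g + beta * c1.g + gamma * c2.g,
                 alpha * c0.b + beta * c1.b + gamma * c2.b};
}
```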

#### 2. Results

The result for the round example of barycentric color picking. | The result for the triangle example barycentric color picking. |

### Part 5: “Pixel sampling” for texture mapping

#### 1. Implementation

For texture sampling, we have an image that is rasterized into triangles. Each triangle's vertex coordinates are mapped to coordinates in a texture image through some distortion (twist, fish-eye, etc.). Using barycentric coordinates to compute the weighted average of the vertices of the triangle a pixel lies in, we can map every pixel to the texture image. Copying the referenced color from the texture image into the output image gives us the converted result.

There are two ways to implement this. One is nearest sampling, which maps a pixel directly to the texel at the integer value of the computed weighted-average coordinates. The other is bilinear sampling, which averages the colors of the four neighboring texels after the mapping, producing less aliasing. The specific implementations are described as follows:

1. DrawRend::rasterize_triangle: we just need to pass the psm value when calling tri->color, leaving the other parameters at their defaults (which correspond to level 0).

2. TexTri::color: instead of just averaging vertex colors, we need to read the color from the texture. To get the correct texture color, we compute the uv coordinates as the weighted average of the three vertices' uv coordinates.

3. Texture::sample: sample is a dispatcher that reads the mode and calls the corresponding function.

4. Texture::sample_nearest: we use the uv coordinates to locate the correct texel in the texture and return its color. The structure we visit is mipmap, a vector whose elements correspond to different levels (the first is level 0, say 128 * 128; the second is level 1, say 64 * 64; and so on). In Part 5 we only need level 0. Each level stores its height (say 128), its width (128), and the color of every texel in texel, a 1D vector flattened from 2D, similar to what we saw in discussion. Its size is height * width * 4 (4 for RGBA). So I convert the uv coordinates into a 1D index and read the RGBA values from there.

5. Texture::sample_bilinear: instead of using just one texel, we compute a weighted average of the four nearby texels. One thing that bothered me was how to locate the four texel centers that surround the sample point. My final method is to subtract 0.5 from the uv coordinates (in texel units): after subtracting, flooring gives the texel (x, y) whose center is the lower-left of the four surrounding centers, so the four texels are (x, y), (x+1, y), (x, y+1), and (x+1, y+1).
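The bilinear step can be sketched as follows. To keep it short, this hypothetical version uses a single float per texel instead of RGBA, and clamps at the texture borders (the project's border handling may differ):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear lookup on a width*height single-channel texel grid.
// u, v are in texel units; subtracting 0.5 shifts from texel corners
// to texel centers, as described above.
float sample_bilinear(const std::vector<float>& texels,
                      int width, int height, float u, float v) {
    float x = u - 0.5f, y = v - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float tx = x - x0, ty = y - y0;      // fractional weights

    auto at = [&](int xi, int yi) {      // clamp at the borders
        xi = std::clamp(xi, 0, width - 1);
        yi = std::clamp(yi, 0, height - 1);
        return texels[yi * width + xi];
    };

    // Lerp horizontally along the two rows, then vertically.
    float top = (1 - tx) * at(x0, y0)     + tx * at(x0 + 1, y0);
    float bot = (1 - tx) * at(x0, y0 + 1) + tx * at(x0 + 1, y0 + 1);
    return (1 - ty) * top + ty * bot;
}
```

Sampling exactly at a texel center returns that texel unchanged; sampling between four centers blends them by area weight.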

#### 2. Analysis

As the results show, when the sample rate is low (no supersampling), the high-frequency parts of the image differ noticeably between nearest and bilinear sampling, since bilinear uses a weighted average of neighboring texels. Because high-frequency regions change rapidly from pixel to pixel, the bilinear result looks blurred there. With supersampling, the nearest and bilinear results are similar, because supersampling already antialiases and filters out some of the high-frequency information.

#### 3. Results

Result for using level 0 nearest sampling, sample rate = 1. | Result for using level 0 bilinear sampling, sample rate = 1. |

Result for using level 0 nearest sampling, sample rate = 16. | Result for using level 0 bilinear sampling, sample rate = 16. |

Result for using level 0 nearest sampling. | Result for using level 0 bilinear sampling. |

Result for using level 0 nearest sampling. | Result for using level 0 bilinear sampling. |

Result for using level 0 nearest sampling. | Result for using level 0 bilinear sampling. |

Result for using level 0 nearest sampling. | Result for using level 0 bilinear sampling. |

### Part 6: “Level sampling” with mipmaps for texture mapping

#### 1: Implementation

Some parts of the output image minify the texture: one pixel in the output can map to a large block of texels (4 * 4, 16 * 16, etc.), or in other words, many texels in an area of the texture map to a single output pixel. It is therefore not reasonable to pick only the nearest texel, or four (bilinear) texels, to assign the color. The idea of level sampling is to build a mipmap in advance, averaging each small texel block (say 2 * 2) into a single new texel. Doing this for all blocks gives another texture matrix: if the original texture is 128 * 128 and we average every 2 * 2 block, we get a new 64 * 64 map; continuing, we get maps of 32 * 32, 16 * 16, 8 * 8, 4 * 4, 2 * 2, and 1 * 1. This structure is called a mipmap.

We use the mipmap to trade space for higher speed and better antialiasing results.
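One mipmap construction step looks like this (a single-channel sketch with a hypothetical name; the real texels carry RGBA):

```cpp
#include <vector>

// One mipmap downsampling step: average each 2x2 block of the parent
// level (w x h) into one texel of the child level (w/2 x h/2).
std::vector<float> downsample(const std::vector<float>& src, int w, int h) {
    int nw = w / 2, nh = h / 2;
    std::vector<float> dst(nw * nh);
    for (int y = 0; y < nh; ++y)
        for (int x = 0; x < nw; ++x)
            dst[y * nw + x] = (src[(2 * y) * w + 2 * x] +
                               src[(2 * y) * w + 2 * x + 1] +
                               src[(2 * y + 1) * w + 2 * x] +
                               src[(2 * y + 1) * w + 2 * x + 1]) / 4.0f;
    return dst;
}
```

Applying this repeatedly down to 1 * 1 yields the full mipmap pyramid described above.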

1. DrawRend::rasterize_triangle: besides computing the barycentric coordinates of the current pixel, we also need to compute them for the neighboring pixels offset by dx and dy. We also pass lsm as a parameter to tri->color.

2. TexTri::color: similar to the above, here we just need to compute the weighted average of the uv coordinates from the barycentric coordinates.

3. Texture::sample: I add lsm as a mode parameter and call the corresponding function.

4. Texture::get_level: here we just implement the formula we learned in class: find the maximum distance between the current pixel and its neighbor pixels in uv space. One subtlety is that the uv coordinates are floats between 0 and 1, so we must multiply them by the width or height of the texture image.

5. Texture::sample_trilinear: do bilinear sampling at the current level and at the level above it, then average the two colors and return the result.
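The level computation in step 4 can be sketched as below. This is an illustrative standalone version: the uv differences are scaled to texel units, the longer of the two difference vectors is taken, and its log2 gives the level (the clamping bounds are my assumption):

```cpp
#include <algorithm>
#include <cmath>

// Mipmap level from the uv differences to the dx and dy neighbor
// pixels. dudx, dvdx, dudy, dvdy are in [0, 1] uv units, so each is
// scaled by the texture size before measuring length.
float get_level(float dudx, float dvdx, float dudy, float dvdy,
                int tex_width, int tex_height, int max_level) {
    float lx = std::hypot(dudx * tex_width, dvdx * tex_height);
    float ly = std::hypot(dudy * tex_width, dvdy * tex_height);
    float d = std::max(lx, ly);          // worst-case footprint
    float level = std::log2(d);          // 2x minification per level
    return std::clamp(level, 0.0f, (float)max_level);
}
```

A footprint of one texel per pixel gives level 0 (no minification); a footprint of four texels gives level 2.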

#### 2: Results 1

Result for level 0 nearest sampling. | Result for level 0 bilinear sampling. |

Result for nearest level nearest sampling. | Result for nearest level bilinear sampling. |

#### 3: Results 2

From the following examples we can see that the results for level 0 and nearest level do not differ much, because there is not much minification of the texture image, so the levels used are close.

Result for level 0 nearest sampling. | Result for level 0 bilinear sampling. |

Result for nearest level nearest sampling. | Result for nearest level bilinear sampling. |

Result for trilinear sampling. |

#### 4: Results 3

For this part I found a high-resolution image (4000 * 4000) to use as the texture. We can see that level 0 and nearest-level sampling differ noticeably, and that bilinear sampling reduces the aliasing.

Result for level 0 nearest sampling. | Result for level 0 bilinear sampling. |

Result for nearest level nearest sampling. | Result for nearest level bilinear sampling. |

Result for trilinear sampling. |

#### 5: Analysis

Time: bilinear pixel sampling takes slightly longer (~0.001 s) than nearest sampling, but antialiases better; level sampling takes a little longer (~0.004 s) than level 0 sampling, but gives better results; supersampling multiplies the time by the number of subpixels per pixel, and its result is similar to level sampling or bilinear sampling, so it is not recommended. Space: level sampling takes more memory, but as we proved in discussion, only 4/3 of the original memory is used.
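The 4/3 memory factor follows from a geometric series: each mipmap level has 1/4 as many texels as the one below it, so with level-0 size $A$,

```latex
\text{mipmap memory} \;=\; A \sum_{k=0}^{\infty} \left(\tfrac{1}{4}\right)^{k}
\;=\; \frac{A}{1 - \tfrac{1}{4}} \;=\; \frac{4}{3}\,A .
```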

Relating this to the earlier results, we know that bilinear sampling reduces aliasing more effectively than nearest sampling. For mappings without much minification, level-zero bilinear sampling is enough to generate a good result, is not too costly, and uses no extra memory. For mappings with heavy minification in some areas, it is worth trading space for quality with level sampling (bilinear or trilinear for better antialiasing).