Point sampling only tells us the correct state of a single point, rather than the proper state of the entire pixel containing that point. Jaggies look wrong because they give the impression that the figure is not continuous, while the underlying model should be smooth.
TianCal
Point sampling is right in the sense that each pixel can only have one single state, so evaluating the middle point of the pixel is actually more accurate than evaluating any other point.
WilliamLiuAtCPC
Since human eyes act as a low-pass filter (LPF), jaggies are very sharp (high-frequency) signals, so our eyes are easily drawn to them.
haodi-zou
In point sampling, a pixel is considered to be "all or nothing" depending on whether its middle point is covered. However, it is possible that a pixel is only partially covered, and this is where the problem comes from. The resulting figure has pixels whose shaded area is either 0 or 1 (discrete), while in reality, the shaded area should be continuous on [0, 1], and this is the reason why jaggies look wrong.
Queenie-Lau
Jaggies often look "wrong" because the small jagged edges are inconsistent and uneven. Since the edges stand out, when we see a jagged circle we already have an image of what a circle should look like (smooth around the edges), so it registers as wrong. It's interesting to note that jaggies can also look "right" when used deliberately as a technique in pixel art.
NicholasJJ
Building off @haodi-zou's comment, my guess is that a formula that colors each pixel based on the percentage of the pixel covered by the triangle will give a smoother outcome. I have a feature turned on on my computer that lets me zoom in to parts of the screen, and looking around I'm seeing a blur effect on the edges of shapes. What's interesting is that, while the blurring probably smooths out shapes seen from far away, since each pixel is still only one color you still get jaggies when looking closely at the screen: they're just blurry jaggies now.
aramk-hub
Point sampling essentially maps a pixel to being "on" or "off" completely, with no in-between. Because of this binary {0, 1} mapping, it can sometimes be quite unrealistic: in theory a pixel should sometimes be partially shaded (think of a circle's round edge or this triangle's outer pixels), but point sampling does not allow this to happen. Because of this, we get the "jaggies", which have to be solved with antialiasing/better sampling methods. Moreover, jaggies are not always evenly spaced out, as we saw in the triangle in this lecture (alternating between steps of 1 and 2 pixels).
Jaggies look "wrong" because they simply are in a sense. They don't represent the real world. You can't find a jagged edge in real life unless it is deliberately made or part of the material is torn off. Graphics attempt to simulate the real world, and a jagged edge does not simulate the real world at all. Because of this, we look for different approaches such as antialiasing, and more specifically supersampling as an approach.
JefferyYC
In an ideal rendering scenario, we could illuminate each pixel partially to exactly render the triangle (i.e., a formula that calculates the area covered by the triangle on each pixel). While on a software level this is easy to calculate, are there any constraints at the physical level preventing us from doing this (a guess: it is impossible to illuminate continuous sub-pixel areas, so the output has to be binary)?
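On the software side, here's a rough sketch of how that exact covered area could be computed: clip the triangle against the pixel's square (Sutherland-Hodgman polygon clipping), then take the clipped polygon's area with the shoelace formula. All the function names and the example triangles are my own, just for illustration:

```python
# Sketch (my own code, not from lecture): exact fraction of a unit
# pixel covered by a triangle, via polygon clipping + shoelace area.

def clip(poly, inside, intersect):
    # One pass of Sutherland-Hodgman against a single half-plane.
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def _ix(a, b, axis, value):
    # Intersection of segment a-b with the line (x or y) == value.
    t = (value - a[axis]) / (b[axis] - a[axis])
    p = [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])]
    p[axis] = value  # snap exactly, avoiding rounding drift
    return tuple(p)

def coverage(tri, x0, y0):
    # Fraction of the unit pixel [x0, x0+1] x [y0, y0+1] covered by tri.
    poly = list(tri)
    # Clip against the four pixel edges in turn.
    for inside, intersect in [
        (lambda p: p[0] >= x0,     lambda a, b: _ix(a, b, 0, x0)),
        (lambda p: p[0] <= x0 + 1, lambda a, b: _ix(a, b, 0, x0 + 1)),
        (lambda p: p[1] >= y0,     lambda a, b: _ix(a, b, 1, y0)),
        (lambda p: p[1] <= y0 + 1, lambda a, b: _ix(a, b, 1, y0 + 1)),
    ]:
        poly = clip(poly, inside, intersect)
        if not poly:
            return 0.0
    # Shoelace area of the clipped polygon.
    area = 0.0
    for i in range(len(poly)):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % len(poly)]
        area += ax * by - bx * ay
    return abs(area) / 2.0
```

So computing the area exactly really is cheap per pixel; whether the display hardware can emit a truly continuous intensity is a separate question.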
blahBlahhhJ
I'm thinking that to make the sampled triangle look better, maybe we could partially turn on some pixels. If pixel levels range between 0 and 255, we can set each pixel's value to p × 255, where p is the fraction of the square pixel covered by the triangle. This way we would get a blur effect around the edges, which would probably reduce the jaggies.
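A minimal sketch of that p × 255 idea, estimating p by checking an n × n grid of sample points inside each pixel (the triangle, pixel coordinates, and function names here are made-up examples of mine):

```python
# Sketch (my own code): approximate pixel coverage p by supersampling,
# then map it to an 8-bit intensity round(p * 255).

def edge(ax, ay, bx, by, px, py):
    # Signed edge function: > 0 if (px, py) is left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def inside_triangle(tri, px, py):
    (x0, y0), (x1, y1), (x2, y2) = tri
    e0 = edge(x0, y0, x1, y1, px, py)
    e1 = edge(x1, y1, x2, y2, px, py)
    e2 = edge(x2, y2, x0, y0, px, py)
    # Accept either winding order.
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def pixel_intensity(tri, x, y, n=4):
    # Sample an n x n grid inside the unit pixel whose corner is (x, y).
    hits = sum(
        inside_triangle(tri, x + (i + 0.5) / n, y + (j + 0.5) / n)
        for i in range(n)
        for j in range(n)
    )
    p = hits / (n * n)      # estimated coverage in [0, 1]
    return round(p * 255)   # 0 = off, 255 = fully covered
```

Edge pixels then land somewhere strictly between 0 and 255, which is exactly the blur effect described above.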
CarneAsadaFry
Building off of some of the comments about a higher-quality pixel formula, I agree that we could do better by shading each pixel based on the percentage covered. However, I wanted to think about how to actually calculate this percentage, or something like it, without sampling many points in each pixel, which may be slow. One idea that comes to mind is to take each filled-in pixel and all of its neighbors, and calculate the distance from their centers to the nearest triangle edge. Then close pixels get a lighter shade, while "far" interior pixels are shaded darker, and "far" exterior pixels are not shaded at all.
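A rough sketch of that distance idea: compute a signed distance from the pixel center to the nearest triangle edge (negative inside the triangle), then ramp the shade across the edge. The falloff width and all names are my own choices, not anything from lecture:

```python
# Sketch (my own code): shade a pixel by its center's signed distance
# to the nearest triangle edge, instead of sampling many points.
import math

def signed_dist_to_triangle(tri, px, py):
    # Unsigned distance to the nearest edge, negated when (px, py)
    # is inside the triangle (so negative = interior).
    def seg_dist(a, b):
        (ax, ay), (bx, by) = a, b
        vx, vy = bx - ax, by - ay
        wx, wy = px - ax, py - ay
        t = max(0.0, min(1.0, (wx * vx + wy * vy) / (vx * vx + vy * vy)))
        return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

    d = min(seg_dist(tri[i], tri[(i + 1) % 3]) for i in range(3))

    def cross(a, b):
        (ax, ay), (bx, by) = a, b
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    # Inside iff all three edge functions share a sign (either winding).
    signs = [cross(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    inside = all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
    return -d if inside else d

def shade(tri, px, py, width=1.0):
    # Map signed distance to [0, 1]: 1 deep inside, 0 far outside,
    # with a linear ramp `width` pixels wide centered on the edge.
    s = signed_dist_to_triangle(tri, px, py)
    return min(1.0, max(0.0, 0.5 - s / width))
```

One caveat: this only needs one distance query per pixel, but unlike true coverage it ignores corners where two edges cut through the same pixel, so it's an approximation.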
christinemegan
Relevant factors that could be considered for the value of a pixel would be: what percentage of the pixel is within the bounds of the triangle, the positions of the triangle's corners (to calculate the angle of each side), or the values of the pixels surrounding the current pixel.
maleny25
Jaggies look wrong because of their sharp edges, when natural shapes like circles have smooth curves, as @Queenie-Lau said. We know what these shapes are supposed to look like, so seeing those edges makes us label them as wrong. There must be a better way to cover a pixel, something like following the curve, to remove some of the jaggies and make point sampling a little more precise instead of an all-or-nothing situation.
CarlQGan
How I perceive sampling from the center is that we would like to preserve the details of pixels, so we treat those whose centers are covered as important and keep that information. However, a better solution could be to light up pixels according to the percentage of the pixel's area covered by the original pattern. This would create a gradient effect and eliminate (at least a little bit of) the jagginess.
lucywan
I think the value a pixel has should depend on the context of what the pixel is being used to create. If the pixel is part of a billboard, there's no reason to have pixels that are extremely small, since people will see them from far away anyway. However, if it's for a digital art piece, the pixels should be small.