I understand that this involves some sort of mapping of surface features to texture space, but how can we define such a mapping when the depth information differs across the surface, like the difference between the eyes in this slide?
Thinking more abstractly, what we are doing here is defining a mapping from each (x, y, z) point on this object to a (u, v) coordinate on this texture. There are many ways to actually create this mapping (you could probably even do it by hand if you wanted), but once we have the mapping, we don't necessarily care about the depth information or the real shape of the object.
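To make the abstract idea concrete, here is a minimal sketch of one very simple way to build such a mapping: a planar projection that just drops the depth axis and normalizes the rest into [0, 1]. This is my own illustration, not the method from the slides; the function name and example points are hypothetical.

```python
import numpy as np

def planar_uv(points, axis=2):
    """Map 3D points to (u, v) by dropping one axis (the depth axis,
    z by default) and normalizing the remaining two coordinates into
    [0, 1]. A crude mapping: depth along the dropped axis is ignored,
    so points on the front and back of an object land on the same texel."""
    pts = np.asarray(points, dtype=float)
    keep = [i for i in range(3) if i != axis]
    uv = pts[:, keep]
    # Normalize each remaining coordinate to the [0, 1] texture range.
    uv = (uv - uv.min(axis=0)) / (uv.max(axis=0) - uv.min(axis=0))
    return uv

# Hypothetical example: four corners of a tilted quad.
pts = [[0, 0, 0], [1, 0, 0.5], [1, 1, 1], [0, 1, 0.5]]
print(planar_uv(pts))
```

Note how the two corners at different depths (z = 0 and z = 0.5) still get distinct (u, v) coordinates, exactly because the mapping only cares about position within the texture plane, not depth.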
I like to think of this like those water-transfer vinyls where you can dip a surface into a texture: it requires the texture's area (u by v) to be equal to the surface area of the surface model (with respect to the units, of course).
Here's a cool video explaining UV unwrapping, which is the process by which you take your 3D model with no texture on it, get all the faces (like our triangles) out, flatten them into a UV map, and then overlay and position a 2D texture on it. The first couple of minutes give some good prefacing on the process that isn't too technical, and the rest is cool for my fellow Blender nerds. https://www.youtube.com/watch?v=scPSP_U858k&ab_channel=BlenderGuru
It's interesting seeing what the texture looks like in 2D. I'm curious how graphics designers work with texture assets for 3D objects: Do they always directly work with a 3D model in their software, or do they ever just work on a 2D, possibly weird-looking texture first, and then project it onto the 3D model?
Is there a simple mathematical function that does this transformation?
How do we construct the map from (x, y, z) to (u, v) that we want?
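For simple shapes there are indeed closed-form functions. A classic example (my own sketch, not from the slides) is the spherical, longitude/latitude-style mapping for a point on a unit sphere:

```python
import math

def spherical_uv(x, y, z):
    """Map a point on the unit sphere (x^2 + y^2 + z^2 == 1) to (u, v):
    u comes from the azimuth angle around the y axis, v from the
    elevation. Both land in [0, 1]. Note the two classic artifacts:
    a seam where u wraps from 1 back to 0, and distortion at the poles."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# The "north pole" (0, 1, 0) maps to the top edge of the texture (v = 0).
print(spherical_uv(0.0, 1.0, 0.0))
```

For arbitrary meshes there usually is no single neat formula, which is why tools rely on UV unwrapping: cutting the mesh along seams and flattening each piece, as in the video linked above.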