With generative AI nowadays, I wonder how the process of inbetweening will change in the animation industry, and how that will affect the lead animator's job.
sritejavij
In the case of ray tracing, I'm curious how it's done so quickly in animated and live video games when it took minutes to render just a single image in project 3, and what optimizations are being done.
aravmisra
@Michael that is a super interesting thought, and my initial presumption is that a lot of the repetitive work, such as inbetweening between keyframes, may be outsourced to models. If a key animator can provide a few concept drawings/sketches, could a model be used to extrapolate and connect the frames?
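For a sense of scale, the non-ML baseline for inbetweening is plain interpolation between keyframe poses. Here's a minimal sketch, assuming keyframes are simple 2D point sets (which real drawings aren't):

```python
import numpy as np

def inbetween(key_a, key_b, n):
    """Generate n evenly spaced inbetween poses between two keyframes."""
    key_a, key_b = np.asarray(key_a, float), np.asarray(key_b, float)
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]   # skip t=0 and t=1: those are the keyframes
    return [(1 - t) * key_a + t * key_b for t in ts]

# two keyframe poses of a three-point "character"
pose_a = [(0, 0), (1, 2), (2, 0)]
pose_b = [(3, 0), (4, 2), (5, 0)]
tweens = inbetween(pose_a, pose_b, 4)         # the 4 drawings a model would have to produce
```

Real tweening uses easing curves rather than a straight lerp, and the hard part for a model is that actual keyframes are full drawings with no point correspondences given.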
S-Muddana
@aravmisra With the way AI video generation tools are rising, it could already be possible to do what you described, where an animator inputs a prompt and an ML model creates the animation.
maxwelljin
@sritejavij The project 3 we did does not support GPU rendering. Perhaps we could use modern GPUs with dedicated ray-tracing cores to accelerate the process (these cores handle BVH traversal and ray-triangle intersection much faster than general-purpose GPU compute). We could also use level-of-detail techniques (objects far from the camera can be rendered with a lower sample rate) and shading and lighting optimizations; precomputed lighting, for example, works well for static objects. Many games use a hybrid approach, where only certain elements are ray-traced and the rest is rasterized.
I also found a course about real-time rendering:
https://sites.cs.ucsb.edu/~lingqi/teaching/games202.html
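To make the level-of-detail idea concrete, here's a minimal sketch (the function and its parameters are made up for illustration, not project 3's or any engine's actual API):

```python
def samples_for_distance(distance, base_samples=64, lod_step=10.0, min_samples=4):
    """Crude level-of-detail: halve the per-pixel sample budget
    every lod_step world units of distance from the camera."""
    halvings = int(distance / lod_step)
    return max(min_samples, base_samples >> halvings)

# nearby surfaces get the full budget, distant ones far fewer rays
assert samples_for_distance(5.0) == 64
assert samples_for_distance(25.0) == 16
assert samples_for_distance(500.0) == 4
```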
@S-Muddana Yeah, if AI can do the detailed work of filling in the frames between keyframes, animators might focus more on the big picture. And if AI can even create the keyframes themselves, like you're saying, animators may need to learn new skills to work with AI effectively.
spegeerino
To add on to the discussion about AI use in animation, I think this is a place where it could really shine. There are extremely detailed inputs that convey almost exactly what needs to happen (the keyframes), and the minor errors that image-generating AIs tend to make matter less in animation, since each frame is on screen for such a short time that they're harder to spot. Of course, at this point AI-generated work would probably still need to be vetted by a human for egregious mistakes, but I wouldn't be surprised if these AIs are already seeing some kind of use in animation today.
OnceLim
With the recent release of the Spider-Verse movies, I was wondering how keyframe animation comes into play in them. In the movies, you can see that Miles is intentionally animated with a bit of choppiness. Would that mean the keyframes were implemented like before, but the "tweens" were reduced in number to give off that choppiness?
brandonlouie
Another type of "tween" is the smear frame, which generally assists in giving the illusion of motion blur. Smear frames greatly exaggerate the movement of the animation, oftentimes looking goofy on their own. But when displayed rapidly one after another, smear frames create a great-looking movement animation. Here are some examples of smear frames: https://youtu.be/D46TvFcte0w?si=vgMxGjv0XOGDu7q9
brandonlouie
@OnceLim yeah, I think that's correct! I have a few friends in animation and have read some artist threads on Spider-Verse's animation process, and learned that a character's drawings are held for different amounts of time per movement. I'm not sure I could describe it super well, but the relevant thing to look into is animation on 2s, 3s, and 4s, which refer to holding each drawing for that many frames (resulting in fewer "tweens"). A fun fact is that some of the characters are animated on different numbers of frames, which makes them differ in how they appear to move on screen (most notably Spider-Punk vs. Miles).
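If it helps, here's a tiny sketch of what animating "on 2s/3s" means computationally (my own toy illustration, not how studio pipelines actually represent it):

```python
def held_playback(drawings, hold):
    """Expand unique drawings into a 24 fps frame sequence,
    holding each drawing on screen for `hold` frames."""
    return [d for d in drawings for _ in range(hold)]

# one second of screen time at 24 fps:
on_1s = held_playback(range(24), hold=1)   # 24 unique drawings (smooth)
on_2s = held_playback(range(12), hold=2)   # 12 unique drawings
on_3s = held_playback(range(8),  hold=3)   # 8 unique drawings (choppier)
```

Mixing holds per character is what makes two characters read so differently in the same shot.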
colinsteidtmann
Animation seems so labor intensive that I decided to do a little more reading. According to this article, https://sites.psu.edu/thebeautyofanimation/2018/03/20/keys-and-in-betweens-the-traditional-animation-process/, traditional animation typically had as many drawings as the frame rate it ran at, usually 24 drawings and frames per second. That's 1,440 drawings per minute! I thought things would be much faster by now, but I looked up how long animation takes these days, and according to replies in this reddit post, https://www.reddit.com/r/animationcareer/comments/hounxm/how_long_does_it_take_to_animate_one_minute_of/, a good rule of thumb is "a month per minute." This bewilders me; am I missing something, or does it actually take that long to make animations in 2024?
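For what it's worth, the drawing-count arithmetic is easy to sanity check:

```python
fps, seconds_per_minute = 24, 60
print(fps * seconds_per_minute)        # 1440 drawings/minute when animating on 1s
print(fps * seconds_per_minute // 2)   # 720 when animating on 2s, as much work often was
```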
ttalati
Here it seems that we still rely on artists to make images, but like a lot of people I am wondering whether the new advances in genAI will shrink the artist's role in making new drawings, so that the main job becomes checking that the generated work is correct and making edits only where needed.
rcorona
Here's a paper which uses a neural network to dynamically animate a controllable character. Something I find super interesting about this is that the model is capable of adapting to the geometry of the environment so that its movements appear more natural:
https://theorangeduck.com/media/uploads/other_stuff/phasefunction.pdf
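That's the phase-functioned neural network paper; the core trick is that the network's weights aren't fixed but are regenerated each frame from the character's gait phase. Here's a rough sketch of just that weight-blending step (layer sizes and names are made up for illustration; the real model has more structure):

```python
import numpy as np

N_CTRL = 4                 # control weight sets stored by the phase function
IN, HID = 32, 64           # made-up layer sizes, just for illustration

rng = np.random.default_rng(0)
W = rng.normal(size=(N_CTRL, HID, IN))   # one weight matrix per control point

def phase_function(p, W):
    """Cyclic Catmull-Rom interpolation of network weights at phase p in [0, 2*pi)."""
    t = (p / (2 * np.pi)) * N_CTRL
    k1 = int(t) % N_CTRL                 # the four surrounding control points
    k0, k2, k3 = (k1 - 1) % N_CTRL, (k1 + 1) % N_CTRL, (k1 + 2) % N_CTRL
    w = t - int(t)                       # fractional position between k1 and k2
    return (W[k1]
            + w * 0.5 * (W[k2] - W[k0])
            + w**2 * (W[k0] - 2.5 * W[k1] + 2.0 * W[k2] - 0.5 * W[k3])
            + w**3 * (1.5 * (W[k1] - W[k2]) + 0.5 * (W[k3] - W[k0])))

# each frame: blend weights for the current phase, then evaluate the layer
x = rng.normal(size=IN)                  # stand-in for the pose/trajectory features
h = np.maximum(0.0, phase_function(1.3, W) @ x)   # ReLU here for brevity; the paper uses ELU
```

Because the weights vary smoothly and cyclically with the phase, the motion loops naturally instead of the network having to learn the timing from scratch.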