Lecture 17: Intro to Animation, Kinematics, Motion Capture
jackcsullivan
How are motion paths and styles represented in this context?
Zyy7390
Based on something similar in CS 189, they could be represented by equations governing coordinate changes. For example, we might want the bot to move along a certain direction as quickly as possible. Setting this objective allows us to train the bot to master its own way of "walking".
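To make this concrete, here is a minimal sketch of what such an objective might look like in code. Everything here (the function name, the default target direction) is a hypothetical illustration, not from the lecture: the reward is just the bot's velocity component along the desired direction, so maximizing it over an episode encourages moving that way as fast as possible.

```python
import numpy as np

def forward_progress_reward(prev_pos, curr_pos, dt,
                            target_dir=np.array([1.0, 0.0, 0.0])):
    """Reward = velocity component along the desired direction.

    prev_pos, curr_pos: 3-D positions at consecutive timesteps.
    dt: timestep length in seconds.
    """
    velocity = (curr_pos - prev_pos) / dt
    return float(np.dot(velocity, target_dir))

# Moving 0.5 units along +x in 0.1 s gives a reward of 5.0.
r = forward_progress_reward(np.zeros(3), np.array([0.5, 0.0, 0.0]), 0.1)
```

A training loop would sum this reward over time; the learner is free to discover whatever gait maximizes it.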
jpark96
@jack What's really cool about machine learning is that motion paths and styles are inferred by the machine, so they don't have to be represented explicitly. There's a whole field of deep reinforcement learning that teaches robots a skill implicitly by maximizing a reward. In plain English: you don't teach a horse to move each leg at a specific angle at timestep t, you put a carrot at the end of the race track and see what happens!
tyleryath
If you're interested, check out the work that OpenAI has done on using ML techniques to teach motion: https://openai.com/blog/roboschool/
Zyy7390
The example by @jpark96 is really inspiring! Indeed, with ML/neural networks, we just need to define a standard that quantifies the reward at each time step (or at the end of an episode), and the bot can learn how to reach a higher reward all by itself. There is also the "universal approximation theorem," which states that a feed-forward network with a single hidden layer and enough nodes can approximate any continuous function (on a bounded domain) to arbitrary accuracy.
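The universal approximation idea can be demonstrated in a few lines. This is a toy sketch (not from the lecture): a single hidden layer of random tanh features, with only the output weights fit by least squares, closely approximates sin(x) on an interval.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x)  # target function to approximate

# One hidden layer: random fixed weights, tanh activation.
H = 200
W = rng.normal(size=(1, H)) * 3.0
b = rng.normal(size=H)
hidden = np.tanh(x @ W + b)

# Fit only the linear output layer by least squares.
w_out, *_ = np.linalg.lstsq(hidden, y, rcond=None)
y_hat = hidden @ w_out

max_err = np.max(np.abs(y_hat - y))  # small: the net mimics sin(x)
```

With more hidden units (and trained hidden weights), the same construction approximates far more complicated functions, which is what lets a network represent a reward-maximizing control policy.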
drewkaul
In physically based animation, there are two common approaches: search-based methods and reinforcement learning. Search-based methods choose a sequence of actions that minimizes some particular cost function, by forward-simulating a set of candidate action sequences and evaluating each under that cost function. Reinforcement-learning methods use an agent that interacts with its environment by taking actions and receiving rewards, and optimize the agent's policy to maximize the accumulated reward.
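The search-based approach can be sketched in a few lines. This is a hypothetical minimal example (a "random shooting" search over a 1-D point mass, not anything specific from the lecture): sample random action sequences, forward-simulate each one, and keep whichever sequence scores lowest under the cost function.

```python
import numpy as np

def forward_simulate(x0, actions, dt=0.1):
    """Point mass: state = [position, velocity]; each action = acceleration."""
    x = np.array(x0, dtype=float)
    for a in actions:
        x[1] += a * dt       # integrate acceleration into velocity
        x[0] += x[1] * dt    # integrate velocity into position
    return x

def random_shooting(x0, goal, horizon=10, n_samples=500, rng=None):
    """Sample action sequences, simulate each, return the lowest-cost one."""
    rng = rng or np.random.default_rng(0)
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=horizon)
        xf = forward_simulate(x0, seq)
        # Cost: squared distance to goal, plus a small velocity penalty.
        cost = (xf[0] - goal) ** 2 + 0.1 * xf[1] ** 2
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

best_seq, best_cost = random_shooting([0.0, 0.0], goal=1.0)
```

Real animation systems search over far richer action spaces (joint torques over time) and use smarter search than uniform sampling, but the structure — simulate, score, keep the best — is the same.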
drewkaul
While machine learning based approaches to animation seem very promising, there are significant drawbacks. Physically based animation tends to have high computational cost, which can severely limit its use in real-time applications and games. Furthermore, it can be very difficult to quantify factors that are indicative of good animation, such as "naturalness" or "smoothness." If anyone is interested in learning more about these approaches, I found the following article quite interesting:
https://towardsdatascience.com/what-is-physically-based-animation-cd92a7f8d6a4