Lecture 18: Introduction to Physical Simulation

A lot of these issues also come up in gradient descent algorithms, adaptive step sizes in particular: taking too large a step based only on first-order derivative information can land you in weird places on a non-convex function.
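For anyone curious, here's a minimal sketch of that overshoot on the simplest possible convex function. The function f(x) = x^2 and the learning rates are my own illustrative choices, not anything from lecture:

```python
def grad_descent(x0, lr, steps):
    """Plain gradient descent on f(x) = x^2, whose gradient is 2x."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # first-order step
    return x

# With lr = 0.1 the iterate shrinks toward the minimum at 0.
# With lr = 1.5 each step overshoots past 0 and |x| grows without bound.
```

Even on a perfectly convex bowl, the fixed first-order step diverges once it's too large; on a non-convex landscape the same overshoot can throw you into an entirely different basin.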


I too thought this reminded me of gradient descent, especially when Kanazawa mentioned how a larger time step was overshooting. I also originally thought that a really tiny time step would fix it and just be computationally expensive, but it turns out the errors still compound: each step's truncation error builds on the errors of all the steps before it.
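A toy version of this on the test equation x' = -k x (my own example, not from lecture) makes the compounding visible: forward Euler's per-step error shrinks with the step size, but the accumulated gap from the exact solution never actually reaches zero.

```python
import math

def forward_euler(k, dt, t_end):
    """Integrate x' = -k x from x(0) = 1 with explicit (forward) Euler."""
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += dt * (-k * x)  # explicit step using the current state only
    return x

# Exact solution at t_end is exp(-k * t_end); the leftover gap is the
# accumulated error.
err = lambda dt: abs(forward_euler(2.0, dt, 1.0) - math.exp(-2.0))
# err shrinks roughly linearly as dt shrinks (first-order method),
# but it stays strictly positive no matter how tiny dt gets.
```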


We can also remove this instability with the backward Euler method: https://en.wikipedia.org/wiki/Euler_method#Modifications_and_extensions
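For the linear test problem x' = -k x, the implicit backward Euler update can be solved in closed form, which makes the stability easy to see. A minimal sketch (the ODE and constants are my choice; general nonlinear problems would need a root-finding solve at each step):

```python
def backward_euler(k, dt, t_end):
    """Integrate x' = -k x from x(0) = 1 with implicit (backward) Euler."""
    x = 1.0
    for _ in range(round(t_end / dt)):
        # Implicit update: x_next = x + dt * (-k * x_next).
        # For this linear ODE it solves in closed form:
        x = x / (1 + k * dt)
    return x

# Forward Euler blows up whenever k * dt > 2; backward Euler's per-step
# factor 1 / (1 + k * dt) is always below 1, so the solution decays
# for any positive step size.
```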


Another way to mitigate errors is to use higher-order methods like Runge–Kutta! https://www.wikiwand.com/en/Runge%E2%80%93Kutta_methods
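Here's a sketch of the classic fourth-order Runge–Kutta step (the standard textbook formula; the example ODE x' = x is my own choice to show the accuracy):

```python
import math

def rk4_step(f, t, x, h):
    """One step of classic RK4 for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate x' = x from t = 0 to t = 1 with just 10 steps.
x, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    x = rk4_step(lambda t, x: x, t, x, h)
    t += h
# x lands within about 1e-6 of the exact answer e, since the global
# error scales like h^4 instead of h for forward Euler.
```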
