Very interesting that the adaptive step size algorithm requires computing at least three Euler steps and the error estimate in order to determine the heuristic step size. I wonder in what instances adaptive step size techniques might be more computationally expensive than simply using a fixed step size.
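For reference, here is a minimal Python sketch of the step-doubling idea being described (the function names and the tolerance are illustrative, not from the notes): one full Euler step plus two half steps (three Euler steps total) give a local error estimate, which then drives the heuristic choice of the next step size.

```python
import math

def euler_step(f, t, y, h):
    """One forward-Euler step for the scalar ODE y' = f(t, y)."""
    return y + h * f(t, y)

def adaptive_h(f, t, y, h, tol=1e-4):
    """Step-doubling error estimate (three Euler steps total) plus a
    heuristic proposal for the next step size."""
    y_full = euler_step(f, t, y, h)              # one step of size h
    y_half = euler_step(f, t, y, h / 2)          # first of two half steps
    y_half = euler_step(f, t + h / 2, y_half, h / 2)
    err = abs(y_half - y_full)                   # local error estimate
    # Euler's local truncation error is O(h^2), so scale h by sqrt(tol / err).
    h_new = h * math.sqrt(tol / max(err, 1e-12))
    return y_half, err, h_new
```

In a driver loop one would accept the step (advance with `y_half`) when `err < tol` and otherwise retry from the same point with the smaller `h_new`, which is where the extra cost relative to fixed-step Euler comes from.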
geos98
@Eloyye, I think it all depends on the function we are estimating as well as the step size we choose to fix (in the non-adaptive case).
For example, in the favorable case where the function is periodic and varies at roughly the same "frequency" as the chosen step size, the adaptive step size would be less efficient (since you need three Euler evaluations per step rather than one).
However, this rarely happens, so using an adaptive step size is usually helpful compared to always using small time steps.
Sicheng-Pan
How do we know what threshold to use? Also, this method still accumulates errors, which could be a problem, as it is for many numerical methods.
waleedlatif1
@Sicheng-Pan I would imagine that this varies a lot based on the problem we are trying to solve, but ideally we would minimize the error while maximizing efficiency. By solving the same problem several times with different step sizes, it probably isn't too difficult to find a good step size for that particular problem, but I don't think (afaik) there is a universal rule for what the 'best' step size is.
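To illustrate that trial-and-error approach (the test problem and names below are my own choices, not from the notes), one can run fixed-step Euler on a problem with a known solution at several step sizes and see how the global error scales:

```python
import math

def fixed_euler(f, t0, y0, t_end, h):
    """Integrate y' = f(t, y) from t0 to t_end with fixed-step forward Euler."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        h_step = min(h, t_end - t)   # don't overshoot the endpoint
        y += h_step * f(t, y)
        t += h_step
    return y

# Test problem: y' = y, y(0) = 1, so the exact value at t = 1 is e.
f = lambda t, y: y
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = abs(fixed_euler(f, 0.0, 1.0, 1.0, h) - math.e)
    print(f"h = {h:<7} global error = {err:.5f}")
```

The global error roughly halves each time h is halved, consistent with Euler being first order, so a target error tolerance maps to an approximate step size for this particular problem.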
jasonyang7
I remember that there are different ways to approximate a function, such as Newton's method. In what situations should one use the different approaches to adaptive step sizes?
starptr
I imagine the adaptive step size approach is computationally more efficient than Newton's method. The adaptive approach can shrink the next step size geometrically, letting us converge quickly to a step where error < threshold, whereas Newton's method requires calculating higher-order derivatives at every step to increase accuracy.
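A rough sketch of that geometric shrinking loop (illustrative names; it reuses the step-doubling error estimate sketched above and is only one of several possible acceptance strategies):

```python
def shrink_until_ok(f, t, y, h, tol=1e-4, max_halvings=20):
    """Halve the trial step geometrically until the step-doubling error
    estimate falls below the threshold, then accept that step."""
    for _ in range(max_halvings):
        y_full = y + h * f(t, y)                        # one Euler step of size h
        y_mid = y + (h / 2) * f(t, y)                   # first of two half steps
        y_half = y_mid + (h / 2) * f(t + h / 2, y_mid)  # second half step
        if abs(y_half - y_full) < tol:                  # error small enough: accept
            return t + h, y_half
        h /= 2                                          # otherwise shrink geometrically
    raise RuntimeError("step size underflow: error never dropped below tol")
```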