Adaptive Time Horizon for MPC

The typical approach in the receding horizon framework is to choose a fixed time horizon over which to predict the unknown variables and obtain the optimal control input. If we had perfect estimates, we could make the time horizon as large as possible subject to factors such as computation speed, convergence, stability, and satisfaction of terminal constraints. However, when predictions are imperfect, a time horizon that is too large may be detrimental, since the effect of a poor estimate is amplified over a longer time period. Likewise, if the look-ahead horizon is too small, the solution may fail to capture the benefits of the underlying optimal control solution because it does not look far enough into the future to see what is actually optimal.

Receding horizon control strategies are a potential remedy to this problem in that they retain the usefulness of optimal control strategies while adding an element of feedback to the system. Instead of depending on the model to produce the entire trajectory, receding horizon strategies compute the trajectory only over a given time horizon and then take a single step along that trajectory. While receding horizon methods capture the desirable benefits of optimal solutions, there is an inherent trade-off between using a large horizon to capture the optimal solution and a short horizon to reduce the detrimental effects of poor predictions.
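The receding horizon loop described above can be sketched on a toy problem. The following is a minimal illustration, not the system from this project: a scalar linear plant with a quadratic cost, where each step solves a finite-horizon problem (here via a backward Riccati recursion) and applies only the first input before re-planning. All names and the plant parameters are illustrative assumptions.

```python
# Minimal receding-horizon (MPC) loop on a scalar toy plant x+ = a*x + b*u
# with stage cost q*x^2 + r*u^2.  Illustrative sketch only; not the
# dynamics or cost from the project described in the text.

def finite_horizon_gain(a, b, q, r, T):
    """Backward Riccati recursion over T stages; returns the gain for
    the first stage of the horizon."""
    p = q          # terminal cost-to-go weight
    k = 0.0
    for _ in range(T):
        k = (a * b * p) / (r + b * b * p)          # optimal stage gain
        p = q + r * k * k + p * (a - b * k) ** 2   # cost-to-go update
    return k

def receding_horizon(x0, a=1.2, b=1.0, q=1.0, r=0.1, T=10, steps=30):
    """At each step, solve the T-step problem, apply only the first
    input, observe the new state, and re-plan."""
    x = x0
    traj = [x]
    for _ in range(steps):
        k = finite_horizon_gain(a, b, q, r, T)
        u = -k * x           # first input of the finite-horizon solution
        x = a * x + b * u    # step the (open-loop unstable) plant once
        traj.append(x)
    return traj

traj = receding_horizon(5.0)
print(abs(traj[-1]) < 1e-3)  # → True: the loop stabilizes the plant
```

Note that only the first input of each finite-horizon solution is ever applied; re-planning from the measured state at every step is what provides the feedback discussed above.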

To find the best look-ahead horizon, we would ideally like to evaluate how well our current prediction will compare with future values, but since this is a non-causal problem we must formulate a causal approximation. We make the assumption that our past ability to predict the state is indicative of our future ability to predict the state. Therefore, we look at our previous prediction performance and adjust the look-ahead horizon accordingly.
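One plausible way to realize this causal approximation is to compare predictions made in the past against the states actually observed, and shrink the horizon when recent prediction error is large. The error metric, window length, and grow/shrink thresholds below are illustrative assumptions, not the adaptation law from the publications.

```python
# Sketch of horizon adaptation from past prediction error.  The specific
# rule (windowed mean error with two thresholds) is an assumption for
# illustration, not the update law used in the papers.

from collections import deque

class HorizonAdapter:
    def __init__(self, T_min=2, T_max=30, T0=10, window=5,
                 grow_tol=0.05, shrink_tol=0.5):
        self.T_min, self.T_max = T_min, T_max
        self.T = T0
        self.errors = deque(maxlen=window)   # recent prediction errors
        self.grow_tol, self.shrink_tol = grow_tol, shrink_tol

    def update(self, predicted_state, observed_state):
        """Record |prediction - observation| and adapt the horizon."""
        self.errors.append(abs(predicted_state - observed_state))
        avg = sum(self.errors) / len(self.errors)
        if avg > self.shrink_tol:
            # Predictions have been poor: look less far ahead.
            self.T = max(self.T_min, self.T - 1)
        elif avg < self.grow_tol:
            # Predictions have been good: trust the model farther out.
            self.T = min(self.T_max, self.T + 1)
        return self.T

adapter = HorizonAdapter()
for _ in range(5):                       # consistently good predictions
    T = adapter.update(predicted_state=1.0, observed_state=1.01)
print(T)  # → 15: the horizon has grown from 10
```

The clamping to `[T_min, T_max]` reflects the practical limits mentioned earlier: computation speed and stability bound the horizon from above, while a minimum horizon keeps some look-ahead in the loop.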

The figure above shows an example of the time horizon adaptation in a navigation example. Here, the robot has limited information about the surrounding environment. To help the robot adapt to the unknown environment we use two behaviors, avoid-obstacle and go-to-goal, to allow the robot to traverse towards the green circle. However, it is not immediately apparent how to choose the weights, so we use a receding horizon strategy to adapt the weights at run time. As mentioned above, it is not clear what time horizon should be used, and so we also adapt the horizon according to how well we have been doing at predicting the robot trajectory. On the right-hand side of the figure it is evident that both the behavior weights and the time horizon need to change significantly to best navigate the environment.
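A common way to combine two behaviors of this kind is for each to propose a velocity and for the applied input to be their weighted blend. The vector fields and the fixed weights below are illustrative assumptions; in the project, the weights (along with the horizon) are chosen by the receding horizon optimization rather than set by hand.

```python
# Illustrative weighted blend of go-to-goal and avoid-obstacle behaviors.
# The specific vector fields and weights are assumptions for the sketch;
# the project adapts the weights online via receding horizon control.

import math

def go_to_goal(pos, goal):
    """Unit vector from the robot toward the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)

def avoid_obstacle(pos, obstacle):
    """Unit vector pointing away from the obstacle."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)

def blended_input(pos, goal, obstacle, w_goal, w_avoid):
    """Weighted sum of the two behavior outputs."""
    g = go_to_goal(pos, goal)
    a = avoid_obstacle(pos, obstacle)
    return (w_goal * g[0] + w_avoid * a[0],
            w_goal * g[1] + w_avoid * a[1])

# With the obstacle off to the side, a large goal weight dominates and
# the avoidance term only nudges the heading away from the obstacle.
u = blended_input(pos=(0, 0), goal=(10, 0), obstacle=(5, 5),
                  w_goal=1.0, w_avoid=0.2)
```

Choosing `w_goal` and `w_avoid` well clearly depends on where the robot is relative to unseen obstacles, which is exactly why the project adapts them at run time instead of fixing them.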

Investigators

Publications

    • G. Droge, M. Egerstedt. "Adaptive Time Horizon Optimization in Model Predictive Control." American Control Conference, 2011.
    • G. Droge, M. Egerstedt. "Adaptive Look-Ahead for Robotic Navigation in Unknown Environments." IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1134-1139, 2011.
