Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. More precisely, optimal control problems involve a dynamic system with input quantities, called controls, and a quantity, called the cost, to be minimized. An optimal control is characterized by a set of differential equations describing the paths of the control variables that minimize the cost. Finding optimal feedback controls is considerably more difficult, in terms of computational cost and power, than the related task of solving optimal open-loop control problems. Moreover, stability is a major concern in feedback control: a feedback law may overcorrect errors and thereby cause oscillations of constant or growing amplitude. Since an optimal feedback control depends on both the state and time variables, its determination by numerical schemes suffers from one serious drawback, the so-called curse of dimensionality. Therefore efficient numerical methods are needed for the accurate determination of optimal feedback controls.
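As a minimal illustration (not drawn from this text), consider the linear-quadratic regulator, one of the few settings where the optimal feedback law can be computed exactly: for linear dynamics x' = Ax + Bu and a quadratic cost, the gain K in the feedback law u = -Kx is obtained from an algebraic Riccati equation. The sketch below assumes a hypothetical double-integrator system and uses SciPy's Riccati solver; it is only meant to show what "computing an optimal feedback control numerically" looks like in the simplest case.

```python
# Minimal LQR sketch (illustrative only; assumes a hypothetical linear system
# x' = Ax + Bu with quadratic cost, not a problem treated in this text).
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator dynamics and cost weights.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # control cost weight

# Solve the continuous-time algebraic Riccati equation
#   A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain: u = -K x with K = R^{-1} B' P.
K = np.linalg.solve(R, B.T @ P)
print("Feedback gain K:", K)
```

For nonlinear systems or non-quadratic costs no such closed-form feedback exists, and the dependence of the optimal control on the full state is exactly where the curse of dimensionality mentioned above enters.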